Taking a Stand: The Effects of Standing Desks on Task Performance and Engagement Time spent sitting is associated with negative health outcomes, motivating some individuals to adopt standing desk workstations. This study represents the first investigation of the effects of standing desk use on reading comprehension and creativity. In a counterbalanced, within-subjects design, 96 participants completed reading comprehension and creativity tasks while both sitting and standing. Participants self-reported their mood during the tasks and also responded to measures of expended effort and task difficulty. In addition, participants indicated whether they expected that they would perform better on work-relevant tasks while sitting or standing. Despite participants’ beliefs that they would perform worse on most tasks while standing, body position did not affect reading comprehension or creativity performance, nor did it affect perceptions of effort or difficulty. Mood was also unaffected by position, with a few exceptions: Participants exhibited greater task engagement (i.e., interest, enthusiasm, and alertness) and less comfort while standing rather than sitting. In sum, performance and psychological experience as related to task completion were nearly entirely uninfluenced by acute (~30-min) standing desk use. Introduction Recent studies indicate that employees spend about 62% of the workday sitting [1], and a majority of university students (95.3%) report sitting down for 75% or more of their class time [2]. Occupational sitting time is a well-established public health concern, as it is strongly linked with chronic disease and mortality in a growing number of prospective studies [3]. Furthermore, greater sitting time is closely tied to psychological health outcomes, such as depressive symptoms and anxiety [4]. To reduce sitting time, and in turn, improve health, some individuals have begun to implement standing desks or sit-stand workstations-that is, workstations that can adjust to either a seated or standing position. Indeed, several studies have found that the long-term use of sit-stand workstations produces cardio-metabolic health benefits, such as improved glucose levels at 1 [5] and 12 months [6]; reduced total cholesterol at 8 weeks [7]; increased healthy HDL cholesterol at 3 months [8]; and, improved cardio-metabolic risk scores at 12 months [6]. However, we note that this research is still in its nascent stages and further studies are needed. Finally, studies have also found that a single use of a standing desk significantly increases energy expenditure [9][10][11][12]. For standing desks to be feasible for occupational and academic use, they must not only promote health, but also pose minimal to no detrimental threat to work performance. Previous research has revealed that compared to sitting only, standing or sit-stand workstations do not significantly alter typing or computer mouse performance [9,[13][14][15][16][17][18], accuracy in identifying and correcting misspelled words within text [19], speaking quality [10], X-ray baggage screening [16], call center workers' objective performance [20], or aspects of cognitive functioning such as attention, information processing speed, or short-term memory [21]. Despite these areas of accumulating knowledge, we know of no prior studies examining the impact of standing desks on reading comprehension and creativity. Importantly, the vast majority of occupations seem to necessitate at least a moderate amount of reading comprehension abilities. 
According to one study, roughly 90% of all jobs require medium to high levels of reading comprehension [22]. In addition, creativity represents another important component of occupational success [23], and it is required in many fields of work [24]. Plucker and colleagues [23] define creativity as, "the interaction among aptitude, process, and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context." There are a number of large industries in which novel products are in high demand and competition is fueled by a search for novelty [25]. Creative industries were first defined as "activities which have their origin in individual creativity, skill, and talent and which have the potential for wealth and job creation through generation and exploitation of intellectual property" [26]. Examples include the advertising, software and computer services, research and development, publishing, architecture, engineering, and film industries [24]. Although many individuals think of creativity as an innate personality trait [27], decades of research have dispelled this myth, instead showing that creativity can in fact be fostered by environmental techniques [23]. In addition to assessing the effects of standing desks on task performance, it is important to examine potential psychological impacts, as workstations that have detrimental psychological effects would likely not be feasible for continued use. Only a few previous studies have evaluated the impact of standing desk use on perceived task effort [10] and workload [16], finding no effects on either outcome after acute use [28]. Furthermore, it is critical to assess individuals' expectations about how standing desks will impact performance, as negative expectations may be an obstacle preventing individuals from utilizing standing desks and realizing their benefits. Research on this topic is limited and inconsistent; in one study employees expressed concerns that intervention strategies to reduce occupational sitting would harm employee focus and productivity [29], whereas in another study only 8% of college students predicted that standing desks would decrease academic performance [2]. Finally, although previous studies have evaluated the impact of standing desks on discomfort [14,17], fatigue [30], and alertness [17], we know of no acute studies to date that have assessed a broader range of self-reported emotions that may plausibly be experienced while completing work tasks (e.g., interest, enthusiasm, stress, distraction). Only one chronic study has examined a larger number of psychological states in relation to standing desks, finding that 4-weeks of use improved employees' mood states, including reduced feelings of tension, confusion, depression, and total mood disturbance [28]. To address the aforementioned gaps in the literature, we conducted the first known experiment to test for differences in reading comprehension and creativity performance as a function of standing versus sitting at a desk. Given that we assessed the effects of a short-term, single use of a standing desk, we based our hypotheses primarily on prior studies in related domains that were also acute in duration [9,10,13,16,17,19]. First, we hypothesized that reading comprehension, creativity, and perceived difficulty and effort would not differ between seated and standing positions. 
In addition, we predicted that individuals would report experiencing higher levels of positive emotions (and decreased levels of negative emotions) while working in the standing position [28]. Finally, we assessed whether individuals typically expected that they would perform better on work-related tasks while sitting or standing at a desk, and whether those expectations would map on to actual performance. Like other acute performance studies, we utilized a student sample [14,17] in a laboratory setting [9,10,13,14,17,19], rather than intervene in a sample of office workers in their usual work environments [15,16]. The reasons for this approach were twofold: (1) compared to office workers [31], university students spend just as large of a percentage of their required work hours sitting-if not more; in a study of nearly 1000 college students, 82.7% of students spent 100% of their class time sitting [2]; and (2) given that this was the first empirical test of the performance outcomes assessed here, we valued a greater degree of experimental control and internal validity, viewing external validity as a next step for future research to further examine any observed effects in additional populations and settings. Pre-Screening Participants were recruited from a university psychology subject pool for undergraduate students, and course research credit was granted for participation. Individuals interested in participating completed a brief online pre-screening survey (see Measures below). Eligible participants were aged 18 or older, did not report having any current major illness or injury, and reported fluency in the English language. A total of 96 participants were enrolled in the study, and completed the laboratory session as described below. Study Design, Counterbalancing, and Random Assignment The university Human Subjects Institutional Review Board approved all study activities. In this within-subjects design, all participants completed the same four test sections: two different reading comprehension sections (RC-A and RC-B), and two different creativity sections (CR-A and CR-B). The composition of these test sections is described further in the Measures, below. The following is a basic overview of the lab day testing procedure. Before beginning any test sections, participants were instructed to assume their randomly assigned starting body position: sitting or standing. In their respective starting position, participants then completed their first two test sections, which always comprised one reading comprehension section (RC-A or RC-B) and one creativity section (CR-A or CR-B). Then, participants transitioned to their alternate, ending position. In their appropriate ending position, participants then completed their third and fourth test sections, which also always comprised one reading comprehension section (RC-A or RC-B) and one creativity section (CR-A or CR-B). We executed a full counterbalancing scheme to control for possible order effects related to test section or body position. The order in which participants performed the four test sections was counterbalanced between-subjects. To achieve this, we created a list of 32 possible conditions (see Table 1). Conditions comprised two components: (1) the order of the starting and ending body positions (sitting or standing); and (2) one of the 16 unique testing orders in which the two reading comprehension tests and the two creativity tests could be taken. 
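The counterbalancing scheme lends itself to a compact enumeration. Below is a minimal, illustrative sketch (assumed helper code, not the authors' actual materials) of how the 32 conditions and the randomized assignment list could be generated; the constraint that defines the 16 admissible test orders is spelled out in the next paragraph.

```python
from itertools import permutations
import random

# Section labels taken from the text; "sit"/"stand" stand in for the two body positions.
READING = {"RC-A", "RC-B"}
SECTIONS = ["RC-A", "RC-B", "CR-A", "CR-B"]

def admissible(order):
    """First two and last two sections must each pair one reading comprehension
    test with one creativity test."""
    return len(set(order[:2]) & READING) == 1 and len(set(order[2:]) & READING) == 1

test_orders = [o for o in permutations(SECTIONS) if admissible(o)]
assert len(test_orders) == 16

position_orders = [("sit", "stand"), ("stand", "sit")]
conditions = [(p, t) for p in position_orders for t in test_orders]
assert len(conditions) == 32

# Each condition appears three times (the projected sample size of 96),
# and the resulting list is randomized before participants are assigned.
assignment_list = conditions * 3
random.shuffle(assignment_list)
assert len(assignment_list) == 96
```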
The 16 possible test orders were generated with the stipulation that the first two sections had to always include 1 reading comprehension and 1 creativity test; likewise, the final two sections had to always include 1 reading comprehension and 1 creativity section. This ensured that all participants would complete one reading comprehension and one creativity section using each body position. Next, we compiled a spreadsheet wherein each of the 32 conditions appeared three times (to account for our projected sample size). The order of the conditions was then randomized. As each new participant enrolled in the study, s/he was assigned to the condition appearing next on the randomized condition list. A total of three participants completed each condition, creating a final sample size of 96 participants. Study Cover Story The study was described to participants as two separate research projects combined into one laboratory session: "The Markers of Health Study," and "The Creativity, Cognition, and Behavior Study." Participants read that, "One project aims to understand how posture (sitting and standing) relates to physiology (respiratory rate), and the other project seeks to better understand creativity and cognitive functioning in adults." During the informed consent process, the experimenter told participants, "I'll ask you to sit for part of the study and stand for part of the study while we take some physiological measurements. In order to get an accurate and reliable measurement, we need to take 30 min of physiological readings while you are in each posture. We want to make sure we make use of your time while we're collecting the readings, so while they are being taken, we'll have you complete some cognitive tasks for the second study for another lab's research team." Workstation The sit-stand workstation consisted of a 48" × 30" electronically adjustable standing desk with a digital display and memory preset (Jarvis, Xinchang, China). For the seated test sections, an office chair with back support, a foot rest, wheels, and no arm rests was used (#B1690-CS, Boss Office Products, Commerce, CA, USA). In addition, a 24" × 36" × 3 /4" commercial-grade, anti-fatigue comfort mat (CumulusPRO, Busan, Korea) was placed underneath the chair during seated test sections and also used to stand on during standing test sections. Procedure After participants provided informed consent, the experimenter attached a non-invasive elastic respiration band around the torso. Then, the experimenter left the room for 3 min to simply give participants time to get used to the sensation of the respiration band on the body. During this time, participants completed filler items designed to support the cover story that one of the studies was concerned with markers of health (e.g., typical hours of sleep per night, smoking history, and presence of any current diagnosed psychiatric or cognitive condition). Next, participants assumed their randomly assigned starting body position according to their condition. Participants assigned to be seated for their first two test sections remained seated; those assigned to begin standing were asked to stand. When standing positions were assumed, the experimenter raised the desk height to the number of inches estimated to be appropriate for each particular participant, based on the height that they had reported in pre-screening. These recommended heights for standing desk use were acquired using an online calculator for ergonomic office desks (www.thehumansolution.com). 
After raising the standing desk, the experimenter showed participants where the desk adjustment controls were and encouraged them to alter the desk height to whatever level felt most comfortable to them. Participants assigned to be sitting for their first two test sections were also oriented to the desk adjustment controls and encouraged to alter the desk height to a comfortable level if needed. After participants had assumed their starting body position, they then completed their first reading comprehension and creativity test sections. Next, participants self-reported how difficult they perceived each of the sections to be, how much effort they put into each section, and their mood during each section (see Measures below). Then, participants switched to their respective ending (alternate) body position; there was no break or period of rest between the two positions. After completing their remaining test sections in their ending position, participants again completed measures of perceived difficulty, effort, and mood-this time in reference to their most recently completed reading comprehension and creativity test section experiences. Participants then completed surveys assessing their physical activity in the past week, as well as their expectations about whether they would perform better on work-related tasks while sitting or standing at a desk. The experimenter then removed the respiration band, measured participants' weight and height, completed a study debriefing, and granted participants their course credit. Demographic Characteristics In the pre-screening survey, participants reported their age, sex, race/ethnicity, height, weight, whether they had a current major illness or injury, whether English was their primary language, and whether they had ever "regularly used a standing desk to perform their work/school duties without an office chair." We entered the final item in exploratory analyses to test whether prior standing desk use would moderate any effects of body position on task performance. Reading Comprehension Reading comprehension was assessed using materials from the Graduate Record Examination General Test (GRE)-a standardized, graduate-level admissions exam, typically taken by individuals who are interested in pursuing a master's degree, business degree, or doctoral degree [32]. At thousands of graduate and business schools [32], admissions and fellowship panels review applicants' GRE scores as a qualification to guide their decision-making processes. From July 2015 to June 2016, a total of 584,677 people took the exam in over 194 countries, with non-U.S. citizens making up 43% of test takers [33]. The GRE is composed of verbal reasoning, quantitative reasoning, and analytic writing measures. Meta-analytic research examining data from 1753 independent samples has found the GRE-including the verbal reasoning section in particular, which contains the reading comprehension items-to be a valid predictor of outcomes such as graduate grade point average, degree attainment, publication citation counts, and faculty ratings [34]. The specific GRE materials administered in the present study were obtained from the official GRE practice exam, provided by Educational Testing Service online (www.ets.org/gre). In the present study, participants completed only the reading comprehension items from each of the two verbal reasoning sections of the official practice exam. 
Previous studies assessing reading comprehension have also used GRE reading comprehension items from a sample exam [35,36]. The official GRE test makers state that the purpose of the verbal reasoning measure is to assess the "ability to analyze and draw conclusions from discourse, understand multiple levels of meaning, select important points, and understand the meanings of sentences and entire texts" [32]. The two administered reading comprehension test sections (RC-A and RC-B) each contained six passages (five 1-paragraph passages and one 2-paragraph passage) and multiple-choice questions related to those passages. The 1-paragraph passages contained a range of 65 to 202 words each, and the 2-paragraph passages contained 202 words (RC-A) and 463 words (RC-B), respectively. RC-A contained 12 questions, and RC-B contained 13 questions. Participants were given a total of 20 min to complete each section. Participants were instructed to let the experimenter know if they finished before the 20 min had elapsed; if this occurred, participants were then given the instructions for the next part of the study. The experimenter recorded the total amount of time that each participant spent completing each reading comprehension section. Performance on each section was calculated as the proportion of questions answered correctly. Creativity We assessed creativity using the Wallach and Kogan Creativity Test [37]. Participants were asked to generate original uses for common objects (i.e., bricks and knives), instances of common concepts (i.e., things that are loud and round), and consequences of hypothetical events (i.e., what would happen if people went blind or no longer needed to sleep). CR-A comprised the "knife," "loud," and "blind" tasks; CR-B comprised the "brick," "round," and "sleep" tasks. Participants were specifically instructed to "write down all of the unusual, creative, and uncommon" responses that they could think of, as previous research has shown that giving instructions to be creative improves the validity of divergent thinking scores [38,39]. The experimenter read the instructions aloud to participants, and instructions were also given in print. Participants were given a total of 3 min to complete each task. This amount of time was controlled and invariable. At the end of each task, participants were instructed to "take a moment to evaluate your responses, and draw a circle around the two responses that you think are your most creative ideas". Three independent raters rated each response to each of the six tasks on a scale ranging from 1 (not at all creative) to 5 (highly creative). The raters were undergraduate research assistants who did not have any prior experience with scoring the Wallach and Kogan Creativity Test [37]. All raters completed a training session with the primary investigator (Laura Finch) to review the published scoring instructions [39] and complete practice ratings prior to assigning study response ratings. Ratings were given in accordance with the instructions provided by Silvia et al. [39], which were in turn adopted from Wilson, Guilford, and Christensen [40]. This method considers responses that are uncommon, remote, and/or clever to be creative. Raters considered all three of these qualities while completing their scoring, and allowed strength in one quality to balance weakness in another quality [39]. Raters also adhered to the following techniques endorsed by Silvia et al. 
(2008) and Harrington, Block, and Block [38]: (1) for the unusual uses tasks, raters gave lower scores for actual uses of the items (e.g., slicing something using a knife); and (2) for the instances tasks, raters gave lower scores for objects that were present in the testing room (e.g., paper towels or chair wheels as instances of things that are round). Across the six creativity tasks, participants generated a total of 4976 responses. Responses were handled and scored following the methodology of Silvia et al. [39]. Participants' written responses were typed into a spreadsheet and sorted alphabetically within each task, which importantly served to blind raters to factors such as participant handwriting, the total number of responses each participant gave for each task, the position of each response within a participant's set, and whether participants circled a response as one of their top two best. Before beginning to score responses for a given task, the raters read all responses for that task to form an impression of the range of responses. Inter-rater reliability across the three independent raters was found to be acceptable (intraclass correlation coefficient = 0.73). An average rating was computed for each individual response within each task by calculating the mean across the three raters' scores. We then computed two creativity indices found by Silvia et al. [39] to be valid and reliable: average scoring and Top 2 scoring. To calculate average creativity, first, each participant's average individual response ratings for a given task were summed and then divided by their total number of responses for that task. In this way, this average creativity index represents the overall creativity of a participant's responses for a task and penalizes participants for generating a greater number of uncreative responses. Then, for each creativity section (sitting and standing), the average ratings for each of the three tasks within that section were also averaged, creating a final overall average creativity score for each body position to be used in statistical analyses. In contrast, for the Top 2 creativity index, an average was calculated for each task for the two responses that participants indicated were their best two. Thus, this Top 2 creativity index was not affected by the total number of responses that participants generated for each task (note: for each task, over 96% of participants generated three or more responses). To compute this index, for each creativity section (sitting and standing), the Top 2 creativity averages for each of the three tasks within that section were also averaged, creating a final overall Top 2 creativity score for each body position to be used in hypothesis testing. Perceived Task Difficulty, Effort, and Mood Participants were instructed to think back to their experience while completing each test section, and then were asked to rate how difficult it was (1 = not at all; 7 = extremely), as well as how much effort they put into it (1 = none; 7 = a lot). They were also asked to indicate the extent to which they experienced each of a number of emotions while completing each section (1 = not at all; 7 = extremely). Here, we modified the Positive and Negative Affect Schedule [41] to include an abbreviated list of 16 emotions. 
Given that this mood survey was to be completed a total of four times throughout the laboratory session (once for each test section), to minimize participant fatigue, we omitted emotions (guilty, scared, strong, hostile, proud, irritable, ashamed, and afraid) from the original 20-item measure that we did not hypothesize as being particularly conceptually relevant to completing work tasks or sitting/standing, and we also omitted emotions (excited, attentive, and active) that were somewhat conceptually related to other emotions already being assessed (interested, enthusiastic, alert). In addition, we added the items "stressed," "tired," "comfortable," "distracted," and "focused," which we expected to be more relevant to the present study's work-related tasks. Other prior standing desk studies have also assessed related constructs such as discomfort [14,17], fatigue, tension, and confusion [17]. Performance Expectations We created four items to assess participants' expectations about how desk position would impact their performance on various work-related tasks. The prompt began, "Imagine that you are about to perform certain work-related tasks and you have the option of either sitting or standing at a desk." Then, participants selected either "sitting" or "standing" in response to each of the four items. For example, participants were asked, "Do you think you would be more creative while sitting at a desk or while standing at a desk?" They also selected the position in which they believed that they would perform better on reading comprehension and typing tasks, as well as whether they thought they would "be a more productive writer" while sitting or standing at a desk. Physical Activity and Sedentary Behavior The short form version of the International Physical Activity Questionnaire (IPAQ) [42] was used to assess participants' usual general level of physical activity and sedentary behavior. This 7-item survey measures how much time participants spent doing various types of physical activity in the past 7 days, as well as how much time they spent sitting down (i.e., sedentary behavior). The following is a sample item: "During the last 7 days, on how many days did you do vigorous physical activities like heavy lifting, digging, aerobics, or fast bicycling?" Following the standard IPAQ scoring protocol [43], we computed total metabolic equivalent (MET) minutes per week and classified participants into one of three physical activity categories (low, moderate, or high). Given that there are no well-established thresholds for creating categories for sitting time [43], we used tertiles to categorize participants as exhibiting low, moderate, or high sitting time [44]; the resulting variable was used as a measure of sedentary behavior. Body Mass Index The experimenter measured participants' weight and height without shoes. Weight was assessed using a Tanita Professional Body Composition Monitor SC-331S, and height was measured using a stadiometer and recorded to the nearest 1/8 inch. Body Mass Index (BMI) was calculated using the standard formula (weight in pounds/height in inches² × 703), and the standard BMI ranges were used to characterize each participant's BMI category (underweight: <18.5; normal: 18.5 to 24.99; overweight: 25 to 29.99; obese: 30 or greater). 
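As a concrete illustration of the IPAQ and BMI scoring just described, here is a brief sketch (hypothetical helper functions, not the study's actual code; the MET weights shown are those of the standard IPAQ short-form protocol, and the study's analyses were run in SPSS).

```python
import numpy as np

def ipaq_met_minutes(vig_days, vig_min, mod_days, mod_min, walk_days, walk_min):
    """Total MET-minutes/week using the IPAQ short-form MET weights
    (8.0 vigorous, 4.0 moderate, 3.3 walking)."""
    return 8.0 * vig_days * vig_min + 4.0 * mod_days * mod_min + 3.3 * walk_days * walk_min

def sitting_category(sitting_min_per_day):
    """Classify sitting time as low/moderate/high using sample tertiles,
    since there are no established IPAQ cut points for sitting time."""
    cuts = np.percentile(sitting_min_per_day, [100 / 3, 200 / 3])
    return [("low", "moderate", "high")[i] for i in np.digitize(sitting_min_per_day, cuts)]

def bmi(weight_lb, height_in):
    """Standard BMI formula reported in the text: pounds / inches^2 * 703."""
    return weight_lb / height_in ** 2 * 703

def bmi_category(value):
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

# Example: a 150-lb, 65-in participant has a BMI of ~24.96 ("normal").
print(round(bmi(150, 65), 2), bmi_category(bmi(150, 65)))
```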
Statistical Analyses As our primary analyses, we conducted paired t-tests to assess any effects of body position on the following dependent variables: GRE proportion correct, average creativity, Top 2 creativity, perceived difficulty of each task, perceived effort expended on each task, and emotions. Thus, we compared sitting and standing values for each of these within-subjects variables. In addition, we compared the average number of minutes that participants spent completing their seated and standing reading comprehension sections, as a significant difference between the two positions could potentially affect the study outcomes. In exploratory analyses, to test whether body position effects on task performance may be moderated by prior standing desk experience, sedentary behavior, physical activity, or BMI, we added each of these categorical variables separately to repeated measures analysis of variance (ANOVA) models and examined the appropriate interaction terms involving body position. In addition to assessing participants' expectations about body position and task performance, we went on to test whether these expectations were indeed associated with actual performance differences between the two positions. We conducted point-biserial correlation tests to examine these relationships, entering expectations as a dichotomous variable and actual performance difference scores as a continuous variable. Expectations of enhanced performance while standing were coded as 1, and expectations of enhanced performance while sitting were coded as 0. We computed the difference in actual performance by body position for each outcome by subtracting sitting scores from standing scores. SPSS software (version 24.0, IBM, Armonk, NY, USA) was used for all statistical testing, and the computer code is available. The level of significance for all tests was set at p < 0.05. Participant Demographics Descriptive characteristics of the 96 study participants are presented in Table 2. We also note that among the 261 individuals who completed the study pre-screening, 8.4% (n = 22) self-reported prior regular experience with standing desks-a similar percentage to the laboratory sample. Note: IQR = interquartile range; MET = metabolic equivalent of task. a Following standard IPAQ scoring protocol [43], four extreme values of >16 h/day were omitted, as it is assumed that individuals spend an average of 8 h/day sleeping. Primary Analyses As shown in Table 3, findings revealed no significant differences in performance, perceived task difficulty, or perceived task effort as a function of standing versus sitting. There was no difference in the number of minutes that participants took to complete their seated and standing reading comprehension sections (M = 15.83, SD = 3.08 vs. M = 15.45, SD = 2.96, respectively; p = 0.18). The average total amount of time that participants spent completing the tests in each position was ~30-31 min (i.e., 15-16 min for reading comprehension, and ~15 min for creativity (3 min for each of the 3 tasks; 1 min for the experimenter to read the instructions for each of the 3 tasks; and 1 min for participants to review their responses and circle their best two for each of the 3 tasks)). No other body position effects on mood were found for the reading comprehension or creativity tests for the remaining emotions: focused, inspired, motivated, determined, stressed, anxious, nervous, tired, jittery, distracted, distressed, and upset. 
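To make the scoring and analytic pipeline described under Statistical Analyses more concrete, the sketch below shows how the two creativity indices, the primary paired comparisons, and the expectation correlations might be computed. The file names, column names, and data layout are assumptions for illustration only; the study's analyses were run in SPSS.

```python
import pandas as pd
from scipy import stats

# Assumed layout: one row per creativity response, already averaged over the three
# raters. Columns: participant, position ("sit"/"stand"), task, mean_rating,
# and top2 (True if the participant circled the response as one of their best two).
ratings = pd.read_csv("creativity_ratings.csv")

# Average scoring: mean rating per task (penalizes many weak responses),
# then averaged over the three tasks completed in each body position.
avg_cr = (ratings.groupby(["participant", "position", "task"])["mean_rating"].mean()
                 .groupby(level=["participant", "position"]).mean()
                 .unstack("position"))

# Top 2 scoring: mean of the two self-nominated best responses per task,
# averaged over the three tasks in each position.
top2_cr = (ratings[ratings["top2"]]
           .groupby(["participant", "position", "task"])["mean_rating"].mean()
           .groupby(level=["participant", "position"]).mean()
           .unstack("position"))

# Primary analyses: paired t-tests comparing sitting and standing scores.
for label, scores in [("average creativity", avg_cr), ("Top 2 creativity", top2_cr)]:
    t, p = stats.ttest_rel(scores["sit"], scores["stand"])
    print(f"{label}: t = {t:.2f}, p = {p:.3f}")

# Expectations vs. actual differences: point-biserial correlation between the
# dichotomous expectation (1 = expected to perform better standing, 0 = sitting)
# and the standing-minus-sitting difference score, per participant.
expectations = (pd.read_csv("expectations.csv")              # assumed file
                  .set_index("participant")["creativity_expectation"])
diff = avg_cr["stand"] - avg_cr["sit"]
r_pb, p_pb = stats.pointbiserialr(expectations.reindex(diff.index), diff)
```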
Moderation Analyses As shown in Table 4, body position did not interact with prior regular standing desk use, physical activity category, sedentary behavior category, or BMI to predict any of the three performance outcomes. Discussion Study findings revealed that standing at a desk did not impair (or enhance) performance on reading comprehension or creativity tasks relative to sitting at a desk, regardless of participants' level of regular physical activity, sedentary behavior, BMI, or prior experience with standing desks. Although this is the first study to test experimentally how standing versus sitting affects these performance outcomes, these findings converge with other previous work demonstrating that standing desks do not hinder performance on a range of other work-related tasks [18]. Further evidence suggests that the use of sit-stand workstations either does not influence worker productivity, or may even enhance it. In their review of eight prior studies evaluating sit-stand workstations and work productivity, Karakolis and Callaghan [45] reported that four studies found no effect on productivity, three reported improved productivity, and one found mixed results. Additionally, in the present study, body position did not influence participants' perceptions of how difficult the tasks were, nor did it affect the amount of effort that participants reported putting into the tasks. Previous research has similarly found that compared to sitting, acute standing desk use is associated with neither increased perceived effort related to a speech task [10], nor increased perceived workload related to a security screening task [16]. However, in another study, participants reported a significantly higher perceived workload when they stood versus sat while completing tasks [46]. Workload effects may therefore depend on the nature of the task at hand, the duration of the relevant task, and/or the method of workload assessment. When reflecting back on their mood during the tasks, participants reported similar levels of most of the assessed emotions in the two positions. However, there were a few exceptions for which statistically significant differences emerged: Participants reported feeling more interested, enthusiastic, and alert for the reading comprehension section that they stood for. However, we note that the difference in means for these emotions was relatively small (e.g., a difference of 0.28 on a 7-point scale for the interest item). Other studies that have acutely manipulated body position have found that, compared to sitting only, using a standing or sit-stand desk did not impact alertness [17] but reduced feelings of fatigue [30]. In addition, in the present study, participants felt less comfortable while standing for both types of tasks. Previous acute studies have been inconsistent regarding whether acute use of a sit-stand workstation increases [17] or decreases [14] musculoskeletal discomfort when compared to a traditional seated desk. With regard to expectations, participants believed that they would be more creative while standing but believed that they would perform better while sitting in terms of reading comprehension, typing, and productivity of writing. These results differ from those of another study [2], wherein only 4-8% of individuals expected that standing desk use would impair performance. 
One reason for this discrepancy may be that the present study examined expectations about particular work tasks, whereas Benzo, Gremaud, Jerome, and Carr [2] investigated general performance. Another plausible explanation is that our study presented participants with only the forced choice of indicating whether performance outcomes would be better for standing or for sitting. If participants expected that performance would be no different between the two body positions, there was no means of expressing this opinion on the relevant survey. Study Implications Our findings suggest that adopting a short-term standing position will not impair performance on reading comprehension or creativity tasks. In addition, standing at a desk seems to have minimal effects on acute emotional states, with the exceptions of enhanced interest, enthusiasm, and alertness during task completion. These findings converge with prior survey data indicating that most college students and instructors expect that standing desk use will improve engagement and attention [2]. The three observed mood enhancements appear to hang together to embody the construct of work engagement, defined as a "work-related state of mind that is characterized by vigor, dedication, and absorption" (p. 74) and also characterized by high energy and enthusiasm [47]. Enhanced work engagement has been strongly linked with well-being and performance [48]. Although taking a standing position improved engagement but not performance in the present study, enhanced engagement is valuable in its own right as a promoter of well-being. Given that some individuals associate prolonged sitting with de-motivation [29], future studies should examine whether spreading knowledge about standing desks and improved engagement may therefore promote the use of these workstations. Study results also suggest that individuals may hold unfounded expectations about how standing desks will affect future performance on work-related tasks. Participants believed that compared to a seated position, standing would be more beneficial for creativity but worse in terms of reading comprehension, typing, and productivity of writing. The former two beliefs were not supported by results from this study, and moreover, participants' expectations of differential performance between seated and standing positions were not associated with any actual performance differences between these positions. Furthermore, previous research has shown that typing performance is also typically unaffected by standing desk use [9,13,16,19]. Therefore, an overall theme seems to be emerging, in which individuals' (mostly negative) standing desk performance expectations are not supported by any strong empirical evidence. We recommend that the growing scientific data refuting these expectations be better disseminated to students with the aim of combatting this potential obstacle to standing desk use. Our study findings suggest that if colleges and universities should incorporate sit-stand workstations on their campuses with the aim of promoting student health, the short-term use of these workstations may not incur detriments to students' task performance and psychological well-being. As an alternative to standing desks, some individuals have been implementing treadmill desks in their work environments to promote health. Like standing desks, treadmill desks have also demonstrated positive effects for physiological functioning [18] and do not seem to harm reading comprehension performance [35,49]. 
There is also some evidence that treadmill desks may improve creativity among college students, relative to sitting [50]. However, compared to standing desks, treadmill desks also carry the disadvantages of a far greater monetary cost, as well as decreased fine motor skills and math problem solving among students [35]-factors that point to standing desks as a more feasible option for this population. Strengths, Limitations, and Future Directions This study was strengthened by its within-subjects, fully counterbalanced, randomized design. Furthermore, a recent review [18] identified eight prior studies investigating standing desk use and work performance, and these studies included a range of 12-60 participants each, with a mean of approximately 31 participants per study. In contrast, the present study included a sample over three times as large, providing increased statistical power. Moreover, to our knowledge, none of these prior studies made use of a study cover story, and thus, the researchers' true study aims were likely extremely transparent to participants, potentially eliciting biased responses. In contrast, our study addressed and minimized this concern by including a study cover story. An additional strength of the study was its inclusion of a relatively diverse sample, with approximately 70% of participants identifying as belonging to a racial/ethnic minority group. It is especially important to examine the impact of standing desks in minority populations, as some minority groups are disproportionally vulnerable to the types of negative health conditions associated with greater sitting time [51], and thus, the feasible utilization of standing desks among these populations may be a potential avenue for mitigating racial/ethnic health disparities. Despite these strengths, no study is without limitations. First, we may not yet infer the generalizability of the present study's findings to populations that were underrepresented in the present study. The majority of participants were young adult university students; thus, future research should examine the present outcomes in a sample of employees from a broader age range and work experience history, ideally in their current regular workplace setting. Nonetheless, the tasks that participants completed were applicable to most work settings, and conducting the study in this student population represented a feasible, highly internally valid initial investigation into these novel performance outcomes. In addition, we observed limited variability in several of the moderator variables, such that the study sample primarily consisted of individuals who were very physically active and had never used a standing desk before. Future studies should include a wider range of individuals, such as those who are less physically active or have had more previous experience with standing desks. Another limitation of the study was its assessment of relatively short-term effects of standing desk use. However, several studies have examined the impact of sit-stand workstations on a longer time scale and similarly found no effects on other aspects of work performance. For example, 5 days of use did not impact data entry performance [14], and 4 weeks of use did not influence workers' self-reported performance [5]. Another study similarly found no effects on self-reported work performance across a 3-month period, yet the sit-stand workstations significantly decreased sitting time by over 2 h/day [8]. 
Furthermore, 4 [52] to 8 weeks of use [7] effectively reduces sitting time without impacting productivity. In sum, using sit-stand workstations for a month or more appears to reduce sitting time without harming general work performance and productivity. Conclusions This study presents the first evidence that compared to the use of traditional seated desks, the short-term use of a standing desk does not seem to impair reading comprehension or the ability to generate creative ideas. Although future studies should evaluate whether these areas of task performance remain unaffected by longer-term standing desk use in the workplace, these initial findings suggest that if university students choose to use standing desks in an effort to reduce sitting time or promote health, doing so may increase their short-term task engagement without undermining work performance.
Percutaneous CT-Guided Biopsies of the Cervical Spine: Technique, Histopathologic and Microbiologic Yield, and Safety at a Single Academic Institution BACKGROUND AND PURPOSE: Cervical spine biopsies can be challenging due to the anatomy and the adjacent critical structures. Percutaneous image-guided biopsies can obviate the need for an open biopsy; however, there have been few studies looking at the approaches, safety, and efficacy of percutaneous cervical spine biopsies. This retrospective study evaluated technical considerations, histopathologic and microbiologic yield, and safety in CT-guided cervical bone biopsies. MATERIALS AND METHODS: A retrospective review of cervical bone and/or bone/disc biopsies performed from January 2010 to January 2017 was included in this study. Clinical diagnosis and indication, patient demographics, biopsy location, biopsy needle type, technical approach, lesion size, dose-length product, conscious sedation details, complications, and diagnostic histopathologic and/or microbiologic yield were recorded for each case and summarized. RESULTS: A total of 73 patients underwent CT-guided cervical bone biopsies. Fifty-three percent (39/73) were for clinical/imaging concern for infection, and 47% (34/73) were for primary tumors or metastatic disease. Thirty-four percent (25/73) were of the inferior cervical spine (ie, C6 and C7). A sufficient sample was obtained for histopathologic and microbiologic analyses in 96% (70/73) of the biopsies. Forty-six percent (18/39) of those samples taken for infection had positive cultures. Two intraprocedural complications occurred in which the patients became hypotensive during the procedure without long-term complications. CONCLUSIONS: Percutaneous CT-guided biopsy of the cervical spine is an effective and safe procedure with high diagnostic yield and can obviate open procedures for histopathologic and microbiologic analyses of patients with clinical and imaging findings concerning for infection or primary and metastatic osseous lesions. Biopsies of the cervical spine are more difficult to obtain than those of the thoracic or lumbar regions due to the anatomy and smaller size of the vertebral elements as well as the critical adjacent vascular and neural anatomy. Although the demands for cervical spine biopsies are steadily increasing in our practice, few publications address the safety of such biopsies and the high diagnostic yield they can provide. To date, the largest study was published in 2016 by Cox et al 1 and looked at CT-guided cervical bone biopsies in 43 patients by combining data from 2 large tertiary care hospitals. A diagnosis was obtained in 41 of the 43 patients, for a yield of 95%. In 2008, Rimondi et al 2 published a study on 430 CT-guided spinal biopsies, 10 of which were within the cervical spine (2.3% of the sample). The diagnostic accuracy for this study was 70%. Similarly, in 2004, Lis et al 3 studied the accuracy of CT-guided spinal biopsies. There were 410 cases in all, with only 9 cases being of the cervical spine. Yet, for these 9 cases, a diagnostic accuracy of 100% was reported. Using a minimally invasive percutaneous CT-guided approach to cervical spine biopsies can obviate open biopsies for both infection and primary or metastatic lesions. This technique lends itself to the increasing demands for cultures by infectious disease specialists and for tissue diagnosis of metastatic disease by oncologists. 
The purpose of this retrospective study was to further evaluate technical considerations, histopathologic and microbiologic yield, and safety in CT-guided cervical bone biopsies. Patient Selection and Study Enrollment Approval from the institutional review board was granted with a waiver of patient informed consent. The corresponding data from a single academic institution on cervical bone and/or bone/disc biopsies performed from January 2010 to January 2017 were then gathered and included in this study. For each case, we recorded and summarized the following: patient demographics, clinical diagnosis and indication, level biopsied, biopsy needle type/gauge, technical approach, number and length of each soft-tissue/bone core biopsy, CT dose-length product, conscious sedation details including sedation time and the amount of fentanyl and/or midazolam (Versed) given, and diagnostic histopathologic and/or microbiologic yield. Procedural complications were documented according to the Society of Interventional Radiology classification. 4 Patients were clinically monitored for at least 1 hour after each procedure for evidence of acute complications such as hematoma formation or neurologic injury. Electronic medical records were also reviewed for evidence of delayed complications within 30 days of the biopsy. Biopsy Procedure All procedures were performed by 1 of 6 board-certified fellowship-trained musculoskeletal radiologists. Most of these procedures were performed with the patient under moderate sedation except for 3 procedures performed with the patient under general anesthesia and 2 procedures in which conscious sedation was not required. In most cases, a 1:1 mixture of bupivacaine 0.25% and lidocaine 1% was administered for subcutaneous and periosteal anesthesia. Average sedation time was 67.5 minutes; average intravenous fentanyl and Versed dosages were 210.1 μg and 3.7 mg, respectively. The Arrow OnControl Powered Bone Lesion Biopsy System (Teleflex, Limerick, Pennsylvania) and Bonopty (AprioMed, Londonderry, New Hampshire) bone biopsy system were used for bone access and biopsy of the sites of infection, sclerotic lesions, or CT-occult lesions. In the cases in which there was a soft-tissue biopsy performed in either a lesion or the disc space, Bard (Bard Peripheral Vascular, Tempe, Arizona), Achieve (CareFusion, San Diego, California), Temno Evolution (CareFusion), and Tru-Cut (CareFusion) spring-loaded soft-tissue biopsy needles were used. Of the 73 total biopsy procedures performed, 64 procedures required the use of a bone biopsy needle system. The most commonly used bone biopsy needle system was the Arrow OnControl 11/13 needle system, used in 32 of 64 procedures. Of the 73 total biopsy procedures performed, 22 required the use of a soft-tissue biopsy needle. Nine of the 73 biopsies were performed with only a soft-tissue coaxial needle system. The most commonly used soft-tissue biopsy needle was the Achieve 16-ga biopsy needle. It was used in 6 of the 22 procedures in which a soft-tissue biopsy needle was used. Tables 1 and 2 describe in detail the different biopsy needles used for infection and lesion biopsy. Patient positioning on the table was dependent on the location of the cervical spine lesion and the approach used to perform the biopsy. For the posterolateral and directly posterior approaches, the patient was placed prone on the table. For the directly lateral approach, the patient was placed in the decubitus position with the side of the lesion up. 
For the anterolateral approach, the patient was placed supine. Posterolateral (60 procedures), directly posterior (1 procedure), directly lateral (3 procedures), or anterolateral (9 procedures) approaches were used to sample the lesions (Fig 1), depending on the location of the pathology. Our most commonly used approach was a posterolateral approach through the lateral mass into the vertebral body. This approach allows avoidance of important neurovascular structures in the neck but is tedious to perform because the needle has the farthest to travel within bone to get to the site of abnormality. A direct posterior approach was used for a single spinous process lesion. The anterolateral approach was used occasionally. This approach is somewhat treacherous because the needle must be threaded among multiple important structures, including the carotid artery laterally and the trachea and esophagus medially, and it is often difficult to avoid the thyroid for lesions in the lower cervical spine. We commonly used this approach if the patient had a prior radical neck dissection because there were fewer important structures to avoid in these cases. A lateral approach was used in cases in which the mass extended into the lateral soft tissues or was near midline at a site deemed too difficult to reach by a posterolateral approach. In this approach, the needle needs to be passed between the external and internal carotid arteries and the vertebral artery. The internal jugular vein can sometimes, but not always, be avoided with this approach. With this approach in the upper cervical spine, it is imperative that the course of the vertebral artery is known because the location is variable. CT angiography was occasionally performed immediately before the biopsy to better delineate the vascular anatomy and plan the safest biopsy path to minimize the risk of vascular injury. Of the 73 patients who underwent biopsy, 6 had corresponding CT angiograms. In these cases, an average of 80.0 mL of ioversol contrast (Optiray 350; Mallinckrodt, St. Louis, Missouri) was given intravenously to help localize vascular structures in relation to the tumor (Figs 2 and 3). All samples were analyzed at the institution where the biopsy was performed. Patient Characteristics A total of 73 patients underwent CT-guided cervical bone biopsies. Sixty-two percent of the patients were men and 38% were women. Fifty-three percent (39/73) of the cervical biopsies were for clinical/imaging concern for infection (Fig 4), and 47% (34/73) were for primary tumors or metastatic disease. A sufficient sample was obtained for histopathologic and microbiologic analyses in 96% (70/73) of the biopsies. Forty-six percent (18/39) of those samples taken for infection had positive culture growth. Two intraprocedural complications occurred. One patient became hypotensive and did not respond to the IV bolus; the procedure was terminated after 2 samples had already been obtained. The patient subsequently stabilized and had no further complications. The second patient became hypotensive and bradycardic before the biopsy needle could be placed. The patient subsequently stabilized and had no further complications. A cervical spine biopsy was performed 2 days later without complication. To characterize lesion size, we measured the longest axis of each lesion; the average lesion size was 18.2 mm, ranging between 5.0 and 40.0 mm. Ten lesions involved the entire vertebral body. 
Of the 34 biopsies performed for bone lesions in our study, the most common location of lesions, including the 10 cases in which the lesion encompassed the entire vertebral body, was the C7 cervical vertebra, which accounted for 29.4% (10/34) of the lesions. At this level, 5 were within the vertebral body, 3 were in the transverse process, and 2 were in the lateral mass. The locations of the lesions that made up the remaining 24/34 biopsies were as follows: C1 lesions (2/24), with both lesions in the vertebral body/anterior arch; C2 lesions (7/24), with 4 lesions centered in the vertebral body alone, 2 lesions centered within the lamina, and 1 centered in the lateral mass; C3 lesions (2/24), both of which included the entire vertebral body; C4 lesions (2/24), of which 1 lesion was centered within the vertebral body and 1 lesion was located within the C4 transverse process; C5 lesions (4/24), with 2 lesions centered within the vertebral body, 1 lesion within the C5 spinous process, and 1 lesion centered in the lateral mass; and C6 lesions (7/24), with 6 lesions centered in the vertebral body and 1 lesion centered within the lateral mass. Of the 34 biopsies obtained, 70.6% (24) were malignant. The most common malignant lesions were myeloma and adenocarcinoma, each found in 4 patients (Table 3). CT Radiation Dose Our retrospective study also looked at dose-length product (DLP) in all 73 CT-guided cervical bone biopsies. For the 34 biopsies that were specific to primary tumors or metastatic disease, the range for DLP was 122-1767 mGy × cm, with an average of 712 mGy × cm. For the 39 biopsies specific to clinical/imaging concern for infection, the range for DLP was 163-3223 mGy × cm, with an average of 932.4 mGy × cm. Combining DLPs for all 73 spinal biopsies yielded the same range of 122-3223 mGy × cm, with an overall average DLP of 829.8 mGy × cm. The outliers at the upper end of the DLP range corresponded to patients who underwent ablation on the same day that the cervical biopsy was performed, which increased those patients' DLPs and thus our calculated DLP average. When considering the most recent spinal biopsies, specifically from January 2015 to January 2017, which account for 47.9% (35/73) of all biopsies performed, there was a decreased average DLP of 639 mGy × cm. This decrease in average DLP for biopsies of the last 2 years can be attributed to increasing awareness of scanning parameters, advances in scanner technology, and an institutional emphasis on decreasing diagnostic and procedural radiation doses. DISCUSSION Multiple publications have demonstrated the safety and utility of percutaneous bone biopsies, with fewer focusing on the spine and even fewer on the cervical spine. To our knowledge, this is the largest study to date on percutaneous CT-guided cervical bone biopsies. Compared with similar studies of spinal bone biopsies, our diagnostic yield, around 96% for histopathologic and microbiologic analyses of all biopsies, is in line with prior reports. The high diagnostic yield of not just cervical bone biopsies but also biopsies throughout the entire spine is important to consider when looking at past imaging techniques and the nonspecific nature of radiologic imaging alone for yielding a definitive diagnosis. In 2012, a study by Kaltsikis et al 5 on core needle biopsies of spinal lesions similarly emphasized the importance of the CT-guided bone biopsies themselves to render more definitive histopathologic diagnoses. 
For their study, 79 biopsies of the spine were performed, 5 of the cervical spine, with a diagnostic yield of 95% and a diagnostic accuracy of 97.3%. Most spinal biopsies (52%) performed in the study by Kaltsikis et al also followed our technique, using a posterolateral approach during biopsy. Given the relatively smaller spinal anatomy and the increasing importance of the neural structures as one ascends the cervical spine, choosing the appropriate anatomic approach to safely perform the biopsy is an important consideration. We most commonly use a posterolateral approach for biopsy of the cervical spine, but other approaches to consider include the lateral, anterolateral, and directly posterior approaches, depending on the location of the lesion. Of the 73 patients who underwent the procedure at our institution, there were only 2 minor intraprocedural complications and no postprocedural complications. There was no vascular or neurologic injury. In 2013, Huang et al 6 published a 2-year prospective study on the incidence of complications following percutaneous CT-guided biopsies of spinal lesions, including soft tissues and bone. Of the 386 patients, the reported rates of bruising, level of discomfort/pain, and fever did not exceed 16.1% in any category. In 2006, Peh 7 demonstrated low complication rates on CT-guided biopsies of the spine, ranging from 0% to 10%, with <1% risk of major complication. CONCLUSIONS This study supports the addition of percutaneous cervical spine biopsies to the growing number of studies demonstrating the safety and efficacy of spine biopsies, which can obviate the need for open biopsies and their associated risks.
v3-fos-license
2017-10-02T09:56:32.209Z
2017-01-01T00:00:00.000
10585027
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://thesai.org/Downloads/Volume8No9/Paper_52-Design_Patterns_and_General_Video_Game_Level.pdf", "pdf_hash": "e2a7daadf386c774fc4aad9978bafd03e8dff826", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42500", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "e2a7daadf386c774fc4aad9978bafd03e8dff826", "year": 2017 }
pes2o/s2orc
Design Patterns and General Video Game Level Generation Design patterns have become a vital solution for a number of problems in software engineering. In this paper, we have performed rhythmic analysis of the General Video Game Level Generation (GVG-LG) framework and have discerned 23 common design patterns. In addition, we have segregated the identified patterns into four unique classes. The categorization is based on the usage of identified patterns in game levels. Our future aim is to employ these patterns as an input for a search based level generator. Keywords—General video game level generation; rhythmic analysis; procedural content generation; design pattern; search based level generator I. INTRODUCTION With the passage of time, digital games have become a large industry. In 2014, the gaming industry generated more than 47 billion dollars worldwide [1]. However, with expansion, this industry is also facing a number of problems. The most important aspect in this regard is the total cost and budget that is being utilized for game development. Game content upholds a significant chunk of game development and, with technical improvement in devices like smartphones, the content is becoming more complex and demanding. Therefore, the rapid development of game content is vital [10]. Procedural Content Generation (PCG) is the algorithmic creation of game content with less human intervention. Procedural content generators capture game rules as an input and then generate essential content for a game. PCG has been used frequently by indie game developers to generate diverse content including characters [3], terrains [3], [9], dungeons [4] and levels [5]-[8], [11], [18]. Level generation has been the most significant and long-standing problem in the PCG domain. Yet, most of the level generation work has been done for specific games [5]-[7], [18]. Generating content for a single type of game is important, but it undermines the capability and reusability of a generator. On the other hand, a level generator that can generate levels for multiple games poses considerable challenges. In this regard, an important step has been made by introducing the General Video Game Level Generation (GVG-LG) framework [8]. This framework generates levels for multiple games, unlike other level generators. The GVG-LG framework is comprised of Random, Constructive and Search-Based Level Generators (SB-LG). The initial effort was to identify design patterns from the GVG-LG framework and to employ them as objectives for the SB-LG. In this study, we have performed rhythmic group analysis for the identification of design patterns. After the analysis of each game presented in the GVG-LG framework, 23 unique design patterns were identified. These patterns were further classified into four different categories. The central aim of this research was to utilize these design patterns as objectives for the SB-LG in the GVG-LG framework. Our effort for pattern identification is inspired by the work done for Super Mario Bros (SMB) [6]. The paper is further divided into five sections. The second section explains the existing knowledge about PCG, the importance of design patterns in level generation and level generation for general video games. The third section of the paper presents the GVG-LG competition and analysis for identification and classification of design patterns. Lastly, we argue about the application of design patterns and their usage as objectives for the SB-LG. A.
Procedural Content Generation PCG is the algorithmic creation of game content with limited or indirect user input [2]. Content includes the assets of a game, i.e., maps, quests, textures, characters, rules, terrains, dungeons, levels, sprites, etc. PCG is not a new domain; it was already used in the 1980s to generate hundreds of stars in Elite [17]. Most of the algorithms that are used for the generation of content are constructive and generate-and-test algorithms [17]. Constructive algorithms generate the content once and do not iterate upon it for further content improvement. On the other hand, generate-and-test algorithms first generate the content and then iterate upon it to make it of sufficient quality. In the literature, these algorithms are referred to as search-based algorithms. Along with its advantages, PCG also has some limitations; for example, creating a generator for each game may require more time and cost than the manual creation of the content [11]. The main reason for creating a general level generator is to overcome such disadvantages. In addition, if we want to create content with ultimate control and with specific details, the best choice is to create the content manually [12]. Therefore, the control and evaluation of content in PCG pose challenges. B. Level Generation Level generation is the oldest and most complex task in the PCG domain. It requires an understanding of all the elements of a game and how to fit them into a level. The procedural generation of levels has received notable attention and various studies have been conducted in this regard. Most of the work has been focused on the generation of levels for specific games [7], [11], [18]. C. Design Patterns and Game Levels Alexander initially developed patterns for problem-solving. A pattern consists of two components: a problem and its solution. The problem refers to a common and recurring design element in object-oriented development [13]. In a software application, design patterns give designers insight into architectural knowledge and provide a template for many situations [20]. In games, patterns are the problems created by designers for players to solve [5]. There is a collection of possible design choices in a game that can provide architectural knowledge to a designer. In other words, these design choices are architectural chunks of a game design which can automate game development. Design patterns have been used previously for the generation of levels for specific games. Hullett et al. used design patterns for generating levels in a first-person shooter game [18]. Similarly, Dahlskog et al. [6] identified patterns of enemies, gaps, valleys, multiple paths, and stairs to generate levels for SMB. Initially, the authors proposed a straightforward way of combining the discovered design patterns into a game level [5]. In addition, the authors used vertical slices of existing levels as design patterns and generated levels of sufficient quality [6]. In a recent study [7], a multi-level generator was also proposed. In this approach, three layers of abstraction for design patterns (meso, micro, and macro) were proposed and game levels were generated by using the SB-LG. The literature review gives a clear indication of the usage of design patterns for generating game levels. D.
General Video Games Level Generation To the best of our knowledge, most of the level generation work has been done for specific games like SMB [5]-[7], [14] and Rogue [15]. These generators possessed sufficient advantages. However, the problem lies in the re-usability, development time and cost of such generators. Preferably, the grand goal of Artificial Intelligence is to model general solutions that can be applied to a particular set of problems. For video games, this can only be done when we have a method to describe the games. The Video Game Description Language [9] was developed originally for Stanford General Video Game Playing. This language has mostly been used to tackle the problem of general games. For the general video game level generation problem, an important step was identified in [11], where a video game description language was used to generate multiple levels for general games (Sokoban, Lava, Block Faker, Gem and Destroy Game). Though the generator possessed notable advantages, it had no framework against which to compare other generators. Similarly, Neufeld et al. [16] introduced a general video game level generator using a description language and Answer Set Programming. The generator was tested against three different games, and the generated levels had a structure similar to many of the existing levels. In this regard, a significant step has been taken by introducing the GVG-LG framework. The framework is based on the GVG-AI framework and allows users to create and test their own level generators against a variety of games [8]. Three distinct generators, Random, Constructive and SB-LG, were introduced within this framework. After detailed experimentation, the SB-LG proved to be the best of the three. The SB-LG is based on an evolutionary algorithm, which takes an array of tiles as input and generates a level for the game. A. Rhythmic Groups Rhythmic groups are short and non-overlapping sets of components that unfold an area of challenge. This approach helps to recognize challenging areas within a game level and provides a way to discover the complication behind such areas [19]. Rhythmic groups are quite modular and therefore assist in pattern identification and re-usability in a game level. In this research, rhythmic analysis was applied to a set of games to investigate design patterns inside a level. For this purpose, a game level is divided into cells. A cell is a section of game-play that ends where the player can choose a new path. Cells inside a level design help to analyze the structure and to provide a catalog of several paths through a level. The paths may be of diverse difficulties, depending on the structure and dimensions of the cell. B. Search for Patterns The GVG-LG framework is built upon the GVG-AI framework. It consists of 92 different games with 5 levels for each game. The level of each game is divided into small groups to identify the challenging areas through rhythmic group analysis. By analyzing the GVG-LG framework, it was found that most of the games had a common design structure with common elements. Therefore, the primary focus was on the underlying structure of game levels for the identification of design patterns. Design patterns are categorized into four classes on the basis of their rationale in the level (a small data-structure sketch of these classes follows the list):
• Solid Sprites: block the movement of the player.
• Collectible Sprites: can be destroyed by the player on interaction.
• Harmful Sprites: are harmful and can kill the player on interaction.
• Enemies: agents that have ammunition and are harmful to the player.
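To make the classification concrete, the sketch below shows one way the four sprite classes and the pattern catalog could be represented in code. This is only an illustrative data structure: the class and field names are hypothetical, and the occurrence probabilities are placeholders rather than values measured from the GVG-LG games.

```python
from dataclasses import dataclass
from enum import Enum, auto

class SpriteClass(Enum):
    """The four pattern classes identified from the GVG-LG games."""
    SOLID = auto()        # blocks player movement
    COLLECTIBLE = auto()  # destroyed/collected on interaction
    HARMFUL = auto()      # kills the player on interaction
    ENEMY = auto()        # armed agents hostile to the player

@dataclass
class DesignPattern:
    """One identified pattern, e.g. 'Wall' or 'Risk and reward'."""
    name: str
    sprite_class: SpriteClass
    description: str
    occurrence_probability: float  # placeholder; would be estimated from existing levels

# A few example catalog entries; the full catalog would hold all 23 patterns.
CATALOG = [
    DesignPattern("Boundary", SpriteClass.SOLID,
                  "solid sprites forming the level dashboard", 0.9),
    DesignPattern("Wall", SpriteClass.SOLID,
                  "sprites connected vertically/horizontally to block a path", 0.7),
    DesignPattern("Risk and reward", SpriteClass.COLLECTIBLE,
                  "collectible placed next to an enemy", 0.3),
]
```

A catalog of this form could then be handed to a search-based generator of the kind discussed in the following sections.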
1) Analysis of Existing Games for Solid Sprites: In this section, rhythmic group analysis is applied to the GVG-LG games to identify design patterns for solid sprites. In Fig. 1, five cells are highlighted for the recognition of patterns. Cell 1 consists of a square-shaped solid block or sprite. Whenever the player meets a solid sprite in his way, he moves up or down, which provides a transition to a different path. Cells 2 and 3 are rectangular in shape. These two cells have the same purpose of creating a wall, but here the player requires more effort to pass through. The structure of cell 2 shows that it can be obtained by connecting two or more solid sprites vertically, and similarly cell 3 can be obtained by connecting them horizontally. In a similar way, cell 4 represents the boundary of the level, which can be established by assembling solid sprites vertically and horizontally without blocking any internal space of the level. Boundary sprites make a dashboard and allow a player to play inside a specified area. To completely block an area or to form a room inside a level, these sprites can be connected in a two-dimensional way. Cell 5 consists of a movable sprite. The player can use a key to unlock such a sprite to find a path. The analysis of existing levels shows some interesting aspects of level design for games. The structure of existing cells can be obtained by assembling solid sprites using different patterns. Table 1 shows some common design patterns for the placement of solid sprites:
• Single: a single solid sprite at a free space.
• Boundary: a collection of solid sprites forming the game dashboard.
• Wall: two or more sprites connected vertically or horizontally to block a path.
• Room: vertically and horizontally connected sprites surrounding an area.
• Movable: a sprite that can be moved after the player unlocks it with a key.
2) Analysis of Existing Games for Collectible Sprites: Almost all 2D platform games have collectible sprites in the form of rewards. Collectible sprites are objects in a level that can be destroyed by the player on interaction and provide a reward, such as points, coins or weapons [19]. In Fig. 2, five cells are identified for the collectible sprites. Cell 1 consists of a single sprite at a free space; the player requires little effort to deal with it. Cells 2 and 3 show sprites in grouped form, where a player needs more effort for interaction. If the player wants to acquire maximum points in less time, he may choose the path where sprites are in grouped form. Similarly, cell 4 consists of a collectible sprite along with an enemy, and cell 5 has a collectible sprite hidden between harmful sprites, which creates a challenging environment for the player. Moreover, these sprites may move in single or multiple lines. Each line of sprites may move in the same direction or in opposite directions. Table 2 shows the collectible sprite patterns, all of which can be destroyed on player interaction, but the player may require different skills for each:
• Single: a single sprite at a free space.
• Group: two or more sprites together.
• Single line and moving: multiple sprites in a line, moving in the same direction.
• Multi-line and moving: multiple lines of sprites, each line moving in the direction opposite to its neighbouring line.
• Risk and reward: a collectible sprite together with an enemy at a place.
• Hidden: collectible sprites surrounded by other types of sprites.
3) Analysis of Existing Games for Harmful Sprites: In Fig.
3, cell 1 presents a single sprite at a place and cell 2 shows multiple sprites together, while cell 3 consists of two different types of harmful sprites. In addition, the hole presented in cell 4 may also be harmful and, if designed using multiple patterns, will pose a challenge for the player. Table 3 shows the patterns of harmful sprites for the GVG-LG framework:
• Single: a single sprite at a place.
• Group: two or more harmful sprites at a place.
• Multi-type: harmful sprites of multiple types together at a place.
• Hole: single or multiple holes together.
By increasing the numbers and types of these sprites, the player may face a more difficult environment to play in. In levels where goals are surrounded by a group of harmful sprites, as in the fire game, the player cannot reach his goal without defeating these harmful sprites. 4) Analysis of Existing Games for Enemies: The enemy patterns presented in Table 4 may make a meaningful difference in the game-play. For example, two enemies together at a place can block the player's path in an effective way. Similarly, enemies moving in multiple lines and in multiple directions can give a harder challenge to the player than enemies moving in a single line. The player requires different skills to defeat enemies of each type; therefore, if enemies of multiple types in multiple lines are placed in a level, then the game-play required to proceed to the next level becomes more demanding. A. Design Pattern-Based Level Generator The suggested 23 design patterns will give a new experience to players by providing better and enhanced gameplay. In the proposed technique, it is suggested that these identified patterns be provided as an input to the SB-LG, which will then generate a level of a game by using specified constraints about any game. In PCG, search-based content generation is a special case of the generate-and-test approach [17]. In such generation, an evaluation function is used to assign a fitness value to the generated content. Similarly, assigning a fitness value to newly generated content depends upon previously generated content. A defined population of content instances is placed in system memory. For each generation, these contents are evaluated and assigned a fitness value. In SMB, the SB-LG takes input slices from the first level, and that first level is generated by using a constructive approach. In this case, though, the SB-LG will take patterns from the available array and will create levels by connecting and rearranging these patterns. To construct a level generator effectively, a developer must understand two major ideas: firstly, the selection of design patterns that make up the level for a game, and secondly, the way they fit together to create an entire level that will be playable and well-balanced. Here, it is suggested that a probability value be assigned to each design pattern on the basis of its occurrence in existing games. A comparison between the occurrence of design patterns and a set of GVG-LG games is shown in Fig. 4. Similarly, there should be a defined sequence for the selection of design patterns. For example, the boundary pattern will be selected first, and after its implementation other patterns from the same class or a distinct class will be placed inside it, because the boundary provides a layout for a level that encompasses all other sprites.
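A minimal sketch of this proposal is given below: a toy generate-and-test loop that scores candidate pattern sequences against target occurrence probabilities and always places the boundary pattern first. The pattern names, probabilities, and the mutation scheme are invented for the example; this is an illustration of the idea rather than the actual GVG-LG search-based generator.

```python
import random

def fitness(level_patterns, target_probabilities):
    """Score a candidate level by how closely its pattern frequencies match
    the occurrence probabilities observed in existing games."""
    total = max(len(level_patterns), 1)
    score = 0.0
    for name, target in target_probabilities.items():
        observed = sum(p == name for p in level_patterns) / total
        score -= abs(observed - target)  # smaller deviation -> higher fitness
    return score

def generate_level(target_probabilities, length=20, generations=200):
    """Toy generate-and-test loop: mutate a pattern sequence and keep improvements.
    'Boundary' is always placed first so the other patterns fit inside the dashboard."""
    names = [n for n in target_probabilities if n != "Boundary"]
    best = ["Boundary"] + random.choices(names, k=length - 1)
    best_score = fitness(best, target_probabilities)
    for _ in range(generations):
        candidate = best[:]
        candidate[random.randrange(1, length)] = random.choice(names)
        score = fitness(candidate, target_probabilities)
        if score > best_score:
            best, best_score = candidate, score
    return best

# Hypothetical target probabilities, as might be read off a chart like Fig. 4.
targets = {"Boundary": 0.05, "Wall": 0.30, "Risk and reward": 0.15,
           "Group (harmful)": 0.25, "Enemy line": 0.25}
print(generate_level(targets))
```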
Fig. 4 shows the occurrence of the identified common design patterns in the given set of games. Game play-ability can be changed by increasing the quantity of these patterns inside a level. For this purpose, the SB-LG will assign a fitness value to each design pattern. Games such as Aliens and Rogue have a high probability for the presence of enemies. Therefore, changing the fitness value of the enemies pattern will enhance the play-ability of the game level. Similarly, from Fig. 4 it is found that, to create a level layout, the boundary pattern must be selected first so that other sprites can be placed inside it. This approach may give significantly better output by placing variations of patterns and increasing the length of the game platform. In this section, two patterns are discussed in detail to find the impact of patterns on the enhancement of level design. A description of multi-type and randomly moving enemies is given in Table 5. To make a level difficult for the player, enemies of multiple types are placed in such a way that they move randomly across their positions, which does not allow a player to get through with a long jump. Similarly, a description of grouped harmful sprites is given in Table 6. By placing multi-type harmful sprites in different places, the player needs good decision-making power to protect himself from them. If a player successfully solves a pattern, then he may face the next challenge from the same group. A group of multiple harmful sprites can make game-play and survival in a level more difficult for a player than a single harmful sprite. On the other hand, if the number of sprites in a group is increased for each level, then it may provide a sequential progression to the next level. V. CONCLUSION AND FUTURE WORK In this paper, we have discussed the ongoing work on a design pattern-based level generator. This paper highlights the importance of design patterns and how design patterns can play a significant role in level generation for general video games. Rhythmic group analysis was applied to a given framework to identify some common design patterns. The level of each game was divided into small challenging areas called cells. This approach assists in identifying patterns within a level. For the initial experimentation, 23 distinct design patterns were proposed. Afterwards, these design patterns were divided into four classes: solid sprites, collectible sprites, harmful sprites, and enemies. Each game level has a design chunk composed of the above-mentioned sprites. We claim that arranging these design patterns in a sequence of difficulties and using them as an objective for the SB-LG will give a new experience to the player. In this proposed method, the SB-LG will take these patterns from the available array and will create levels for a game. It is suggested that the selection of the design pattern should be done in a sequential way and on the basis of its probability value in existing game levels. This technique may give significantly better output by placing variations of patterns and increasing the length of the game platform. Finally, it is concluded that these design patterns provide a useful and tangible way to generate levels for general video games. Fig. 1. Analysis of an existing level for solid sprites. Fig. 2. Analysis of an existing level for collectible sprites. Fig. 3. Analysis of an existing level for harmful sprites.
Fig. 4. Comparison of design patterns and a set of GVG-LG games. (The presence of each design pattern in a game is shown by a tick mark. The occurrence of solid sprites and harmful sprites is high in the first 7 games, whereas enemies have a high occurrence in the last 5 games.) TABLE IV. ENEMY SPRITES FOR GVG-LG. TABLE VI. DESCRIPTION OF GROUPED HARMFUL SPRITES.
v3-fos-license
2019-11-27T14:04:41.131Z
2019-11-21T00:00:00.000
208300795
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1999-4923/11/12/623/pdf", "pdf_hash": "a63456a1591aceb239cfd9b0fe77f6fc461fb17d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42501", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "d4694d669c7b62f8480ca0e3a029ee0cf1221f46", "year": 2019 }
pes2o/s2orc
Impact of Different Mucoadhesive Polymeric Nanoparticles Loaded in Thermosensitive Hydrogels on Transcorneal Administration of 5-Fluorouracil In a previous paper a thermosensitive hydrogel formulation based on chitosan or its derivatives (TSOH), containing medicated chitosan nanoparticles (Ch NP) for transcorneal administration of 5-fluorouracil (5-FU) was described. The Ch NP-containing TSOH allowed a time-constant 5-FU concentration in the aqueous for 7 h from instillation. The aim of the present work was to study the impact of the surface characteristics of new NP contained in TSOH on ocular 5-FU bioavailability. The Ch derivatives used to prepare NP were quaternary ammonium-Ch conjugate (QA-Ch), S-protected derivative thereof (QA-Ch-S-pro), and a sulphobutyl chitosan derivative (SB-Ch). All NP types had 300–400 nm size, 16–18% encapsulation efficiency, and retained the entrapped drug for at least 15 h. Drug release from TSOH containing NP based on QA-Ch or QA-Ch-S-pro was virtually equal, whereas with TSOH containing NP based on SB-Ch was significantly slower. Instillation, in rabbit eyes, of NP-containing TSOH based on QA-Ch or SB-Ch led to a plateau in the aqueous concentration vs. time plot in the 1–10 h range with significantly enhanced area under curve (AUC). Negative charges on the NP surface slowed down 5-FU release from TSOH while positive charges increased NP contact with the negatively charged ocular surface. Either results in enhanced ocular bioavailability. Introduction Topical ophthalmic preparations such as eyedrops are the most commonly used ocular drug delivery systems. Unfortunately, the intraocular bioavailability of their active substances is generally less than 5%, due to the anatomical and physiological characteristics of the eyes, including tissue barriers, such as cornea, lens, conjunctiva, and sclera, and various physiological functions, such as lacrimation and consequent dilution, and drug expulsion by tear turnover. As a consequence, to obtain the appropriate intraocular drug concentrations, frequent instillations of eyedrops are needed, which can cause toxic side effects and damage to the ocular tissue [1]. Thus, the traditional ophthalmic preparations and relevant therapeutic protocols cannot provide and maintain effective drug concentrations in the corneal tissue, and this results in a poor ocular bioavailability. Type II mucin from porcine stomach, were purchased from Sigma-Aldrich (Milan, Italy). The QA-Ch conjugate was synthesized from Ch according to Zambito et al., 2013 [19], the thiolated S-protected derivative of QA-Ch was synthesized according to Fabiano et al., 2018 [20]. Reduced molecular weight hyaluronic acid (rHA) (viscosimetric molecular weight 470 kDa) was prepared as described by Zambito et al., 2013 [19]. FITC labelling of QA-Ch, QA-Ch-S-pro, and SB-Ch was carried out as previously described [21]. The QA-Ch50 polymer, used to prepare the TSOH, was synthesized according to Zambito et al., 2013 [19]. In the code, 50 means that it was prepared by maintaining the temperature at 50 °C for the entire duration of the reaction. All aqueous solutions/dispersions were prepared with freshly distilled water. QA-Ch was characterized by 1 H NMR to determine the degree of substitution with the small side chains containing adjacent quaternary ammonium groups and the length of such chains. Protected thiols present on QA-Ch-S-pro chains were determined by polymer reduction and subsequent quantification of the 6-mercaptonicotinamide protecting group. 
Only polymers that had the same characteristics as previously obtained [20] were used in this work. Synthesis and Characterization of Sulphobutyl Chitosan (SB-Ch) SB-Ch was synthesized from Ch as previously described [22]. Briefly, 1,4-butane sulphone (3 equivalents per N-acetylglucosamine unit) was added to a Ch solution in acidic water (1% w/w Ch, 2% w/w acetic acid). The mixture was allowed to react at 60 °C for 6 h. The resulting solution was poured into acetone. The precipitated product was resuspended in demineralized water and purified by dialysis against water for 3 days. After dialysis, the polymer solution was lyophilized to obtain the purified SB-Ch (Figure 1). Preparation of Medicated NP FITC-labeled or unlabeled NP based on QA-Ch or QA-Ch-S-pro were prepared by self-assembly upon addition of rHA. In detail, a solution of rHA 0.2 mg/mL in phosphate buffer (0.13 M, PB pH 7.4) containing 6.25 mg of 5-FU was added dropwise (500 μL) to 5 mL of 2 mg/mL polymer solution in the same buffer, under stirring at room temperature. Similarly, to prepare FITC-labeled or unlabeled NP based on SB-Ch, a solution of TPP 2 mg/mL in demineralized water containing 6.25 mg of 5-FU was added dropwise (700 μL) to 5 mL of 2 mg/mL SB-Ch in demineralized water, under stirring at room temperature. The final 5-FU concentration in NP systems based on QA-Ch, QA-Ch-S-pro, or SB-Ch was 1.25 mg/mL, corresponding to the concentration contained in the commercial 5-FU eyedrops. After their preparation, the NP dispersions were checked for particle size and zeta potential (ζ) at a temperature of 25 °C (Zetasizer Nano ZS, Malvern, UK). The ζ values of medicated NP based on QA-Ch, QA-Ch-S-pro, or SB-Ch were determined after NP centrifugation (2000 rpm for 30 min) and re-suspension of the resulting sediments in 1.9 mL of a 0.08 M HCl solution containing 195 μL of NaOH 1 N and 0.8 g/mL of β-GP. Their drug-entrapment efficiency (EE) was evaluated by subjecting the dispersion to centrifugation (20,000 rpm for 30 min at 4 °C) and analyzing the supernatant spectrophotometrically at 266 nm. The EE was calculated as follows, using the appropriate calibration curve:
EE (%) = [(Mt − Ms)/Mt] × 100, where Mt is the total mass of 5-FU used for the preparation of NP and Ms is the mass found in the supernatant. Preparation of Thermosensitive Hydrogels (TSOH) Containing NP Medicated with 5-FU Thermosensitive ophthalmic hydrogels (TSOH) were prepared according to Fabiano et al. [13]. Briefly, 400 mg of Ch and 100 mg of QA-Ch50 were dissolved in 18 mL of a 0.08 M HCl solution. The resulting solution was kept under magnetic stirring at 4 °C. Then, freshly prepared 5-FU-medicated NP, based on QA-Ch, QA-Ch-S-pro, or SB-Ch, were added in the sol state under magnetic stirring at 4 °C, before the addition of 450 µL of β-GP (0.8 g/mL) solution to obtain the TSOH. Dynamic Dialysis Studies To study the reversible drug binding by NP in fluid dispersion we used an already reported procedure and theory [23,24]. Briefly, a porous cellulose membrane (cut-off 12.5 kDa) was used to separate the donor compartment of the dialysis cell from the receiving phase (100 mL of PB pH 7.4 for 5-FU-loaded QA-Ch or QA-Ch-S-pro NP, or 100 mL of demineralized water for 5-FU-loaded SB-Ch NP). The system was thermostated at 35 °C for 5 h, while maintaining sink conditions. At time t = 0, 5 mL of freshly prepared 5-FU-loaded NP, or plain drug solution (control), or this solution containing 2 mg/mL of QA-Ch or QA-Ch-S-pro dissolved in PB pH 7.4, or 2 mg/mL of SB-Ch in water, was introduced in the donor compartment of the cell. In all cases, at the end of the experiment, the receiving phase was analyzed spectrophotometrically to determine the drug transport. In the case of NP, the receiving phase was also analyzed for particle size. Interrupted-Dialysis Studies The dynamic dialysis experiment, described in Section 2.5, was stopped after 1, 2, 3, 5, 15, or 24 h from the start. At each time interval, the donor phase was centrifuged (2000 rpm for 30 min at 4 °C) to determine the drug fraction contained in the NP matrix, the NP dispersion medium, and the acceptor medium, according to reference [25]. The results were plotted as drug fraction in each phase versus time. Studies of 5-FU Release from NP-Containing TSOH 5-FU release from TSOH was carried out using a cell and a procedure reported by Fabiano et al. [13]. The gel (0.5 mL) containing medicated NP was introduced in the cylindrical cavity of the cell. A porous cellulose membrane (cut-off 12.5 kDa) was used to separate the gel from the receiving phase (30 mL of PB pH 7.4 for 5-FU-loaded NP based on QA-Ch or QA-Ch-S-pro, or 30 mL of demineralized water for 5-FU-loaded NP based on SB-Ch). At time t = 0, the cell was introduced in a beaker containing the receiving phase thermostated at 35 °C and stirred at 300 rpm. The receiving phase was analyzed spectrophotometrically at 30-min intervals to determine the drug transport kinetics. Confocal Microscopy and Image Analysis To evaluate the possibility of re-dispersing NP into the hydrogel system, the FITC-labelled, 5-FU-medicated NP based on QA-Ch, QA-Ch-S-pro, or SB-Ch were dispersed in the TSOH at room temperature and observed under a confocal laser-scanning microscope (Zeiss LSM 880 with Airyscan, Carl Zeiss, Jena, Germany).
The representative fluorescence confocal micrographs of NP were taken in the liquid (21 °C) and gel (4 °C) states, using a 63× Apochromat NA = 1.4 oil-immersion objective with the pinhole aperture of the confocal system at 1 Airy unit. The excitation wavelength was set at 488 nm (10-20 µW power emission at the objective), whereas emission was collected in the 500-550 nm range. The pixel dwell time was adjusted to 1.52 µs and 512 × 512 pixel images were collected. Image analysis was carried out with ImageJ v.1.52o (NIH, Bethesda, MD, USA) software. Particle diameters were calculated by tracing an equatorial line over each bead (average of 5-10 beads), collecting the fluorescence profile and fitting it with a Gaussian function. The full width at half maximum (FWHM) of the best-fitting curve was assumed as the particle diameter. Micro-Rheological Characterization of NP Mucoadhesive Properties Micro-rheological measurements were carried out using a Zetasizer Nano ZS, Malvern, with a detection angle of 173° and a temperature of 25 °C, applying the theory reported by Dodero et al. [26]. The micro-rheological characterization of freshly prepared 5-FU-loaded QA-Ch or SB-Ch NP was performed using mucin from porcine stomach, Type II. Ocular mucins are not commercially available, so porcine gastric mucin was used because it has also been applied as a model substance in other studies investigating ocular mucoadhesion [27]. In order to obtain reliable micro-rheological data, the conditions for the tracer-sample combination were verified as reported by Dodero et al., 2019 [26]. A dispersion of mucin 3 mg/mL in water was filtered using a cellulose acetate filter (pore size 0.45 µm). A sample was taken from the filtrate and lyophilized to calculate the concentration of dispersed material (1.85 mg/mL). The filtered mucin dispersion was diluted 10 times with a solution of NaCl 0.9%. Then, 5 µL of tracer sample (polystyrene latex particles, diameter 500 nm, Beckman, 5 µL/mL) and 5 µL of NP dispersions prepared as described in Section 2.4 were added to the diluted mucin dispersion. Micro-rheological tests were performed to evaluate viscoelastic properties and assess NP mucoadhesive properties on the basis of the viscosity changes caused by NP addition to a mucin solution. In Vivo Studies For the in vivo studies we used male New Zealand albino rabbits weighing 3-3.5 kg, treated as prescribed in the guidelines from the European Community Council Directive 2010/63 and approved by the Animal Care Committee of the University of Pisa (D.L. 2014/26, 12 March 2019). Fifty µL (one drop) of the following two types of ophthalmic formulations were instilled in the lower conjunctival sac: (1) a dispersion of QA-Ch-based NP medicated with 1.25 mg/mL of 5-FU in TSOH and (2) a dispersion of SB-Ch-based NP medicated with 1.25 mg/mL 5-FU in TSOH. For the entire duration of the experiments each rabbit eye was checked for signs of conjunctival/corneal edema and/or hyperemia [28]. Before the aspiration of aqueous humor (~60 µL) from the anterior chamber of the eye, the rabbits were anesthetized with one drop of Novesina®. The 5-FU concentration in aqueous humor was determined by high-performance liquid chromatography (HPLC) using the apparatus and the mobile phase described by Fabiano et al. [13]. An Aeris 3.6 µm PEPTIDE XB-C18 Å, 250 × 4.6 mm column, equilibrated at 30 °C, was used and UV detection was set at 266 nm.
Standard curves were obtained analyzing six standard drug solutions (concentration range 0.3-1.25 µg/mL) in acetonitrile mixed with aqueous humor (2:1). The resulting mixtures were centrifuged, and the acetonitrile was removed by evaporation at 50 °C. The resulting aqueous product was lyophilized and re-dispersed in a volume of mobile phase corresponding to the initial volume of standard solution. Standard curves were linear (r² > 0.995, limit of detection 0.2 µg/mL). The retention time was 8.2 min. The concentration of each unknown sample was determined as described above, using a standard curve produced on the same day. Data Treatment Linear plots were obtained by linear regression analysis of data from in vitro experiments. The relevant slope, intercept, and coefficient of determination (r²) were calculated. The significance of differences was evaluated by Student's t-test (p < 0.05). For the in vivo experiments, the linear trapezoidal rule between 0 and 10 h was used to calculate the area under the curve (AUC), and the statistical differences were evaluated using the method reported by Schoenwald et al. [29]. Characteristics of Medicated NP The size, polydispersity index, zeta potential (ζ) and encapsulation efficiency (EE) for medicated NP based on QA-Ch, QA-Ch-S-pro, or SB-Ch are found in Table 1. The size and polydispersity index values were not significantly different from each other or from those for NP based on Ch prepared in our preceding paper, Fabiano et al. [13] (342.5 ± 15.2). The ζ values for medicated NP based on QA-Ch or QA-Ch-S-pro were positive, in agreement with the presence of quaternary ammonium ions on their surface, whereas for medicated NP based on SB-Ch the ζ value was negative, in agreement with the presence of superficial sulfonic groups. The EE was in the range 15-18%, with no significant difference between the three cases. The prepared NP, differing only in their external charge, allowed us to evaluate the impact of this property on the system's ability to promote the intraocular absorption of 5-FU. Dynamic Dialysis Studies Dynamic dialysis data were used to compare reversible drug interactions with the NP in fluid dispersions. According to [23,24], the data obtained were plotted as ln(Cd/Cd0) × 100 versus t using the following Equation (2): ln(Cd/Cd0) = −Km Ff t, where Cd is the drug concentration in the donor phase, Cd0 is the drug concentration in the donor phase at t = 0, Ff is the non-interacting drug fraction in the donor phase, and Km is the dialysis-rate constant.
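A minimal sketch of how Equation (2) can be turned into numbers is shown below, assuming the log-linear dialysis relation stated above: the slope of ln(Cd/Cd0) versus t for the plain-drug control gives Km (Ff = 1), and the slope for an NP dispersion divided by Km gives Ff. The time points and ratios in the example are invented placeholders, not data from this study.

```python
import numpy as np

def fitted_slope(times_h, cd_over_cd0):
    """Least-squares slope of ln(Cd/Cd0) versus t; per Equation (2),
    ln(Cd/Cd0) = -Km * Ff * t, so the slope equals -Km * Ff."""
    slope, _ = np.polyfit(np.asarray(times_h, dtype=float), np.log(cd_over_cd0), 1)
    return slope

# Illustrative (made-up) donor-phase ratios sampled at 1-5 h.
times = [1, 2, 3, 4, 5]
control_ratio = [0.90, 0.81, 0.73, 0.66, 0.59]   # plain 5-FU control, Ff = 1
np_ratio = [0.93, 0.86, 0.80, 0.74, 0.69]        # NP dispersion

km = -fitted_slope(times, control_ratio)          # dialysis-rate constant
ff = -fitted_slope(times, np_ratio) / km          # non-interacting drug fraction
print(f"Km = {km:.3f} 1/h, Ff = {ff:.2f}, bound fraction = {1 - ff:.2f}")
```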
All plots, reported in Figure 3, were significantly linear (r² values, 0.90-0.98), which indicated that in all cases (plain 5-FU control, or NP based on QA-Ch, QA-Ch-S-pro, or SB-Ch) Equation (2) was obeyed, and for plain 5-FU (control) the slope of the relative log-linear plot, used to calculate Ff, was equal to Km. The slopes of the straight lines reported in Table 2 indicated a 5-FU binding with all polymers in solution higher than the binding with the NP dispersions. From here, it was deduced that the drug fraction reversibly interacting with the polymers in solution was significantly higher than that adsorbed on the NP surface. This was probably due to an interaction between 5-FU and the polycationic polysaccharide Ch. In the case of NP, the positive groups of Ch could be bound to the crosslinker and, hence, less available for 5-FU binding. This, nevertheless, should not impair the NP effectiveness in vivo; in fact, nanoparticles made of mucoadhesive polymers are themselves more mucoadhesive than the corresponding parent polymers and hence likely to strongly adhere to the ocular surface [10,20,30]. Table 2. Results of dynamic dialysis data plotted in Figure 3 and obtained according to Equation (2). 5-FU Release from NP The procedure used to study 5-FU release from NP was based on interrupted dialysis, as described in Section 2.6. A dispersion of each freshly prepared NP type, loaded with 5-FU and not separated from the non-entrapped drug, was introduced in the donor compartment of the dialysis cell. Hence, not more than 15-18% of the whole 5-FU amount contained in each dispersion was associated with the NP phase, as shown in Table 1.
From knowledge of the drug amount used for each NP preparation, the drug amount determined for the NP dispersion medium in the donor compartment (DM phase) at each interruption time, and the cumulative amount determined for the drug transferred into the receiving medium (RM phase) during each interruption time, the relevant % 5-FU contained in the NP matrix, i.e., the NPM data reported in Figure 4, could readily be calculated. These data indicated that the 5-FU fraction immobilized in the NP matrix remained virtually constant at the initial value of 15-18% over the first 15 h of the experiment, while it seemed to fade after 24 h, probably due to some degradation of the NP. Therefore, it is understood that all NP types are able to retain the 5-FU load for a term sufficient for the NP to be internalized by corneal cells. It should be noted that the present NP, prepared from Ch derivatives, were able to retain 5-FU for longer than those prepared by Fabiano et al. from Ch (15 vs. 5 h) [13]. Drug Release from NP-Containing TSOH Data on drug release from NP-containing TSOH, obtained as described in Section 2.7 and plotted as percentage of drug released versus t or √t, are reported in Figure 5a,b, respectively. The release study lasted 5 h, since the 5-FU release study for all NP types had shown that the 5-FU % entrapped in the NP was virtually constant for at least 15 h. The drug amount released vs. √t was in all cases linear, with comparatively small ordinate intercepts (between −0.99% and 2.88%) and high r² values (between 0.96 and 0.99, n = 3). This pattern in all cases fitted a well-known model assuming that the release was entirely governed by drug diffusion in the releasing vehicle. The slope of each straight line allowed comparison between the different cases on the basis of drug diffusivity in the hydrogel. The data listed in Table 3 show that 5-FU release from TSOH containing NP based on QA-Ch or QA-Ch-S-pro was not significantly different from drug release from TSOH containing NP based on Ch, whereas drug release from TSOH containing NP based on SB-Ch was significantly slowed down with respect to the above, reasonably by the negative charges on the surface of the NP dispersed in this TSOH. Indeed, the TSOH was prepared using a chitosan derivative containing fixed positive charges that could electrostatically interact with the negative charges of SB-Ch, and this interaction could slow down the 5-FU release from the TSOH.
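The diffusion-controlled analysis described above amounts to a linear regression of the released percentage against the square root of time; the slope is the release constant compared across formulations in Table 3. A small illustrative sketch follows, with synthetic data standing in for the experimental release profiles.

```python
import numpy as np

def sqrt_time_fit(times_h, released_percent):
    """Fit released% = slope*sqrt(t) + intercept and return (slope, intercept, r2);
    the slope serves as the diffusion-controlled release constant."""
    x = np.sqrt(np.asarray(times_h, dtype=float))
    y = np.asarray(released_percent, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1 - ss_res / ss_tot

# Illustrative (made-up) cumulative release sampled every 30 min over 5 h.
t = np.arange(0.5, 5.01, 0.5)
released = 12.0 * np.sqrt(t) + np.random.normal(0, 0.5, t.size)
print(sqrt_time_fit(t, released))
```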
Table 3 (footnote): * Significantly different from all the others (p < 0.05). Confocal Microscopy and Image Analysis Particle fluorescence allowed their visualization in both the sol and gel states by means of fluorescence confocal microscopy [13]. From the particle images, their size could be inferred (Table 4, Figure 6), affording interesting information on the role of gelification on their aggregation status. Gelation did not affect the size of SB-Ch NP (Table 4, entries 1,2; Figure 6a,b), which remained close to the optical diffraction limit of the microscopy apparatus (~0.2 µm). Conversely, QA-Ch NP in the gel phase showed the presence of two main particle populations: one with a size comparable to that of SB-Ch NP, the other twice as large (Table 4, entry 3; Figure 6c). A bimodal pattern was observed also with QA-Ch-S-pro NP, although in this case the smaller peak was about 0.6 µm and the larger was, on average, more than 4 µm with a rather large dispersion, as indicated by the relevant standard error (SE) values (Table 4, entry 4; Figure 6d). These findings strongly suggest a moderate aggregation of QA-Ch NP, each aggregate possibly consisting of two particles sticking together, in the sol state, and a much larger aggregate state for QA-Ch-S-pro NP in the gel state. These data encouraged us to continue the study only with NP based on QA-Ch and SB-Ch. Micro-Rheological Characterization of NP Mucoadhesive Properties The elastic (or storage) modulus, G', the viscous (or loss) modulus, G'', and the complex viscosity, η*, are reported in Figure 7. As can be seen, there was an increase in η* of the mucin dispersion in the presence of NP based on either QA-Ch or SB-Ch, compared to the plain mucin dispersion. The increase of η* was reflected in the increase of both the G' and G'' moduli. In particular, the increase in G' is indicative of the development of an inter-connected microstructure between mucin macromolecules and NP based on QA-Ch, resulting in a stronger mucoadhesivity of these NP with respect to the NP based on SB-Ch. However, G' for NP based on SB-Ch was higher than G' for mucin, due to the intrinsic mucoadhesivity of Ch, i.e., the pristine material used to prepare SB-Ch.
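For reference, the three micro-rheological quantities are linked by the standard oscillatory-rheology relation |η*| = √(G'² + G''²)/ω, which is the quantity whose increase upon NP addition is read here as a mucoadhesive interaction. The short sketch below simply encodes that relation; the example values are not the moduli reported in Figure 7.

```python
import numpy as np

def complex_viscosity_magnitude(g_storage_pa, g_loss_pa, omega_rad_s):
    """|eta*| = sqrt(G'^2 + G''^2) / omega: magnitude of the complex viscosity
    computed from the storage and loss moduli at angular frequency omega."""
    g_storage_pa = np.asarray(g_storage_pa, dtype=float)
    g_loss_pa = np.asarray(g_loss_pa, dtype=float)
    return np.sqrt(g_storage_pa ** 2 + g_loss_pa ** 2) / omega_rad_s

# Illustrative values only (Pa and rad/s).
print(complex_viscosity_magnitude([0.10, 0.25], [0.30, 0.45], omega_rad_s=6.28))
```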
These data demonstrate that the sign of the NP surface charge can actually influence their mucoadhesivity. Moreover, it is known that the mucus glycoproteins bear negatively charged sialic moieties that are capable of forming ionic bonds with oppositely charged chemical species [31]. Figure 7. Elastic modulus, G' (a), viscous modulus, G'' (b), and complex viscosity, η* (c), of QA-Ch and SB-Ch NP with respect to mucin dispersion. In Vivo Tests During each experiment, we observed that all the ophthalmic drops instilled in rabbit eyes caused no conjunctival/corneal edema and/or hyperemia. The pharmacokinetic profiles in the aqueous and the relative AUC values are reported in Figure 8 and Table 5, respectively. The results of the in vivo tests shown in Figure 8 demonstrate the ability of the TSOH containing NP based on QA-Ch or SB-Ch to increase the 5-FU bioavailability with respect to the control, TSOH, and Ch NP-containing TSOH [13]. Indeed, the AUC values relative to QA-Ch NP+TSOH and SB-Ch NP+TSOH listed in Table 5 are significantly higher than those relative to the control, TSOH, and Ch NP+TSOH, with a concentration plateau in the range 1-10 h. These results demonstrate that the QA-Ch NP+TSOH and SB-Ch NP+TSOH systems have much more ability to prolong the drug precorneal residence time than the control, TSOH, or Ch NP-containing TSOH. In view of the ability of NP to retain the drug for longer, the plateau in the 1-10 h range is in keeping with the hypothesis of an intraocular drug absorption controlled by gel erosion in the pre-corneal area, accompanied by release of drug-loaded NP that are then internalized in corneal cells. The data altogether demonstrate the importance of NP mucoadhesion properties and of their ability to interact with the vehicle. In fact, QA-Ch NP-containing TSOH showed an AUC value higher than that for TSOH or Ch NP+TSOH, probably thanks to their fixed positive charges that prolong the drug retention time and increase the drug contact with the anterior surface of the eyes and thereby enhance ocular absorption via paracellular transport through the tight junctions of corneal epithelia [16]. On the other hand, SB-Ch NP+TSOH showed an AUC value similar to that of QA-Ch NP+TSOH, probably due to its ability to slow down the 5-FU release from the vehicle, as demonstrated in Section 3.5. It is interesting to note that no differences were seen, in Figure 8, in the concentration in the aqueous vs. time profiles between any of the three NP-containing TSOH formulations. This can be ascribed to the presence in all formulations of a significant 5-FU dose fraction not entrapped in NP, but free to permeate across the cornea by passive diffusion. Such a drug fraction is unaffected by the different NP ability to be internalized in corneal cells, which can indeed be influenced by the NP surface characteristics.
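The AUC values compared above follow from the linear trapezoidal rule mentioned under Data Treatment. A minimal sketch is given below; the concentration-time points are invented for illustration and are not the rabbit aqueous-humor data of Figure 8.

```python
def auc_trapezoidal(times_h, concentrations):
    """Linear trapezoidal AUC: sum of 0.5*(C_i + C_{i+1})*(t_{i+1} - t_i)."""
    auc = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times_h, concentrations),
                                  zip(times_h[1:], concentrations[1:])):
        auc += 0.5 * (c0 + c1) * (t1 - t0)
    return auc

# Illustrative (made-up) concentration-time points over 0-10 h.
t = [0, 1, 2, 4, 6, 8, 10]                     # h
c = [0.0, 0.9, 1.0, 1.0, 0.95, 0.9, 0.85]      # ug/mL
print(f"AUC(0-10 h) = {auc_trapezoidal(t, c):.2f} ug*h/mL")
```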
These present results are in agreement with those shown in a preceding paper, where it was found that the more effective NP were able to concurrently adhere to the ocular surface and strongly interact with the drug molecules in solution [10]. After all, although less mucoadhesive than QA-Ch NP, the SB-Ch NP also showed some mucoadhesivity due to the intrinsic mucoadhesivity of the Ch backbone. Conclusions These results indicate that the present SB-Ch NP-containing thermosensitive hydrogels are able to prolong 5-FU ocular residence thanks to the synergistic effect of the negative charges on the NP surface and the positive charges present in the TSOH. Furthermore, in the case of QA-Ch NP-containing thermosensitive hydrogels, the presence of positive charges on the NP surface prolongs their contact with the corneal and conjunctival surfaces, which are negatively charged. As a result, both the QA-Ch-based and the SB-Ch-based NP-containing TSOH were able to increase the ocular 5-FU bioavailability. NP-containing thermosensitive hydrogels could be administered as conventional eyedrops and still represent an alternative, more effective formulation than the commercial 5-FU eyedrops, with a reduced 5-FU applied dose and instillation frequency. However, stability studies of the formulations must be carried out in the future to understand whether they can really be commercialized.
v3-fos-license
2018-04-03T02:59:07.569Z
1997-01-10T00:00:00.000
19815431
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.jbc.org/content/272/2/1297.full.pdf", "pdf_hash": "93888a438a36556f24bb7671b46ce33444050b13", "pdf_src": "Highwire", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42502", "s2fieldsofstudy": [ "Biology" ], "sha1": "c79ce44b7df6e58d7226202370c60f9dfd87e9d8", "year": 1997 }
pes2o/s2orc
A lymphocyte-specific Ltk tyrosine kinase isoform is retained in the endoplasmic reticulum in association with calnexin. A lymphocyte-specific murine Ltk tyrosine kinase isoform was previously found to reside in the endoplasmic reticulum and to be potently activated upon treatment of cells with alkylating or thiol-oxidizing agents. Based on these observations, a unique role for Ltk was proposed as an endoplasmic reticulum-resident transmembrane kinase regulated by redox changes (Bauskin, A. R., Alkalay, I., and Ben-Neriah, Y. (1991) Cell 66, 685-696). To analyze why this Ltk isoform is retained in the endoplasmic reticulum, we investigated its behavior in over-expressing cells. Our results indicate that lymphoid Ltk exhibits a dual Nexo/Ccyt and Ncyt/Cexo transmembrane topology in transfected cells. This unusual behavior may be responsible for retention in the endoplasmic reticulum since mutants with an increased number of positive amino acids downstream of the transmembrane segment exhibit a conventional Nexo/Ccyt orientation and proceed to the cell surface. Endoplasmic reticulum-retained Ltk forms a prominent complex with the chaperone calnexin, suggesting that Ltk may be retained by the mechanism that prevents surface expression of inappropriately folded proteins or incompletely assembled protein complexes. Ltk is a membrane-spanning tyrosine kinase that is most closely related to the product of the human non-Hodgkin's lymphoma ALK proto-oncogene (1,2). The restricted expression pattern of murine Ltk in subsets of lymphoid and neuronal cells initially suggested a role as a receptor for an unknown lymphoid or neuronal growth or differentiation factor (3). However, this hypothesis became less likely when one of the most widely expressed Ltk isoforms was found to reside in the endoplasmic reticulum (ER) 1 where the protein showed extensive disulfide-linked oligomerization concomitant with potent activation of its kinase activity upon treatment of cells with alkylating or thiol-oxidizing agents. Based on this observation, a unique function for Ltk was proposed, as a ligand-independent ER-resident kinase regulated by redox changes (4). Several observations complicated this hypothesis, however.
First, its low expression level makes endogenous Ltk protein hard to detect, even by such sensitive methods as immune complex kinase labeling (3). The most convincing results arguing for ER residence were thus obtained with Ltk over-expressing transfected cells (4). In these cells, Ltk undergoes extensive oligomerization, which may have contributed to its ER retention. Second, ER retention and redox regulation was demonstrated for only one of at least four murine Ltk isoforms. The analyzed protein is unusual for harboring only 109 amino acids upstream of its transmembrane (TM) segment and is encoded by a lymphocyte-specific mRNA that includes an unconventional CUG translational start codon (3). A subsequently identified human LTK cDNA, extended further upstream, included a regular AUG start codon and predicted a receptor kinase with a considerably larger putative ligand-binding segment (5). Although the identification of the larger human LTK gene product shed some doubt on the significance of the ER-retained murine isoform, we subsequently found that mRNAs predicting analogous larger murine Ltk proteins also existed but had very restricted neuronal expression patterns (6). Because transport to the cell surface is believed to occur by default once mammalian proteins enter the ER (7,8), Ltk must be retained either because it contains an ER retention/retrieval signal (9) or because it is recognized by chaperones that prevent transport of incorrectly folded proteins or partially assembled protein complexes (10). To distinguish between these possibilities, we analyzed the intracellular trafficking of the lymphoid Ltk isoform in over-expressing transfected cells. Our results argue that ER retention may not be the normal fate of Ltk and provide support for the hypothesis that positively charged amino acids adjacent to membrane spanning segments play important roles in determining the TM topology of some integral membrane proteins (11,12). This work was supported by Public Health Services Grants NS28735 (to A. B.) and AI33507 (to S. P.). EXPERIMENTAL PROCEDURES Site-directed Mutagenesis-To generate mutants, we transferred the insert of the pltk2.2 cDNA (3) to the pALTER vector and used the Altered Sites mutagenesis system (Promega) to change the asparagines of five N-X-S/T potential glycosylation acceptor sites into serines. The same strategy was used to change uncharged juxtamembrane residues into positively charged ones. Oligonucleotides used to mutate glycosylation sites were: NS1, AAG CAT CCC AGC TGC AGT CAC TG; NS2, AGC TGT GGA TAG TGT CAC TTG CAT; NS3, TCA CCA GCC AGT GTC ACT CTA C; NS4, CAG GCT GGG AAG CAG GTC ACT G; and NS5, GAA CAG TGC AGT GGC TCC TCC TCA. The juxtamembrane mutants were made using primers Ltk-JM-Pos1, CTA GTG AAC CAG AAG CGT AAG GGC CTG TGG GGG ACC, and Ltk-JM-Pos2, AAC CAG AAG TGT CAG CGC CTG CGG GGG ACC AGG CTG CCA. All mutations were confirmed by sequence analysis. Wild-type and mutant cDNAs were expressed using the SV40 promoter-containing expression vector pJ3 (13). Generation of Hybrid cDNAs-Ltk-human epidermal growth factor receptor (EGFR) hybrids were generated by PCR amplifying human EGFR cDNA segments with hybrid Ltk-EGFR oligonucleotides. The Ltk-TM-EGFR hybrid was made using oligonucleotides (5′) AAA GAG CTA GCT GGG CCA GGT CTT GAA GGC TG and (3′) AAA CGA AGC TTA AGC TCC CTC TCC TGC AGC AG. Unique NheI and HindIII sites (underlined) were used to subclone the amplified fragment into the Ltk expression vector. The Ltk-JM-EGFR construct was made using 5′ primer ATG TGT GCA GTC CTG ATT CTA GTG AAG CGC CAC ATC GTT CGG AAG CGC ACG and the 3′ Ltk-TM-EGFR primer. Subcloning was by means of BsgI and HindIII sites. 
Transfection and Immunoprecipitation-Subconfluent NIH 3T3 cells were transfected as described (14) using 10 μg of Ltk expression vector and 1 μg of pSV2neo as a selectable marker. Ltk expression in G418 (0.5 mg/ml) resistant clones was analyzed by RNA blots and immunoprecipitations. Transient expression of Ltk in COS cells was achieved by DEAE-dextran transfection as described (6). Briefly, approximately 10 μg of plasmid was transfected per subconfluent 9-cm culture dish, and cells were labeled for 4 h with 0.3 mCi of 35S-labeled amino acids (Translabel; DuPont NEN) 2 days after transfection. In pulse-chase experiments, cells were labeled for 30 min before adding an excess (5 mM) of nonradioactive methionine to the culture medium. Cells were lysed in Triton buffer (1% Triton X-100, 100 mM NaCl, 2 mM MgCl2, and 10 mM Tris HCl, pH 8.0) containing 1 mM phenylmethylsulfonyl fluoride and 10 μg/ml each of antipain, aprotinin, and chymostatin. Cell lysates were precleared for 1 h with 10 μg of rabbit IgG conjugated to protein A-Sepharose and subsequently immunoprecipitated with 2 μl of rabbit anti-Ltk serum (3) for 16 h at 4°C. Immune complexes collected with protein A-Sepharose were washed six times with radioimmune precipitation buffer (15). Samples were analyzed by electrophoresis on 12.5% SDS-polyacrylamide gels or digested first with endoglycosidase H (endo H) or N-glycanase using conditions recommended by the manufacturer (Genzyme). Immunofluorescent Staining-To detect Ltk on the cell surface, live transfected COS cells grown on glass coverslips were placed on ice and incubated with either polyclonal Ltk antiserum or preimmune serum for 1 h. After five washes in ice-cold PBS, the cells were incubated on ice with a fluorescein-conjugated F(ab)2 goat anti-rabbit IgG (Caltag Laboratories, San Francisco, CA) for 30 min and washed as before. Dilutions of the antisera in PBS containing 0.1% sodium azide were chosen so that untransfected cells exhibited minimal background fluorescence. The cells were then fixed in 2% paraformaldehyde in PBS and mounted in glycerol containing 2.3% (w/v) DABCO (1,4-diazobicyclo-(2,2,2)-octane, Sigma), 10% (v/v) 0.2 M Tris, pH 8.0, and 0.02% NaN3. Staining of permeabilized cells was performed at room temperature. Cells were first fixed in 2% paraformaldehyde in PBS for 30 min, then permeabilized with 0.05% saponin in PBS for 5 min, and subsequently stained and mounted as described above. The antisera were diluted in PBS with 0.05% saponin. Stained cells were viewed with an Olympus BH2-RFL fluorescence microscope at ×600 magnification using oil immersion and photographed using Kodak T-MAX 400 film and the Olympus PM-10ADS photomicrographic system. ER-retained Ltk Isoform A Exhibits a Dual TM Topology-Four Ltk proteins that differ upstream of their TM segment are encoded by tissue-specific murine mRNAs that arise by alternative splicing and alternative promoter usage. 
A pair of alternatively spliced transcripts that we have only detected in the C1300/Neuro-2A neuronal cell line (6) includes standard AUG start codons and predicts proteins with 360 and 421 amino acids upstream of their TM domains (Fig. 1, isoforms C and D). Two mRNAs expressed in several lymphoid cell lines and in brain include CUG start codons and predict proteins with 109 and 170 amino acid upstream segments (Fig. 1, isoforms A and B). The lymphocyte-specific isoform A protein was previously found to be retained in the ER (4). COS cells transfected with isoform A vectors produce two 72- and 78-kDa anti-Ltk immunoreactive glycoproteins (Fig. 2, lane 2). Endoglycosidase H, which specifically removes asparagine-linked sugar groups that have not undergone modification by Golgi-resident α-mannosidase (16), trims both glycoproteins to a single 69-kDa species (Fig. 2, lane 3). Since asparagine-linked glycosylation occurs in the lumen of the ER (18) and since Ltk contains only two N-X-S/T potential glycosylation acceptor sites upstream of its TM segment, we and others previously suggested that differential glycosylation of either one or both of these sites may explain the two observed glycoproteins (3,4). However, since one of the upstream glycosylation sites (N2 in the diagram in Fig. 2) maps only 13 residues upstream of the TM segment, and since recent studies suggest that glycosylation sites close to TM segments may not be recognized by oligosaccharyl transferase (17), we further analyzed the nature of the 72/78-kDa doublet. To this end, we transfected COS cells with isoform A mutants in which either one or both of the upstream glycosylation acceptor sites had been disrupted by targeted asparagine to serine substitutions. Fig. 2, lanes 4-6, shows proteins precipitated from cells transfected with glycosylation site mutants NS2, NS1, and NS1+2 (NS refers to the sequence change; the numbers identify the sites mutated; see diagram in Fig. 2). As shown in lane 4, disrupting the N-X-S/T motif closest to the TM segment of Ltk did not alter the glycosylation of isoform A. Since a mutant with a single glycosylation site upstream of its TM segment thus continues to incorporate more than one endo H-sensitive oligosaccharide, sites elsewhere in Ltk must also be modified. Indeed, although disruption of the most upstream N-X-S/T motif resulted in the conversion of the 72-kDa protein into a 69-kDa form (lane 5), which was not further reduced in size upon endo H digestion (not shown), the 78-kDa glycoprotein was unaffected by this mutation. 
FIG. 1. Structure of four murine ltk gene products. The diagram shows sequence features of the proteins, including the location of alternative AUG and CUG start codons and the approximate position of a glycine-rich segment encoded in part by an alternatively spliced exon (AE). Also indicated are an EGF-like domain (EGF) immediately upstream of the TM segment and the kinase domain (KD). The lines below the diagram indicate the extent of the four Ltk isoforms. Both long and short ltk mRNAs are made in two versions that differ by a single alternatively spliced exon that inserts a glycine-rich 61-amino acid segment just upstream of the TM segment (6,20). This alternative exon is included in the major mouse brain ltk mRNA but is absent from the predominant lymphoid transcript (20). Transcripts predicting Ltk isoforms C and D have only been detected in a single murine neuronal cell line, but similar proteins may be more widely expressed in human cells. 
The 78-kDa glycoprotein also remained when both upstream N-X-S/T motifs were mutated together (lane 6). The part of Ltk downstream of its TM segment, which we had assumed to reside in the cytoplasm, harbors three additional N-X-S/T motifs (see diagram in Fig. 2). To test whether glycosylation of these sites accounted for the 78-kDa species, we disrupted all three motifs by asparagine to serine substitutions. Fig. 2, lane 7, shows that disrupting the downstream motifs did not affect the 72-kDa glycoprotein, but resulted in the conversion of the 78-kDa species into a 69-kDa protein that comigrates with unglycosylated Ltk. Mutating individual downstream sites resulted in intermediate size reductions, suggesting that all three sites are glycosylated (not shown). Disrupting all five glycosylation sites at either side of the TM segment of Ltk prevented all glycosylation (lane 8). Because addition of asparagine-linked core sugars occurs exclusively in the lumen of the ER (18), isoform A must be inserted into the ER membrane of transfected cells in both an N exo /C cyt (type I) and an inverted N cyt /C exo (type II) orientation (Fig. 3). To rule out that this unusual behavior is caused by over-expression, we generated stably transfected NIH 3T3 clones expressing approximately 100-fold less protein. Similar ratios of the 72- and 78-kDa glycoproteins were detected in all analyzed stable transfectants (e.g. Fig. 2, lane 9). Since a similar 72/78-kDa doublet also was previously detected in a murine B lymphocyte cell line (4) and upon in vitro translation of isoform A mRNA in the presence of microsomes (3), we conclude that isoform A Ltk exhibits a mixed TM topology in vitro and in several cell types expressing widely varying protein levels. ER-retained Isoform A Binds Calnexin-ER retention of several proteins, including unassembled components of integral membrane protein complexes, is mediated by the ER resident chaperone calnexin (19). To examine whether calnexin or other chaperones play a role in the ER retention of Ltk, we precipitated potential Ltk-containing complexes after lysing metabolically labeled transfected COS cells in 0.6% CHAPS. A 90-kDa protein, identical in size to calnexin, co-precipitated with isoform A Ltk under these conditions (Fig. 4, lane 3). The 90-kDa protein was not detected in precipitates from mock transfected cells (lane 1) and was unambiguously identified as calnexin in sequential immunoprecipitation-immunoblotting experiments, using anti-Ltk to precipitate and anti-calnexin to detect the blotted protein or vice versa (not shown). Interestingly, the 78-kDa glycoprotein was the predominant calnexin-associated species in these experiments (Fig. 4, lane 4). Ltk Mutants with a Type I TM Topology Proceed to the Cell Surface-Since calnexin preferentially binds the 78-kDa inverted Ltk protein, we tested whether Ltk mutants with a predominant type I TM topology escaped ER retention. It is important to note in this respect that isoform A lacks an obvious N-terminal signal sequence (3) and shows no obvious bias of positive charges flanking its TM segment. 
FIG. 4 (legend, in part). The precipitating antibody is indicated above the lanes (p.i., pre-immune rabbit serum; cnx, anti-calnexin). Calnexin was detected using antibody SPA-860 (Stressgen, Victoria, B. C., Canada). 
Since the TM topology of integral membrane proteins without cleavable signal sequences is believed to be determined by the ratio of charged amino acids flanking their TM segments, with the most positive side in the cytoplasm (11,12), we analyzed the trafficking of several Ltk mutants that were engineered to contain extra positive amino acids downstream of the TM segment. Initial mutants were made by replacing membraneflanking segments of Ltk with the corresponding parts of the EGFR. Because the cytoplasmic juxtamembrane segment of the EGFR is highly positively charged (Fig. 5), Ltk hybrids containing this segment would be expected to exhibit an altered TM topology. Indeed, both Ltk-TM-EGFR and Ltk-JM-EGFR hybrids that harbor the EGFR juxtamembrane segment (see map in Fig. 5, and see "Experimental Procedures" for details), did not show the glycoprotein doublet indicative of a mixed TM topology. Rather, the predominant species observed in cells transfected with these mutants resembled type I oriented isoform A both in its size (ϳ72 kDa) and in the modest size reduction upon endo H digestion (Fig. 5, lanes 1-4). To directly test whether the increased number of positive charges in the juxtamembrane region was responsible for the altered behavior of the Ltk-EGFR hybrids, we made two additional mutants (Ltk-JM-Pos1 and Ltk-JM-Pos2) by site-directed mutagenesis. Each mutant harbored two additional lysine or arginine residues at the positions indicated in Fig. 5. COS cells transfected with both mutants predominantly expressed a 72-kDa endo H-sensitive (shown for the Ltk-JM-Pos1 mutant in Fig. 5, lanes 5 and 6). We infer from these results that in a manner similar to that observed for other proteins (11,12), the TM topology of isoform A Ltk can be modified by changing the number of charged residues downstream of its TM segment. All Ltk mutants that exhibit a predominant type I TM topology show a diffuse higher molecular weight species in immunoprecipitates (arrow in Fig. 5). We argued that this slower migrating species, which is not seen upon transfection of wildtype isoform A, might represent protein that has undergone Golgi-specific oligosaccharide modifications. To test this, we performed pulse-chase labeling and endoglycosidase digestion experiments. In pulse-chase experiments, cells transfected with the Ltk-TM-EGFR hybrid did not show the slower migrating species at early time points. At later time points, this species became progressively more prominent, consistent with it representing a processed end product (Fig. 6, left panel). The higher molecular weight species was resistant to endo H, but sensitive to N-glycanase digestion, suggesting the presence of Golgi-specific oligosaccharide modifications (Figs. 5 and 6, right panel). We conclude that Ltk mutants with a predominant type I TM topology proceed beyond the ER to the Golgi. To analyze whether Ltk juxtamembrane mutants proceed beyond the Golgi to the cell surface, we stained live, nonpermeabilized transfected cells by indirect immunofluorescence. In contrast to COS cells transfected with a wild-type isoform A construct, 20 -30% of cells transfected with the two Ltk-EGFR hybrids or with both site-directed mutants showed unambiguous surface fluorescence (Fig. 7). A type I TM topology thus correlates with cell surface expression of Ltk mutants. 
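To make the charge-ratio reasoning above concrete, the short sketch below tallies basic (K/R) and acidic (D/E) residues on either side of a TM segment and reports the resulting bias. The sequences are hypothetical placeholders, not the actual Ltk or EGFR juxtamembrane sequences (which are not given in this text); the sketch only illustrates how adding two lysine/arginine residues downstream of the TM segment shifts the predicted orientation toward the conventional type I topology.

```python
# Illustrative sketch of the "positive-inside" charge-balance argument.
# The sequences below are hypothetical placeholders, NOT the real Ltk or EGFR
# juxtamembrane sequences.

def net_positive_charge(seq: str) -> int:
    """Basic residues (K, R) minus acidic residues (D, E) in a segment."""
    return sum(seq.count(aa) for aa in "KR") - sum(seq.count(aa) for aa in "DE")

upstream = "NQSGTLPA"             # placeholder segment upstream of the TM helix
downstream_wt = "NQCQGLWGTGLPA"   # placeholder wild-type downstream segment
downstream_pos = "NQKRGLWGTGLPA"  # same segment with two extra K/R residues

for label, downstream in [("wild-type-like", downstream_wt),
                          ("juxtamembrane K/R mutant-like", downstream_pos)]:
    bias = net_positive_charge(downstream) - net_positive_charge(upstream)
    verdict = ("strong cytoplasmic bias, type I (Nexo/Ccyt) expected"
               if bias >= 2 else "little charge bias, mixed topology plausible")
    print(f"{label}: downstream-minus-upstream charge bias = {bias:+d} -> {verdict}")
```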
However, staining of saponin-permeabilized transfected cells showed that most mutant protein remained localized to the ER, indicating that surface transport of the mutant proteins remains inefficient (Fig. 7). DISCUSSION A unique role for Ltk as a redox sensing tyrosine kinase was proposed, when a widely expressed murine Ltk isoform was found to reside in the ER, and to undergo extensive disulfidelinked oligomerization concomitant with a 20-fold activation of its kinase activity upon treatment of cells with alkylating or thiol-oxidizing agents (4). Because transport to the cell surface is believed to occur by default when mammalian proteins enter the ER, we analyzed in this study why isoform A Ltk is ER retained. Our results indicate that isoform A Ltk exhibits a highly unusual dual TM topology in transfected cells and forms a prominent complex with the ER resident chaperone calnexin. The association with calnexin suggests that isoform A Ltk may be ER retained by the mechanism that prevents surface expression of incompletely folded proteins or partially assembled protein complexes. Its unusual dual TM topology may reflect the fact that isoform A lacks a functional N-terminal signal sequence. Indeed, as has also been observed for other integral membrane proteins without cleavable signal peptides, the TM topology of isoform A can be altered by increasing the number of positively charged amino acids downstream of the TM segment. Interestingly, all mutants that exhibit a predominant type I TM topology show evidence of proceeding beyond the ER to the cell surface. However, transport to the cell surface remains inefficient since most mutant protein remains endo H-sensitive and remains localized to the ER in immunofluorescent staining experiments. Several other points regarding the unusual behavior of Ltk are worth making. First, although calnexin predominantly associates with the 78-kDa inverted isoform A species, both topological forms of Ltk are ER-retained when co-expressed. This may be explained by the fact that isoform A undergoes exten-sive oligomerization in transfected cells (4). Second, the potent stimulation of the kinase activity of isoform A upon treatment of cells with thiol oxidizing agents may relate to its inverted TM topology and reflect oxidation of amino acids within the catalytic segment of the protein. Finally, although we have not extensively analyzed the trafficking of other Ltk isoforms, we previously noted that isoform C Ltk is also largely ER-retained (6). This ER retention cannot be attributed to an inverted TM topology of this protein since isoform C Ltk harbors a typical N-terminal signal peptide and shows no evidence of a dual TM topology when over-expressed. 2 Different mechanisms may thus contribute to the ER retention of different Ltk isoforms.
v3-fos-license
2023-07-16T06:17:07.560Z
2023-07-14T00:00:00.000
259903991
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "CLOSED", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/advs.202301084", "pdf_hash": "6a9c94daada122bf95ff15690aac9fc645149bef", "pdf_src": "Wiley", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42503", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "sha1": "fea5f0c8d3a0a60417a966d93e32662ad2730e41", "year": 2023 }
pes2o/s2orc
Villi Inspired Mechanical Interlocking for Intestinal Retentive Devices Abstract Intestinal retentive devices have applications ranging from sustained oral drug delivery systems to indwelling ingestible medical devices. Current strategies to retain devices in the small intestine primarily focus on chemical anchoring using mucoadhesives or mechanical coupling using expandable devices or structures that pierce the intestinal epithelium. Here, the feasibility of intestinal retention using devices containing villi-inspired structures that mechanically interlock with natural villi of the small intestine is evaluated. First, the viability of mechanical interlocking as an intestinal retention strategy is assessed by estimating the resistance to peristaltic shear between simulated natural villi and devices with various micropost geometries and parameters. Simulations are validated in vitro by fabricating micropost array patches via multistep replica molding and performing lap-shear tests to evaluate the interlocking performance of the fabricated microposts with artificial villi. Finally, the optimal material and design parameters of the patches that can successfully achieve retention in vivo are predicted. This study represents a proof-of-concept for the viability of the micropost-villi mechanical interlocking strategy to develop nonpenetrative multifunctional intestinal retentive devices for the future. Introduction [3][4][5][6][7][8][9][10][11] However, there are several challenges to the design and deployment of structures within the small intestine. Persistent gastrointestinal motility subjects devices to periodic compressive pressures of ≈20 mmHg and shear stresses of ≈1 N cm−2 at frequencies of 0.125-0.3 Hz. [20] Chemo- and mechano-adhesive approaches have been commonly used to overcome the aforementioned unique constraints and inform designs for intestinal retentive devices (Table S1, Supporting Information). [25][26] Chemo-adhesive technologies, however, are susceptible to fouling over time and are rapidly eliminated due to mucus turnover. Alternatively, mechano-adhesives consist predominantly of bio-inspired variations of penetrating microneedles such as hookworm-inspired tissue attachment mechanisms, [27] Theragrippers, [28] barbed microneedles inspired by the proboscis of spiny-head worms, [29] and biphasic swellable microneedles inspired by the proboscis of the endoparasite Pomphorhynchus laevis. [30] While these techniques show some promise in achieving retention in the small intestine, deployment of such tissue-piercing structures for prolonged periods may lead to bacterial infection. [31] Another mechano-adhesive approach includes friction enhancement between device substrates and intestinal tissue using elastomeric microstructures of aspect ratio ≈1:1. The microstructures were assessed for their ability to clamp endoscopic capsules in the small intestine by Kwon et al. 
[32][33][34] In this study, we evaluate the potential of mechanical interlocking between of highaspect-ratio (≈5:1) elastomeric microposts and the intestinal villi to resist peristaltic shear (Figure 1).Compared to low aspect ratio microstructured devices, we expect high-aspect-ratio microposts, with comparable geometry to that of the villi, to induce interlocking with the villi and resist peristaltic shear in the small intestine thereby enhancing its retention time (Figure S1 and Movie S1, Supporting Information).Shear interlocking, a bio-inspired technique observed in cuticular hairs of beetle wings, has inspired devices capable of reversible detachment. [35]While in beetle wings Van der Waals interactions lead to adhesion, in our study we evaluate the ability of collisions to retard movement of microposts and further mechanically interlock within the intestinal villi. Modeling and Mechanical Simulations of Micropost-Villi Interlocking First, we demonstrate the feasibility of mechanical interlocking strategy to prepare intestinal retentive devices through finite element analysis .Porcine intestinal tissue explants rapidly lose mechanical integrity due to lack of blood pressure in the capillaries of the villi (Figure S2, Supporting Information).Hence, as a substitute to ex vivo lap-shear tests with bio-mimetic microposts, mechanical simulations were used to predict the behavior of mechanically interlocked devices.Moreover, the use of computational models allowed us to assess the interlocking phenomenon under simulated peristalsis and scan the effects of multiple micropost array patch (MAP) design parameters on interlocking in an economic and timely manner. Two systems were modeled-a) MP-V Model: Head-on collision between a singular micropost and villus under peristaltic shear (Figure 2a,b) and b) MAP-VP Model: Interaction between MAPs and villi patch (VP) under peristaltic shear and contractile pressure (Figure 2d).Design features of MAPs were varied as followed: flat-tipped microposts with cubic arrangement (Flat-Cub), flat-tipped microposts with hexagonal arrangement (Flat-Hex), round-tipped microposts with cubic arrangement (Roun-Cub), and round-tipped microposts with hexagonal arrangement (Roun-Hex) (Figure 2e,g,f).Intestinal peristaltic shears range from 2×10 −8 to 3.5×10 −4 N cm −2 [36][37][38][39] while the contact forces varies along the gut due to its variable diameter and is estimated to be between 0.9 and 2.9 N cm −2 . [40,41]In the view of this variability, in our simulations we utilized the knowledge of estimated shear stress experienced by bolus, which is ≈1 N cm −2 [17,42] as our reference and subjected MP-V and MAP-VP simulation models to a peristaltic shear of 0.01 and 0.1 N cm −2 , respectively.Reduced shear stresses were applied to the models to ensure stability of the simulation environments, allowing us to gather significant and relevant information through them. In the MP-V model, the Euler-Bernoulli beam theory for large deflections (Equation ( 1)) was used to determine the maximum resistive force (F SimPer ) a villus may exert on the mobile micropost (Figure 2c), whereas in the MAP-VP model, interlocking efficacy of MAPs was quantified by estimating the maximum displacement of MAPs within the VP in the direction of peristaltic shear post definitive consolidation and under sustained peristaltic conditions (D SimPer ) (see the Experimental Section for details). 
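As a quick unit sanity check on the loads quoted above (added here for readability, not a calculation from the original study), the snippet below converts the cited ≈20 mmHg contractile pressure and the applied shears of 0.01 and 0.1 N cm−2 into SI units. The 20 mmHg pressure works out to roughly 2.7 kPa (≈2.7 mN mm−2), the same range as the 2.4 mN mm−2 preload used later in the lap-shear tests.

```python
# Unit sanity check for the loads used in the simulations (illustrative only).
MMHG_TO_PA = 133.322  # 1 mmHg in pascals

contractile_pressure_pa = 20 * MMHG_TO_PA                      # ~2.67 kPa
contractile_pressure_mn_mm2 = contractile_pressure_pa * 1e-3   # 1 Pa = 1e-3 mN/mm^2

shear_mp_v_pa = 0.01 * 1e4   # 0.01 N/cm^2 -> Pa (1 N/cm^2 = 10 kPa)
shear_map_vp_pa = 0.1 * 1e4  # 0.1 N/cm^2 -> Pa

print(f"20 mmHg ≈ {contractile_pressure_pa:.0f} Pa ≈ {contractile_pressure_mn_mm2:.2f} mN/mm^2")
print(f"MP-V shear   = {shear_mp_v_pa:.0f} Pa (0.01 N/cm^2)")
print(f"MAP-VP shear = {shear_map_vp_pa:.0f} Pa (0.1 N/cm^2)")
```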
Simulation Results The effect of microposts' Young's moduli (E m ) on maximum resistive force at varying overlapping extents () is summarized in Figure 3a.With an increase in E m and , F SimPer ranged from 80 μN ( 0.25 , E m50kPa )-1600 μN ( 0.95 , E m3MPa ).At 0.25 , microposts with E m ≥ 900 kPa bent negligibly (5-0.4 μm) while achieving an F SimPer comparable to microposts with lower stiffnesses that significantly deformed (500-10 μm).These observations suggest that stiffer microposts produce higher resistive forces through micropost-villus collision with minimum deformation compared to compliant microposts with lower stiffnesses.Further, as anticipated, it was observed that and F SimPer are proportional since with the distance to the contact point decreases and bending angle of the micropost increases. D SimPer of E m50kPa MAPs containing 81 microposts per patch (P 81 ) and with varying design features decreased by ≈62% and ≈79% on an average when MAP stiffness increased by 10x and 40x, respectively.Increasing MAP stiffness also induced an underdamped harmonic oscillatory behavior in the MAP-VP interlocking (Figure 3b).Upon increasing the areal density of the MAP, D SimPer (E m2.05 MPa ) decreased-an anticipated result based on the increased micropost-villi collisions (Figure 3c; and Figure S3 and Movie S2, Supporting Information). Further, MAPs with flat-tipped microposts in P 64 and P 36 designs consistently displayed lower D SimPer compared to their round-tipped counterparts (Movies S2 and S3, Supporting Information), i.e., D SimPer Flat-cub < Roun-cub and D SimPer Flat-Hex < Roun-Hex.This behavior was attributed to the additional surface area provided by the flat-tips which increases the frictional resistance and bolsters the footing of the microposts onto the substrate of the villi patch.The impact of this design difference on MAP-VP interlocking is patent in P 36 interlocking systems (Figure S3, Supporting Information) as round-tipped MAPs (E m2.05 MPa ) compared to their flat counterparts cease to portray the underdamped oscillatory behavior and flat-tipped systems display a decrease of D SimPer of 78% and 81% in the cubic and hexagonal arrangements, respectively.It was also observed that hexagonal arrangement of microposts in P 64 and P 36 designs compared to cubic consistently delivered reduced D SimPer .This observation was associated with the increased intra-micropost interactions that may be induced by the hexagonal arrangement of microposts and cubic arrangement of villi. In case of P 81 MAP designs, the spacing between the microposts (50 μm) is comparable to pitch of villi patch (40 μm).Hence factors like ease of penetration into the villi patch and overlap-ping extent may also influence interlocking efficacy of the MAPs.Overall, here the difference within the D SimPer values for various design parameters lied within ±0.01 mm and, Flat-Cub and Roun-Hex MAPs (E m 2.05 MPa) were least displaced under the applied shear (70 and 80 μm, respectively) (Figure 3d).The ratio of damping constants of Roun-Hex: Flat-Cub was estimated to be 1.03 signifying that Roun-Hex MAPs achieve their equilibrium position faster than Flat-Cub MAPs and can thereby consolidate more quickly within the villi under constant peristaltic shear. 
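The underdamped oscillatory behavior noted above suggests one way the damping constants could be characterized: by fitting a damped-cosine model to the displacement trace of the MAP. The sketch below is a generic illustration of such a fit on synthetic data; the model form, parameter values, and data are assumptions for demonstration, not the authors' actual fitting procedure or results.

```python
# Generic damped-oscillator fit, illustrating how a damping constant could be
# extracted from a MAP displacement trace (synthetic data, not the paper's).
import numpy as np
from scipy.optimize import curve_fit

def damped(t, d_eq, a, gamma, omega, phi):
    """Displacement settling to d_eq with an underdamped oscillation."""
    return d_eq + a * np.exp(-gamma * t) * np.cos(omega * t + phi)

# Synthetic "measured" trace standing in for a simulated D(t) curve.
t = np.linspace(0, 2.5e-3, 200)                      # seconds (TED-scale window)
true = damped(t, 70e-6, 30e-6, 3000.0, 2.0e4, 0.0)   # metres
noisy = true + np.random.normal(0, 1e-6, t.size)

popt, _ = curve_fit(damped, t, noisy, p0=[70e-6, 20e-6, 2000.0, 1.5e4, 0.0])
d_eq, a, gamma, omega, phi = popt
print(f"equilibrium displacement ≈ {d_eq*1e6:.1f} µm, damping constant ≈ {gamma:.0f} 1/s")
```

A larger fitted damping constant corresponds to faster settling, which is how the Roun-Hex versus Flat-Cub comparison above should be read.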
Overall, these observations indicate that MAP design features may influence mechanical interlocking of MAPs within the villi.Additionally, all MAPs achieved an overlapping extent of >90% within the villi solely due to the simulated jejunal contractile pressure.These results suggest that compliant micropost arrays with a size and spacing comparable to natural villi can fully integrate within villi thereby obviating the need for supplementary insertion mechanisms. [34,43] In Vitro Analysis of Mechanical Interlocking between MAPs and Artificial Villi We experimentally validate the in silico models by microfabricating micropost array patches via a robust, cost-effective, and adaptable multistep replica molding technique which employs digital light processing (DLP) 3D printed molds and further conducting in vitro lap-shear experiments between artificial villi and MAPs on customized equipment.Through these studies, we gain insight into the influence of material and design parameters of MAPs on their efficiency to interlock within the villi under shear. Using fabricated MAPs/villi patches, work of interlocking (W I ) and maximum interlocking force of MAPs-Villi interlocking (F Exp ) were measured using a customized lap-shear test bed (Figure 5a; and Figure S4, Supporting Information).Slip-stick events were observed from force-distance measurements as MAP microposts moved through the villi thus supporting the anticipated mechanical interlocking mechanism (Figure 5b). [52,53]Shear adhesion of flat patches on villi amplified upon the introduction of microposts due to induced interlocking (Roun-Hex, P 64 ); depending on the moduli of the patch, increase in adhesion work (W A ) due to mechanical interlocking ranged from 40% to 300% under a constant preload of 2.4 mN mm −2 (Figure 5c).This preload value of 2.4 mN mm −2 (F Per ) was selected as it corresponds to the in vivo intestinal contractile pressure of 20 mmHg. [54] I of MAPs (Roun-Hex, P 64 ) were measured as a function of moduli and micropost-villi overlap (Figure 6a).Experimental trends in E m and were consistent with mechanical simu-lations (Figure 6b).Further, the extent of micropost-villi overlap scaled with preload (Figure S5, Supporting Information) and tests showed that larger preload increases W I (Figure 6ci).Preload-dependency on interlocking has been reported in literature for bio-inspired systems that rely on Van der Waals forces (beetle-wings) [35] and in microhooks (climbing plants). [55]It was also noted that stiffer MAPs (E m2.05 MPa ) require preload >F Per to achieve >95%, while more complaint MAPs (E m50/500 kPa ) obtain >100% under F Per.Under F Per, compliant MAPs E m50kPa and E m500kPa display ≈400% and ≈100% increase in W I , respectively, compared to 0.95 (Figure 6cii).This indicates that stiffness of microposts affects not only the resistive force from micropostvillus collisions but also the ability for MAPs to penetrate with villi under a given preload. 
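Since the work of interlocking is reported as the shaded area under the measured force-displacement curve, the numerical integration involved is straightforward; the sketch below shows it with placeholder arrays (the real curves come from the lap-shear test bed, and the values here are invented for illustration only).

```python
# Work of interlocking as the area under a force-displacement curve
# (placeholder data; real curves come from the customized lap-shear test bed).
import numpy as np

displacement_mm = np.linspace(0.0, 4.0, 9)                     # mm
force_mN = np.array([0, 2, 5, 3, 6, 4, 7, 5, 8], dtype=float)  # mN, slip-stick-like profile

# Trapezoidal integration: W_I in mN*mm, which equals microjoules.
w_interlock = np.trapz(force_mN, displacement_mm)
f_max = force_mN.max()

print(f"W_I ≈ {w_interlock:.1f} µJ, F_Exp (peak interlocking force) = {f_max:.1f} mN")
```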
We compared the max resistive forces per micropost obtained from in silico and in vitro analyses (e.g., F SimPer and F Exp/micropost ).F Exp/micropost : F SimPer varied from 0.85 ± 0.467 (E m50kPa ) to 2.82 ± 1.32 (E m2.05 Mpa ) with a general trend of increased deviation with greater E m and (Figure 6d).This difference can be attributed to the contribution of frictional resistance during in vitro analyses which cannot be eliminated due to the microtextured surface of the microposts (R a = 25μm) (Figure 4c).Frictionenhancement with micropatterned structures for intestinal adhesion has proved beneficial. [33]We hypothesize that texture induced friction will only enhance mechanical interlocking and improve resistance to movement due to peristalsis in vivo.Other factors that contribute to this nonlinear interlocking-assisted adhesion by MAPs, which are unaccounted in MP-V models, include intra-micropost collisions and the effect of multiple simultaneous micropost-villi collisions.The synergistic influence of these factors is also noted in MAP-VP simulation models where MAPs, irrespective of the moduli and design, can counteract 10 times the shear applied to the MP-V models (0.01 N cm −2 ) via interlocking and immobilize within the villi (Movie S4, Supporting Information), whereas a single micropost-villus collision cannot resist the peristaltic shear (Movie S5, Supporting Information).We also observed an increase in the interlocking work with greater MAP pitch, an anticipated result which was also learned via simulations (Figure 6e).Overall, results from in silico modeling of mechanical interlocking were consistent with the in vitro experimental results.Accurate computational simulations substantiate this tool to design and optimize the device for in vivo retention of MAP-based devices under peristaltic conditions. MAP Design and Material Analysis to Optimize In Vivo Retention We use in silico modeling to predict the optimal MAP parameters for in vivo device retention.Through in silico analysis of mechanical interlocking, we learned that MAP stiffness and design parameters such as tip-geometry, pitch, micropost layout, etc., affect MAP's resistance to movement when mechanically interlocked with the villi.In this substudy, we aimed to maximize F SimPer by virtue of these parameters and thereby optimize in vivo residence time of MAP-based devices. Using the MP-V model, we observed that F SimPer peaks when E m is 9 MPa ( 0.85 ).Deformation of microposts with E m > 9 MPa is negligible (6 μm-6 nm) since their movement is unobstructed by presence of the villus (Figure 7a).As established previously, F SimPer increases with overlapping extent between microposts and villi.The villi of our small intestine are densely arranged in a close-packed geometry with ≥40 villi per mm 2 . 
[56]We anticipate jejunal contractions to insert the micropost arrays into the villi and achieve an overlapping extent of >100%.A wider distribution of microposts on MAP may enhance its penetration but it will result in reduced micropost-villi collisions thereby decreasing the overall resistance to peristaltic shear via mechanical interlocking.To study the influence of MAP design parameters on the extent of penetration under constant jejunal contractile pressure, we modeled villi patches (asp.ratio ≈6:1) to maximize the pitch of the villi (edge-to-edge spacing = 9 μm) (Figure 7bi).We observed that an increase in spacing between the microposts led to greater overlap, and round-tipped microposts achieved >95% overlap irrespective of the arrangement.MAPs with round-tipped microposts arranged in hexagonal pattern with edge-to-edge spacing of 400 μm maximized the number of microposts and achieved >100% overlap (Figure 7bii). While this research serves as a proof-of-concept for the mechanical interlocking between villi and synthetic microposts to combat constant peristaltic shears, influence of factors such as intestinal fluid volumes and periodic peristaltic contractions have not been explored in this study.Notably, studies performed by Mosgaard et al., where they examined the adhesion of cylindrical microcontainers (with a diameter of 300 μm and aspect ra-tio ≈1) in ex vivo perfusion models under a perfusion rate of 1.55 mL min −1 , indicated that microcontainers exhibit good adhesion properties and also reported that taller cylindrical microcontainers travel lesser distance within the intestinal mucosa compared to shorter cylinders. [57]This suggests that our devices, given the size of the cylinders is similar to the size of our device microposts, may also be effective in countering intestinal fluid volumes.We further explored the effect of intestinal mucosa on mucoadhesion of our devices.Ex vivo lap-shear tests conducted between the microposts (E m = 2.05 MPa, P 81 , and preload of 2.4 mN mm −2 ) and intestinal tissues revealed a frictional force of 1.375±0.07mN mm −2 , which is comparable to other intestinal retention technologies (Table S1, Supporting Information).This implies that, following device consolidation, the mucus layer may contribute to an enhanced resistance to shear due to the frictional drag force experienced by the microposts as they traverse the mucus layer. [33]Although, the mucus may create slippery contacts between the microposts and villi, our simulations, which take this factor into account, confirm the feasibility of mechanical interlocking even in such conditions. It should be noted that in these ex vivo tests, the effect of mechanical interlocking between the villi and microposts do not contribute to adhesion with the tissue due to the loss of mechanical integrity of villi ex vivo.Nonetheless, the ex vivo lap-shear tests suggest the possibility of synergistic retention capabilities in vivo and therefore, to fully explore the potential of interlocking systems, it is crucial to conduct in vivo studies in the future.Overall, our findings highlight the promising nature of mechanical interlocking systems and emphasize the need for further exploration in this field. 
Conclusion This study confirms the viability of using mechanical interlocking between high-aspect-ratio synthetic elastomeric microposts and intestinal villi as a mechanism for constructing intestinal retentive devices.This was demonstrated through a combination of mechanical simulations and in vitro validation via lap-shear tests using microfabricated biomimetic microposts.The results of these experiments offer valuable insights into the impact of design and material parameters on interlocking and demonstrate the usefulness of computational simulations as a tool for investigating soft interlocking mechanisms.Elastomers with E ≈9 MPa provide optimal resistance to peristaltic shear.The implementation of round-tipped microposts, with an aspect ratio of 4.2: 1 and diameter of 355 μm arranged in a hexagonal pattern with a 400 μm interpost spacing, has the potential to result in over 100% overlapping with natural villi during jejunal contraction, thus minimizing the need for complex active actuating systems for in vivo deployment. This research provides a crucial step toward development of customized interlocking devices using in silico modeling. [58]61] The use of biodegradable and mucoadhesive materials [21,62,63] to create these microposts can be explored to regulate the residence time of these devices for use in various potential applications including: oral drug delivery of macromolecules; [13,64,65] real-time gut monitoring; diagnostic devices [66][67][68][69] ; neural modulation devices. [70] Experimental Section Design and Modeling of Mechanical Simulations: Dynamic event simulations were performed on Autodesk Fusion 360.Micropost and villus were designed as octagonal prisms with equivalent dimensions and aspect ratio of 4.2:1.For MP-V model, the micropost (length 1.5 mm) was joined to a device fragment (3×3 mm 2 ) and constrained to move in the direction of applied shear of 0.01 N cm −2 , whereas the villus was completely fixed.Separation contact model was applied and coefficient of friction between points of contact was set to 0.04 to mimic the slippery nature of the gut.Since the modulus of villus is not reported in literature, modulus of the intestine was referenced and set it as 50 kPa . [71]Moduli of microposts (E m ) was varied from 50 kPa to 1 GPa depending on the analyses.Poisson's ratio was set to 0.49 since microposts and villi were held as elastomeric entities.25 552 parabolic mesh elements were used for calculations with a fine mesh size of 0.05 mm where convergence was achieved.Each study was performed for a total event duration (TED) of 0.01 s (Figure S6a, Supporting Information). 
For MAP-VP model, microposts and villi were designed 0.2 times the size of MP-V model to optimize the simulation computation time.Size of the VP (4×4 mm 2 ) was set 4 times that of MAPs (1 mm 2 ).Villi were designed with rounded tips, arranged in a cubic fashion with ≈81 villi per mm 2 [edge-to-edge spacing = 40 μm (S 40 ), P 1296 ].MAP designs were permutations of the following features-pitch per patch (P n ), i.e., P 81/64/36 corresponding to S 50/70/100 , arrangement of microposts on the patch, i.e., cubic, or hexagonal and, tip-geometry-round or flat.Material characteristics of the MAPs and Villi were equivalent to the MP-V model.Here, MAPs were constrained to move in the direction of the applied shear (0.1 N cm −2 ) and contractile pressure (20 mmHg).To optimize computation time, in this model 263 052 linear mesh elements were used with a mesh size of 0.0857 mm and performed simulations for a TED of 0.0025 s.Observed trends remained the same with varying mesh size (Figure S6b, Supporting Information). Models for in vivo optimization were designed using MAPs (3×3 mm 2 ) with micropost features analogous to the MP-V model and modulus of 500 kPa.Villi patches (4.5×4.5 mm 2 ) with round-tipped villi with aspect ratio of ≈6:1 (length 1.5 mm) were arranged in a close-packed cubic fashion (S 9 , P 324 ).MAPs with Flat-Cub, Flat-Hex, Roun-Cub, Flat-Hex features, and S 200-1000 were modeled.They were allowed to move vertically in the direction of applied contractile pressure (20 mmHg), while the villi patch was fixed.Other properties of this model were the same as the former models.Here, 463 809 linear mesh elements with a mesh size of 0.214 mm were used and simulations were performed for a TED of 0.01s. Mathematical Modeling to Determine Maximum Resistive Force Using MP-V Models: To estimate the maximum resistive force exerted by the villus on the micropost during peristaltic shear (Figure 2b,c), the theory was employed of large deflection of cantilever beams developed by Bisshop and Drucker. [72,73]he decision to utilize the theory of large deflections was based on the simulations which demonstrated that the deformation of the micropost (in the direction of the applied force) resulting from the collision was comparable to the length of the micropost. 
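Before the large-deflection treatment below, a rough cross-check can be made with the classical small-deflection end-loaded cantilever formula F = 3*E*I*δ/L³ for a micropost of the stated dimensions. This is not the paper's Equation (1), and the chosen tip deflection is an assumed value; it only shows that micropost-scale resistive forces land in the same 10²-10³ µN range as the simulated F_SimPer values.

```python
# Order-of-magnitude check: small-deflection cantilever force for one micropost.
# NOT the large-deflection relation used in the paper (Equation (1)).
import math

d = 355e-6       # micropost diameter, m (stated design value)
L = 1.5e-3       # micropost length, m (stated design value)
E = 2.05e6       # Young's modulus, Pa (one of the simulated stiffnesses)
delta = 0.5e-3   # assumed tip deflection, m (illustrative choice)

I = math.pi * d**4 / 64.0       # second moment of area of a circular cross-section
F = 3.0 * E * I * delta / L**3  # end load producing the assumed tip deflection

print(f"I = {I:.2e} m^4, F ≈ {F*1e6:.0f} µN")
```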
In order to employ this theory and estimate F SimPer , the following assumptions were made: 1) The modeled micropost was considered to be a linear-elastic cylinder, 2) The micropost was assumed to be anchored to a rigid substrate, 3) The maximum resistive force F SimPer was estimated at the time step of the collision just before the micropost slid past the villus and lost contact (Movie S5, Supporting Information), 4) F SimPer was assumed to be a concentrated force applied at the tip of the free end and perpendicular to the micropost, resulting in a displacement equivalent to the maximum deformation "l" of the tip in the direction parallel to the micropost.The value of "l" was obtained from the simulation output (Figure 2c).The following relationship was used Here, " m " represents the maximum angle of rotation of the deflection curve at the free end, obtained from the simulation output.E m is the Young's modulus of the micropost, and " I" is the moment of inertia of the cross-sectional area of the micropost about the axis of bending.It is important to note that due to the assumption of the micropost being anchored to a rigid substrate, Equation (1) overestimates the value of the re-sistive force.This solution does not consider the deformation of the elastic substrate to which the elastomeric microposts are attached.Such factors include shear contributions and base tilting contributions. [74]Therefore, the calculated F SimPer provides an approximate value of the maximum resistive force. Design of Fabricated Microposts Array Patches and Artificial Villi: Mucosa of the gut comprises villi and lubricious mucus gel that coats its lumen to enable passage of the bolus.To mimic mechanical integrity of living villi and eliminate unwanted contributions of frictional drag forces due to mucus-microposts interactions, lap-shear studies were conducted with artificial villi.Dimensions of human intestinal villus vary dramatically across the gut (200 μm < L < 1500 μm and 60 μm < d < 500 μm). [56,75]In view of this variability, an aspect ratio of 4.2:1 (L = 1.5 mm) was set to prepare the microposts of MAPs and artificial villi.≈3 villi per mm 2 (round-tipped, E v50kPa ) were cubically arranged with an edge-to-edge spacing of 200 μm and cross-sectional area of 2×2 cm 2 (P 1296 ) to prepare the artificial villi patch (Figure S7a, Supporting Information).MAPs with cross sectional area of 5×5 mm 2 were fabricated with varying edge-to-edge spacing (250, 350, and 500 μm, i.e., P 81 , P 64 , and P 36 ), combinations of cubic/hexagonal arrangement and tip geometry (Figure S7a, Supporting Information). 
DLP Facilitated Multistep Replica Molding Process: Positive PDMS based master molds with desired design parameters were 3D printed with Direct Light Processing (DLP) technique with the Microfluidics 3D printer Mii Ultra100 (CADWorks3D) using master mold resin for PDMS (resin-works3D).Autodesk Fusion360 was used to design and prepare the STL files required for 3D printing.Dimensions of the microposts were optimized to address shrinkage post UV curing and achieve desired aspect ratio of ≈4.2:1 (L = ≈1.5 mm, d = ≈355 μm) (Tables S2 and S3, Supporting Information).STL files were sliced with Utility software (layer thickness = 30 μm, curing time = 3.5 s).Post printing, molds were carefully placed in a crystallizing dish containing isopropyl alcohol (IPA) (Pharmco, Greenfield Global, CT) and triple washed (20 min each) under ultrasonification (Branson 5800 ultrasonic cleaner, 40 kHz) until clear IPA was achieved.Molds were then air-dried overnight, and further UV cured (Professional Cure Zone, Creative CADWorks Preset C) for a total of 40 min on each side (Figure S7b, Supporting Information). Soft PDMS based negative molds were then replica molded with the cured 3D printed molds.Standard 10:1 mixture of Sylgard-184 (Dow Corning, MI) was poured into the positive molds and then degassed in a vacuum desiccator.The mixture was cured at 75 °C for 90 min in an oven (VWR, Symphony vacuum oven) and the negative mold was further demolded.Post demolding, the molds were further cured for 30 min at 100 °C.The negative molds were silanized using the procedure described by Deng et al. [76] Briefly, the negative molds were first treated with oxygen plasma (Plasma cleaner, Harrick plasma) for 90 s under 260 mTorr.The treated molds were then promptly placed in a vacuum desiccator and silanized with trichloro (1H,1H,2H,2H-perfluorooctyl) silane at 60 mTorr for 24 h.Silanized molds were then baked at 100 °C in the oven for 45 min and then stabilized at room temperature for 60 min (Figure S7c, Supporting Information). [79] The mixture was poured into the negative mold which was secured in an aluminum boat and degassed at 60 mTorr in a vacuum oven for 30 min.PDMS was then cured in the oven.Post curing, the assembly was immediately placed in a −20 °C freezer (VWR) for 5 min, and then in a 75 °C oven for 5 min.This cycle was repeated twice before letting the assembly reach room temperature.MAPs (Roun-Hex/Flat-Cub) and artificial villi patches were then demolded from negative molds.To prepare the villi, Ecoflex-0010 (Dow Corning, MI) with a moduli of 50 kPa was used to mimic mechanical properties of the intestinal villi. 
[79]reatment of MAPs and Artificial Villi for In Vitro Analysis: Fabricated items were washed in isopropyl alcohol for 5 min under ultrasonication and then air-dried.Dried patches were secured to a glass slide.Microbumps (R a = 25 μm) were observed on the surface of the microposts suggesting that frictional contribution to adhesion cannot be eliminated in vitro (Figure 4c).A light coating of silicone mold release (Smooth-On mold release spray, PA) was sprayed onto the patches prior to the in vitro tests to minimize the factor of frictional resistance in adhesion and to even out the effects of tackiness on adhesion imparted by soft PDMS materials.The spray was applied as per the instructions from the manufacturers.Briefly, the spray was held 30 cm away from the patches and a single light mist coating was sprayed onto the patches.The patches were then allowed to sit for 10 min to let the spray dry on the surface of the patches. SEM Characterization of Microposts: Samples were mounted onto aluminum stubs and coated with an 8 nm layer of platinum using a sputter coater (EMS Q300T D plus).Surface topography images were obtained with Quanta 600 GEF scanning electron microscope at an accelerating voltage of 20 kV. Measurement of Interlocking Properties In Vitro: Lap-shear tests were performed on a custom-built test bed equipped with a six-axis force/torque transducer (Nano-17 Titanium, ATI Industries).Lap-shear tests with MAPs, instead of singular micropost-villus systems, were performed due to the limitation of the equipped transducer to accurately measure forces below 1 mN (force resolution of ≈3 mN).The measuring system comprised a DAQ device with multiple BNC inputs (DAQ PS/IFPS box), BNC interface box (9105-BNC-2), and terminal block with BNC inputs connected to a computer running the DAQ software (Figure S4, Supporting Information).ATI DAQ automation server was used to set the data acquisition frequency to 10 5 Hz at an averaging level of 2000 for data collection.Moving platforms comprised of horizontal and vertical motorized stages (MFA-CC, Newport) coordinated by a motion controller (ESP301, National Instruments).Custom LabVIEW VI was developed to set interaction velocity and to record displacement of MAPs within the villi.Custom made aluminum parts were attached to the motorized stages to fix MAPs (L-angled attachment on the vertical axis) and fabricated villi (horizontal axis).Platforms were further equipped with ±5°goniometers (M-GON40-L Newport) to ensure planarity prior to testing.Ultraviolet lap and Dinolite microscope were utilized to assess interlocking in real time and set the overlapping extent prior to testing. MAPs and artificial villi were secured on their respective platforms with double sided foam tape.MAPs (top-fixture) were initially lowered to align microposts and villi tips and, the z-axis displacement was set to zero.MAPs were further lowered at a speed of 0.325 mm s −1 until desired overlapping extent/preload was achieved.Every test began with MAPs situated outside the villi patch to ensure load biasing removed false forces.Villi (bottom fixture) was then moved at a speed of 0.2 mm s −1 .Maximum interlocking force (F Exp ) and work of interlocking (W I ) were obtained from force-displacement curves. Statistical Analysis: Simulation results were reported as is and in vitro lap-shear test results were averaged over 3 cycles and reported as mean ± standard deviation. Figure 1 . 
Figure 1.Concept schematic-Mechanical interlocking of intestinal villi and villi-mimicking synthetic elastomeric microposts.a) Macroscopic device with elastomeric micropost arrays (MAP).b) Villi-mimicking microposts consolidate within the villi network due to peristaltic contractile pressure.c) Mechanical interlocking of villi and microposts counteracts peristaltic shear and anchors MAP in the small intestine. Figure 2 . Figure 2. Simulation models-a) MP-V model.Initial positions of villus and micropost.b) Head-on micropost-villus collision under peristaltic shear of 0.01 N cm −2 .c) Euler-Bernoulli beam theory for large deflection of cantilever beams was used to determine the maximum resistive force (F SimPer ) applied by villus onto the mobile micropost.d) MAP-VP model.Micropost array patch interlocking with villi under peristaltic contraction (20 mmHg) and shear (0.1 N cm −2 ) conditions.Here, to optimize the computation time, MAP, and villi models are 0.2x the size of MP-V model.Villi patch is 4x the size of MAPs to mimic the anatomical environment encountered by MAPs in the gut.e) Different tip geometries.f) Arrangement types (P 81 ).g) Varying pitch of the microposts [edge-to-edge spacing varies as 100 μm (P 36 ), 70 μm (P 64 ), and 50 μm (P 81 )]. Figure 3 . Figure 3. Simulations show that stiffer elastomeric microposts produce greater resistive forces with minimum deformation compared to more compliant elastomers.F SimPer increases with the overlap between the micropost and villus.Influence of micropost moduli and array pitch is stronger than the effect of design parameters when microposts dimensions and pitch are comparable to the villi.Such MAPs attain >90% overlap within the villi with jejunal contractive pressure eliminating the need for external insertion mechanisms.a) MP-V model-Influence of overlapping extent () and Young's moduli of microposts (E m ) on maximum resistive force applied by the villus onto the micropost (F SimPer ).b) MAP-VP-Influence of MAP design and material parameters (P 81 ).c) Influence of varying pitch on D SimPer (E m2.05 MPa ).d) Influence of micropost arrangement and tip-geometry on maximum displacement in the direction of applied shear (E m2.05 MPa , P 81 ), Roun-Hex and Flat-Cub travel the least within the villi. Figure 5 . Figure 5.In vitro lap-shear tests a) schematic of lap-shear testing procedure.b) Force-displacement curve of MAP with E m2.05 MPa interlocking with artificial villi at overlap of 0.5 .Interlocking was characterized by measuring work of interlocking [W I (shaded area)] and maximum interlocking force (F Exp ).Slip-stick phenomenon confirmed the occurrence of mechanical interlocking between the artificial villi and microposts.c) Experimental work of adhesion (W A ) of flat patches increases due to mechanical interlocking of microposts with the villi (n = 3). Figure 6 . 
Figure 6.In vitro analyses were consistent with simulation outcomes thereby substantiating computational simulations as a tool for parametric studies to predict the in vivo mechanical interlocking of MAP-based devices.a) Micrographs of the experimental test-bed indicating varying overlaps between microposts and artificial villi (scale bar = 1 cm).b) Increased work of interlocking was observed with increased MAP stiffness and overlapping extent.c) Linear relationship between preload and work of interlocking is observed.d) Maximum interlocking force per micropost from experiment and simulation showed similar trends.e) Increasing MAP pitch increased interlocking work, as expected and observed in simulations (n = 3). Figure 7 . Figure 7. MAP material and design parameters to optimize in vivo residence time of fabricated devices.a) Maximum resistive force peaks when E m = 9 MPa ( 0.85 ).b) (i) Schematic of MAP (3×3 mm 2 ) consolidating within villi under constant peristaltic contraction.(ii) Increasing spacing between the microposts increases the extent of overlap.MAPs with round-tipped microposts arranged in hexagonal pattern with edge-to-edge spacing of 400 μm maximize the number of microposts while attaining >100% thereby optimizing the cumulative resistive force.
v3-fos-license
2021-09-01T15:12:37.666Z
2021-06-22T00:00:00.000
237394105
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://periodicos.ufmg.br/index.php/textolivre/article/download/34260/27494", "pdf_hash": "fefe794c11b61c7e49ae43815ee4b87b9c5ff4f1", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42506", "s2fieldsofstudy": [ "Education" ], "sha1": "9a5e854c6bfcc9034869da7d517daa009511fb6b", "year": 2021 }
pes2o/s2orc
Effectiveness of flipped learning and augmented reality in the new educational normality of the Covid-19 era Flipped learning and augmented reality have become two emerging didactic proposals in the field of education today. This study analyzes the effectiveness of flipped learning and augmented reality in various dimensions related to the learning process. A quasi-experimental design was carried out with a sample of 116 students from Spain in the third level of Secondary Education. A questionnaire was used to collect the research data. The results show that students value both educational experiences highly, although differences in various dimensions are present. Those who received teaching based on flipped learning show significance in the teacher-student, autonomy, deepening and class time dimensions. On the other hand, those who developed the experience with augmented reality show significance in the dimensions of motivation, interrelation with content and students, and resolution. In conclusion, both the application of an emerging methodology based on flipped learning and the use of educational technology with augmented reality contribute positively to the optimization of learning processes in the Mathematics classroom. Introduction Today's education is subject to many changes and transformations, and there are multiple factors that affect the effectiveness of the training process (MURILLO, 2016). Many of them are caused by the inclusion of information and communication technologies (ICT), which have led education in recent years (RODRÍGUEZ; CÁCERES; ALONSO, 2018), bringing great advances and new educational models (VIÑALS; CUENCA, 2016). This transformation process has taken place in all areas of the educational field (AREA; HERNÁNDEZ; SOSA, 2016; PEREIRA; FILLOL; MOURA, 2019), as well as at all levels of educational systems (LARIONOVA et al., 2018). The inclusion of these technologies has led to the emergence of original and new educational models (LI et al., 2019) and new ways of learning by students (GARROTE; ARENAS; JIMÉNEZ-FERNÁNDEZ, 2018). ICTs also increase teaching-learning options, due to the possibility of using them in any space and time and their adaptability to the educational situations that may arise (PASCUAL, 2017). For all this, new spaces and learning scenarios have been created, with digital and mobile content (RADU, 2014), thus achieving great motivation in terms of the students' predisposition to learning (VILLALUSTRE; DEL MORAL, 2017), which leads to an improvement in student performance (MARÍN; MUÑOZ, 2018). Even so, on some occasions this immersion of the educational field in the technological field is slow and not progressive, causing a lack of innovative action that does not allow the spaces, times and actions of traditional teaching to be inverted (LLANOS; BRAVO, 2017). Literature review 2.1 Flipped Learning in the Didactic Processes One of the main tools found in the educational vanguard is flipped learning (SÁNCHEZ; SÁNCHEZ; RUIZ, 2019). Its importance is such that it has become, in a very short time, a very successful methodology (SEERY, 2015; HABIBURRAHIM et al., 2019). Its implementation is based on the use of ICT in education (LÓPEZ BELMONTE; POZO SÁNCHEZ; FUENTES CABRERA; LÓPEZ NÚÑEZ, 2019), inverting spaces and times and giving the leading role to students (FROEHLICH, 2018; MCLAUGHLIN et al., 2014).
Furthermore, all this is subject to extrinsic factors that increase the chances of success of this model (MENGUAL-ANDRÉS et al., 2020). Flipped learning arises from the work of specialists such as A. Sams and J. Bergmann, who in 2012 produced audiovisual material so that students who could not attend class could still follow the lessons (BERGMANN; SAMS, 2012). Since those first actions, flipped learning has undergone many transformations, always keeping its essence and basic principles (SOLA et al., 2019). For all this, flipped learning has become one of the tools most used by teachers in the world, obtaining very good results with its implementation (AWIDI; PAYNTER, 2019; YOSHIDA, 2015) and an increase in variables that are positive for the development of the student body (FUENTES CABRERA et al., 2020). Flipped learning is a trend that combines online and face-to-face teaching (LEE; LIM; KIM, 2017; NORTVIG; PETERSEN; BALLE, 2018), in which the protagonist of the educational act shifts from the teacher to the student (JENSEN et al., 2018; KWAN; FOON, 2017). For this reason, it requires reversing traditional educational processes (BAUER et al., 2016), inverting the spaces and times of the educational act (LÓPEZ BELMONTE; MORENO-GUERRERO, et al., 2019). These processes favor the autonomy of the students, their learning, and a better use of school time (ABEYSEKERA; DAWSON, 2014; BORAO; PALAU, 2016; CUMMINS; WAUGH, 2017; SCHMIDT; RALPH, 2016). With regard to the teaching of mathematics, high levels of success are also obtained with the implementation of flipped learning. Its implementation gives rise to new educational experiences (HODGES; CONNER, 2011), with better acquisition of skills by students (CRUZ; PUENTES, 2012), and the development of all the potentialities previously described, but in the field of mathematics (DE ARAUJO; OTTEN; BIRISCI, 2017; BISHOP; VERLEGER, 2013). Therefore, its implementation leads to an improvement in the participation, performance and grades of students in the area of mathematics (ADAMS; DOVE, 2018; AMSTELVEEN, 2019; SUN; XIE; ANDERMAN, 2018). Augmented Reality (AR) as Educational Technology Among all the tools that ICT offer, augmented reality stands out; it has been considered one of the innovations with the greatest projection in recent decades (LORENZO; SCAGLIARINI, 2018), particularly in the educational field (CABERO; ROIG, 2019), where it has acquired great prominence. This tool allows the development of unique experiences, promoting new ways of teaching as well as innovative ways of learning (CHENG, 2017). Augmented reality is conceived as a technology that "allows the combination of digital information and physical information in real time through different technological devices" (BARROSO et al., 2017). This technology, associated with the different mobile devices that can run it, makes it possible to expand the knowledge that can be offered to students, as well as being a source of extra multimedia and digital information for students (GÓMEZ et al., 2018). The technology behind augmented reality can offer its multiple functions and benefits to the different educational stages for which it is available, being a useful and effective tool both in Early Childhood Education (LÓPEZ BELMONTE; POZO SÁNCHEZ; LÓPEZ BELMONTE, 2019) and in Higher Education (GARAY; TEJADA; CASTAÑO, 2017), as well as in Primary Education (LÓPEZ BELMONTE; POZO SÁNCHEZ; FUENTES CABRERA; ROMERO RODRÍGUEZ, 2020) and Secondary Education (MORENO-GUERRERO et al., 2020).
The benefits that derive from the implementation of AR in the educational field include the improvement of the student's involvement as an active agent of their own learning (CABERO; LLORENTE; GUTIÉRREZ, 2017), the reinforcement of digital competence in the student body (TOLEDO; SÁNCHEZ; UNIVERSIDAD DE SEVILLA, 2017), the increase in motivation (BACCA et al., 2014), the enhancement of autonomy and attention to content (MARÍN; CABERO; GALLEGO, 2018), attention to the task to be carried out (CHENG, 2017) and the exploration of learning (FOMBONA; VÁZQUEZ, 2017). This implies that various educational modalities requiring the development of all the areas already mentioned can be implemented, promoting collaborative learning that is constructivist, meaningful and based on discovery (CABERO; LLORENTE; MARÍN, 2017). All of the above favors the learning climate in the classroom and increases the options for success in the development of teaching and learning (PRENDES ESPINOSA, 2015). Therefore, it is not surprising that AR has made a niche for itself in the current educational landscape, and that its use is increasing to the point of being one of the tools most used by teachers and students (RODRÍGUEZ; HINOJO; ÁGREDA, 2019), as well as one of the most studied and experimented with in the educational scientific literature (CAMPOS; RAMOS NAVAS-PAREJO; MORENO GUERRERO, 2019). As for the field of mathematics, AR has become one of the tools with which the greatest benefits can be obtained. Numerous studies show the success of its implementation in the area of mathematics (GARZÓN; ACEVEDO, 2019), with a notable improvement of educational practices in this area and in other related areas (BOWER et al., 2014). The increase in students' potential and abilities is notable with its implementation (CAI et al., 2019), making it a very useful tool in learning mathematics (CAHYONO et al., 2020). The union of the two tools selected for this experiment can bring considerable success to the development of teachers' instruction, as well as an increase in the probability of success in students' learning (LÓPEZ BELMONTE; POZO SÁNCHEZ; FUENTES CABRERA; ROMERO RODRÍGUEZ, 2020). Research purpose and questions According to studies in recent years, flipped learning is part of the innovative methodological spectrum (MORENO-GUERRERO et al., 2020; PARRA-GONZÁLEZ et al., 2020), alongside educational technologies such as augmented reality (CABERO; ROIG, 2019; LÓPEZ BELMONTE; POZO SÁNCHEZ; FUENTES CABRERA; PARRA GONZÁLEZ, 2019). For this reason, the present study aims to analyze the efficacy achieved by both flipped learning and augmented reality in various dimensions of a socio-educational nature, after putting into practice the training action of a didactic unit. To specify the investigation, the following questions are presented:
• Does the type of instructional process affect student motivation?
• Does the type of instructional process affect the interaction between the student and the teacher?
• Does the type of instructional process affect the interaction between students?
• Does the type of instructional process affect the interaction between the student and the contents?
• Does the type of instructional process affect student collaboration?
• Does the type of instructional process affect the autonomy of the students?
• Does the type of instructional process affect the deepening of the didactic content?
• Does the type of instructional process affect problem solving by students?
• Does the type of instructional process affect class time?
• Does the type of instructional process affect the evaluations obtained by the students?
Research design and data analysis The present study acquired funding from the I + D + i OTRI project called Active methodologies for learning through technological resources for the development of society, with code CNT-4315, belonging to the University of Granada (Spain). In turn, this research is derived from the Doctoral Thesis entitled Correlational analysis of incident factors in teachers during the implementation of flipped learning. This study is based on a quantitative methodology, conducted through a quasi-experimental, descriptive and correlational design (HERNÁNDEZ; FERNÁNDEZ; BAPTISTA, 2014; RODRÍGUEZ, 2011). Similarly, the guidelines and procedures developed by previous impact studies have been followed (LÓPEZ NÚÑEZ et al., 2020). All this was done to develop an investigation based on an analytical model validated by the scientific community. In this case, the experimentation was developed at the instructive level (MORENO-GUERRERO et al., 2020). For this, two groups of analysis were used (1-Emerging methodology; 2-Educative technology). Therefore, the type of training action used (flipped learning or augmented reality) was set as the independent variable. The scope reached in the different study dimensions was taken as the dependent variable (HINOJO LUCENA et al., 2019). The collected data were analyzed with the SPSS program. Various statistics such as the mean (M), standard deviation (SD), skewness (Skw) and kurtosis (Kme) were computed. More specific tests were also carried out to compare the means obtained, using Student's t-test. The size of the effect of the instructive action was obtained by means of Cohen's d and the biserial correlation (rxy). A p < 0.05 was established to define significance. Participants For this formative contrast, a sample of 116 Spanish third-level secondary school students from the Autonomous City of Ceuta was chosen. This city is characterized by being a region in which four cultures coexist (Christian, Muslim, Hebrew and Hindu). This coexistence generates a peculiar and unique framework at the educational level, where respect and tolerance are the basis of all educational action (LÓPEZ BELMONTE; MORENO-GUERRERO, et al., 2019). The students were chosen by convenience sampling, due to the researchers' ease of access to subjects from the educational center. Regarding the size of the sample, experts consider that in this type of study the number of participants does not represent an impediment to conducting it (CHOU; FENG, 2019; YILMAZ; SOYER, 2018). Therefore, the number of selected participants is sufficient to carry out the research. The chosen students form a sample composed of 37.5% men and 62.5% women, with a mean age of 15 years (SD = 1.38). The students were divided into two groups to be analyzed and a different training action was carried out in each group (Table 1). Instrument The data were collected through a questionnaire. This tool was prepared based on previous studies (POZO SÁNCHEZ; ). The questionnaire is composed of a sociodemographic dimension to categorize the subjects according to sex, age, nationality, religion and school year, as well as different socio-educational dimensions that are detailed below:
• Motivation achieved by students in teaching and learning actions.
• Interactions of students with the teacher, the content and among peers.
• Collaboration of students during the training process.
• Autonomy developed by students during the proposed tasks.
• Deepening obtained in the didactic contents during the instructive process.
• Problem solving by the students themselves in carrying out the tasks.
• Use of class time by educational agents.
• Ratings obtained by students in the evaluation test.
In total, the questionnaire is made up of 35 items that mostly follow a 4-point Likert rating scale (1-most negative value and 4-most positive value). In addition, the instrument contains other closed-ended questions. The tool was validated qualitatively using a Delphi method. Six experts in educational innovation participated in this process. These judges gave the questionnaire a score of 4.83 (1-6) (SD = 0.47). The valuations were also analyzed using Fleiss' Kappa (0.87) and Kendall's W (0.85), obtaining relevant and consistent values for the experts' opinions. The resulting feedback was applied to improve and optimize the questionnaire. Next, an exploratory factor analysis was performed that involved various tests. Bartlett's sphericity test showed that the variables were dependent (2476.43; p < 0.001). With the Kaiser-Meyer-Olkin test, sample adequacy was determined (KMO = 0.85). Also, statistics such as Cronbach's Alpha (0.86), Composite Reliability (0.83) and the Average Variance Extracted (0.81) were computed to establish the reliability of the instrument. Procedure The development of the research and the selection of the educational center were not complex, given that the researchers teach at that institution. Likewise, the researchers themselves were the ones who carried out the training contrast. Both those responsible for the school institution and the participants were aware of the purpose of the study. Informed consent was also obtained from the students to take part in this study. At the formative level, to carry out this research, a training intervention of eight sessions was implemented in the subject of Mathematics. In this unit, contents related to geometric figures (areas of prisms and volume of geometric bodies) were worked on. At the organizational level, the students were divided into four groups. It was not necessary to separate or modify the configuration of the students from their natural group since the educational center contains four training lines. Two of the groups developed the didactic unit through flipped learning. In this modality, the teacher designed multimedia resources in which he explained the contents to be taught during the course of the unit. The videos were hosted on a digital platform so that students had access to them and they were always available to be viewed before going to school (Figure 1). In this way, the students came to the classroom having already viewed and become familiar with the contents. Thus, the teacher could dedicate the time in the classroom (55 minutes) to the practice of the contents, promoting group work and the resolution of doubts, as well as going deeper into the subject by having more time available. The other two groups developed the different sessions through augmented reality, using tablets to read different templates with QR codes provided by the teacher in the classroom. These templates allowed access to information about the content to be taught (Figure 2).
In turn, they contained theoretical information that students could consult, demonstrations of exercises and extra information for advanced students. In addition, a specific application, called Geometry, was used with these contents. This application allows students to interact with the figures, view all their faces and solve questions and problems (Figure 3). In this way, the students were able to engage with the content in three dimensions (3D) thanks to augmented reality technology. The application was used in class time, as it allowed the explanation of content, the completion of tasks and their correction, and automatic feedback. In the different groups, the training intervention was delivered by a single teacher, in order to avoid possible biases in the instructional action of different teachers. The assignment of the treatment (type of training action) to each group was carried out randomly. After carrying out the educational intervention, the data were collected using the questionnaire. They were then analyzed with the statistical program and conclusions were drawn. Results In general, the data collected in Table 2, which presents the descriptive statistics developed in this study, show normal distribution parameters. The reason for choosing parametric tests is that the kurtosis and skewness are between -1.96 and +1.96, according to Jöreskog (2001). Students in Secondary Education have shown a different response tendency depending on the didactic method applied. Students who developed the emerging methodology show averages ranging from 2.5 to 3, which indicates a medium to high evaluation of this teaching procedure. The highest average is given by the teacher-student relationship dimension, while the lowest average is given by the student-content relationship dimension. In contrast, students who received training based on educative technology have also shown average values similar to those of the other group. In this case, there are four dimensions that exceed an average of 3. It can be determined that this method is generally valued by students as medium-high. The dimension with the highest average is motivation. On the other hand, among the various dimensions, class time is the one with the lowest value. In both the group developing the EM method and the group applying the ET method, no dispersion of responses is shown. This is due to the fact that the standard deviation values are below 1. Also, it must be taken into account that kurtosis in both groups is mainly platykurtic, although occasionally mesokurtic and leptokurtic kurtosis are observed (Table 2). Source: the authors. Note: established grade groups (None: 1-4.9; Few: 5-5.9; Enough: 6-8.9; Completely: 9-10). The comparison of means shows differences in most study dimensions when taken individually. If the overall mean for each of the groups is taken into account, the ratings are similar, with the group that developed the EM method being higher. The motivation, student-content, student-student and resolution dimensions were rated better by the ET group than by the EM group. Moreover, the autonomy, teacher-student, class time, deepening and ratings dimensions were better evaluated by the EM group than by the ET group (Figure 4). The results presented in Table 3 show the degree of independence of both pedagogical methods. These data are obtained after applying Student's t-test for independent samples.
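As an illustration of the comparison applied here, the following sketch runs an independent-samples Student's t-test and computes Cohen's d on two hypothetical vectors of dimension scores standing in for the EM and ET groups; the values and variable names are assumptions for demonstration only, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical dimension scores (means on the 1-4 Likert scale) for each group.
em_scores = np.array([3.1, 2.8, 3.4, 2.9, 3.2, 3.0, 2.7, 3.3])
et_scores = np.array([3.5, 3.2, 3.6, 3.4, 3.1, 3.7, 3.3, 3.5])

# Independent-samples Student's t-test (equal variances assumed).
t_stat, p_value = stats.ttest_ind(em_scores, et_scores, equal_var=True)

# Cohen's d from the pooled standard deviation.
n1, n2 = len(em_scores), len(et_scores)
pooled_sd = np.sqrt(((n1 - 1) * em_scores.var(ddof=1) +
                     (n2 - 1) * et_scores.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (em_scores.mean() - et_scores.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```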
Significant relationships are present in these data, in particular in the motivation and teacher-student dimensions, although with different trends: the first is in favour of the ET group and the second in favour of the EM group. The resolution, student-content, deepening, student-student and ratings dimensions also show significant relationships. In this case, the student-student, student-content and resolution dimensions are in favour of the ET group. In contrast, deepening is in favour of the EM group. The biserial correlation indicates a medium-low associative strength in all study dimensions. Furthermore, for all dimensions of the study, the effect size is very low. Discussion and Conclusions This study has comparatively analyzed the effectiveness of flipped learning and AR in the Mathematics class. The need to analyze both elements within the training actions is based on the importance of optimizing and renewing current education. Updating education is essential for teachers to carry out methodologies adapted to current socio-educational needs. The results obtained in this study coincide with those obtained in other similar investigations that analyze the same topic (emerging methodology) in the mathematics classroom. In this way, it has been found that this emerging methodology generates an improvement in the participation, performance and grades of students in the mathematics area (ADAMS; DOVE, 2018; AMSTELVEEN, 2019; SUN; XIE; ANDERMAN, 2018). Likewise, this study coincides with other investigations in verifying an improvement in the autonomy of the student (SALAS; LUGO, 2019; TOURÓN; SANTIAGO, 2015) and a greater use of time in the classroom (LONG; CUMMINS; WAUGH, 2017). Likewise, the results obtained regarding the use of AR (educative technology) are similar to those found in the scientific literature. In this way, it has been found that the use of AR generates a significant increase in student motivation (CABERO; LLORENTE; GUTIÉRREZ, 2017) and facilitates access to the didactic contents (BARROSO et al., 2017; MARÍN; GALLEGO, 2018). On the other hand, there has also been an improvement in the interrelationships of the students within the classroom (CABERO; LLORENTE; MARÍN, 2017) and an increase in their ability to explore learning in their search for solutions to the problems raised (FOMBONA; VÁZQUEZ, 2017). In conclusion, both the application of an emerging methodology based on flipped learning and the use of educational technology with AR contribute positively to the optimization of teaching and learning processes. The use of the flipped learning model as an emerging methodology has contributed positively to the interrelation of students with teachers, to the improvement of their degree of autonomy, to the deepening of learning, to the use of time in the classroom and to a significant improvement in ratings. On the other hand, the use of AR as a techno-educational resource has led to significant improvements in student motivation, in access to didactic content, in the interrelationships between students and in their ability to solve problems. For all the above, this research presents an interesting added value for the scientific and teaching communities, since it opens the way for research in a field in need of this type of study. Analyzing in a comparative way the effects of active methodologies in general, and of flipped learning and AR in particular, makes it possible to expand the available information on their results in the learning process.
The updating of current teaching must be nourished by these analyses in order to implement the didactic methodologies that best adapt to the needs of the students. A limitation of this research relates to the scope of the results obtained, which should be interpreted with caution. This is an exploratory study with a small sample size, so the results are not generalizable. Likewise, the results obtained do not reflect the specific contribution of inverting the learning moments, the active role of the student, the use of techno-pedagogical resources or the use of AR. In addition, discussing the results obtained in relation to the scientific literature is complex, due to the scarcity of studies that comparatively analyze the incidence of flipped learning and AR in teaching and learning processes. An interesting future line of research is to analyze the effects of an optimization plan for the flipped learning model and for the application of AR in the Mathematics class, based on the development of a holistic analysis (strengths, weaknesses, opportunities and threats). Likewise, it is proposed to analyze in a combined way the effect of other active methodologies within the learning process in the Mathematics class. Finally, in relation to the new socio-educational context marked by Covid-19, it is proposed to investigate the possibility of applying AR and the flipped learning methodology with limited availability of the face-to-face classroom.
v3-fos-license
2016-05-12T22:15:10.714Z
2016-03-28T00:00:00.000
7813725
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0151971&type=printable", "pdf_hash": "305cc655b5a0ed06653efa010f7960c200d11617", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42510", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "sha1": "305cc655b5a0ed06653efa010f7960c200d11617", "year": 2016 }
pes2o/s2orc
Reduction in Malaria Incidence following Indoor Residual Spraying with Actellic 300 CS in a Setting with Pyrethroid Resistance: Mutasa District, Zimbabwe Background More than half of malaria cases in Zimbabwe are concentrated in Manicaland Province, where seasonal malaria epidemics occur despite intensified control strategies. Recently, high levels of pyrethroid and carbamate resistance were detected in Anopheles funestus, the major malaria vector in eastern Zimbabwe. In response, a single round of indoor residual spraying (IRS) using pirimiphos-methyl (an organophosphate) was implemented in four high burden districts of Manicaland Province from November 1, 2014 to December 19, 2014. The objective of this study was to evaluate the effect of this programmatic switch in insecticides on malaria morbidity reported from health care facilities in Mutasa District, one of the worst affected districts in Manicaland Province. Methods The number of weekly malaria cases for each health facility 24 months prior to the 2014 IRS campaign and in the subsequent high transmission season were obtained from passive case surveillance. Environmental variables were extracted from remote-sensing data sources and linked to each health care facility. Negative binomial regression was used to model the weekly number of malaria cases, adjusted for seasonality and environmental variables. Results From December 2012 to May 2015, 124,206 malaria cases were reported from 42 health care facilities in Mutasa District. Based on a higher burden of malaria, 20 out of 31 municipal wards were sprayed in the district. Overall, 87.3% of target structures were sprayed and 92.1% of the target population protected. During the 6 months after the 2014 IRS campaign, a period when transmission would have otherwise peaked, the incidence of malaria was 38% lower than the preceding 24 months at health facilities in the sprayed wards. Conclusions Pirimiphos-methyl had a measurable impact on malaria incidence and is an effective insecticide for the control of An. funestus in eastern Zimbabwe. Introduction Malaria is a major public health problem in Zimbabwe, which lies in the southern fringe of malaria transmission in sub-Saharan Africa [1]. Approximately half of the population of 12.9 million is at risk for malaria [2,3]. During 2013, there were 377,872 cases and 351 deaths attributed to malaria, with the greatest burden among children younger than five years of age, pregnant women and people living with HIV/AIDS [3][4][5]. Plasmodium falciparum accounts for 98% of all reported malaria cases and Anopheles arabiensis is the major malaria vector in much of the country. The epidemiological pattern of malaria transmission varies spatially and temporally in Zimbabwe, and is largely driven by elevation and rainfall patterns [5,6]. The rainy season spans November to April, while peak malaria transmission usually occurs between February and May as a result of the preceding rains. Malaria control in Zimbabwe relies on case management, insecticide-treated nets (ITNs) and indoor residual spraying (IRS), which have successfully reduced malaria transmission in many parts of the country [3,7]. However, the success of the malaria control program has been challenged by resurgence, particularly in Manicaland Province that continues to have a high burden of malaria. In 2013, Manicaland Province accounted for 51% of the malaria morbidity and 35% of the malaria mortality burden, despite encompassing less than 14% of the national population [2,3]. 
One potential reason for the resurgence of malaria is the development of insecticide resistance in An. funestus, the major malaria vector in the region [8][9][10]. Recent (2013-14) insecticide resistance monitoring in Mutasa District using standard World Health Organization (WHO) testing methods showed that An. funestus from Manicaland was highly resistant to pyrethroids and carbamates [11]. Vector susceptibility tests conducted with organochlorines (DDT and dieldrin) and organophosphates (malathion, fenitrothion and pirimiphos-methyl) showed 100% mortality 24 hours post-exposure [11], suggesting that a change in IRS strategy may be more effective. Insecticide resistance has important implications for malaria control because pyrethroids have been the primary insecticides used by the Zimbabwe National Malaria Control Program (NMCP) for IRS. In response to the emerging insecticide resistance, the President's Malaria Initiative (PMI), through the United States Agency for International Development (USAID) and in collaboration with the Ministry of Health and Child Care, switched insecticide classes for IRS from pyrethroids to an organophosphate in four high transmission districts in Manicaland Province (Chimanimani, Mutasa, Mutare and Nyanga) during the 2014 IRS campaign. The goal of the present study was to evaluate the population-level impact of the switch to an organophosphate insecticide on malaria morbidity in Mutasa District. The underlying study hypothesis was that IRS using an organophosphate insecticide would result in a reduction in malaria case incidence at health facilities in sprayed areas during the subsequent high transmission season (December 2014-May 2015), adjusting for environmental and climatic variables that could impact malaria transmission. Study Area Mutasa District is situated in the north-east of Zimbabwe bordering Mozambique, and encompasses an area of 622 km² that stretches from 18.20° to 18.58° S latitude and from 32.71° to 33.06° E longitude. Elevation rises from 600 meters in the valleys to 2,500 meters in the mountains. The human population was estimated to be 169,756 residents representing 42,479 households at the time of the 2012 census [2]. The district is irrigated by two major rivers, the Honde and the Pungwe, and the major economic activity is agriculture. The average daily temperature is 21.5°C; November is the hottest month with an average daily temperature of 24.5°C and July is the coolest month with an average daily temperature of 16.3°C. Malaria transmission is characterized as seasonal and unstable, with major outbreaks during the rainy season, which usually runs from November to April each year. In 2014, Mutasa District received 2,352 millimeters of rainfall between November and April and 96 millimeters during the dry season as measured at the Southern Africa ICEMR station in Hauna, the main town in the lower valley. The population covered by pyrethroid-based IRS in Mutasa District was 88% in 2012 and 91% in 2013. ITNs were not distributed in 2012 but were distributed to the general population in 2013 and to boarding schools in 2014. ITN distribution increased household ITN ownership from 64% to 87% between 2012 and 2013. Reported ITN usage averaged 52% during the study period [12]. Indoor Residual Spraying with Pirimiphos-Methyl An organophosphate insecticide, pirimiphos-methyl (Actellic 300 CS, Syngenta, Sweden), was selected for the 2014 IRS campaign in Mutasa District.
Trained and experienced spray operators conducted the IRS operations between November 1, 2014 and December 19, 2014. The spraying was done to attain the recommended dose for malaria control of 1.0 g active ingredient/m² [13]. Each of the wards in the district encompasses one or more entire health facility catchment areas. From among the district's 31 municipal wards, 20 malaria-prone wards were selected, which included 43,103 structures and covered a population of 87,275 (Fig 1). According to PMI, 87.3% of targeted structures were found and sprayed, while 92.1% of the target population resided in houses sprayed [14]. Malaria Case Data The health management information system (HMIS) of the Ministry of Health and Child Care routinely collects weekly malaria-related data at the health-facility level in Mutasa District. Indicators include the number of positive rapid diagnostic tests (RDT) using SD Bioline Ag. Pf., patients clinically diagnosed with malaria, and malaria deaths, stratified by age (<5 years and ≥5 years). This passive surveillance system has previously been used to monitor secular trends in malaria morbidity and assess the impact of malaria interventions [15]. The system is operational at 43 health care facilities offering malaria diagnosis and treatment. Two-thirds of these facilities are government funded. Geographic coordinates of the health care facilities and the size of the catchment area population for each health care facility were acquired through the district health team or field visits. Catchment area population size was adjusted for population growth by assuming linear growth during monthly intervals, summing to an annual population growth of 4% (as projected by the 2012 census) [2]. The primary dependent variable was the weekly number of malaria cases confirmed by RDT at each health care facility. Data prior to the completion of the 2014 IRS campaign included 105 epidemiological weeks (2012 week 50-2014 week 50) and data following the completion of the 2014 IRS campaign included 24 weeks (2014 week 51-2015 week 22). Weekly confirmed malaria case counts at each health care facility were standardized as rates per 1,000 population using estimates of the population within each health care facility catchment area. Primary Exposure Variables To assess whether weekly malaria incidence decreased following the IRS campaign, and whether this decrease exceeded that which might be expected on the basis of trends prior to the 2014 IRS campaign, two binary variables were created: time period, indicating before and after the completion of the 2014 IRS campaign (pre vs. post IRS), and spray status, indicating whether a health facility was in the sprayed zone (sprayed versus unsprayed). The period variable estimates the difference in malaria morbidity between the pre- and post-IRS campaign periods (an overall IRS effect) and the spray status variable estimates the difference in malaria morbidity for health facilities located in the sprayed versus unsprayed zones (assessing/controlling for geographic trends). The period by spray status interaction allows the model to estimate a more specific IRS effect: whether there was a difference in malaria morbidity before and after the IRS campaign for health facilities in the sprayed zones. Potential Confounding Variables To account for seasonal variations in malaria cases, indicator variables for the calendar month in which the case was reported were introduced into the model.
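To make this variable construction concrete, the sketch below derives the two binary exposure terms, their interaction and calendar-month indicators from a facility-week table; every column name (facility, week_start, cases, catchment_pop, sprayed_ward) is a hypothetical stand-in rather than the actual HMIS field name.

```python
import pandas as pd

# One row per health facility and epidemiological week (hypothetical columns).
df = pd.DataFrame({
    "facility": ["A", "A", "B", "B"],
    "week_start": pd.to_datetime(["2014-12-01", "2015-01-05",
                                  "2014-12-01", "2015-01-05"]),
    "cases": [40, 12, 25, 30],
    "catchment_pop": [8000, 8026, 5000, 5016],  # already grown ~4%/year, month by month
    "sprayed_ward": [1, 1, 0, 0],
})

irs_completed = pd.Timestamp("2014-12-19")  # end of the 2014 IRS campaign
df["post_irs"] = (df["week_start"] > irs_completed).astype(int)
df["post_x_sprayed"] = df["post_irs"] * df["sprayed_ward"]   # period-by-spray interaction
df["rate_per_1000"] = 1000 * df["cases"] / df["catchment_pop"]

# Calendar-month indicator variables (year indicators can be built the same way).
month_dummies = pd.get_dummies(df["week_start"].dt.month, prefix="month", drop_first=True)
design = pd.concat([df[["post_irs", "sprayed_ward", "post_x_sprayed"]], month_dummies], axis=1)
print(design)
```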
Additionally, to account for year-to-year variation of malaria cases, indicator variables representing the years 2012, 2013 and 2014 also were entered into the model (2015 was the reference year). Several environmental variables that affect the survival and reproduction of the malaria vector and the development, survival, and reproduction rates of the Plasmodium within Anopheles were included as potential explanatory variables. Elevation, rainfall estimate (RFE), day and night land surface temperature (LST) and Normalized Difference Vegetation Index (NDVI) were extracted from remote-sensing sources for each health care facility. Elevation data was estimated at 90-meter resolution from the Shuttle Radar Topography Mission (SRTM) digital elevation model. Decadal (10 day) RFE data were downloaded via the Africa Data Dissemination Service (ADDS). RFE is an estimation of rainfall from the Meteosat 7 satellite that has been calibrated against ground-based rain gauge data. Day and night LST and NDVI were obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) sensor aboard the NASA satellites Aqua and Terra. Daily day and night LST data, expressed as°C were extracted from MOD11A products with 0.25 km by 0.25 km spatial resolution. LST is a proxy for the prevailing temperature of the air. Sixteen-day composite NDVI was extracted from MOD13Q1 products with 1 km by 1 km spatial resolution. NDVI is correlated with the amount of vegetation and typically ranges from -1 to 1. Values close to 1 indicate very dense vegetation, while values zero and below represent water or impervious land forms. Hydrologic analysis was performed using the digital elevation model to create a stream network layer, containing attribute information expressing the classifications of streams using Strahler's method [16]. In this classification, a stream of order 2 is formed when two streams of order 1 join. Stream classifications ranged from 1 indicating low volume streams typically present only during the rainy season, to 4 indicating high volume, year-round streams usually found at lower elevations. The two major rivers in Mutasa District, the Pungwe and Honde, had a stream order of 4. The Euclidean distance from each household to the nearest stream in each of the 4 classes was calculated in ArcGIS 10.2 (ESRI, Redlands, California). The time unit of analysis was one week. Parasitological data were collected on a weekly basis, while environmental data obtained from remote sensing were obtained at different temporal resolutions. Consequently, all time-varying environmental data were rectified into weekly values by taking averages of daily values and disaggregating decadal values. Due to cloud cover and satellite malfunctions, day and night LST were missing for 0.36% and 0.33% of daily values, respectively. Missing values were imputed by assuming a linear trend for non-missing data. In other words, the difference between two succeeding data points was assumed to be equally distributed among the times with no observed value in between. To account for possible elapsing (lag) time in the effect of time-varying environmental variables on the outcome (weekly number of malaria cases confirmed by RDT), lags up to 3 months were incorporated. Three months was chosen as the maximum biologically plausible lag between malaria incidence and environmental variables. Statistical Analysis Descriptive statistics were used to examine the characteristics of the sample prior to and following the 2014 IRS campaign. 
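A minimal sketch of the gap-filling and lagging steps described above, using a hypothetical weekly series and an example 6-week lag (one of the 0 to 3 month lags considered), is shown below.

```python
import pandas as pd

# Weekly night LST for one facility (hypothetical values; NaN = cloud-obscured weeks).
lst_night = pd.Series([22.1, None, None, 23.4, 24.0, None, 25.2, 25.8, 26.0, 24.9])

# Linear-trend imputation: gaps are spread evenly between neighbouring observations.
lst_filled = lst_night.interpolate(method="linear")

# Example lagged predictor: night LST six weeks before the case count being modelled.
env = pd.DataFrame({
    "lst_night": lst_filled,
    "lst_night_lag6": lst_filled.shift(6),
})
print(env)
```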
Rather than conduct facility-level analyses for each of the health care facilities, a negative binomial panel data model was run at the district-level in which the dependent variable was the number of malaria cases by health care facility and week. The number of weekly malaria cases confirmed by RDT for a health care facility was modeled by using a negative binomial regression model for all facilities with catchment area population size as the regression offset to model the rate of RDT positivity. Poisson regression models for the RDT confirmed cases were consistently over-dispersed (greater variation in the data than the Poisson model can accommodate) throughout the analysis and hence the model was replaced with negative binomial regression, which allows for a scaling factor on the model variance to account for over-dispersion. Further, regression inference was based on a generalized estimating equations (GEE) approach to account for the within health facility repeated measures correlation [17]. All variables were introduced separately as independent variables in the univariate adjusted regression models, adjusted for seasonality and health facility. Variables with a p-value <0.1 in the univariate adjusted regression models were considered as potential candidates for the multivariable selection process. A manual stepwise backwards elimination approach was used to select environmental variables and their associated lags. Selection of the model with the best fit, the best working correlation structure in GEE and optimal lag sizes for time-varying environmental factors were determined by comparing quasi-likelihood under the independence model criterion (QIC) values of different models. The QIC is a modification of the Akaike Information Criterion (AIC) for GEE models and similar to the AIC, a model that minimizes the QIC is considered the best fit [18]. Results were expressed in terms of incidence risk ratios (IRR) with corresponding 95% confidence intervals to quantify the expected change in the incidence rate of malaria when the exposure variable was positive or increasing. A p-value <0.05 was considered statistically significant. The root mean square error (RMSE) standardized by catchment area population was used to evaluate the agreement between observed and model predicted number malaria cases for the district. The RMSE per 1,000 catchment area population was also calculated by health care facility to determine differences in predictions by facility. To assess the impact of the 2014 IRS campaign over time on model predictions, sensitivity analyses were conducted varying the cutoff point dividing the time from the week IRS began to the week IRS was completed. The optimal breakpoint was considered the point (week) where the QIC was smallest. Sensitivity analyses were also performed using data aggregated to a month to investigate whether the results were dependent on the time unit of analysis. Environmental data were linked to the health care facility reports mapped in ArcGIS 10.2 (Redlands, California). All statistical analyses were conducted in STATA 11.2 (College Station, Texas). Ethical Considerations The Institutional Review Boards of the Johns Hopkins Bloomberg School of Public Health, the Biomedical Research and Training Institute and the Medical Research Council of Zimbabwe approved this research. The analysis was based on malaria reports collected routinely by the NMCP in Zimbabwe. 
It was not necessary or possible to obtain written informed consent as these reports were de-identified and anonymized prior to aggregation. Results After excluding one health care facility that started reporting data in April 2013, the analytical sample comprised 42 health facilities that had complete data on the weekly number of malaria cases from December 2012 to May 2015. The 20 malaria-prone wards selected for spraying covered the catchment areas of 28 health facilities, while the remaining 11 wards covered the catchment areas of 14 health facilities. The campaign succeeded in attaining the project goal of spraying 85% of target structures [14], and spray coverage by ward ranged from 76% to 96% (Fig 1). During the study period, 124,206 malaria cases were reported, of which 113,208 were from health facilities in sprayed wards. During the two high transmission seasons preceding the IRS campaign, the number of malaria cases reported averaged 42,586 compared to 12,222 malaria cases in the high transmission season following application of the organophosphate. The malaria incidence rate fell from 270 per 1,000 in the two high transmission seasons pre-IRS to 71 per 1,000 post-IRS (Table 1), representing a crude incidence risk ratio of 0.26. Several significant associations were identified between malaria incidence and environmental variables in both the univariate and multivariate negative binomial regression models ( Table 2). After adjustment, an increase of 10 millimeters in RFE (estimated rainfall) resulted in a 2% increase in malaria incidence 6 weeks later (IRR 1.02, 95% CI 1.01-1.03). Each 1°C increase in night LST at a 10-week lag resulted in a 2% increase in malaria incidence (IRR 1.02, 95% CI 1.01-1.04). In contrast, each 1°C increase in day LST at a 1-week lag resulted in a 2% reduction in malaria incidence (IRR 0.98, 95% CI 0.97-0.99). Malaria incidence decreased with increasing elevation; every 100-meter increase in elevation was associated with a 20% reduction in malaria (IRR 0.80, 95% CI 0.73-0.88). A similar reduction in malaria incidence was found for every one kilometer increase in distance from a second order stream (IRR 0.81, 95% CI 0.76-0.86) ( Table 2). Adjusting for annual and seasonal trends, environmental variables and clustering at health care facilities, there were no significant differences in malaria incidence between health facilities in sprayed wards compared to unsprayed wards prior to the 2014 IRS campaign (IRR 0.94, 95% CI 0.65-1.34). However, there was a 38% decline in predicted malaria cases after the 2014 IRS campaign in sprayed wards compared to before (IRR 0.62, 95% CI 0.51-0.76). In contrast, no change in predicted malaria incidence was observed after the 2014 IRS campaign compared to before in health facilities located in unsprayed wards (IRR 1.02, 95% CI 0.74-1.41) ( Table 2). The observed weekly cases for the study area agreed closely with the predicted counts throughout the study period (Fig 2). For the entire study period, the final model predicted 132,458 cases across all 42 health facilities, whereas, 124,206 cases were reported. However, there was variability by health facility in how closely the predicted and observed total number of cases were matched. The observed number of malaria cases was higher than expected given the surrounding environmental conditions and seasonal variations at three health facilities (Old Mutare, Honde Mission, and Hauna). 
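The facility-level agreement discussed here and in the following paragraph was summarized with an RMSE scaled to catchment population; one plausible reading of that computation, with hypothetical observed and predicted series, is sketched below.

```python
import numpy as np

# Hypothetical weekly observed and model-predicted case counts for one facility.
observed = np.array([35, 42, 58, 61, 40, 22])
predicted = np.array([30, 45, 50, 70, 38, 25])
catchment_pop = 8000  # assumed catchment area population

# RMSE of weekly rates per 1,000 (one way to standardize by catchment population).
obs_rate = 1000 * observed / catchment_pop
pred_rate = 1000 * predicted / catchment_pop
rmse_per_1000 = np.sqrt(np.mean((obs_rate - pred_rate) ** 2))
print(f"RMSE: {rmse_per_1000:.2f} cases per 1,000 catchment population per week")
```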
The model also over-predicted the number of malaria cases given the surrounding environmental conditions and seasonal variations in some health facilities (Zongoro, Dreaanane and Gatsi). The RMSE per 1,000 catchment area population was poor for St Peter's, Sagambe, and EHPL health facilities and good for Mutasa, Samanga and Bonda health facilities (S1 Table). The model with first order autoregressive (AR1) correlation was found to fit the data reasonably well compared to other choices of a working correlation matrix. The fit of the final model was assessed using the QIC and RMSE. The addition of environmental variables to the model improved the model fit and predictions, as the QIC was lower for the final model than the null model (QIC for the null model 17,867 versus QIC for the full model 3,934). Minimal differences were identified after assessing the sensitivity of the findings to different cut-offs for the binary variable indicating time before and after the 2014 IRS campaign, suggesting the results were robust to the cut-off used. Furthermore, sensitivity analyses consisting of data aggregated to the month level produced similar estimates of the effectiveness of the IRS campaign. Discussion Owing to insecticide resistance in An. funestus in eastern Zimbabwe, the NMCP with support from PMI began a large-scale IRS campaign with the organophosphate pirimiphos-methyl in four high transmission districts in Manicaland Province, including Chimanimani, Mutare, Mutasa and Nyanga Districts. Using health facility surveillance data, a reduction in the incidence of malaria was observed in one of the four high transmission districts. During the subsequent high transmission season following the switch from pyrethroids to organophosphates, a 38% decline in malaria incidence was reported by health care facilities from wards that conducted IRS, after accounting for possible confounding by environmental and climatic variables. Previous research demonstrated that switching to an organophosphate insecticide for IRS effectively reduced biting rates and vector densities in areas with pyrethroid-resistant vectors in Ghana, Benin, Tanzania and Zambia [19][20][21][22][23]. This study demonstrates a reduction in malaria morbidity using health care facility surveillance data. Malaria transmission was associated with rainfall, proximity to second order streams, elevation and temperature. These results concur with previous studies that found that elevation [5], temperature, rainfall [6,24] and proximity to streams [25,26] were associated with malaria risk. Zimbabwe has experienced periodic droughts that could impact malaria transmission and confound interpretation of the association between malaria control interventions and outcomes. After adjusting for climatic variables and seasonality, malaria incidence rates decreased following the 2014 IRS campaign, supporting the conclusion that switching to organophosphates in this setting contributed to the observed reduction in malaria morbidity. No major political, socio-economic or health-care changes with the potential to reduce malaria morbidity by almost half occurred in Mutasa District during the study period. Typically, data from health care facilities only include the number of suspected malaria cases. However, the HMIS in Zimbabwe allows reports of confirmed malaria cases. In calculating incidence rates, the denominator used was the estimated catchment area population size.
The reliability of this value has been questioned as this assumes that individuals visit the closest health facility in their catchment area. However, the main results did not change after including an offset for catchment area population size, indicating that the reported catchment area population size may be a reliable estimate. The study also underscores the utility of HMIS data in the evaluation of population level interventions. The HMIS has the advantage of providing quality data quickly and easily, with minimal additional investment. Additionally, HMIS reflects the burden of disease on the health system. Results from this study further suggest that passive surveillance data from the HMIS in Zimbabwe was sufficiently sensitive, and the effect size sufficiently large, to detect a reduction in malaria morbidity following the 2014 IRS campaign. There are several important limitations to this analysis. Causal inferences between IRS and reduction in malaria incidence should be made with caution as spraying was not implemented as part of a randomized control trial. However, data from 14 health facilities located in unsprayed wards were included in the analysis to serve as a comparison group and better estimate secular changes in malaria morbidity unassociated with the 2014 IRS campaign, particularly changes in rainfall. Although the univariate model indicated that health care facilities in unsprayed wards had a lower burden of malaria, the multivariable model showed no significant differences between health care facilities in sprayed and unsprayed wards prior to the 2014 IRS campaign, suggesting that environmental and climatic variables adequately adjusted for the differences. However, the analyses did not account for other factors such as population movement, changes in health seeking behaviors or changes in ITN coverage that also could impact malaria incidence. This seems reasonable given that the rural population of Mutasa District is relatively stable, with access to health facilities providing malaria diagnosis and treatment. Following the distribution of ITNs in 2013, household surveys recorded an increase in household ITN ownership but no changes in ITN usage (unpublished data); therefore, it is unlikely the ITN distribution in 2013 affected the estimate of the reduction in malaria incidence. Additionally, although the number of suspected malaria cases was not explicitly modeled, a descriptive analysis did not indicate changes in diagnostic practices over the study period (data not shown). The HMIS in Zimbabwe has been in place for decades and was previously used to evaluate the impact of changes in malaria morbidity [15], construct empirical seasonality maps [23] and describe the spatial and temporal distribution of malaria [27,28]. The pronounced decline in malaria morbidity observed is evidence supporting the benefit of pirimiphos-methyl in an area with high levels of pyrethroid resistance and with high coverage in the targeted districts. Although the IRS strategy was successful, continued entomological monitoring for insecticide resistance will be necessary. With emerging resistance to multiple insecticides, novel strategies to manage insecticide resistance need to be developed. Supporting Information S1
v3-fos-license
2018-04-03T06:09:46.614Z
2014-07-21T00:00:00.000
10963034
{ "extfieldsofstudy": [ "Medicine", "Computer Science" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://europepmc.org/articles/pmc4169384", "pdf_hash": "60a6ef1e58f9bfe7af85e75d10b07e0fe873c738", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42511", "s2fieldsofstudy": [ "Education", "Medicine" ], "sha1": "60a6ef1e58f9bfe7af85e75d10b07e0fe873c738", "year": 2014 }
pes2o/s2orc
Combining Dedicated Online Training and Apprenticeships in the Field to Assist in Professionalization of Humanitarian Aid Workers: a 2-year Pilot Project for Anesthesia and Intensive Care Residents Working in Resource Constrained and Low-income Countries Introduction: As a result of the gaps in humanitarian response highlighted by several reports, the international community called for an increased professionalization of humanitarian aid workers. This paper describes a pilot project by an Italian university and a nonprofit, non-governmental organization to implement a medical apprenticeship in low-income countries during Anesthesia and Intensive Care Medicine residencies. Methods: Before deployment, participants were required to complete a dedicated online training course about safety and security in the field, principles of anesthesia at the district hospital level, emergency and essential surgical care, essentials of medical treatment in resource-constrained environments and psychological support in emergencies. Results: At the end of the program, a qualitative self-evaluation questionnaire administered to participants highlighted how the project allowed the participants to advance their professional skills when working in a low-resource environment, while also mastering their adapting skills and the ability to interact and cooperate with local healthcare personnel. The project also proved to be a means for personal growth, making these experiences a recommendation for all residents as a necessary step for the professionalization of healthcare personnel involved in humanitarian aid. Introduction In 2010 there was an estimated total of 274,000 humanitarian workers worldwide, with a growth rate of 4% from the previous year. Although the system is still dominated, in terms of operational presence and resource share, by a small group of NGOs and international organizations whose combined humanitarian expenditure in 2010 exceeded $2.7 billion, there are an estimated 4,400 NGOs worldwide that undertake humanitarian action on an ongoing basis 1.
This heterogeneous and fragmented nature of the humanitarian sector has hindered its ability to efficiently respond to crises worldwide 2 , and concerns have been expressed about the clinical competencies and practices of foreign medical teams (FMTs) during the response to large-scale disasters such as the 2010 Haiti earthquake or the 2004 South Asian tsunami 3 . This resulted in the international community calling for increased professionalization of the humanitarian sector through "an international professional association, the development of core competencies that drives curricula, and the creation of a universal certification system for aid workers" 4 . A survey conducted among professionals involved in humanitarian assistance further confirmed those needs 5 . In this respect, the Consortium of British Humanitarian Agencies (CBHA) published a framework for core and leadership humanitarian competencies 6 that has served as a model for developing education and training curricula.

However, as highlighted by the U.K. Department for International Development (DFID) Humanitarian Emergency Response Review, there is still "no straightforward professional route into humanitarian work" and "in every major emergency there are still significant numbers of aid personnel who lack some of the skills essential to their jobs" 7 . Earlier programs to promote global health among residents found many barriers to successful implementation in the academic curriculum, such as time constraints, lack of approval and funding concerns 8 .

Due to these factors, the Research Center In Emergency and Disaster Medicine and Computer Science Applied To Medical Practice (CRIMEDIM) 9 , in collaboration with Rainbow for Africa Onlus 10 , an Italian non-profit organization, decided to design and implement a pilot training course for residents in Anesthesia and Intensive Care Medicine. This combined theoretical and essential knowledge with an onsite tutored internship in a resource-constrained field setting, as an internationally recommended apprenticeship. The ultimate goal was to better prepare future humanitarian aid workers through a standard professionalization process. This paper reports on the 2-year pilot project, detailing the characteristics and features of the training course and the apprenticeship, as well as the feedback of the participants.

Institutions Involved CRIMEDIM is an interdepartmental university research center established in 2007 at the Università del Piemonte Orientale (Novara, Italy) devoted to the medical scientific study of disasters. Its aims are to promote and foster research and education in emergency and disaster medicine, with particular interest in the application of new informatics technologies. Its ultimate goal is to improve health care system response in emergency, disaster and crisis situations by increasing the knowledge, skills and attitudes of health care providers based on scientific evidence and on field experience at regional, national and international levels.
CRIMEDIM is also a member of the established Emergency Medicine and Disaster Management Academy (EMDM Academy) based in Geneva 11 . Rainbow for Africa Onlus is an Italian non-profit non-governmental organization established in 2009 whose mission is to foster quality of and access to healthcare in Africa through sustainable interventions. Rainbow for Africa Onlus is primarily involved in development projects for the professional training of local healthcare personnel in Africa, specifically focused on emergency and essential surgical care. Rainbow for Africa Onlus has been working in Burkina Faso, Ethiopia, Senegal, Haiti and Sierra Leone. It is also involved in Continuous Medical Education courses for Italian healthcare personnel (Tropical Medicine and Ultrasound in low-resource settings) and public awareness campaigns, particularly in schools. Both CRIMEDIM and Rainbow for Africa Onlus are members of the WHO Global Initiative for Emergency and Essential Surgical Care (WHO GIEESC) 12 .

Selection of Participants All residents enrolled in the Anesthesiology and Intensive Care residency program at the Università del Piemonte Orientale (Italy) were invited to participate in the pilot project. Residents were made aware that participation in the project was voluntary and would not affect their residency evaluation; however, they were informed that completion of the online course was required before deployment. Candidates had to meet the following prerequisites: an adequate level of professional competencies with regard to the medical activities to be performed, as assessed by the director of the residency program; good knowledge of either English or French (B1 level of the Common European Framework of Reference for Languages 16 ), depending on the official language spoken in the host country; and strong motivation for deployment, evaluated by a questionnaire and interviews by representatives of the two institutions.

Online Training The online training course was designed in July 2011 and hosted on the CRIMEDIM website using the Modular Object-Oriented Dynamic Learning Environment (MOODLE), a customisable, flexible and interactive Learning Management System (LMS) 17 .
The selection of the topics was based on the review of existing professional organizations' training programmes, definitive humanitarian and disaster medicine references, and the experience of the experts from the two institutions (including academic and government). All the experts were anaesthesiologists who had been deployed at least once in the aftermath of a major disaster or during a humanitarian aid program. Following analysis of international guidelines on disaster response training, peer-reviewed publications 4, 6, 13, 14, 17, and programs such as the H.E.L.P. (Health Emergencies in Large Populations) course and the United Nations Disaster Management Training Programme (DMTP), learning objectives were fixed. Topics were then structured into a framework represented by subject headings. To identify the content details, a systematic literature search in Medline and Google Scholar with the key terms [training], [disaster], [crisis], [management], [humanitarian], [competency/ies] and [resource-poor and -constrained environments] was conducted. Complementary Internet searches were also performed, and resources such as World Health Organization (WHO), International Committee of the Red Cross (ICRC) and Médecins Sans Frontières (MSF) reference guidelines were included. Curricular committee consensus on those entities regarded to be most important was required for item inclusion. The course was then constructed and resulted in 7 academic units with an estimated completion time of 32 hours (4 FTE – Full-Time Equivalent). A summary of the units is provided in Table 1. In order to pass the online training, participants had to complete a final test to assess their level of knowledge. The test consisted of 12 multiple-choice questions on the topics dealt with in the units of the training, such as: evaluation of an unconscious trauma patient after a car accident; approach to a mass casualty incident due to a fire in the hospital; management of a postpartum hemorrhage; evaluation of a burnt patient; techniques of loco-regional anesthesia and side effects of local anesthetics; ketamine anesthesia; and anesthesia breathing circuits.

Table 1. Summary of units included in the online training course:
• Definitions of complex emergencies (IASC, December 1994) and characteristics of the UN response framework.
• Security in the field: the module included the "UN Basic Security in The Field" online training course 19 and the report by the ICRC on violence affecting healthcare personnel 20. For the module to be considered completed, each candidate had to submit the certificate provided upon completion of the "UN Basic Security in The Field" online training course.
• Host country (general information, health infrastructures and required vaccinations): the module provided relevant information about the host country profile, including historical context, demographic data, cultural context, economy and currency, and health services and infrastructures. Candidates also had to comply with general rules as recommended by the Italian Ministry of Foreign Affairs' Crisis Unit 21 and provide proof of having been given the required vaccinations.
• Anesthesia at district hospital: the module discussed the main aspects of anesthesia at a district hospital in a low-income country, in particular international standards for safe anesthesia 22, available medicines 23, available infrastructures and equipment 24, 25, and general and regional anesthesia techniques, including monitoring of the anesthetized patient and postoperative care 26.
• Surgery at district hospital: the module described the main features of surgical care at a district hospital 27, 28, 29, with a focus on essential trauma care 30 and management of burns.
• Medical treatment in resource-constrained environments: the module covered the essentials of medical treatment in resource-constrained environments, including obstetrics in remote settings, according to MSF clinical guidelines 31, 32.
• Mental health and psychological support: the module presented mental health and psychological support in emergency settings according to IASC guidelines 33.

Medical Apprenticeships Upon completion of the aforementioned training course, candidates had the opportunity for a 30-day apprenticeship period designed in the framework of the field programs carried out by Rainbow for Africa in Burkina Faso, Senegal and Sierra Leone, where safety and security were acceptable (i.e., no unstable environments). A senior anaesthesiologist with tutoring experience was present for the entire training period. Based on the objectives of the development project, the needs of the host country, and the learning objectives of the candidates' curricula, an individual learning agreement was developed 13. The learning agreement recognized the student's abilities and included the educational objectives and the learning activities the student was expected to accomplish during the field practicum, as well as the components of the supervision which was agreed to and provided to the student.
During the apprenticeship, candidates had the opportunity to be actively involved in the clinical activities performed at the hospital they were assigned to, specifically: anesthetic management of patients due to undergo surgical interventions (both scheduled and emergent), management of medical emergencies requiring life support maneuvers, postoperative medical care, and tutored outpatient case management. Furthermore, candidates could take part in the training sessions organized for the local healthcare personnel. They were involved either in the planning prior to departure as programme designers, or in the actual training sessions as instructors, or both. All planned activities were considered relevant to the candidates' residency curriculum. Framework agreements were then arranged between the contributing parties and the local institutions; all candidates were given the required pre-deployment vaccinations as well as medical and professional insurance.

A debriefing was planned at the end of each apprenticeship with the purpose of evaluating learning outcomes. Though a standardized evaluation process was not used, the debriefing included a report on the activities performed and the related outcomes (phrased according to Bloom's Taxonomy) written by the supervising senior anesthesiologist. Participants had the opportunity to further express their feedback to representatives of the involved organizations. Participants were also encouraged to provide suggestions on how to improve the program and were involved in course handovers between successive years of training.

Self Evaluation At the end of the project, participants were asked to complete a self-evaluation questionnaire with the aim of analyzing the impact of the project on their professional skills and their personal perspective on the value of the project, and to document their opinions about the value of the professionalization of the humanitarian aid sector. Participants expressed their level of agreement with 10 questions using a five-point Likert-type scale (strongly disagree; disagree; neither agree nor disagree; agree; strongly agree). Participants were allowed to further comment on the project using an open-ended item.

The questionnaire was administered using SurveyMonkey (SurveyMonkey LLC, Palo Alto, California, USA) via a direct e-mail invitation to complete the survey. Written informed consent was obtained from each participant prior to the start of the course. To assure confidentiality, the students submitted the satisfaction questionnaire anonymously. Since all data were de-identified, the evaluation was deemed exempt from institutional review approval by the local Ethics Committee.
Results Currently, the training course has been successfully completed by 8 Anesthesia and Intensive Care Medicine residents (Table 2) who voluntarily participated in the program, met the prerequisites and proficiently passed the online training final test (defined as having correctly answered at least 66% of the test questions), and therefore had the opportunity to participate in a medical apprenticeship, described as follows (Table 3): teams of residents accounting for a total of 6 participants joined the development program carried out in Burkina Faso (October 2011, March 2012, September 2012), under the tutorage of a senior anesthesiologist.
v3-fos-license
2016-09-21T08:51:56.807Z
2016-01-01T00:00:00.000
49413454
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.sagepub.com/doi/pdf/10.5772/62921", "pdf_hash": "823d51934480ce29021a1fb38a12fc61cd7a6e38", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42512", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "sha1": "823d51934480ce29021a1fb38a12fc61cd7a6e38", "year": 2016 }
pes2o/s2orc
Global Health Innovation Technology Models Chronic technology and business process disparities between High Income, Low Middle Income and Low Income (HIC, LMIC, LIC) research collaborators directly prevent the growth of sustainable Global Health innovation for infectious and rare diseases. There is a need for an Open Source-Open Science Architecture Framework to bridge this divide. We are proposing such a framework for consideration by the Global Health community, by utilizing a hybrid approach of integrating agnostic Open Source technology and healthcare interoperability standards and Total Quality Management principles. We will validate this architecture framework through our programme called Project Orchid. Project Orchid is a conceptual Clinical Intelligence Exchange and Virtual Innovation platform utilizing this approach to support clinical innovation efforts for multi-national collaboration that can be locally sustainable for LIC and LMIC research cohorts. The goal is to enable LIC and LMIC research organizations to accelerate their clinical trial process maturity in the field of drug discovery, population health innovation initiatives and public domain knowledge networks. When sponsored, this concept will be tested by 12 confirmed clinical research and public health organizations in six countries. The potential impact of this platform is reduced drug discovery and public health innovation lag time and improved clinical trial interventions, due to reliable clinical intelligence and bio-surveillance across all phases of the clinical innovation process. Challenges Faced by Emerging Markets and Developing Countries in Global Health Innovation Efforts LMICs and LICs have been impacted by chronic disparities in the adoption and advancement of clinical trial and population health innovation efforts, due to conflicting regulatory requirements for Phase I to Phase IV clinical trial models primarily defined in the Western world [1,2]. An excerpt from the WHO Public Health Innovation report [3] summarizes this current dilemma: "Scientists in developing countries should be involved in the development of the research protocol from the beginning to ensure that local health needs of developing countries are taken account of. Otherwise, the reality will be that physicians and researchers in developing countries who take part in conducting clinical trials are placed in the role of data collectors for trials designed only to fit the needs of people in the developed world. Measures and policies should be implemented to ensure that these physicians and researchers can design and initiate clinical trials that address health problems in their own countries, rather than fulfil research protocols designed elsewhere." Examples of Impeded Progress in Emerging Markets: India In the last two decades, India has become one of the most sought-after locations within the cluster of emerging markets for global clinical trials, due to its potential for fast recruitment of patients. However, India's ability to sustain its growth within the drug discovery domain continues to be significantly challenged by both regional and global regulatory changes [4]. According to Dr. Arun Bhatt, President of Clininvent Research Private Limited in Mumbai, India, the clinical trial protocols during this period of growth within India became more complex, demanding and inefficient for both the research teams and patients. Between 1999 and 2005 [4], the average number of inclusion criteria increased threefold. 
The average number of procedures grew annually by 6.5%, reaching a median number of 35 procedures in 2005. In 2012, a typical phase III protocol included 50 eligibility criteria, 167 procedures and 13 endpoints [4].

Gaps in Global Health Innovation The root causes of these disparities lie in the lack of an agile technology infrastructure and robust ideation processes that can be adapted to LMIC and LIC clinical research environments. Fragmented clinical trial processes and infrastructure deficiencies have left many promising research scientists unable to fully collaborate with their HIC peers. As a result, these researchers are unable to consistently partner on acute pandemic viral outbreaks, preventive medicine initiatives and new vaccine development that could save millions of lives. The International Council for Harmonisation (ICH) has identified the following unresolved areas of drug discovery innovation [5,6], shown in Table 1. According to ICH, if these barriers are addressed, significant progress in the area of drug discovery and medical innovation can be achieved, resulting in [5,6] a reduction in the costs of internal failures (rejects, reworks, reprocessing and investigations). This also includes optimized regulations to enable LMICs and LICs to meet drug discovery submission criteria and expedite the availability of medicines to patients. There are clear signs that the Healthcare and Life Science industries are ready to address these barriers, due to [7,8] a growing demand for more focused research on infectious, rare, poverty-based and tropical diseases worldwide. In addition, there is a reinvigorated global movement towards clinical trial policy harmonization and acceptance of imported, de-identified clinical data sets for cross-trial analysis. Collectively, these changes in the Global Health Innovation community have resulted in the formation of new collaboration frameworks designed to bridge infrastructure and clinical intelligence disparities between HICs and LICs.

Pharmaceutical and Medical Innovation Model Approaches In response to the industry-wide push for expanded innovation worldwide, the International Federation of Pharmaceutical Manufacturers and Associations (IFPMA) has developed an innovation framework called the Pharmaceutical Innovation Platform (PIP). The focus of PIP is to provide a framework for a sustainable clinical innovation model, supported by collaborative partnerships, to achieve new heights in drug discoveries and healthcare solutions [9]. An excerpt from its PIP framework emphasizes this point [9]: "Healthcare, science and medicine challenges are global -all parties need to collaborate to meet these challenges effectively. Innovation is the vital element in this effort: when the public sector, industry, and civil society pull together to promote innovation, public health improves and lives are saved." The IFPMA continues to state that PIP is not only achievable for industrialized countries, but for developing countries as well. "Partnerships among established R&D companies, local R&D companies, international organizations, and local governments can be effective ways to harness the expertise of the various partners to find new treatments and cures for diseases which primarily affect poor countries [9]".
Table 1. Unresolved areas of drug discovery innovation in the Global Health Innovation community:
• Fragmented approaches to quality systems related to Good Clinical Practices (GCP) internationally
• Suboptimal deployment of limited resources to identify, enact or support effective elements of a quality system and continual improvement, by both industry and regulatory agencies
• Delays in the availability of medicines to patients round the world due to vast disparities in access to robust quality management systems and mature clinical trial practices
• Delays in the implementation of innovation and continual improvement of existing products due to differences in expectation across differing regulatory bodies around the world
• Inability to implement consistently, across stakeholders, best practices related to Total Quality Management in other industries, which contributes to a lack of agility and repeatability in the quality of clinical trial practices

The IFPMA's PIP model is composed of healthcare delivery characteristics that are necessary for an effective clinical innovation environment to grow locally, nationally and internationally [9]. Table 2 lists the attributes that the IFPMA outlines in its PIP model [9]. The model goes into greater detail regarding the tactical aspects of each of these areas. We saw in our research congruent themes between IFPMA, WHO and the FDA in this regard, pertaining to scalable clinical innovation models for LMICs and LICs to use as their strategic compass in navigating towards locally sustainable research efforts. Table 3 is a summary of both the FDA's and WHO's adaptable innovation models. The model ultimately enables LMIC and LIC innovators to transition from being the primary recipients of innovation to originators of clinical innovation. It will also take bold and unconventional thinking in order to overcome the socioeconomic and infrastructure barriers that innovators face within these regions of the world. Both WHO and the FDA provided solid narrative guidance on implementation approaches to their models. However, in our systematic review there was limited evidence, beyond their illustrative examples, of measurable outcomes from the use of these models by LMICs and LICs. This has inspired our thinking as healthcare innovators to re-architect and integrate best practices from IFPMA, WHO, the FDA, and the new agile Open Source technology and Open Science frameworks, which are gaining greater acceptance among clinical intelligence communities as a platform of choice for the next generation of collaborative research [10].

Next Generation of LMIC and LIC Virtual Collaboration Models The movement towards Open Source-Open Science collaboration is evident in the new e-health platforms that are taking shape to address these challenges. As the Global Health community enters into a broader range of eHealth adoption efforts, we see pioneering growth and expansion of mHealth (mobile phone technology used for healthcare data exchange and patient engagement) solutions that are taking a progressive approach to enabling more effective patient engagement and peer-to-peer clinical decision support at the point of care. This has also led to supporting new patient engagement research models for public health bio-surveillance and education efforts related to infectious disease prevention and control and medication adherence in LIC and LMIC regions of the world. As a result, some mHealth initiatives have moved the needle with the adoption of new patient engagement efforts, simply by leveraging this new technology.
Yet it is still in its infancy as a stand-alone solution that ignites locally based clinical innovation. The impact and sustainability of clinical innovation powered by mHealth solutions are still uncertain, due to factors outlined by WHO in its 2015 guide for mHealth solutions [11] and shown in Table 4. The goal of an integrated Open Source-Open Science platform is to take mHealth one step further, by enabling re-usable clinical intelligence that can be shared and redistributed in the context of clinical innovation before, during and after care is delivered. mHealth thus becomes an essential building block of this framework by providing a timely data feed for the innovation process. When mHealth is coupled with an Open Source-Open Science virtual collaboration environment, it will enable LMIC and LIC research scientists to engage in interactive drug discovery and global knowledge sharing for clinical innovation. As a result, it may reduce drug discovery lag time, by enabling timely collaborative clinical trial data sharing and bio-surveillance intelligence across all phases of drug discovery.

Conceptual Open Source-Open Science System Design Approach Our conceptual Open Source-Open Science model, Project Orchid, incorporates the above requirements in order to strengthen existing clinical trial partnerships and support new collaborative efforts that have disparate geographic, cultural and regulatory drivers. We have designed an integrated operational governance and clinical innovation engagement platform that can be adapted to the needs of each stakeholder organization. The intent of our model is to illustrate, through a multi-national TB clinical trial cohort, the potential outcomes of using the Open Source-Open Science framework (Table 5).

Open Source-Open Science Innovation Framework Project Orchid consists of two integrated offerings: an Innovation Engagement Framework and a technology-enabled Collaboration Platform (Figures 1 and 2), which together provide a discovery environment and best practices framework to harmonize clinical trial and public health programme governance approaches and produce improved research and care delivery outcomes. Our proposed Open Science Collaboration Maturity Model is used to identify and address current process, resource and competency gaps and disparities in key areas that impede medical innovation across the group of stakeholder organizations. This exercise will enable innovation teams to apply knowledge management, process harmonization and re-engineering techniques and risk mitigation strategies to bridge potential points of failure within the collaboration effort. The model is a process performance hierarchy, designed to address peer-to-peer organizational disruption due to change management, and spans three domains (a minimal sketch of tracking maturity gaps across these domains follows the list):
• Programme and Policy Harmonization: this consists of addressing regulatory and cross-cultural organizational dynamics, process adoption and motivation models.
• Ideation and Strategy: this consists of addressing medical innovation modelling and clinical trial bridging strategies.
• Operational Engagement: this consists of facilitating, across the cohorts, the identification of service-level key performance indicators, knowledge management and delivery system optimization efforts after the launch of their medical innovation.
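As a minimal sketch of how maturity gaps across these three domains could be tracked during the facilitated assessment, the code below compares "as-is" and "to-be" ratings per domain. The five-level CMMI-style scale and the numeric ratings are assumptions for illustration; the actual level definitions live in Table 6, which is not reproduced here.

```python
from dataclasses import dataclass

# Hypothetical five-level scale (CMMI-style); Project Orchid's actual level
# names and definitions are given in Table 6 and may differ.
LEVELS = ["Initial", "Managed", "Defined", "Measured", "Optimized"]

DOMAINS = [
    "Programme and Policy Harmonization",
    "Ideation and Strategy",
    "Operational Engagement",
]

@dataclass
class DomainAssessment:
    domain: str
    as_is: int   # index into LEVELS, from the facilitated SWOT/self-assessment
    to_be: int   # target level agreed by the collaborating partners

    def gap(self) -> int:
        return self.to_be - self.as_is

def prioritize(assessments):
    """Order domains by the size of their maturity gap, largest first."""
    return sorted(assessments, key=lambda a: a.gap(), reverse=True)

if __name__ == "__main__":
    example = [
        DomainAssessment(DOMAINS[0], as_is=1, to_be=3),
        DomainAssessment(DOMAINS[1], as_is=2, to_be=3),
        DomainAssessment(DOMAINS[2], as_is=0, to_be=3),
    ]
    for a in prioritize(example):
        print(f"{a.domain}: {LEVELS[a.as_is]} -> {LEVELS[a.to_be]} (gap {a.gap()})")
```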
The structure of the model is to encourage transparency and process agility, while leveraging the collective strengths of all stakeholder organizations in order to accelerate opportunities for business development, product innovation and service management in a more fluid manner [10][11][12][13]. This framework aligns to the ICH Quality System concept tenets: Process Performance and Product Quality Monitoring System; Corrective Action/Preventive Action (CA/PA) System; Change Management System; Management Review; Knowledge Management; and Quality Risk Management [5,6]. Each level of the maturity model corresponds to a key set of measures and methods that a collaborative partnership focuses on in its efforts to enable sustainable innovation. A facilitated Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis is done across all core stakeholders of the partnership to determine how to address disparities in the "as-is" and "to-be" model for collaboration, followed by a risk mitigation and change management governance model adopted at all levels of the joint venture. It is recommended that, prior to initiating a formal collaborative partnership, an organization should perform a self-assessment to be fully aware of its SWOT findings and perform an internal cognitive walkthrough, or "what if" scenario, with potential partnerships, in preparation for change management stressors or business drivers that may be encountered as part of the harmonization effort for joint ventures [12,13]. The table below (Table 6) itemizes each level of the maturity model and how it is applied across all three domains of the collaboration framework.

Clinical Trial Data Quality and Regulatory Policy Harmonization The incremental nature of the Collaboration Maturity Model can be applied when multi-national research teams with LIC and LMIC partners attempt to harmonize GCP frameworks and regulatory standards that align to key bioethics and privacy guidelines related to the Clinical Trial Data Management systems used in the Life Science industry. These include, at a minimum, the standards listed in Table 7, which spans both international-based and US-based clinical data quality standards and guidelines.

Open Source-Open Science Solution Architecture and System Design A proposed Open Source-Open Science Collaboration solution architecture should leverage Open Source technology frameworks for 2D and 3D data visualization and healthcare interoperability standards, such as HL7 Fast Healthcare Interoperability Resources (FHIR), Clinical Document Architecture (CDA) and Digital Imaging and Communications in Medicine (DICOM), which are widely used in electronic medical records, medical devices, picture archiving systems and registries worldwide, to enable system-to-system integration and data exchange across the clinical research and care management continuum [13][14][15][16][17][18][19][20]. Open Source-Open Science platforms, like Project Orchid, will also leverage technology components, such as Linux and Drupal, and long-term scalable telecommunication frameworks, such as Internet2 and Unified Communication protocols, which optimize secured clinical data exchange in low-bandwidth regions of the world.
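As a concrete, illustrative example of the kind of standards-based exchange described above, the sketch below builds a minimal HL7 FHIR R4 Observation and posts it to a hypothetical research FHIR endpoint. The endpoint URL, the pseudonymized subject identifier and the use of the Python requests library are assumptions for illustration, not part of Project Orchid's published design.

```python
import json
import requests  # assumes the standard 'requests' HTTP library is available

# Hypothetical FHIR endpoint of a collaborating research data hub (illustrative URL).
FHIR_BASE = "https://fhir.example-research-hub.org/baseR4"

# A minimal, de-identified FHIR R4 Observation: one ALT liver-enzyme result
# reported during a trial visit, coded with LOINC and linked to a study subject.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "1742-6",  # LOINC code for alanine aminotransferase (ALT)
            "display": "Alanine aminotransferase [Enzymatic activity/volume] in Serum or Plasma",
        }]
    },
    "subject": {"reference": "Patient/study-subject-0042"},  # pseudonymized subject ID
    "effectiveDateTime": "2016-03-01",
    "valueQuantity": {
        "value": 112,
        "unit": "U/L",
        "system": "http://unitsofmeasure.org",
        "code": "U/L",
    },
}

def push_observation(obs: dict) -> str:
    """POST the Observation to the FHIR server and return the new resource id."""
    resp = requests.post(
        f"{FHIR_BASE}/Observation",
        data=json.dumps(obs),
        headers={"Content-Type": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("id", "")

if __name__ == "__main__":
    print("Created Observation:", push_observation(observation))
```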
The proposed implementation model for Project Orchid is to incorporate these standards (Table 8) within Project Orchid's capabilities as a shared Platform-as-a-Service offering, to enable global pharmaceutical and biotechnology firms and government-based Life Science agencies to bridge the clinical data exchange connectivity divide with their LIC and LMIC pilot sites, which have very limited IT and mHealth infrastructure resources.

Unified Communication and Usability Standards The following table (Table 9) is a partial list of the protocols and standards that will be used in the development of the Unified Communications capabilities, as recommended by the Texas A&M Internet2 Technology and Evaluation Center (ITEC); it includes, for example, the standard that allows the transmission of emergency alert mass notifications. We will also use the ISO 9241-11 standard to verify the following criteria for user interface design, recommended by Virginia Tech's Advanced Research Computing Visionarium Lab: effectiveness, efficiency and satisfaction (ISO 1998). This will also take into account visually and hearing-impaired users. The use of these standards will ensure effective adoption across diverse organizational, language and cultural attributes for our proposed multinational pilot programme.

Open Science User Profiles In order to ensure that each member of the initiative has the appropriate access rights to the Open Source-Open Science Platform, the system will provide role-based capabilities to maintain the data integrity of the system and ensure alignment to Good Clinical Practices (GCP). Table 10 is a limited representation of a typical user profile configuration for a collaborating Clinical Research Organization (CRO).

Implementation Model for an Open Source-Open Science Initiative In order to illustrate how an Open Source-Open Science platform can be implemented for a viable clinical innovation effort across multi-national stakeholders, Project Orchid has targeted two scenarios; the first is a multi-national Phase II/III TB clinical trial toxicity and efficacy 3D bio-surveillance effort, described below.

Multi-National TB Phase II/III Clinical Trial Toxicity and Efficacy 3D Bio-surveillance The scope of the proof of concept and pilot is to build, test and deploy, in the field, an Open World Clinical Trial Collaboration Metaverse ecosystem between several central and sub-Saharan African countries, India, and the US, to facilitate a TB Vaccine Toxicity and Efficacy cross-trial data sharing initiative. The technology platform will simulate a vaccine development initiative across these locations with a shared virtual collaboration network that has a 2D/3D Genomic Toxicity and Efficacy Avatar environment, to detect and monitor toxicity and efficacy outcomes of specific formularies in Phase II/III TB clinical trials. The Human Avatar will display growing or diminishing contraindications that are reported via Case Report Forms from the participating researchers, as well as other relevant data feeds, such as laboratory information systems, health information systems, or radiological imaging systems, with projected toxicity or efficacy modelled by the current drug formulary under investigation. These results will be viewable on a toxicity and efficacy intelligence dashboard and a three-dimensional heat map of the areas affected on the Avatar. For example, targeted organs, such as the kidneys or liver, can be highlighted and "virtually scanned" for image manipulation and shared across the clinical research community for analysis.
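As a rough sketch of how Case Report Form (CRF) data could drive the organ-level heat map described above, the code below aggregates adverse-event records into per-organ toxicity scores. The field names, the CTCAE-style grading scale and the weighting are assumptions for illustration and do not represent the platform's actual data model.

```python
from collections import defaultdict
from statistics import mean

# Illustrative adverse-event rows as they might be abstracted from Case Report Forms.
# 'grade' follows a CTCAE-style 1-5 severity scale (an assumption for this sketch).
crf_adverse_events = [
    {"subject": "S-001", "organ": "liver",  "term": "ALT increased",        "grade": 3},
    {"subject": "S-002", "organ": "liver",  "term": "AST increased",        "grade": 2},
    {"subject": "S-003", "organ": "kidney", "term": "creatinine increased", "grade": 1},
    {"subject": "S-004", "organ": "liver",  "term": "bilirubin increased",  "grade": 4},
    {"subject": "S-005", "organ": "kidney", "term": "proteinuria",          "grade": 2},
]

def organ_toxicity_scores(events, max_grade=5):
    """Collapse adverse events into a 0-1 'heat' value per organ.

    The score blends average event severity with the share of subjects affected,
    so a single grade-4 event and many grade-2 events both light up the organ.
    """
    by_organ = defaultdict(list)
    for ev in events:
        by_organ[ev["organ"]].append(ev)

    total_subjects = len({ev["subject"] for ev in events}) or 1
    scores = {}
    for organ, evs in by_organ.items():
        severity = mean(ev["grade"] for ev in evs) / max_grade
        incidence = len({ev["subject"] for ev in evs}) / total_subjects
        scores[organ] = round(0.7 * severity + 0.3 * incidence, 3)
    return scores

if __name__ == "__main__":
    # e.g. {'liver': 0.6, 'kidney': 0.33} -> colour intensities on the 3D avatar
    print(organ_toxicity_scores(crf_adverse_events))
```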
We will also include Open Source and DICOM viewers for 2D and 3D radiology images and pathology samples from submitted patient-level data sets for peer-to-peer consultations and retrospective clinical analysis of short- and long-term therapeutic outcomes. The results will also be fed into a Cross-Trial Drug Toxicity and Efficacy Registry for concurrent and systematic review by regulatory bodies, the sponsoring IRB, and Data Monitoring Committees within the research community. The following organizations have confirmed their interest in participating in usability testing and piloting this Project Orchid scenario: • D-Tree International: an NGO that specializes in Public Health programmes and mHealth solutions in sub-Saharan Africa and India. Figures 3 and 4 illustrate the Project Orchid conceptual collaboration platform. Conclusion The current solutions in the LIC and LMIC clinical innovation domain are primarily third-party vendor systems, which require unsustainable IT infrastructure and software management support, or heavily paper-based processes supplemented with standard IT desktop applications and databases. There are also very limited Open Source-Open Science, health information exchange and mHealth networks that are integrated into multi-national cohorts, and these do not address the dynamic clinical trial frameworks used in drug discovery. Collectively, these systems do not enable the robust capabilities needed to sustain near-real-time clinical innovation or meet the regulatory standards for compliance with GCP systems for clinical trial data management. As a result, LICs and LMICs cannot transition effectively from manual administrative efforts in order to participate in multi-national clinical trials and public health innovation, which directly impedes their ability to mature as organizations. As a Global Health community, we must advocate for both agnostic and agile technology architecture frameworks that enable technical, syntactical and semantic interoperability and business process harmonization across LIC and LMIC cohorts and their MIC and HIC stakeholders, for true clinical innovation progress worldwide. Open Source and Open Science Innovation frameworks are leading this incremental transformation in our Life Science domain. This movement has become our next wave of healthcare transformation efforts in a global market. This entails developing reusable and extensible technology that can integrate with local and international health information exchange networks that support data liquidity between proprietary and Open Source electronic medical record systems, laboratory and radiology systems and mHealth platforms, which are on a par with MICs and HICs. In addition, they will need to provide virtual Open Science collaboration environments, which enable access to near-real-time clinical intelligence, science breakthroughs and new drug discovery partnerships that can accelerate the business development capabilities of LMICs and LICs in the Life Science industry. An example of this is the 2D/3D Open World Metaverse and Digital Library we are developing within our proof of concept. The opportunity for innovation empowerment is within reach for emerging markets and LMICs, when global ingenuity meets Open Source-Open Science technology. Conflict of Interest The author declares no conflicts of interest.
Adjuvant targeted therapy combined with surgery for advanced and metastatic renal cell carcinoma Abstract Background: The aim of this systematic review and meta-analysis is to evaluate the efficacy and safety of adjuvant targeted therapy by sunitinib combined with surgery in the treatment of advanced or metastatic renal cell carcinoma. Methods: PubMed/Medline, Web of Science, Cochrane Library, ClinicalTrials.gov (http://www.ClinicalTrials.gov), China National Knowledge Infrastructure (CNKI) will be searched for clinical research articles related to the efficacy and safety of adjuvant therapy combined with surgery in the treatment of advanced and metastatic RCC. The identification, inclusion and exclusion flow charts will be conducted according to the PRISMA guidelines. The quality assessment will be done by Quadas-2 evaluation tool. Key parameters including OS in 10, 20, 30, and 40 months, PFS in 10, 20, and 30 months, objective response rate (ORR), stable disease (SD) rate, progressive disease (PD) rate, median OS and PFS, types of AEs and their occurrence rates, etc will be extracted. The evaluation of the efficacy and safety will be pooled by CMA. Results: This systematic review will provide evidence on the efficacy and safety of adjuvant therapy by sunitinib combined with surgery in treating advanced and metastatic RCC. Conclusion: The study aims to generalize data concerning the response rate, OS, PFS and rates of adverse effects of the perioperative use of sunitinib in advanced and metastatic RCC patients. The evidence provided by this systematic review and meta-analysis will help guide the clinical decision making and enlighten the future management of advanced or metastatic RCC. Registration: This protocol has been registered on the International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY registration number: INPLASY2020110093; INPLASY DOI number: 10.37766/inplasy2020.11.0093 Available at: https://inplasy.com). Introduction Renal cell carcinoma (RCC) is one of the most common malignancies of the genitourinary tract originating from the cells in proximal convoluted tubule. [1,2] Accordingly, the global mortality nearly doubled in 15 years from 1985 to 2000. [3,4] Clinically, the majority of patients are asymptomatic during the early stages of RCC, thus only a fraction of patients is efficiently diagnosed and subsequently managed. [5] Due to the rapid progress and high invasiveness of RCC, patients who are only correctly managed during late stage have a comparatively low possibility of complete recovery. [6,7] One of the most important factors contributing to the rapid development and invasiveness is its tendency of metastasis. [8] Specifically, direct metastasis into the peritoneal cavity, migration into the blood vessels and even formation of large-sized venous thrombus into the right atrial system are common pathways for disease progress. [9,10] Usually, the appearance of venous thrombus in the right atrial system marks the turning point for the escalation of disease severity. [11] Despite knowing the rigorous fact, solutions to effectively remove the lesions are still limited. Currently, surgery is the first choice for patients diagnosed with advanced conditions like this for it directly removes both the original tumor and the metastatic thrombus. [12] Nevertheless, radical surgeries often involve sternotomy and optional cardiac arrest, which are challenging. 
As a result, adjuvant drug therapy has increasingly been adopted to aid surgical process or to directly treat patients. [13,14] Sunitinib is a tyrosine kinase inhibitor (TKI) which is usually used in adjuvant targeted therapy. [15] It can inhibit several receptors which have an effect in tumorigenesis and tumor progress of RCC, including vascular endothelial growth factor receptors (VEGFRs, like VEGFR-1, VEGFR-2, VEGFR-3) and c-Kit. [16,17] Angiogenesis is mandatory in the progression of RCC since it provides tumor tissues with adequate oxygen and nutrients. Thus, combined use of TKIs which helps to restrict the emerging of blood vessels is believed to play a role in managing RCC. Up to now, several important clinical trials have been carried out worldwide to evaluate the efficacy and safety of the aforementioned combined adjuvant targeted therapy and surgery. Some studies including 2 famous landmark trials have pointed out that this combined therapy could degrade tumor stage and halt tumor development and reduce the size of both the original tumor and metastatic neoplasms. [18,19] However, there exist other reports claiming that such combination had no direct impact on tumor growth and on the contrary would bring other drug safety concerns such as provoking several adverse effects. These mainly included hand and foot syndrome, malaise, digestive problems, etc. Besides, many of these studies are single-arm trials or fail to recruit sufficient number of patients. Therefore, the present study will synthesize the results of current available evidences to assess the efficacy and safety of the combined therapy of surgery and adjuvant targeted therapy in patients with advanced or metastatic RCC. Material and methods This protocol has been registered on the International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY registration number: INPLASY2020110093; INPLASY DOI number: 10.37766/inplasy2020.11.0093 Available at: https://inplasy.com). This protocol will be conducted according to the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P). This study has been approved by the Ethics Committee of West China Hospital, Sichuan University (Chengdu China). Search strategy Following PRISMA-P, we will carefully search authenticated databases including PubMed/Medline, Web of Science, Cochrane Library, ClinicalTrials.gov (http://www.ClinicalTrials.gov), China National Knowledge Infrastructure (CNKI) for clinical research articles related to the efficacy and safety of adjuvant therapy combined with surgery in the treatment of advanced and metastatic RCC. In order to study the latest articles, we will only include articles published between January 2008 and May 2020. Article selection We will assign 3 independent reviewers to judge duplicates and relevancy, in which obvious duplicates and irrelevant articles will be excluded from further analysis. Remaining articles will be interrogated to extract full texts and raw data. In this process, case reports, reviews, letters and meeting proceedings will be excluded. The inclusion criteria will include: first, reported at least either indicators for survival analysis or data concerning the AEs; and second, randomized controlled trials and any observational design, including cross-sectional, case-control, and cohort designs. The consensus on the article selection process will be reported and more independent reviewers will be consulted if discrepancy should occur. 
Data extraction After selecting qualified articles, we will extract important information from these articles. The information will mainly include the basic details of the articles, the patients' demographic characteristics, and data concerning efficacy and safety. More specifically, key parameters will include OS at 10, 20, 30, and 40 months, PFS at 10, 20, and 30 months, objective response rate (ORR), stable disease (SD) rate, progressive disease (PD) rate, median OS and PFS, types of AEs and their occurrence rates, etc. The baseline characteristics of the articles will include title, first author, nationality, department, study design and enrollment year. Finally, the sex, median age and ethnicity of the patients will also be carefully extracted as demographic features. Quality assessment We will perform a standard quality assessment of the included studies based on the Quadas-2 tool. By Quadas-2, the articles will be evaluated in the following processes: sequence generation (selection bias), allocation concealment (selection bias), blinding of participants and personnel (performance bias), blinding of outcome assessment (detection bias), incomplete outcome data (attrition bias), selective reporting (reporting bias), and others. Publication bias To avoid publication bias, if over 10 articles are included to extract qualified data for further analysis, we will apply the "funnel plot" to detect the potential risk of publication bias. If not, we will implement the Begg test and the Egger test. All the aforementioned tests will be performed through Stata 14.2 (Stata Corp). Heterogeneity assessment We will apply the I² statistic and the Galbraith plot method to evaluate heterogeneity. If I² < 50%, we will use a fixed-effects model. If we come across high heterogeneity, we will use the Galbraith plot to identify the outliers and thus perform a sensitivity analysis. Statistical analysis The occurrence rates of AEs, including AEs of all grades and grade ≥3 AEs, as well as their 95% confidence intervals (CIs), will be calculated based on data collected from these single-arm trials. All the analyses and calculations mentioned above will be conducted using Comprehensive Meta-Analysis (CMA) software (Biostat, Englewood, NJ). Discussion Advanced and metastatic RCC is a common type of malignancy of the kidney which easily leads to life-threatening events. The first-line treatment of advanced and metastatic RCC is radical surgery, even though the surgery itself carries quite a high degree of risk. [20] To provide a better prognostic opportunity, physicians have brought up the concept of combined adjuvant therapy with sunitinib. A series of studies has indicated that adjuvant therapy with sunitinib was beneficial in that it was able to decrease tumor size and prevent continuous production of tumor thrombus. [21] However, there were indeed some other studies providing the opposite conclusions. This systematic review and meta-analysis will objectively report the efficacy of sunitinib based on large data analysis. [22] Besides the efficacy, the safety concerns of sunitinib have also been widely debated. So far, previous articles have claimed that proteinuria, anemia, asthenia, pause syndromes, etc. were the most common AEs. Through systematic review and meta-analysis, we will provide the general pooled occurrence rates of all-grade AEs and grade ≥3 AEs in order to clarify the safety concerns of sunitinib.
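The pooling logic described in the statistical plan can be illustrated with a small numerical sketch. The protocol specifies the Comprehensive Meta-Analysis software, so the Python code below is not the authors' analysis; it only mirrors the described steps on made-up event counts, pooling logit-transformed proportions, computing I², and switching to a DerSimonian-Laird random-effects estimate when heterogeneity is high (one common choice; the protocol itself specifies a fixed-effects model for I² < 50% and a Galbraith-plot sensitivity analysis otherwise).

```python
# Illustrative only: pooled grade >=3 AE rate from hypothetical single-arm studies.
# Event counts are placeholders, not extracted study results.
import math

studies = [(12, 40), (25, 110), (9, 35), (30, 95)]  # (events, total patients)

def logit_proportion(events, total):
    """Logit-transformed proportion and its approximate variance 1/e + 1/(n-e)."""
    p = events / total
    return math.log(p / (1 - p)), 1 / events + 1 / (total - events)

y, v = zip(*(logit_proportion(e, n) for e, n in studies))
w = [1 / vi for vi in v]

# Fixed-effect (inverse-variance) pooled estimate
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Cochran's Q and I-squared
Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
df = len(studies) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird random-effects estimate, used here when heterogeneity is high
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)
w_re = [1 / (vi + tau2) for vi in v]
random_eff = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)

pooled_logit = random_eff if I2 >= 50 else fixed
pooled_rate = 1 / (1 + math.exp(-pooled_logit))
print(f"I2 = {I2:.1f}%, pooled grade >=3 AE rate = {pooled_rate:.3f}")
```

The same back-transformation from the logit scale yields the pooled rate and, with the pooled standard error, its 95% CI.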
Continuous increase in HIV-1 incidence after the year 2000 among men who have sex with men in Rome: insights from a 25-year retrospective cohort study To assess trends in HIV-1 incidence and risk factors for seroconversion among men who have sex with men (MSM) resident in Rome, Italy, a retrospective longitudinal cohort study was conducted over 25 years. Incidence rates and trends were modelled using Poisson regression and risk factors were assessed by multivariate Cox models. Of 1,862 HIV-1-negative individuals, 347 seroconverted during follow-up. HIV-1 incidence rates increased from 5.2/100 persons/year (p/y) in 1986 (95% confidence interval (CI): 2.3–11.5) Introduction In Europe, human immunodeficiency virus (HIV) incidence declined among men who have sex with men (MSM) after the first phase of the epidemic because of a decrease in risky sexual practices [1].This decline continued until the mid-1990s when the introduction of combination antiretroviral therapy (cART) dramatically changed the quality of life and survival of infected individuals and boosted the re-emergence of sexual risk behaviour which exposed MSM to sexually transmitted infections (STI) in general and HIV-1 in particular [2][3][4][5].National surveillance systems and observational studies suggest that since the early 2000s, HIV-1 incidence among MSM has been increasing in highincome countries, including many European countries, Australia, and the United States (US), thus causing the epidemic to re-emerge [6].In developed countries, incidence rates of HIV-1 infection are therefore disproportionately higher among MSM than in other sexually active adult populations, despite decades of public health efforts and the benefits, after 1996, of cART on the transmission risk of HIV-1 [7][8][9]. In Italy, ca 1,200-1,300 MSM acquire HIV infection every year, and in 2011, nearly 40% of the newly diagnosed HIV cases were MSM.Moreover, the proportion of HIV infections attributable to sex with a same-sex male partner almost doubled from 2000 to 2011 [10,11]. Date Although rates of HIV infection among MSM decreased until the late 1990s, data for the period after 2000 indicate an increase in the occurrence of HIV infection in this population, which is consistent with the re-emergence of ulcerative STIs in the same period [4,12]. To assess HIV-1 incidence trends and risk factors among MSM living in Rome, a cohort of individuals repeatedly tested for HIV antibodies was identified among a large population of men attending a structured HIV screening programme in the largest STI clinic of the city.The following study contains data from a 25-year follow-up period. Site recruitment and study population Of the MSM attending the STI clinic of the San Gallicano Dermatological Institute in Rome, Italy between April 1984 and December 2010 who were tested for HIV-1 antibodies at least twice and were negative in the first test, we included opportunistically in the study all those who had never used drugs intravenously.The site of enrolment represents an important counselling and testing service for at-risk groups such as STI patients, MSM and migrants, who voluntarily access an ongoing HIV-1 screening programme.The centre's activity is directed at an urban population of more than 2.5 million inhabitants and its catchment population comprises also patients from other regions in Italy and from abroad (ca 20%). 
Study participants were men who stated they had had anal sex with at least one male partner during the six months before the first HIV-1 antibody test. Two-thirds of them perceived themselves to be at risk because of their sexual behaviour and therefore repeatedly self-referred to our STI centre for HIV testing. Other participants were individuals to whom HIV testing was offered because of their self-reported risky behaviour or a diagnosis of an STI. During the study period, recruitment was promoted according to the methodology of 'snowball' sampling, also in collaboration with non-governmental associations for the human and health rights of MSM. At the local level, brochures and leaflets were used to advertise the screening programme and to sustain the peer-to-peer recruiting. The 'snowball' procedure was used during the whole duration of the study. At each visit, participants were also examined and managed for the risk of other STIs according to the European guidelines for STI diagnosis and treatment [13]. In this context, participants were screened for Treponema pallidum infection and for rectal and pharyngeal Neisseria gonorrhoeae infection. Blood samples for serological tests and biological samples were collected from all participants. Ethics statement The study was approved by the San Gallicano Dermatological Institute Ethics Committee and performed in compliance with the Helsinki Declaration. Informed consent was obtained from all participants according to Italian law concerning strategies for HIV-1 counselling and testing. Data collection During each consultation, a standardised questionnaire, administered face-to-face by an interviewer, was used to collect information on demographics, medical history, lifestyle (i.e. alcohol and/or intravenous drug use) and sexual behaviour (i.e. sexual practices, total number of partners ever, number of partners in the previous six months, characteristics of sexual partnerships, etc.). For all participants, additional clinical data were also retrieved from individual medical records available in the electronic clinical archive of the STI clinic. Laboratory testing Blood samples were tested for HIV-1 antibodies. Statistical analysis Incidence rates were calculated using the person-years (p/y) approach for MSM who were HIV-negative at enrolment and who underwent at least one additional HIV test during the study period. Time at risk was defined as the difference between the date of the first (entry into the study) and the last blood test (exit from the study). Because the exact time of seroconversion was not known, we assumed that HIV seroconverters were at risk for half the interval between the last negative and the first positive HIV test. HIV incidence was computed from 1985 to 2009. To minimise potential biases in the rates estimated at the two extremes of the study period, the first (1984) and the last (2010) year were excluded from the analysis of the HIV-1 trend owing to the potential incompleteness of follow-up data. Because incidence estimates varied greatly from year to year, the incidence trend over time was obtained using the locally weighted scatterplot smoothing (Lowess) technique. In addition, the incidence trend was modelled using Poisson regression with the calendar year as a continuous variable and allowing smoothing of varying trends via linear splines. The model was adjusted according to age at HIV seroconversion, included in the regression as a continuous covariate and modelled using linear splines.
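As a simple illustration of the person-years calculation just described, the midpoint assumption for seroconverters can be written as follows. This is not the study's code, and the test dates below are invented for demonstration only.

```python
# Minimal illustration of the incidence calculation described above:
# person-years at risk, with seroconverters assumed to be at risk for half the
# interval between their last negative and first positive test.
from datetime import date

def years_between(d1, d2):
    return (d2 - d1).days / 365.25

def time_at_risk(first_test, last_negative, first_positive=None):
    """Years at risk for one participant; first_positive is None for non-seroconverters."""
    if first_positive is None:
        return years_between(first_test, last_negative)
    return (years_between(first_test, last_negative)
            + 0.5 * years_between(last_negative, first_positive))

participants = [
    # (first test, last negative test, first positive test or None) -- invented data
    (date(1990, 3, 1), date(1995, 6, 1), None),
    (date(1992, 1, 15), date(1993, 1, 10), date(1994, 1, 20)),
    (date(2001, 5, 5), date(2006, 5, 5), None),
]

person_years = sum(time_at_risk(*p) for p in participants)
seroconversions = sum(1 for p in participants if p[2] is not None)
rate_per_100_py = 100 * seroconversions / person_years
print(f"{seroconversions} seroconversions over {person_years:.1f} p/y "
      f"= {rate_per_100_py:.1f} per 100 p/y")
```

Annual rates are obtained the same way after splitting each participant's time at risk by calendar year before modelling the trend.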
Specific incidence rate ratios (IRR) for HIV-1 seroconversion were calculated for participants who were free of syphilis or gonorrhoea (any sites) at the beginning of the study and who acquired either syphilis or gonorrhoea during the study period.The date of diagnosis was used to estimate the time at risk for each participant.The follow-up of a given individual who became infected with syphilis was split into an exposed and an unexposed period.In the same way, the follow-up for gonorrhoea was split into an exposed and an unexposed period. Results From April 1984 to December 2010, 4,352 MSM attended the HIV screening programme of the STI clinic of the San Gallicano Dermatological Institute in Rome, Italy.Among them, 1,862 HIV-seronegative MSM were tested more than once and qualified for inclusion in the longitudinal study.The total time at risk for the participant population was 8,638. HIV-1 incidence trend analysis The annual incidence rates showed relevant changes during the 25-year study period.At the beginning, HIV incidence increased from 5.2 per 100 p/y (95% CI: 2.3-11.5) in 1986 to 9.2 per 100 p/y (95% CI: 6.4-13.0) in 1992.After 1992, the incidence decreased progressively to 1.3 per 100 p/y in 2001.After 2001, the incidence showed a progressive increase and reached a peak in 2009 at 11.7 per 100 p/y (95% CI: 7.4-18.6).The decennial fitted HIV-1 incidence curve, obtained after smoothing correction, made it possible to define a trend with an initial mild increase until 1992, followed by a decrease until 2001 and a noticeable increase in incidence until 2009 (Figure 1).Using the smoothing curve, the analysis adjusted according to age at HIV seroconversion showed that the increase in incidence from 1985 to 1992 was mainly sustained by individuals who were younger than 25 years and by those aged 25 to 49 years (Figure 2A).For the young MSM, a peak in incidence was reached in 1993 (incidence rate (IR) = 11.The trend analysis by birth cohort was consistent with the findings observed when stratifying the data according to age group and showed that all the three selected birth cohorts contributed to the increase in HIV-1 incidence after 2001.This increase was mostly due to MSM born after 1975 who were tested for the first time after 1995 (Figure 2B).HIV-1 incidence curves for the birth cohorts of MSM born before 1960 and between 1960 and 1974 were almost parallel and consistent with the overall trend (Figure 1).In contrast, the HIV-1 trend for the birth cohort born after 1974 showed a rapid decrease from 1995 to 2000, followed by a rapid increase until 2009. Risk factor analysis No difference in the median number of HIV tests performed was observed between seroconverters and non-seroconverters (3.0 vs 3.0, OR = 1.01; 95% CI: 0.98-1.03).For seroconverters, the median time of infection was 1.72 years (IQR: 0.47-3.93)following enrolment.No difference in median age between HIV seroconverters and non-seroconverters (35.4 years vs 35.2 years) was observed.In addition, no statistically significant difference in the number of sexual partners (both lifetime and in the six months preceding the HIV tests), in the proportion of condom use per sexual act, and in the characteristics of partnership were observed between HIV seroconverters and non-seroconverters (data not shown). 
Other demographic characteristics and risky sexual behaviour of the participants by HIV status at the end of the study are presented in Table 2.The risk of HIV infection increased inversely with the age at diagnosis.Compared with MSM older than 41 years, the incidence rate was more than doubled (IRR = 2.33) for those aged 33 to 41 years and five times higher (IRR = 4.99) for those aged 16 to 32 years.Moreover, the risk of HIV-1 seroconversion was reduced by half in individuals with a high educational level (IRR = 0.46) and in those who were not Italian citizens (IRR = 0.49).No significant difference in the risk of seroconversion was observed between individuals with a history of STIs at enrolment compared with those with no history of STIs.However, a significant increase in the risk of HIV-1 infection was found among MSM who became infected with syphilis (IRR = 1.69; 95%CI: 1.31-2.19)or gonorrhoea (IRR = 1.79; 95% CI: 1.29-2.47)during the study period. Potential confounders for HIV-1 seroconversion were included in the multivariate Cox model.Independent risk factors for HIV-1 seroconversion in MSM living in Rome included: young age, low educational level, Italian nationality and occurrence of syphilis and/or gonorrhoea during the year preceding HIV-1 seroconversion (Table 3). More specifically, our data confirmed that the risk of becoming HIV-infected increased inversely with the age at the diagnosis.The incidence rate was 17 times higher in MSM younger than 33 years and five times higher in those aged 33 to 41 years compared with the oldest age group.Similarly, MSM born after 1974 and those born between 1960 and 1974 showed a seven and three times higher risk of seroconversion compared with those born before 1960. Furthermore, the risk of becoming HIV-1-infected was twice as high for MSM with a low educational level and for Italian citizens than for MSM with a high educational level (IRR adjusted = 0.54) and those with other citizenship (IRR adjusted = 0.45). The multivariate analysis confirmed a dramatic increase in the risk of becoming infected with HIV after acquiring syphilis.In particular, the risk increased ca eight-fold for the first year (IRR adjusted = 7.71) and ca 13-fold (IRR adjusted = 13.5, data not listed in the Tables) for the first six months since diagnosis of syphilis, but decreased after the first year, matching the risk for syphilis-free MSM (IRR adjusted = 1.16).A smaller but statistically significant increase in the risk of HIV-1 seroconversion was also observed for subjects who had contracted gonorrhoea during the year before HIV-1 seroconversion (IRR adjusted = 1.66). Discussion Our longitudinal study on MSM living in Rome has provided valuable insights about the dynamics and the characteristics the HIV epidemic in this urban population over the last 25 years.Preliminary data from this cohort about HIV-1 trends until 2003 have already been published previously [12]. Between 1985 and 1992, the first period of virus circulation in Italy, we observed a slight increase in HIV-1 incidence year by year.This is consistent with other studies which suggest that the HIV epidemic among MSM in southern Europe started later than in northern Europe and the US.In particular, our findings differ from those of the Dutch cohort of MSM in Amsterdam, where the incidence rates were higher in the first half of the 1980s and decreased between 1985 and 1992 [14]. 
The fast and progressive decline in HIV-1 incidence rates in our cohort between 1992 and 2001 was also described in other Western countries and can be attributed to important behavioural changes in MSM populations who adopted stable safe sexual practices in the 1990s, a fact reported in several large studies [2,[15][16][17].Finally, we observed a dramatic increase in HIV-1 incidence after 2001 and until 2009, consistent with our previous findings collected in the same population until 2003 [12]. This increase in incidence observed in MSM in Italy is in accordance with observations from other European studies despite different methodological approaches [9,18].The trend was larger and accelerated among younger MSM, although it was observed for all the age groups, including those 50 years and older, who did not contribute to the increase in HIV-incidence at the beginning of the study period (from 1985 to 1992). The dramatic increase in incidence measured since 2001 coincided with the re-emergence of bacterial STIs, such as syphilis and gonorrhoea observed among MSM living in Rome from 2000 to 2007 [4, and data not shown.Similar findings apply to other European countries during the same period [19,20].We observed that the risk of acquiring HIV-1 infection was higher among MSM who contracted syphilis during followup than for those remaining syphilis-free until the end of the study.The presence of these genital ulcerative lesions and the increased number of inflammatory cells, which enhance the potential for HIV transmission in MSM who contracted syphilis, may explain the higher HIV rates that we and other authors found for these subjects [5,21].Besides the biological synergism between syphilis and HIV, the high number of Italian MSM who contracted HIV-1 infection within six months of being diagnosed with syphilis may also be due to the fact that both infections share a common route of transmission and pattern of risk factors, and that social networks of persons with syphilis or at risk for syphilis are also at higher risk for HIV infection, a fact also reported in a 12-month prospective study in Chinese MSM from 2006 to 2007 [22].Another obvious explanation may be heightened perception of being at risk for HIV-1 and/or other sexually transmitted infections among individuals already diagnosed with syphilis, which would lead at first to a larger number of MSM being tested and consequently to an earlier diagnosis of HIV, and later to a change in sexual behaviour.The adoption of safer sexual practices as a response to educational programmes to which MSM were enrolled after the diagnosis of primary or secondary syphilis may also explain why the risk of becoming infected with HIV-1 dramatically declined within the first year since diagnosis of syphilis. The association between the occurrence of gonorrhoea and the risk of HIV-1 seroconversion is consistent with data from the Amsterdam Cohort Study, although Dutch MSM who had contracted gonorrhoea in the six months before seroconversion had a much higher risk than that observed in our study [14].The risk of HIV-1 seroconversion increased after the diagnosis of gonorrhoea and remained stable for MSM who had contracted gonorrhoea in the year before seroconversion, suggesting that gonorrhoea may be considered as a proxy for risky sexual practices but, in contrast to syphilis, did not increase the biological susceptibility of seronegative individuals for HIV. 
Other relevant predictors of HIV-1 seroconversion in our cohort were young age, low educational level and Italian nationality.After 2000, association of young age with a higher risk of HIV-1 infection was reported in several incidence studies in Europe [14,23] and in the US [24], perhaps due to a lower ratio of condom use per sexual act in the young population [14,25], the way young individuals select partners (i.e.Internetbased recruitment of partners) [26,27] or their specific psychosocial attitudes (i.e.involvement in the homosexual community, importance of gay identity, self-homophobia) [28].To the same extent, a low educational level was found to increase the risk of HIV-1 seroconversion in other European studies [14]. Our study may have suffered from some limitations and biases.Firstly, the repeatedly tested MSM in our longitudinal cohort study may not be representative of the whole MSM population living in Rome.In fact, our data probably describe the circulation of HIV-1 in a subpopulation of MSM with an increased risk of sexually transmitted infections.Nor can we exclude that attending an HIV-1 monitoring programme which offers counselling activities aimed at the reduction of sexual risk behaviour may reduce the risk of HIV-1 infection so that our population may have been at a lower risk than the general MSM population in Rome.Secondly, the individual data collection form, defined more than two decades ago, may have failed to evidence the role of certain risk factors for HIV-1 infection, especially those relevant among the homosexual communities in recent years (i.e.non-injection recreational drugs use, choosing sexual partners using the Internet or serosorting practice). Despite these limitations, findings suggest that, in accordance with the re-emergence of syphilis and gonorrhoea observed in Rome after 2000, particularly among young Italian MSM with a low educational level, HIV-1 infection continues to be contracted disproportionately more frequently by the homosexual community in Rome.These findings highlight the need for improved monitoring systems for HIV-1 risk in urban communities of MSM and more effective interventions targeted at limiting the circulation of other STIs in this population and to prevent waning of the benefits of ART on the HIV-1 transmission rate at community level. We have started in mid-2011 a programme with multiple approaches based on opt-out strategies for HIV testing, outreach testing by a rapid saliva-based assay and the reduction of barriers to accessibility for MSM to our STI clinic (also supported by local non-governmental organisations for homosexuals) to decrease the fraction of MSM unaware of their HIV infection, to increase the number of individuals tested for the first time, and to reduce the span between last negative and first positive HIV-1 antibody tests among those tested more than once.In addition, continuing behavioural interventions are needed to improve basic information and attitudes to promote persistent condom use in the homosexual community in Rome, particularly among the young and those with low educational level as well as among HIV-1 infected MSM. 
Figure 1. Fitted HIV-1 incidence trend in men who have sex with men repeatedly tested in Rome, Italy, 1985-2009 (n=1,862).
The median number of HIV tests received over time by the study participants was 3.0 (interquartile range (IQR): 2-6) and the median follow-up time was 2.76 years (IQR: 0.92-5.50). No change in the median number of tests per individual was observed during the study period. A total of 347 HIV-1 incident infections were observed, with an overall seroincidence rate (IR) of 4.0 (95% confidence interval (CI): 3.6-4.5) per 100 p/y.
Table 1 (notes). CI: confidence interval; IRR: incidence rate ratio. IRR per year within the selected period was adjusted by age at the time of diagnosis.
Table 2. Characteristics of men who have sex with men repeatedly tested for HIV-1 antibodies, by HIV status at the end of the study period, Rome, Italy, 1985-2009 (n=1,862). CI: confidence interval; IRR: incidence rate ratio; STI: sexually transmitted infections. Subtotals differ depending on the number of individuals for whom the information was known. a For non-seroconverted individuals, the age at the last HIV-negative test is shown. b Age at diagnosis/first intercourse was used as a continuous variable. c Information missing for 72 HIV-infected and 250 HIV-uninfected individuals. d At enrolment. The STIs investigated include: syphilis, gonorrhoea (any site), genital herpes, genital warts, non-gonococcal genital infections and chlamydia. e During follow-up period.
Table 3. Multivariate analysis of risk factors for HIV infection among men who have sex with men with no history of injecting drugs, Rome, Italy, 1985-2009 (n=1,540 a). CI: confidence interval; IRR: incidence rate ratio; SC: HIV-1 seroconversion. a Missing values on educational level for 322 individuals. b During follow-up period.
Selection of redox dye and inoculum conditions for the optimisation of respirometric indices in Verticillium and Trichoderma ** . Understanding which carbon or nitrogen sources the competing fungal species prefer is pivotal for specific biotechnological applications. However, this is not straightforward, as each strain sometimes behaves differently under the experimental conditions adopted. To analyse the trophic overlap of two species, it is necessary to refine diagnostic techniques and exclude variables that may interfere with the measurements. A protocol for establishing the suitability of chromogenic dyes in the analysis of filamentous fungi with phenotype microarrays is described here. The research goal was to determine the most suitable redox dye indicator and its optimal concentration that reacts quantitatively to the respiratory activity of both Verticillium spp. and Trichoderma spp. isolates in the presence of a nitrogen source. The commercial Biolog TM Redox Dye Mixes D, E, and F and also TTC (2,3,5-Triphenyltetrazolium chloride), INT (2-(4-iodophenyl)- 3-(4-nitrophenyl)-5-phenyl-2 H -tetrazolium chloride), and MTT solutions were evaluated in selected concentrations. Both their nutritive and toxic effects were quantified. Results showed that 1% “F” and 0.5% “D” Biolog TM dyes were appropriate for the functional evaluation of isolates belonging to the Verticillium and Trichoderma genera. They can be used in Biolog TM PM-nitrogen plate respiratory assays for a comprehensive functional charac - terisation of these organisms. INTRODUCTION Respiration is the process by which microorganisms consume organic matter and release carbon dioxide (CO 2 ) as a byproduct. This activity is a fundamental component of the microbial decomposition of organic materials and is crucial for nutrient cycling in soil and ecosystem functioning. Respiratory activity can be used as an indicator of microbial sensitivity to various environmental stimuli. Also, respiration combined with other activity indicators is used to understand how environmental conditions influence microbial metabolic activity . Respiration-based microbial activity indicators have been applied to determine microbial efficiency in breaking down specific organic compounds or response to different environmental conditions, such as temperature and moisture levels (Li et al., 2023), to identify potential candidates for bioremediation or study the mechanisms of pollutant degradation (Kaushik et al., 2021), or individuate the most efficient strains promoting plant growth or suppressing plant pathogens, e.g. in the rhizosphere (Bhadrecha et al., 2023). In such a context, the degree of utilization of specific substrates, the effect of nutritional cofactors, the functional relationship between macronutrients such as carbon and nitrogen (Spohn, 2015), and also the synergy of specific carbon sources in determining the outcome of interspecies relationships (e.g., commensalism versus competition; Canfora et al., 2017), become information of primary importance. In the rhizosphere, a delicate ecosystem that largely depends on plant exudates, thousands of species of fungi and bacteria compete and fight for space and resources (Broeckling et al., 2008). Trichoderma and Verticillium are fungi genera commonly found in soil and have important ecological roles (Akram et al., 2023). While there are some similarities in their ecological functionality, there are also some key differences. 
Both Trichoderma and Verticillium are important saprophytes that can decompose dead plant material in the soil. Trichoderma produces a range of extracellular enzymes, such as cellulases, hemicellulases, and ligninases, that can break down complex plant polymers, such as cellulose and lignin . Verticillium also produces similar enzymes for decomposition (Nuez et al., 2022). While some species of Verticillium can form mutualistic relationships with plants (Robb, 2007), this is not as well-documented as it is in Trichoderma. Some species of Trichoderma can colonise the roots of plants, where they can promote plant growth, enhance nutrient uptake or help the plants to tolerate environmental stressors such as drought (Adedayo and Babalola, 2023) or salt (Contreras-Cornejo et al., 2014). While Trichoderma and Verticillium can cause plant diseases (Brotman et al., 2010), Verticillium species are more well-known for their pathogenicity (Daayf, 2015). Verticillium species can invade the plant's vascular tissues, where they grow and produce toxins that interfere with the plant's water and nutrient transport systems, resulting in wilt symptoms and even plant death (Reusche et al., 2012). On the other hand, Trichoderma is not typically pathogenic to plants, and some species have beneficial effects on plant growth (Di Mola et al., 2023). There have been rare reports of Trichoderma causing disease symptoms in certain plant species under specific conditions (Poveda et al., 2020;Pfordt et al., 2020). Both Trichoderma and Verticillium have biocontrol activity against plant pathogens, but Trichoderma is more wellknown for its biocontrol properties (Guzmán-Guzmán et al., 2023). Trichoderma produces a range of enzymes and secondary metabolites that can inhibit the growth of other fungi, including plant pathogens (Manzar et al., 2022). Verticillium can also produce secondary metabolites (verticillin) with biocontrol activity (Lu et al., 2019), but this is not as well-studied as it is in Trichoderma. Fungal antagonisms provide natural and precise biotechnological weapons to protect crops from pathogens (Raaijmakers et al., 2009). Developing consistent and effective biological control strategies requires a comprehensive understanding of the ecological dynamics between microbial species in the rhizosphere. Berg et al. (2005) demonstrated, for example, the natural potential of certain Trichoderma strains for the biocontrol of some plant pathogenic species of Verticillium. The efficacy of many Trichoderma strains against some of the most widespread plant pathogens at the rhizospheric level is due to certain peculiarities of this genus, which can compete like a few others, both for space and nutrients (Pylak et al., 2019). This is due to their intense aggressiveness against other species (Trichoderma species are capable of mycoparasitism, antibiosis, and competition), forms of plant growth stimulation, successful defensive mechanisms, and, more generally, the ability to modify the rhizospheric microenvironment to their advantage (Tyśkiewicz et al., 2022). The mechanism of action of Trichoderma spp. against phytopathogens that has been least studied and described in the literature is competition for nutrients (Oszust et al., 2020a). Effects of carbon concentration and carbon-tonitrogen ratio on the growth and sporulation of several biocontrol fungi were previously described (Gao et al., 2007;Gao and Liu, 2009). Nitrogen strongly impacts fungal species' competitiveness in several ecological niches (Band et al., 2022). 
A tailored supplementation with carbon substrates was recently proposed to improve the effects of biocontrol preparations based on antagonistic bacteria or fungi . These substrates are selected to positively affect antagonistic fungi included in biopreparations to implement their efficacy against plant pathogens' activity. In other words, the potential trophic niche overlap between pathogenic and antagonistic fungi increases competition in favour of the "good" fungi (Wallis, 2021). The Phenotype MicroArray TM system (PM) can be efficiently applied to study the differential fungal use of substrates as carbon or nitrogen sources. It is a sensitive, reliable, and repeatable method based on functional fingerprinting (Cruz-Magalhaes et al., 2022) that also finds application in prebiotic supplement selection Pylak et al., 2021). However, carbon source-based panels have been more widely used, while nitrogen-based screening has been much less exploited, even though nitrogen is an essential element, especially in trophic competition among species (Maynard et al., 2017). Fungal phenotypic diversity is usually described based on differences in respiratory activity as an evidence of substrate catabolism or based on biomass production, but these parameters are rarely combined and considered together. The increase in fungal biomass measured in the PM method can occur with the consumption of a small amount of substrate, corresponding to a condition of high metabolic efficiency. Conversely, an increased respiratory response of a fungus combined with low biomass production can potentially indicate a stressful condition . This phenomenon might be an additional value in selecting prebiotic supplements. The Phenotype MicroArray TM microplates dedicated to fungi are not provided with a redox dye responsive to the respiratory activity of fungi. In fact, redox dyes can be toxic to some fungal species, or fungi can use them as carbon or nitrogen sources for biomass production, reducing their effectiveness or altering the method's performance. This study set up and applied a methodology to measure respiration and biomass production of filamentous fungi at the same time through the Phenotype MicroArray TM technique. In particular, we were interested in defining an appropriate protocol for selecting a redox dye without toxic or nutritional effects on the strains studied and its optimal concentration to achieve reliable respirometric staining in functional screening. In particular, respirometric dyes were tested to select the one that would allow comparing the nitrogen metabolism of Verticillium and Trichoderma strains without being used as a nitrogen source or not consistently colouring the two fungal genera. MATERIALS AND METHODS Three experiments were conducted on four strains of both genera Verticillium and Trichoderma (pathogen and antagonist, respectively). A first test evaluated fungi's use of chromogenic compounds as carbon and nitrogen sources. Plates were prepared with the different chromogenic substances, at different concentrations, without other organic compounds, and the growth of the fungus was measured as an increase in biomass (turbidity measure), representing the use of the dye as a nutrient source. A second test aimed to define the proper nitrogen concentration to be used with the carbon source. Finally, an experiment was set up to evaluate the toxicity of different chromogenic compounds. 
In this case, the fungi were inoculated into the plates with different inoculation fluids supplemented with glucose and nitrogen for optimal growth conditions in each well. The different dyes were then added at different concentrations to assess, compared with a control, any detrimental effect of the dyes on fungal development. The strains listed here and used in the experiment are part of the project EcoFruits BIOSTRATEG3/344433/16/ NCBR/2018 and are currently stored in the microbial strains collection of the Institute of Agrophysics, Polish Academy of Sciences, Lublin, Poland. Some tested isolates are patent pending, but others are used in research. The nutrition test was performed in 0.9% NaCl, with dyes used as the only carbon and nitrogen sources. Sterile transparent 96-well microplates (Nest Scientific TM , Woodbridge, USA) were inoculated with 180 µl of fungal spores suspended in 0.9% NaCl (73% Transmittance). 20 µl of 10x concentrated dyes were added and mixed to obtain final concentrations of 0.1%, 0.5%, 1% and 2% of D, E, F Biolog TM dyes, or 0.1%, 0.5% and 1% of MTT, TTC and INT dyes. The controls for each dye were the trials with no dye addition (20 µl 0.9% NaCl). The 10x concentrated stocks of D, E, and F dyes were prepared using 0.9% NaCl, while MTT, TTC, and INT dyes were prepared using 0.1 M HCl 10% Sodium Dodecyl Sulfate (SDS) solution. All ingredients were filtered-sterilised through a syringe filter (the size of pores 0.22 µm). All analyses were performed in three technical replications. The microplate cultures with added dyes and controls were incubated in the dark for ten days at 24°C. Every 24 h, fungal functional responses to dyes were measured. Absorbance readings (optical density) were obtained using MicroStation TM (Biolog TM , Hayward, USA) at 750 nm. To evaluate the effect of nitrogen concentration on fungal development in the plate wells, we selected a mineral nitrogen source to be administered at different concentrations. Ammonium nitrate (NH 4 NO 3 ) (Sigma Aldrich TM , Saint Louis, Missouri, USA) was used in final concentrations of 0.05, 0.1, 0.5, 1 and 0% (the control) in three different liquid media: PM3,5-8 inoculating fluid (PM) according to Biolog TM (Hayward, USA) preparation protocol, IF-FF inoculating fluid (FF) (Biolog TM , Hayward, USA) and 0.9% NaCl (NaCl). The final concentration of other ingredients in all the microplates was as follows: D-glucose 100 mM, potassium phosphate (pH 6.0) 5 mM, sodium sulfate 2 mM. Sterile transparent 96-well microplates were inoculated with a volume of 90 µl of fungal spores already suspended on the chosen liquid medium (73% T) and 10 µl of 10x concentrated NH 4 NO 3 also suspended in the same media. The total volume of 100 µl was mixed thoroughly by pipetting. Analyses were performed in three technical replications, and all ingredients were filter-sterilised. The inoculated microplates were incubated in the dark for ten days at 24°C. Every 24 h, turbidity at 750 nm wavelength, corresponding to biomass quantification, was measured using MicroStation TM (Biolog TM Hayward, USA). The toxicity test was performed with the PM3,5-8 inoculating fluid (PM) medium with added 0.1% NH 4 NO 3 the different redox dyes separately . Sterile transparent 96-well microplates were inoculated with 180 µl of fungal spores' suspension in 0.1% NH 4 NO 3 -PM medium (73% Transmission). 20 µl of 10x concentrated dyes were added to each well and mixed to obtain final concentrations of 0.1 0.5, 1, or 0.01 0.05 and 0.1% for MTT and INT. 
The 10x concentrated stock solutions of D, E, and F Biolog TM dyes were prepared using 0.9% NaCl, while MTT and INT were prepared with a 0.1 M HCl, 10% Sodium Dodecyl Sulfate (SDS) solution. Analyses were performed in three technical replications. The plates inoculated with the fungi and the different dyes were incubated in the dark for ten days at 24°C. Absorbance readings were performed every 24 h using MicroStation TM (Biolog TM, Hayward, USA) at 490 nm and 750 nm wavelengths, thereby obtaining both biomass (turbidity) and chromogenic compound colouration measurements based on respiratory activity. Analysis of variance (ANOVA) followed by Tukey's post hoc honestly significant difference (HSD) test at p < 0.05 was used to compare the treatments. Statistica 13.1 software (StatSoft®, Tulsa, Oklahoma, USA) was used. The analysis was performed by reading the plates every hour during a 216 h incubation. Principal component analysis (PCA) and discriminant analysis (DA) techniques were applied to the biomass dataset obtained by measuring the growth of all the Verticillium and Trichoderma isolates with increasing concentrations of nitrogen (NH 4 NO 3 ). PCA was used to summarise the variability associated with incubation time (24 to 216 h) and to reduce the number of variables for further statistical analyses (Massart et al., 1998). The factor scores obtained for the first principal components (PCs) resulting from PCA were then used to run DA (XLSTAT 2019.3.2, Addinsoft, New York, USA), using first the strains and then the nitrogen concentrations as a priori grouping factors. The DA was applied to verify: 1) whether the response to nitrogen concentration may outweigh differences in nitrogen use by individual strains; 2) whether, in the face of different nitrogen concentrations, individual strains were still distinguishable based on their growth. RESULTS The nutritional effects of selected redox Biolog TM dyes (Fig. 1a) and other dyes (MTT, INT, TTC) (Fig. 1b) on fungal growth in 0.9% NaCl medium were documented for 0.5 and 1% of all dye types, as well as for 0.1% of F and TTC and 2% of the F dye. After adding 0.1% D, E, MTT, and INT dyes, no fungal growth was observed. Since the TTC dye addition instead caused a clear nutritional effect at all tested concentrations, TTC was excluded from the following experiments. In the PM medium, the highest optical density values (at 750 nm) were found for the 0.05, 0.1 and 0.5% nitrogen concentrations, and these were significantly (p < 0.05) higher than for 1 and 0% (the control). As expected, the NaCl medium alone supported the lowest biomass production. The 0.1% nitrogen concentration produced the greatest fungal biomass values in all tested media (Fig. 2). Figure 3 shows the fungal growth rate during two weeks of turbidity measurements recorded every 24 h. The 0.1% NH 4 NO 3 -PM medium produced the highest biomass. These results allowed us to preliminarily point to the 0.1% NH 4 NO 3 -PM medium as the best model for the toxicity tests. Figure 4 presents the diversified performance of Verticillium spp. (V1-V4) (Fig. 4a) and Trichoderma spp. (T1-T4) (Fig. 4b) individual isolates in terms of biomass production (based on absorbance readings at 750 nm) at different NH 4 NO 3 concentrations in the PM medium. The two plots were obtained by reducing the time variable (daily absorbance readings) with a PCA and using the new principal coordinates in a discriminant analysis (DA).
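To make the PCA-then-DA workflow tangible, the sketch below reproduces its two steps in Python on simulated growth curves. The authors used XLSTAT, so this is only an illustrative equivalent; the isolate labels, replicate counts, and absorbance values are fabricated for the example.

```python
# Illustrative re-creation of the PCA + discriminant analysis workflow:
# rows = wells, columns = 750 nm readings at 24-216 h (simulated data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_isolates, n_reps, n_timepoints = 4, 15, 9          # e.g. T1-T4, nine daily readings
labels = np.repeat([f"T{i+1}" for i in range(n_isolates)], n_reps)

# Simulated growth curves: each isolate gets its own mean curve plus noise
base = np.linspace(0.05, 1.2, n_timepoints)
X = np.vstack([base * (0.6 + 0.2 * i) + rng.normal(0, 0.05, (n_reps, n_timepoints))
               for i in range(n_isolates)])

# Step 1: PCA to collapse the time dimension (keep a few PCs covering most variance)
pca = PCA(n_components=4)
scores = pca.fit_transform(X)
print("variance explained by 4 PCs:", round(float(pca.explained_variance_ratio_.sum()), 3))

# Step 2: discriminant analysis on the PC scores, with isolate as the a priori group
lda = LinearDiscriminantAnalysis()
lda.fit(scores, labels)
print("re-classification accuracy by isolate:", round(float(lda.score(scores, labels)), 2))
```

Re-running the same second step with nitrogen concentration as the grouping factor gives the complementary check described above, i.e. whether nitrogen dose or isolate identity dominates the clustering.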
Principal component analysis reduced the variance of the dataset due to readings over time (24-216 h of incubation, with readings every 24 h). The first four new orthogonal components explained more than 97% of the total variance and were used in discriminant analysis to compare the data clustering based on the nitrogen concentration and the fungal isolate. The discriminant analysis produces a predictive model for membership in the a priori-defined grouping. It allows an assessment of whether the data succeeded in separating the observations into the initially assigned groups. The model consists of a set of discriminant functions (one for each group to be identified) based on linear combinations of the predictor variables that provide the highest or best discrimination between groups. In both the Verticillium and Trichoderma datasets, each DA successfully separated the different fungal isolates. Conversely, the different concentrations were not grouped, indicating that the amount of nitrogen is not a determining variable in the dataset. In the DAs shown in Fig. 4c and 4d, the only group classified at 100% (all the observations initially assigned to that group resulted in a correct classification) in both datasets is the 0% nitrogen group. The 0.1% concentration was the second best classified (75% in the Verticillium dataset, 50% in the Trichoderma one). In the case of the DAs in Figs 4a and 4b, the different isolates showed varying classifications, with V1 classified at 93.33%, V4 and T2 at 80%, and the least well-classified being T4 (26.67% classification, meaning that many biomass observations belonging to this fungal isolate were assigned to other isolates). Once the most suitable inoculum fluid and the nitrogen concentration capable of providing greater reproducibility were identified, the effect of adding the different dyes on fungal functionality was evaluated. Figure 5 illustrates the toxic effect of different redox dyes on fungal growth in a medium with 0.1% nitrogen substrate (NH 4 NO 3 ). In particular, the effect of different dye concentrations on absorbance values at 490 nm (Fig. 5a and b) and 750 nm (Fig. 5c and d) is shown. Similar behaviour was observed for all redox dyes provided by Biolog TM, with no apparent toxic effects. Biomass production and respiratory activity were higher with dye E compared with dyes D and F. Figure 6 shows that colour development was obtained for all redox dyes marketed by Biolog TM, while no staining was observed for MTT and INT dyes (Table 1 shows the plate inoculation scheme). Significant differences in colour development were found between the different isolates of both Trichoderma and Verticillium, especially for the former. In addition, Trichoderma T2 and T3 isolates showed different sporulation capacities depending on the redox dye used and its concentration (Fig. 5). DISCUSSION Tetrazolium salts are compounds used to measure the redox activity of metabolically active cells (Braissant et al., 2020). They are chemically different molecules that are initially colourless but, in the presence of metabolically active cells, are reduced to red-violet formazan derivatives by reduced nicotinamide adenine dinucleotide (NADH)-dependent oxidoreductases and dehydrogenases or phosphorylated derivatives (NADPH). There are various chemical forms of tetrazolium salts, and many of them cannot penetrate through the walls of fungi or react in the presence of their dehydrogenases, while most tetrazolium salts penetrate bacterial cells (Braissant et al., 2020). The analysis of microbial cell metabolism and phenotype profiling employing sensitivity tests to various compounds, especially nutrients, is a powerful and rapid approach that allows for the functional differentiation of morphologically similar microbial strains without relying on more time- and resource-consuming genetic methods (Acin-Albiac et al., 2020). Phenotypic profiling is based on the use of tetrazolium salts and, to a lesser extent, on turbidity measurements, i.e. the development of biomass by the microorganism, which can be measured by spectrophotometry. However, using metabolic profiling with fungi presents more difficulties than for bacteria. Fungi can degrade complex organic compounds with extracellular enzymes such as laccase, manganese peroxidase and lignin peroxidase (Przystas et al., 2015; Singh, 2017). These fungal abilities are often applied to decolourise and detoxify effluents treated with azo dyes (e.g., from textile industries) (Sen et al., 2016). Moreover, formazans have a structure with 4 nitrogen (N) atoms: [-N=N-C(R)=N-NH-] and are closely related to azo dyes with amine nitrogen (−N=N−) (Sen et al., 2016). Thus, fungi can decompose formazan derivatives and use nitrogen for sustenance, making fungi's metabolic profiling unreliable. Furthermore, redox dyes could be toxic to some fungal isolates (Bayineni, 2022). This study aimed to define the best experimental approach to analyse the metabolic profile of filamentous fungi employing the Phenotype MicroArray TM technique. In particular, we were interested in selecting a redox dye without toxic or nutritional effects on the strains studied and testing its optimal concentration to achieve reliable respirometric staining in functional screening. Specifically, we were also interested in selecting a respirometric dye that would allow us to compare the nitrogen metabolism of Verticillium and Trichoderma strains without itself being used as a nitrogen source or differently colouring the two fungal genera. The experimental work consisted of testing and selecting chromogenic molecules that were not a source of nutrition and, at the same time, were not toxic for the fungi. If there is little or no carbon source available, the fungus tends to use the dye more as a nutritional source. In the nutritional test, it was necessary to evaluate the chromogenic compounds' performance without other carbon or nitrogen sources. Therefore, we tested fungal biomass development in the presence of the chromogenic compound alone as evidence of the fungus's ability to metabolise it. The experiment used a simple saline solution as the inoculation fluid, with no other nutrients but the dye. The absence of biomass production by the fungus was considered a positive result, that is, the inability of the fungus to use the dye to grow.
There are various chemical forms of tetrazolium salts, and many of them cannot penetrate the walls of fungi or react in the presence of their dehydrogenases, while most tetrazolium salts penetrate bacterial cells (Braissant et al., 2020). The analysis of microbial cell metabolism and phenotype profiling employing sensitivity tests to various compounds, especially nutrients, is a powerful and rapid approach that allows for the functional differentiation of morphologically similar microbial strains without relying on more time- and resource-consuming genetic methods (Acin-Albiac et al., 2020). Phenotypic profiling is based on the use of tetrazolium salts and, to a lesser extent, on turbidity measurements, i.e. the development of biomass by the microorganism, which can be measured by spectrophotometry. However, applying metabolic profiling to fungi presents more difficulties than applying it to bacteria. Fungi can degrade complex organic compounds with extracellular enzymes such as laccase, manganese peroxidase and lignin peroxidase (Przystas et al., 2015; Singh, 2017). These fungal abilities are often applied to decolourise and detoxify effluents containing azo dyes (e.g., from textile industries) (Sen et al., 2016). Moreover, formazans have a structure with four nitrogen (N) atoms, [-N=N-C(R)=N-NH-], and are closely related to azo dyes, which contain the azo group (−N=N−) (Sen et al., 2016). Thus, fungi can decompose formazan derivatives and use their nitrogen for sustenance, making the metabolic profiling of fungi unreliable. Furthermore, redox dyes could be toxic to some fungal isolates (Bayineni, 2022). This study aimed to define the best experimental approach to analyse the metabolic profile of filamentous fungi employing the Phenotype MicroArray TM technique. In particular, we were interested in selecting a redox dye without toxic or nutritional effects on the strains studied and in testing its optimal concentration to achieve reliable respirometric staining in functional screening. Specifically, we were also interested in selecting a respirometric dye that would allow us to compare the nitrogen metabolism of Verticillium and Trichoderma strains without itself being used as a nitrogen source or differently colouring the two fungal genera. [Figure caption: ANOVA with Tukey's post hoc honestly significant differences (HSD) test at p < 0.05; mean values and standard deviation bars are shown; n = 3; different small letters above bars indicate significant differences between nitrogen concentrations for each redox dye type.] The experimental work consisted of testing and selecting chromogenic molecules that were not a source of nutrition and, at the same time, were not toxic for the fungi. If there is little or no carbon source available, the fungus tends to use the dye more as a nutritional source. In the nutritional test, it was necessary to evaluate the chromogenic compounds' performance without other carbon or nitrogen sources. Therefore, we tested fungal biomass development in the presence of the chromogenic compound alone as evidence of the fungus's ability to metabolise it. The experiment used a simple saline solution as the inoculation fluid, with no other nutrients but the dye. The absence of biomass production by the fungus was considered a positive result, that is, evidence of the inability of the fungus to use the dye to grow.
In contrast, in the toxicity test, the expected positive result was that the dye would not significantly inhibit fungal growth in the culture medium, where biomass production should be observed to the same extent as without the dye. Therefore, this test used a culture medium that ensured optimal fungal growth. In addition, since the culture medium must also be suitable for absorbance and turbidity measurements in microplates, it must meet translucency requirements. In addition to the inoculum fluid and the type of indicator compound, it was necessary to test different concentrations of a nitrogen source. Ammonium nitrate (NH4NO3) was chosen as the model compound for fungal nitrogen nutrition. The experiments were conducted in sterile transparent 96-well flat-bottom plates. Compared with commercial plates containing only the chromogenic compound (MT2, Biolog TM), the setup chosen in this experiment allowed simultaneous manipulation of nutrient concentrations and redox dye type (Oszust et al., 2018). Another critical element that had to be considered in the experimental setup was the type of solvent needed to dissolve the different redox dyes. For MTT and INT, SDS-0.01 M HCl was used as a solvent, according to Septisetyani et al. (2014). Biolog TM dyes are sold in formulations readily soluble in water, as Touchette et al. (2022) described. Finally, the wavelengths used to measure dye formation and biomass production were 490 nm and 750 nm, respectively. They were chosen because they have previously been shown to provide repeatable results for functional comparison between Trichoderma spp. and Verticillium spp. isolates (Oszust et al., 2020b). In this regard, absorbance values at 490 nm reflect the respiration rate, corresponding to substrate utilisation, while the absorbance value at 750 nm measures biomass production (growth pattern) (Oleszek et al., 2019; Pinzari et al., 2017). As for the toxic effect of the MTT and INT dyes, the general trend was that all proposed doses (0.01, 0.05 and 0.1%) significantly limited fungal growth compared to the control (without dye). This indicated that the MTT and INT dyes are toxic to the Verticillium spp. and Trichoderma spp. isolates tested so far, regardless of the culture medium used. The manufacturer's protocol for commercial phenotype microarray (PM) plates (e.g., FF plates) considers absorbance values of A ≥ 0.25 as positive (Chou et al., 2022; Frąc et al., 2022). MTT and INT therefore did not meet the requirements of redox dyes for the PM technique. Conversely, the Biolog TM 0.5% D and 1% F dyes were reliable indicators for PM-nitrogen plate measurements, even after long incubations (e.g., 216 h). The duration of incubation and, thus, the consistency of the redox compounds' staining in repeated respiration measurements is an indispensable factor in the case of filamentous fungi compared to bacteria and yeasts. This is because the growth of fungi in phenotype microarrays can be slow, and measurements typically have to be protracted over a week, with daily readings, unlike bacteria, which can reach full colour development in 24-48 h (Pinzari et al., 2016). Differences in colour development in the wells result from a sum of factors, above all the ability of the fungus to biodegrade or biotransform the redox dye, the type of enzymes and species-specific processes in the respiratory chain, and the influence of extracellular enzymes on the mechanism of oxidoreduction of the dye compound.
Furthermore, even when colour initially forms in the wells, indicating respiratory activity, appreciable later discolouration of the wells by degradation of the formazan precipitated in the mycelium has sometimes been observed, especially during prolonged incubations. Such fungal discolouration of the redox dye may mislead the interpretation of results in these studies (Lasinio et al., 2021). Phenotype Microarray experiments must therefore rely on a robust protocol that is not affected by the toxicity or nutritional effect of the dye or by the biodegradability of the coloured compound. Reliable indicators are a necessary condition in the study of metabolic processes in fungi using redox respirometric compounds. The experiment described here selected the most suitable tetrazolium salt and its concentration for a robust and reliable respirometric analysis of fungal strains in the Verticillium and Trichoderma genera. Among the formulations and chemical compounds tested, the 1% F Biolog TM and 0.5% D dyes were shown to form stable staining in both the Verticillium and Trichoderma trials without giving rise to toxicity phenomena and without being used as a nitrogen source by the fungi. This makes them suitable for studying the nitrogen metabolism of these species. Furthermore, in the presence of a nitrogen source at different concentrations, these indicator compounds provided proportional and consistent colourimetric responses, allowing quantitative and repeatable measurements of fungal respiration and biomass development. The use of Phenotype Microarray TM plates produced by Biolog TM with filamentous fungi is problematic compared to bacteria due to some factors related to the fungi's mode of development and the complexity of their metabolism (Pinzari et al., 2016). The measurement of the metabolic profile of bacteria is based on the development of colour in the wells, which is directly considered a response to substrate use in the wells. In the case of fungi, the only reliable measure is usually the development of biomass through turbidity readings, and it is often necessary to avoid respirometric dyes because of their variable response in different fungal species and their frequent toxicity. However, an essential piece of information is lost in this way: respiration, which does not necessarily correlate directly with biomass development. Instead, the relationship between respiration and biomass often provides essential details about the energy coefficient of the organism growing on a specific substrate (Pinzari et al., 2016; 2017). Through a prior assessment of redox dyes' reliability, toxicity and nutritional effect, it is possible and scientifically sound to use respiration in the phenotypic profiling of filamentous fungi. In the case exemplified, the results obtained will allow the functionality of the genera Verticillium and Trichoderma to be assessed with the Biolog TM Phenotype Microarray. The protocol developed here allows the measurement of the respiratory properties of fungi as an added value to the measurement of biomass production. What has been shown outlines the substantive experimental steps that should be performed whenever a redox dye is used to measure respiration in filamentous fungi. Particular attention should be paid to testing each redox dye's toxic and nutritional effects on every strain under study. The results showed a very high species- or genus-specific variability in the colouring of the dyes.
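To make the dual-wavelength readout discussed above concrete, the short Python sketch below turns paired 490 nm and 750 nm readings for a well into respiration, biomass, and a respiration-to-biomass ratio. Only the A ≥ 0.25 positivity threshold comes from the text; the blank correction, the small biomass floor, and the use of the ratio as a rough energy-coefficient proxy are illustrative assumptions rather than the published protocol.

# Illustrative per-well summary combining the two wavelengths discussed above:
# 490 nm approximates respiration (dye reduction), 750 nm approximates biomass.
POSITIVE_THRESHOLD = 0.25      # A >= 0.25 treated as a positive well (PM protocol)

def summarise_well(a490, a750, blank490=0.0, blank750=0.0):
    respiration = max(a490 - blank490, 0.0)
    biomass = max(a750 - blank750, 0.0)
    return {
        "respiration": respiration,
        "biomass": biomass,
        "positive": respiration >= POSITIVE_THRESHOLD,
        # Avoid dividing by near-zero biomass in empty or non-growing wells.
        "respiration_per_biomass": respiration / biomass if biomass > 0.05 else None,
    }

# Example: a well with clear colour development and moderate growth.
print(summarise_well(a490=0.62, a750=0.31, blank490=0.05, blank750=0.04))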
Nonetheless, it was possible to identify respirometric indicators suitable for comparing the fungal isolates on a metabolic basis, both as a function of respiratory activity and biomass production. The results obtained so far will help to select suitable nitrogen sources as prebiotic supplements in Trichoderma biopreparations with antagonistic activity towards phytopathogenic strains of Verticillium spp. While Verticillium and Trichoderma have different modes of action, they may compete for similar nitrogen sources in their environment. Both Verticillium and Trichoderma are capable of decomposing organic matter and utilising various nitrogen sources for their growth and survival. This includes proteins, amino acids, peptides, and other nitrogen-containing compounds found in the soil or plant debris. They produce a range of enzymes, among them proteases, that enable them to break down complex nitrogen compounds into simpler forms that can be assimilated (Morán-Diez et al., 2019). In environments where both Verticillium and Trichoderma coexist, there can be competition for limited nitrogen resources. This competition may arise when nitrogen sources are scarce or when the two fungi occupy similar ecological niches. Factors that influence the outcome of this competition can include their growth rates, nutrient uptake efficiencies, and ability to produce extracellular enzymes (Dutta et al., 2023). It is important to note that the specific outcome of competition between Verticillium and Trichoderma can vary depending on the environmental conditions, the availability of nitrogen sources, and the specific strains or species involved. In some cases, one fungus may outcompete the other and dominate the niche, while in other situations, they may coexist without significant competitive interactions. Verticillium dahliae is a soil-borne fungus that thrives and endures in soil environments. It is well known as a soil-borne plant pathogen responsible for inducing vascular wilt disease in a broad range of plant species, including economically significant crops. During its saprophytic phase, Verticillium dahliae can persist in the soil even without a susceptible host. It exploits organic matter present in the soil, such as plant debris, root exudates, and other organic materials, as a source of nourishment. To enhance its survival in adverse conditions, the fungus forms compact survival structures called microsclerotia, enabling it to endure for extended periods in the soil. Upon encountering a suitable host, Verticillium dahliae invades the plant's roots and colonises its vascular system, giving rise to the telltale wilting symptoms associated with Verticillium wilt. The fungus reproduces and produces spores within the host tissue, facilitating its continued survival and propagation within the soil (Zhang et al., 2022). However, in addition to its pathogenic tendencies, Verticillium dahliae engages in various interactions with other soil microorganisms, such as Trichoderma citrinoviride, Trichoderma atroviride, and Trichoderma pseudokoningii, that are often used as beneficial microorganisms for biocontrol purposes (Guzmán-Guzmán et al., 2023). These interactions encompass competition for resources, associations with beneficial microorganisms, and responses to environmental factors. Thus, the soil is the ecological niche where Verticillium dahliae and Trichoderma spp.
compete, as Trichoderma species typically do not colonise the xylem: they are primarily rhizosphere-associated fungi and do not have the ability to invade the vascular system of plants. Their interactions and activities are primarily focused on the soil and root surfaces (Sohrabi et al., 2023). Overall, the competition for nitrogen sources between Verticillium and Trichoderma is a complex ecological process that can be influenced by multiple factors. Further research is needed to better understand the dynamics of this competition and its implications for plant health and disease suppression. In this direction, reliable methods and protocols are needed to better understand the interaction mechanisms between microorganisms. CONCLUSIONS 1. Fungal competition for nutrients is measurable and exploitable in biocontrol. Competition among fungi for nutrients is well documented in various ecosystems, including soil. In nature, complex ecological mechanisms resolve and regulate such conflicts, for example through compartmentalization into different ecological niches. However, when fungal strains are to be used in the biocontrol of crop-pathogenic species, it becomes useful to exploit interspecific competition and antagonism to select strains that are particularly efficient in dominating others. In addition, understanding the mechanisms of fungal competition may point toward targeted strategies to improve biocontrol efficiency. For example, manipulating nutrient availability, introducing specific nutritional stimuli, or supporting microbial strains alongside biocontrol fungi could promote their dominance and strengthen their ability to challenge pathogens. 2. Toxic and nutritional effects of dyes affect the results of metabolic profiling. The use of phenotype microarrays to study the metabolism of fungi allows many applications in a variety of research areas, such as the comparison of different strains for their functional selection in specific biotechnological fields. However, this work showed that dyes typically used as markers of respirometric activity may be toxic to some species of fungi or have stimulatory or nutritional effects on others, thereby affecting their enzymatic activity and altering their metabolic pathways. As a result, these dyes may lead to inaccurate or biased results, as they may interfere with the functional processes they are intended to study. The selection of dyes used to highlight dehydrogenase activity within multiwell plates must therefore be made with great care, especially when comparing species that may differ in their sensitivity to the dyes and may thus yield different results for the same panel of nutritional compounds because of different reactions to the dye rather than because of actual functional differences. 3. 1% F and 0.5% D Biolog TM dyes are reliable with Verticillium and Trichoderma spp. This study selected dyes without toxic effects, and their correct concentrations, to obtain repeatable and reliable results with Trichoderma and Verticillium strains. Such an approach is necessary whenever the metabolic profiles of fungal strains in microplates have to be compared using colourimetric markers of dehydrogenase activity based on formazan derivatives. Indeed, this study showed that there is high variability in the interspecific and intraspecific response of filamentous fungi to different types of dyes.
Therefore, in phenotypic comparisons between strains, there is a risk of confounding a different toxicity response to the dye with differences in metabolic response to the administered nutrients. For Verticillium and Trichoderma species, it was seen that the commercial Biolog TM type "F" dye at 1% concentration and the type "D" dye at 0.5% concentration can be considered reliable and capable of comparable responses in the two genera of fungi compared. 4. Implications for selecting nitrogen sources as prebiotic supplements in Trichoderma-based biopreparations. Functional profiling of filamentous fungi by multiwell plates used as phenotype microarrays is helpful in many biotechnological applications. To exploit the full potential of profiling, it is necessary to measure both the biomass production and the respirometric activity of the strains on given nutrient compounds; it is, therefore, necessary to measure both well turbidity and colour formation. In this study, we demonstrated how formazan compounds, typically used as colourimetric markers of fungal dehydrogenase activity and as respiration indicators, can also be used as nitrogen sources by fungi. Formazan and its derivatives have a chemical structure that includes four nitrogen atoms, similar to azo (amine) dyes, which some fungal strains can decolourise. Some species of fungi possess enzymes capable of using colourimetric indicators as nitrogen sources. Therefore, if the purpose of profiling is to analyse and compare the performance of different nitrogen sources for multiple strains, the choice of the correct indicator molecule and its concentration becomes essential for the reliability of the results. In this work, indicator substances were selected that allow, without substantial nutritional interference, the testing of different nitrogen sources for use as prebiotic supplements in Trichoderma-based biopreparations intended to counteract phytopathogenic strains of Verticillium spp.
v3-fos-license
2020-07-09T09:02:48.698Z
2020-07-06T00:00:00.000
221342929
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.researchsquare.com/article/rs-41117/v1.pdf?c=1631860029000", "pdf_hash": "e7a8eb4c08ed0406e41569680cf820ccbe9ac3d5", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42517", "s2fieldsofstudy": [ "Medicine" ], "sha1": "6a2c6ae5d357bed462ec01b96e6d0ffb0a9202cf", "year": 2020 }
pes2o/s2orc
Performance of RIPASA and AIR Scores in an Egyptian Subset of Patients with Suspected Diagnosis of Acute Appendicitis Background: Early and definitive acute appendicitis (AA) diagnosis remains a great challenge in clinical practice worldwide. Currently, several risk scores are available for the clinical assessment of AA, although there are differences in their accuracy rates. The aim of the study was to assess and analyze the accuracy of the acute inflammatory response (AIR) and Raja Isteri Pengiran Anak Saleha Appendicitis (RIPASA) scores in an Egyptian population. Methods: A series of 310 patients (61% male, median age 18 years) with right iliac fossa pain were clinically evaluated for suspicion of AA. All patients were assessed using both the AIR and RIPASA scores. Following evaluation, all patients underwent appendectomy. A final diagnosis of AA was based on histopathology. Two predetermined cutoff values for the AIR score and one cutoff value for the RIPASA score were evaluated for sensitivity, specificity, and accuracy. Statistical analysis was performed using SPSS version 22. Cross tabulation was performed to assess sensitivity, specificity, and positive and negative predictive values, along with other tests for parametric and non-parametric variables. Results: The sensitivities of the RIPASA score cutoff value of ≥ 7.5 and the AIR score cutoff values of >4 and >8 were 83.8%, 92.2%, and 33.1%, respectively, while the specificities were 69.2%, 76.7%, and 84.6%, respectively. An AIR score of >4 (91%) was more accurate than an AIR score of >8 (37.4%) or a RIPASA score of ≥ 7.5 (82.5%). Conclusion: An AIR score of >4 had better sensitivity, specificity, and accuracy compared to a RIPASA score of ≥ 7.5. Background Appendicitis is one of the most frequent acute surgical disorders, with a one in seven life-long prevalence (1). The incidence of acute appendicitis (AA) has been steadily declining since the late 1940s. In developed countries, AA affects approximately 5,750 per 100,000 population per year in developing countries, with the highest incidence between 10 and 30 years of age (2,3). Geographical differences have been asserted, with a life-long risk for AA of 9, 8 and % in the USA, Europe, and Africa, respectively (4). There is also great variation in the presentation, severity, radiological evaluation, and surgical management of patients with AA in relation to the average income in each country (5). Clinical diagnosis is a craft taking time to understand. It is based on information gleaned from a large number of variables in the patient's history, clinical examination, and laboratory studies. Experience with many patients is required to understand the variations in this complex pattern of information. However, even experienced surgeons often err by giving too much credence to pain and tenderness and too little to the inflammatory response (6). The diagnosis of AA is primarily clinical, and only contrast-enhanced computed tomography (CT) has a high enough sensitivity and specificity to establish the correct diagnosis (7,8). However, high cost and limited availability are often factors that hinder its utility, especially in developing countries. A clinical scoring system that has acceptable sensitivity and specificity with a low rate of appendectomy would be especially useful.
The Acute Inflammatory Response (AIR) and Raja Isteri Pengiran Anak Saleha Appendicitis (RIPASA) scoring systems have been recently developed to aid in the rapid diagnosis of AA, especially in developed countries, to replace the remote Alvarado score that has been used in Western nations (9,10). We prospectively compared RIPASA with AIR scores in patients presenting with right iliac fossa (RIF) pain with suspected appendicitis in our institution. Methods A prospective analysis was performed on 310 patients presenting with signs and symptoms suggestive of AA from September 2017 to April 2018. Written consent was obtained from each patient before enrollment in the study. The current analysis was carried out at Suez Canal University Hospitals, Ismailia, Egypt. All patients were evaluated using the RIPASA (9) and AIR scores (10) by a general surgery registrar, and they were monitored by an external observer (one of the authors) who is familiar with the parameters of both scores to avoid bias and under- or overestimation. All patients presented to the emergency room (ER) with right iliac fossa pain, which raised an overall suspicion of AA; both males and females ≥ 16 years of age were included. Patients with the following conditions were excluded: <16 years or ≥ 60 years of age, BMI ≥ 40 kg/m2, presence of inflammatory bowel disease, a pathological diagnosis of malignant appendicular lesions, appendicular perforation, pregnancy, and patients with a RIF mass or a history of urolithiasis. A complete history, including the nature and duration of symptoms, was taken from each patient. All patients received a complete physical examination by the registrar in the ER. Blood work-up included hemoglobin level, white leucocyte count (WBC), and C-reactive protein (CRP). A urine pregnancy test was performed for females in the reproductive age group who presented with amenorrhea. A routine urinalysis with microscopy and pelvi-abdominal ultrasound was the investigation of choice in female patients with a suspected gynecological disease that could not be excluded by history and clinical examination, in order to avoid inclusion of complicated ovarian cysts, polycystic ovary and other purely gynecological lesions in the current study. After the previous parameters for patients' final enrollment had been set, the ineligible patients were excluded from the study (Fig. 1). All patients were subjected to the following: 1. RIPASA and AIR scoring for each clinically diagnosed case of appendicitis. 2. Histopathological confirmation after appendectomy was the gold standard, with each specimen assessed as positive or negative. Statistical analysis All analyses were performed using SPSS version 22.0 (IBM, Armonk, New York, U.S) and GraphPad Prism version 8.0.1 (GraphPad Software, San Diego, California, U.S). A p value < 0.05 was considered significant. Variables were tested for their pattern of distribution using the Kolmogorov-Smirnov test and visual assessment of histograms. Data with normal distribution were expressed as mean ± standard deviation, while non-normally distributed variables were presented as median and interquartile range (IQR). Following histopathological examination, all patients were classified into 2 groups based on the presence or absence of acute appendicitis. To assess possible differences between the groups, for parametric and non-parametric variables we used the unpaired Student's t and Mann-Whitney U tests, respectively.
To compare categorical variables, the Chi-square test was used. Cross tabulation was performed to assess sensitivity, specificity, and positive and negative predictive values (PPV and NPV, respectively). The diagnostic accuracy of each scoring system was calculated using histopathology as the gold standard for diagnostic confirmation. To detect the area under the curve (AUC) for each score, a receiver operating characteristic (ROC) curve was plotted to evaluate both scores using SPSS version 22.0 software. Results Following the exclusion of 44 patients, 310 were identified for final analysis, with the majority being males (61%) and a male:female ratio of 1.6:1 (Table 1). The final diagnosis was based on histopathological findings following open appendectomy. A total of 92% of patients had histologically proven appendicitis, while 8% had a normal (negative) appendix. The median age among all patients was 18 years, with no difference between those with and without appendicitis (Table 1). Most patients had an onset of symptoms less than 48 hours prior to surgery. All patients suffered from pain in the RIF (100%), with nausea and vomiting in 90%, RIF tenderness in 88%, and rebound tenderness in 77%. These were all statistically significant in the positive subgroup (Table 1). Only 47% of the patients presented with fever, Rovsing sign was positive in 88%, WBCs were elevated in 61%, CRP > 50 mg/L was present in 8%, and a negative urinalysis was present in 77%. A significantly greater proportion of patients presented with these signs and laboratory findings in the positive compared to the negative group (Table 1). The RIPASA and AIR scores for all patients had a median of 12.5 (range 5-15) and 8 (range 6-9), respectively. There were higher median scores in the positive compared to the negative group (Table 1). Although there was a wide range of hospital stay (1-12 days) in the positive group, their mean length of stay was not significantly different from the negative group (Table 1). Based on the histopathological final diagnosis, a RIPASA score with a cutoff value ≥ 7.5 was capable of detecting 238 out of 284 truly positive cases and correctly excluding 18 of 26 truly negative cases, with a sensitivity and specificity of 83.80% and 69.23%, respectively (Tables 2 and 4). However, an AIR score with a cutoff value > 4 achieved better results, as it could identify 262 out of 284 truly positive cases and correctly exclude 20 of 26 truly negative cases, with a sensitivity and specificity of 92.25% and 76.92%, respectively (Tables 3 and 4). An AIR score with a cutoff value > 8 had a higher specificity but lower sensitivity, 84.61% and 33.09%, respectively (Table 3). An AIR score > 4 is more diagnostic than a RIPASA score ≥ 7.5 (PPV, 97.7% vs. 96.7%; NPV, 47.6% vs. 28.1%, respectively) and highly accurate (91% vs. 82.5%, respectively) (Table 4). The AUC was 0.827 and 0.846 for the AIR and RIPASA scores, respectively (Fig. 2). The rate of postoperative complications was approximately 9%. A total of 9 patients developed chest infection (2.9%) and 2 developed venous thromboembolism (0.6%). Three of the 9 also had infections at the surgical site, and 2 of 9 developed a pelvic abscess. Infections at the surgical site were detected in 11 patients (3.5%), while a pelvic abscess was identified in 8 (2.6%). Four patients (1.6%) had isolated urinary tract infections following surgery (Fig. 3). Discussion Acute appendicitis is one of the most common surgical emergencies around the world. The gold standard treatment is appendectomy.
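The cross-tabulation metrics reported above can be recomputed directly from the 2x2 counts given in the text; the short Python sketch below does this for the AIR > 4 cutoff (262 of 284 histology-positive cases detected, 20 of 26 histology-negative cases correctly excluded). It is an illustrative recalculation, not the authors' SPSS output.

# Recomputes sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table.
# Counts are the AIR > 4 figures reported in the text: TP=262, FN=22, TN=20, FP=6.
def diagnostic_metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / total,
    }

air_gt4 = diagnostic_metrics(tp=262, fp=6, fn=22, tn=20)
for name, value in air_gt4.items():
    print(f"{name}: {value:.1%}")
# The output matches the reported values (~92.3% sensitivity, ~76.9% specificity,
# ~97.8% PPV, ~47.6% NPV, ~91% accuracy).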
However, a debate now exists regarding the possibility of medically treating AA (11). A delay in diagnosis may lead to complications including perforation, abscess formation, peritonitis, and even death (12). However, an increased rate of surgery for normal appendices has undesirable consequences such as post-operative adhesions and ileus (11). These drawbacks to aggressive surgery have led to the use of ultrasonography and CT to reduce the rate of negative appendectomies and improve diagnostic accuracy. However, these additional diagnostic modalities have led to increased financial burdens and health care costs, particularly in less developed countries with limited resources (13). Given these findings and the difficulties of over- and underestimation in the diagnosis of AA, several scoring systems have been developed. Since its introduction in 1986, the Alvarado scoring system has been the most popular (14). Subsequently, modified Alvarado scoring systems have emerged to improve the original Alvarado score's lower specificity and sensitivity when applied in Western nations (15). During the past decade, a new scoring system was developed to address deficiencies in the Alvarado and modified Alvarado scoring systems in Asian populations. The RIPASA scoring system was introduced in 2010, and it depends solely on history, clinical examination, and 2 blood tests (9,16). The AIR score was developed in parallel with the RIPASA score to overcome the drawbacks of the previous scores. This score incorporates two simple lab tests (WBC count and CRP) in its scheme. The AIR score assesses the intensity of AA in relation to an elevated WBC count together with the percentage of segmented neutrophils and CRP (10). The total negative appendectomy rate in the current study was 8.4%, which is less than the rates documented by Shuaib et al. (13), Chong et al. (17) and Rathod et al. (18), who documented rates of 18.4%, 22.9%, and 20.69%, respectively. ROC analysis was performed in the present study to ascertain a cutoff score for RIPASA with higher sensitivity and specificity by plotting the true positive rate against the false positive rate for different cut points of the RIPASA score. The AUC for the RIPASA system was greater than that of the AIR (Fig. 2). However, when the AIR score was calculated for a score > 8, there was a higher specificity but much lower sensitivity. To the best of our knowledge, only a few studies have directly compared the RIPASA and AIR scoring systems for the diagnosis of AA. In the present study, the sensitivity of an AIR score > 4 was greater than that of a RIPASA score ≥ 7.5 and significantly better than an AIR score > 8. The sensitivities for an AIR score > 4, an AIR score > 8, and a RIPASA score ≥ 7.5 were 92.25%, 33.09%, and 83.8%, respectively. The specificity of an AIR score > 8 was greater than that of an AIR score > 4 and a RIPASA score ≥ 7.5: 84.6%, 76.9% and 69.2%, respectively. The present study shows that the negative appendectomy rate (Tables 2 and 3) was 2.2% (6 out of 268) for patients with an AIR score > 4 and 2.9% (8 out of 246) for a RIPASA score ≥ 7.5, while an AIR score > 8 demonstrated a negative appendectomy rate of 4% of cases. The highest rate of suspected patients who proved later to be truly positive by histopathology was registered for the AIR > 4 score, with 97.8% (262 out of 268).
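The ROC analysis described above can be reproduced with standard tooling. The sketch below uses scikit-learn on placeholder arrays (the study used SPSS), so the data, the score model, and the Youden-index cutoff selection are illustrative assumptions only.

# Illustrative ROC/AUC computation for a clinical score against histopathology.
# `y_true` is 1 for histologically confirmed appendicitis, 0 otherwise, and
# `score` is the patient's RIPASA (or AIR) value; both arrays are placeholders.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=310)              # placeholder outcomes
score = y_true * 3 + rng.normal(8, 2, size=310)    # placeholder score values

fpr, tpr, thresholds = roc_curve(y_true, score)
auc = roc_auc_score(y_true, score)
print(f"AUC = {auc:.3f}")

# Youden's J picks the threshold maximising (sensitivity + specificity - 1);
# this is one common way to choose a cutoff from the ROC curve.
best = np.argmax(tpr - fpr)
print(f"Suggested cutoff: {thresholds[best]:.1f} "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")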
Table 5 illustrates a comparison between the current study and previous studies (16,19) which evaluated the RIPASA versus the AIR scoring system in the diagnosis of AA. The highest sensitivity and specificity, 93.1% and 91.6%, respectively, were recorded by Karami et al. (16). However, Bolívar-Rodríguez et al., in a study performed in Mexico, reported a sensitivity of 97.2% and a specificity of 27.6% (19). These findings show that a RIPASA score with a cut-off > 7.5 can successfully diagnose AA with greater than 80% accuracy and a higher PPV and positive likelihood ratio than 2 of the 3 previously compared studies. Table 5. Comparison of sensitivity, specificity, PPV, and NPV of the AIR and RIPASA scores across the present and previous studies (Karami et al.; Bolívar-Rodríguez et al.). We postulate that both a RIPASA score ≥ 7.5 and an AIR score > 4 have a very good ability to detect truly sick patients when they are at high risk. The current era of scoring systems aiding in the preoperative diagnosis of appendicitis could help to avoid unnecessary laparotomies. The current study adds more precision to the validity of both AIR and RIPASA as objective tools that can be applied to our population to support decision-making in the management of a patient with suspected AA. Our observed rate of negative appendectomies, 26 out of 310 patients (8.4%), is certainly within an acceptable range. The RIPASA and AIR scoring systems did not show a significant difference in the diagnosis of AA in an Egyptian population. They both contain easily obtained parameters from the history, physical examination, and a few simple blood tests. Both scoring systems are beneficial for making a quick decision in resource-limited and low- to middle-income countries without the need for imaging assistance. Managing patients with suspected AA remains a challenge, and the optimal management approach remains uncertain, even after ultrasound, CT, and diagnostic laparoscopy were introduced. Conclusion An AIR score of >4 had better sensitivity, specificity, and accuracy compared to a RIPASA score of ≥7.5; however, an AIR score of >8 was more specific for the Egyptian population. Both scoring systems could safely prevent hospitalization and unnecessary investigations in patients with a doubtful diagnosis. Additional randomized, controlled trials are needed to validate the best scoring system for acute appendicitis diagnosis. Consent for publication We obtained consent from all the patients included in our study with institutional consent forms. Figure 1. Flow chart of the included patients, showing the excluded patients and the inclusion criteria for the participants. Figure 2. ROC curves obtained from the results of both scores; the ROC analysis determines the sensitivity and specificity for both scores along with the AUC, which appears to be greater for the RIPASA system than for the AIR.
v3-fos-license
2022-02-04T14:16:37.520Z
2022-02-01T00:00:00.000
246489771
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2022.868523/pdf", "pdf_hash": "b42f7b067cf822bb02a002fd41c724ac92a8dde2", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42518", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "92c2b6c1449ee2a0d3c8f784ea53e9f0d8143a71", "year": 2022 }
pes2o/s2orc
Three Models of Vaccination Strategies Against Cryptococcosis in Immunocompromised Hosts Using Heat-Killed Cryptococcus neoformans Δsgl1 Vaccines are one of the greatest medical accomplishments to date, yet no fungal vaccines are currently available in humans mainly because opportunistic mycoses generally occur during immunodeficiencies necessary for vaccine protection. In previous studies, a live, attenuated Cryptococcus neoformans Δsgl1 mutant accumulating sterylglucosides was found to be avirulent and protected mice from a subsequent lethal infection even in absence of CD4+ T cells, a condition most associated with cryptococcosis (e.g., HIV). Here, we tested three strategies of vaccination against cryptococcosis. First, in our preventative model, protection was achieved even after a 3-fold increase of the vaccination window. Second, because live C. neoformans Δsgl1-vaccinated mice challenged more than once with WT strain had a significant decrease in lung fungal burden, we tested C. neoformans Δsgl1 as an immunotherapeutic. We found that therapeutic administrations of HK C. neoformans Δsgl1 post WT challenge significantly improves the lung fungal burden. Similarly, therapeutic administration of HK C. neoformans Δsgl1 post WT challenge resulted in 100% or 70% survival depending on the time of vaccine administration, suggesting that HK Δsgl1 is a robust immunotherapeutic option. Third, we investigated a novel model of vaccination in preventing reactivation from lung granuloma using C. neoformans Δgcs1. Remarkably, we show that administration of HK Δsgl1 prevents mice from reactivating Δgcs1 upon inducing immunosuppression with corticosteroids or by depleting CD4+ T cells. Our results suggest that HK Δsgl1 represents a clinically relevant, efficacious vaccine that confers robust host protection in three models of vaccination against cryptococcosis even during CD4-deficiency. INTRODUCTION Invasive fungal infections are primarily caused by environmental fungi that mainly infect immunocompromised individuals resulting in~1.5 million deaths a year that account for~50% of all AIDS-related deaths (1,2). Individuals most at risk include HIV/AIDS patients (3)(4)(5), cancer patients receiving chemotherapy (6,7), solid organ transplant recipients (8)(9)(10), or patients taking medication to control chronic diseases (11)(12)(13)(14). Unlike most fungi that do not infect humans, the pathogenicity of invasive fungal species begins with the ability to grow and replicate at human body temperature (15,16), which suggests that climate change, particularly global warming, may play a role in increasing infections from environmental fungi in more temperate climates (17)(18)(19). The incidence of invasive fungal infections is expected to further increase as the global immunocompromised population continues to rise due to novel immunosuppressive therapies or comorbidities, such as the current COVID-19 pandemic (20)(21)(22)(23)(24)(25). One of these fungal pathogens is Cryptococcus neoformans, a basidiomycetous yeast ubiquitously found in environmental sources such as avian habitation, trees, and soil (3,5,26). C. neoformans is a main etiological agent of cryptococcosis, a life-threatening invasive fungal infection that originates in the respiratory tract (27)(28)(29). 
Upon inhalation of the environmental fungal propagules, immunocompetent hosts often remain asymptomatic while they either clear the initial infection eliminating the yeast from the lungs or control fungal proliferation by enclosing the persistent yeast in lung granulomas where the fungal cells remain dormant (30)(31)(32). Conversely, immunocompromised individuals lacking a necessary component of the immune system, namely CD4 + T cells as seen with HIV/AIDS, generally fail to control the initial infection or maintain the integrity of the lung granulomas containing latent cryptococcal cells leading to host pathology (11,33). These individuals may experience uncontrolled fungal replication and dissemination of the fungus to the central nervous system potentially leading to life-threatening meningoencephalitis (9,33) accounting for~220,000 new cryptococcal cases and~180,000 deaths a year (34,35). Vaccines are considered to be one of the greatest medical accomplishments to date (36). Although the high mortality rate upon extrapulmonary cryptococcosis in at-risk individuals can be partly attributed to the poor efficacy, host toxicity, and pathogenacquired resistance of current antifungal therapeutics (37)(38)(39), the absence of fungal vaccines is a major constraint in overcoming invasive fungal infections in humans (40). While there has been ample research into the development of a vaccine against C. neoformans [reviewed in (41)(42)(43)], none have advanced past the pre-clinical research stage. The lack of vaccine advancement is chiefly due to the fact this pathogen infects mostly immunocompromised individuals with low CD4 + T cell counts (3,28), and the majority of current cryptococcal vaccine research lack host protective efficacy in this immunodeficiency. As such, vaccine formulations exhibiting high efficacy in animal models that resemble immunodeficiencies associated with cryptococcosis (e.g., lacking CD4 + T cells) are in high demand (42,44). Exposure to C. neoformans may result in the yeast being cleared or safely contained within lung granulomas in immunocompetent hosts (31,33). In addition to the necessity of vaccine studies being carried out in immunodeficient conditions, the literature currently contains only cryptococcal vaccines used in a prophylactic manner. However, there are no reports of vaccination strategies against the reactivation of dormant C. neoformans from lung granuloma breakdown due to immunosuppressive occurrences [reviewed in (44)]. This disparity is mainly attributed to the lack of tools to evaluate vaccines against infection by the reactivation of latent fungal cells in mouse models since mice do not form lung granulomas to wild-type (WT) C. neoformans and remains a major understudied bottleneck in the advancement of a clinically available anti-cryptococcal vaccine. Our lab has previously engineered a mutant strain of C. neoformans (Dsgl1) that accumulates large amounts of sterylglucosides (SGs) and provided the first evidence on the key role of sterylglucosidase 1 (Sgl1) on fungal virulence (45). SGs have been previously shown to possess immunological functions [reviewed in (46)]. The use of the plant SG, bsitosterol, increased T cell proliferation and Th1 polarization (47,48), significantly prolonged survival of mice infected with Candida albicans (49,50), and promoted the recovery of patients with pulmonary tuberculosis in combination with regular antituberculosis treatment (48). 
However, our work provided the first physio-pathological studies with fungal-derived SGs, and our recent structural studies will enable the rational design of new antifungal agents targeting Sgl1 (51). Prior studies in our lab have shown that C. neoformans Dsgl1 induces a proinflammatory lung cytokine environment with robust effector cell recruitment to the lungs as well as confers complete host protection to lethal WT challenge under immunodeficiencies most associated with cryptococcosis (e.g., lacking CD4 + T cells) (52). Interestingly, we found that protection required SGs in combination to the immunosuppressive glucuronoxylomannan (GXM)-based capsule since an acapsular mutant strain (Dcap59Dsgl1) was no longer protective (53) nor induce a protective cytokine response to ex vivo stimulated gd T cells (T.G. Normile, T.H. Chu, B.S. Sheridan, and M. Del Poeta, submitted for publication), suggesting that SGs may act as an immunoadjuvant to GXM to induce host protection. Most recently, we uncovered the immune-mechanism of protection involved TLR2-mediated production of IFNg and IL-17A by gd T cells resulting in a robust CD4 + or CD8 + T cell response for complete host protection to subsequent WT infection (T.G. Normile, T.H. Chu, B.S. Sheridan, and M. Del Poeta, submitted for publication). Overall, these studies suggest that C. neoformans Dsgl1 represents a viable live, attenuated vaccine. In the present study, we validate three different models of successful vaccination strategies against cryptococcosis using heat-killed (HK) C. neoformans Dsgl1 in condition of CD4 + T cell deficiency. In the canonical prevention model of vaccination, we found that two subsequent administrations of HK C. neoformans Dsgl1 conferred complete host protection to a WT challenge even when CD4 + T cells were depleted, mimicking the results obtained with the live, attenuated mutant. Host protection in immunocompetent and CD4-deficient mice was still found after increasing the time of the vaccination window from 30 to 90 days or after challenging vaccinated mice 3 subsequent times, suggesting our vaccine strategy induces long term host immunity and protection against the WT strain. Interestingly, vaccinated mice receiving multiple WT challenges showed a significant decrease in lung fungal burden compared to vaccinated mice that were challenged only once. Because of these findings, we tested whether C. neoformans Dsgl1 could be used in a therapeutic manner. We found that administration of HK C. neoformans Dsgl1 post WT challenge in naïve mice significantly prolonged survival compared to untreated mice. In previously vaccinated mice, administration of HK C. neoformans Dsgl1 post WT challenge significantly decreased the lung fungal burden post challenge, even during CD4 + T cell deficiency. Finally, we tested HK C. neoformans Dsgl1 in a model of cryptococcal granuloma to study whether our vaccination strategy would prevent fungal reactivation upon immunosuppression. We found that C. neoformans Dsgl1vaccinated mice exhibited significantly enhanced survival and control of fungal proliferation from latent granuloma-contained fungal cells upon inducing immunosuppression with either corticosteroid administration or CD4 + T cell depletion. In conclusion, our results suggest that HK C. neoformans Dsgl1 represents a clinically relevant vaccine candidate and confers robust host protection in three models of vaccination against cryptococcosis during host conditions most associated with clinical cases of cryptococcosis in humans. 
Fungal Strains and Heat Killed Yeast The fungal strains used in this study were wild-type (WT) C. neoformans var. grubii strain H99, C. neoformans Dsgl1, a mutant strain accumulating sterylglucosides developed by our group (45), and C. neoformans Dgcs1, a mutant strain lacking glucosylceramide synthase (54). For all experiments, fungal strains were freshly recovered from a -80°C freezer stock on YPD agar at 30°C for 3-4 days. An isolated colony was added to 10ml of YPD broth and grown for 16-18hr with shaking, washed three times in sterile PBS, counted with a hemocytometer, and resuspended in sterile PBS at the desired concentration. For HK strains, the desired concentration of live yeast was resuspended in PBS and added to an 80°C heat block for 1hr. All HK strains were confirmed to be fully dead by plating the mixture on YPD plates at 30°C for 4 days and observing no growth. Mice and Ethical Statement Both male and female CBA/J mice were purchased from Envigo. All animals were housed under specific pathogen free conditions and had access to food and water ad libitum. Mice were allowed one week to acclimate upon arrival before any procedures began. All animal procedures were approved by the Stony Brook University Institutional Animal Care and Use Committee (protocol no. 341588) and followed the guidelines of American Veterinary Medical Association. In vivo Infections and Organ Fungal Burden All primary infections and immunizations were carried out in both male and female CBA/J mice 4-6 weeks old. Mice were first intraperitoneally (IP) anesthetized with a ketamine/xylazine solution (95mg of ketamine and 5mg of xylazine per kg of animal body weight). Anesthetized mice were then intranasally (IN) injected with the desired concentration of the specified yeast cells in 20ml of PBS. For fungal burden analysis, mice were euthanized via CO 2 inhalation on pre-determined timepoints. The lungs, brain, spleen, kidneys, and liver were removed, homogenized in 10ml of sterile PBS using a Stomacher 80 blender (Seward, UK), and serial dilutions were grown on YPD plates at 30°C for 3-4 days before being counted and total organ burden calculated. Immunosuppression Treatments Cortisone 21-acetate (CA) (Sigma; cat # C3130) was used to induce leukopenia. Mice were sub-cutaneously administered 250mg/kg/mouse CA in PBS every other day for a set timeline. IP administration of anti-CD4 monoclonal antibody (clone: GK1.5; BioXCell) was used to deplete mice of CD4 + T cells. Antibody dilutions were prepared from the stock solution in PBS each time. Mice were administered 400mg/100ml every 4 days for the duration of the experiment to maintain cell depletion as previously validated in this infection model (52). Control group mice were administered isotype-matched antibody at the same concentration and administration timeline. Vaccination Strategies and Survival Studies Three different vaccination models were used in this study. For survival studies, any animal that appeared to be moribund, exhibited labored breathing or neurological infection, or had lost more than 20% body weight was euthanized via CO 2 . i) For the prevention model, mice were IN injected with 5x10 5 live C. neoformans Dsgl1 in 20ml of PBS, 5x10 5 , 5x10 6 , or 5x10 7 HK C. neoformans Dsgl1 in 20ml of PBS, or 20ml of sterile PBS (unvaccinated controls) 30 days prior to WT challenge unless stated otherwise in the figure caption. Mice were challenged with 5x10 5 C. 
neoformans WT in 20ml of PBS unless stated otherwise in the figure caption and monitored daily until the predetermined experimental endpoint. ii) For the therapeutic model, live or HK C. neoformans Dsgl1 was used to treat vaccinated or unvaccinated mice post WT challenge. In vaccinated mice, immunocompetent or CD4-deficient mice were administered two subsequent doses of 5x10 7 HK C. neoformans Dsgl1 on days -30 and -15, and mice were challenged with the WT strain on day 0. Mice were administered additional doses of 5x10 7 HK C. neoformans Dsgl1 on days 30 and 45 to reduce WT cells that persist in the lungs of vaccinated mice. In unvaccinated mice, mice were first challenged with 1x10 5 C. neoformans WT strain on day 0. WT challenged mice were treated with 5x10 5 live C. neoformans Dsgl1 in 20ml of PBS, 5x10 7 HK C. neoformans Dsgl1 in 20ml of PBS, or 20ml of sterile PBS (controls) on either day 3 or day 7 post challenge and assessed for survival until day 30. iii) For the reactivation model, we assessed whether vaccination with C. neoformans Dsgl1 could protect mice from lethal reactivation infection from latent fungal cells. First, mice were IN injected with 5x10 5 C. neoformans Dgcs1, an avirulent mutant that has been shown to induce lung granuloma formation that recapitulates a human pulmonary granuloma (54), on day -60. After 30 days, mice were IN injected with 5x10 5 live C. neoformans Dsgl1, 5x10 7 HK C. neoformans Dsgl1, or 20ml of sterile PBS (unvaccinated controls) on day -30 (and a second dose of 5x10 7 HK C. neoformans Dsgl1 on day -15). To induce reactivation of latent C. neoformans Dgcs1, mice were immunosuppressed via administration of either corticosteroids (cortisone 21-acetate) or depleted of CD4 + T cells via IP injection of a monoclonal antibody at set timelines beginning on day 0. Mice were monitored for survival over 30 days. Statistical Analysis All analyses were performed using GraphPad Prism 9 software. The sample size, statistical analysis, and statistical significance is described in each figure caption. The Mantel-Cox log-rank test was used to calculate significance for survival studies. A twotailed unpaired t test was used to calculate statistical significance between two samples, and either an ordinary one-way ANOVA using Tukey's multiple comparisons test for P value adjustment or a two-way ANOVA using Sǐdaḱ's multiple comparisons test for P value adjustment was used to calculate statistical significance between more than two samples. Vaccination With HK C. neoformans Dsgl1 Confers Concentration-Dependent Host Protection We have recently found that murine splenocytes robustly produced the essential protective cytokines IFNg and IL-17A when stimulated with HK C. neoformans Dsgl1 ex vivo, and in fact ex vivo stimulation of splenocytes produced significantly greater quantities of these cytokines compared to live-cell stimulation at the same concentration and on the same timeline (T.G. Normile, T.H. Chu, B.S. Sheridan, and M. Del Poeta, submitted for publication). From this observation, we asked if administration of HK C. neoformans Dsgl1 would provide the same host protection to lethal WT challenge as with the live, attenuated mutant. Since vaccination with HK mutant strains is notoriously known to elicit a weaker immune response than live, attenuated strains (55), mice were administered increasing concentrations of HK C. neoformans Dsgl1 30 days prior to WT challenge. As expected, mice vaccinated with live C. 
neoformans Dsgl1 were fully protected while unvaccinated mice fully succumbed to infection ( Figure 1A). Interestingly, we observed a concentration-dependent survival rate in mice with the increasing concentrations of HK C. neoformans Dsgl1. Mice administered 5x10 5 HK C. neoformans Dsgl1 fully succumbed to the WT infection in a similar timeline to unvaccinated mice ( Figure 1A). There was a significant increase in median survival time for mice administered 5x10 6 HK C. neoformans Dsgl1, although all mice still succumbed to infection. Remarkably, mice administered 5x10 7 HK C. neoformans Dsgl1 exhibited a 70% survival rate at the endpoint of the experiment that was not statistically different from the complete protection seen with live C. neoformans Dsgl1 ( Figure 1A). Surviving mice visually appeared healthy with normal weight gain, and endpoint organ fungal burden analysis confirmed no extrapulmonary dissemination had occurred ( Figure 1B). Nevertheless, mice vaccinated with 5x10 7 HK C. neoformans Dsgl1 displayed a significantly greater lung fungal burden compared to mice vaccinated with live C. neoformans Dsgl1 ( Figure 1B). These data suggest that mice vaccinated with HK C. neoformans Dsgl1 exhibited concentration-dependent partial protection with 5x10 7 HK C. neoformans Dsgl1 being the most efficacious. T Cell Immunodeficiency We have unveiled that administration of a single dose of 5x10 7 HK C. neoformans Dsgl1 conferred similar host protection compared to vaccination with live C. neoformans Dsgl1, although complete protection was not achieved ( Figure 1A), and the endpoint lung fungal burden was significantly greater than the live mutant vaccinated mice ( Figure 1B). However, due to the decreased length of antigen encounter, vaccination with HK mutants offer less host cell stimulation of protective cytokines, decreased naïve T cell expansion, and attenuated memory T cell formation (36,55). Since we have previously reported that either CD4 + or CD8 + T cells are required for C. neoformans Dsgl1-mediated host protection (52), we hypothesized that repeated immunization with this HK mutant dose may negate the negative facets of HK vaccination and promote stronger adaptive T cell-mediated immunity as seen with other HK mutant vaccine studies (56,57). We tested this hypothesis by administering two subsequent doses of 5x10 7 HK C. neoformans Dsgl1 (HK2d Dsgl1) on days -30 and -15 prior to the WT challenge. Indeed, mice that received two administrations of 5x10 7 HK C. neoformans Dsgl1 exhibited complete host protection (100% survival) at the experimental endpoint ( Figure 2A). Endpoint organ fungal burden analysis showed that HK2d Dsgl1-vaccinated mice displayed no extrapulmonary dissemination and a significantly lower lung fungal compared to live C. neoformans Dsgl1-vaccinated mice ( Figure 2B). In fact, 1 of the 7 HK2d Dsgl1-vaccinated mice fully cleared the WT yeast from the lungs. These data suggest that vaccination with two subsequent doses of 5x10 7 HK C. neoformans Dsgl1 confers complete host protection and aids in pulmonary clearance of the WT fungal cells. To assess if vaccination with HK C. neoformans Dsgl1 possessed clinical relevance, CD4-deficient mice were also vaccinated with two subsequent doses of 5x10 7 HK C. neoformans Dsgl1 and challenged mice with the WT strain. 
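As a brief aside on the organ fungal burden quantification used throughout these experiments (organs homogenised in 10 ml of PBS and serial dilutions plated on YPD, as described in the Methods), the back-calculation from plate counts to total organ CFU can be sketched as follows; the plated volume and the example colony count are assumptions for illustration only, not values taken from the study.

# Illustrative back-calculation of total organ fungal burden from plate counts.
# The 10 ml homogenate volume comes from the Methods above; the plated volume
# (100 ul) and the example colony count are assumptions for this sketch only.
HOMOGENATE_VOLUME_ML = 10.0

def total_organ_cfu(colonies, dilution_factor, plated_volume_ml=0.1):
    """CFU in the whole organ, given colonies counted on one dilution plate."""
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * HOMOGENATE_VOLUME_ML

# Example: 42 colonies on the 1:1000 dilution plate of a lung homogenate
# correspond to 4.2e6 CFU in the whole lung.
print(f"Lung burden: {total_organ_cfu(colonies=42, dilution_factor=1_000):.2e} CFU")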
Interestingly, 100% host protection was achieved in HK2d Dsgl1-vaccinated mice depleted of CD4 + T cells ( Figure 2C), and endpoint organ fungal burden analysis revealed no extrapulmonary dissemination of the WT strain ( Figure 2D). There was a significantly greater fungal burden in the lungs of HK2d Dsgl1-vaccinated mice depleted of CD4 + T cells compared to HK2d Dsgl1-vaccinated immunocompetent mice. However, there was a significantly lower lung fungal burden in HK2d Dsgl1-vaccinated mice depleted of CD4 + T cells compared to live C. neoformans Dsgl1-vaccinated immunocompetent mice ( Figure 2D). Overall, these data indicate that vaccination with two subsequent administrations of 5x10 7 HK C. neoformans Dsgl1 confers host protection from WT challenge in both immunocompetent and CD4-deficient mice, and the HK vaccination strategy may provide greater efficacy in host clearance of the WT strain from the lungs compared to the live vaccination strategy.

FIGURE 2 | Two doses of heat-killed (HK2d) C. neoformans Dsgl1 confer complete host protection even in the absence of CD4 + T cells. (A) CBA/J mice (n=10 mice/group) were administered two identical doses of 5x10 7 HK Dsgl1 or PBS (days -30 and -15), challenged with 5x10 5 C. neoformans wild-type (WT) (day 0), and monitored for survival. (B) Endpoint organ fungal burden was quantified in the lungs, brain, spleen, kidney, and liver from HK2d Dsgl1 → WT and compared to the endpoint fungal burden in Live Dsgl1 → WT and 5x10 7 HK Dsgl1 → WT from Figure 1B (n=4-7 mice/group). (C) CBA/J mice (n=10 mice/group) were depleted of CD4 + T cells prior to administration of two identical doses of 5x10 7 HK Dsgl1 or PBS (days -30 and -15), challenged with 5x10 5 C. neoformans WT (day 0), and monitored for survival. (D) Endpoint organ fungal burden was quantified in the lungs, brain, spleen, kidney, and liver in HK2d Dsgl1/anti-CD4 → WT mice and compared to the endpoint fungal burden from 5x10 7 HK Dsgl1 → WT and HK2d Dsgl1 → WT from Figure 2B (n=4-7 mice/group). Graphed data represent the survival percentage of WT challenged mice (A, C) or the mean +/- SD (B, D) and are representative of two independent experimental replicates. Dotted lines represent the limit of detection (LoD) of CFU quantification (B, D). Significance was determined by a two-way ANOVA using Šídák's multiple comparisons test for P value adjustment (B, D) and significance is denoted as ***P < 0.005; ****P < 0.001. Survival significance was determined by the Mantel-Cox log-rank test (A, C) and denoted on each graph: A: %, P < 0.001 for HK2d Dsgl1 → WT vs. PBS → WT; C: #, P < 0.001 for HK2d Dsgl1/anti-CD4 → WT vs. PBS/anti-CD4 → WT.

Vaccination With Live or HK C. neoformans Dsgl1 Confers Long-Lasting Host Immunity to Lethal WT Infection Because administration of 2 subsequent doses of 5x10 7 HK C. neoformans Dsgl1 also conferred complete host protection to the WT strain even during CD4-deficiency (Figure 2), we sought to investigate the efficacy of host protection after vaccination with either live or HK C. neoformans Dsgl1 via alterations to our preventative vaccination model during immunocompetency and CD4-deficiency. To assess the longevity of the vaccine-induced host protection, we increased the time between the administration of the vaccine and WT challenge. Immunocompetent or CD4-deficient mice were administered either live C. neoformans Dsgl1 or PBS and challenged with the WT strain 90 days later (a 3-fold increase between vaccination and WT challenge).
Interestingly, all vaccinated mice survived the WT challenge, while all unvaccinated mice succumbed to the WT infection ( Figure 3A). Endpoint organ fungal burden in surviving mice showed that no extrapulmonary dissemination was observed ( Figure 3B). Similar to our previous studies, the lung fungal burden in CD4-deficient vaccinated mice was significantly greater than in immunocompetent vaccinated mice. These data show that vaccination with live C. neoformans Dsgl1 confers long term host immunity to lethal WT challenge, which strongly suggests long-lived memory T cells even during CD4-deficiency. Because vaccination with live C. neoformans Dsgl1 promoted long term immunity resulting in complete host protection to the A B D C FIGURE 3 | Vaccination with live or heat-killed (HK) C. neoformans Dsgl1 confers long-lasting host protection. (A) CBA/J mice (n=10 mice/group) were administered anti-CD4 antibody or left untreated prior to vaccination with 5x10 5 Live Dsgl1 or PBS controls, and the depletions continued for the entirety of the experiment at noted intervals. Mice were given an extended 90-day rest period where vaccinated and unvaccinated mice were then challenged with 5x10 6 C. neoformans wildtype (WT) (day 0) and monitored for survival. (B) Endpoint organ fungal burden was quantified in the lungs, brain, spleen, kidney, and liver in surviving mice (n=8 mice/group). (C) CBA/J mice (n=10 mice/group) were administered either isotype or anti-CD4 antibodies prior to vaccination with two identical doses of 5x10 7 HK Dsgl1 or PBS controls on days -90 and -75, and the depletions continued for the entirety of the experiment at noted intervals. Mice were given an extended 90-day rest period where vaccinated and unvaccinated mice were then challenged with 5x10 6 C. neoformans WT (day 0) and monitored for survival. (D) Endpoint organ fungal burden was quantified in the lungs, brain, spleen, kidney, and liver in surviving mice (n=7 mice/group). Graphed data represent the survival percentage of WT challenged mice (A, C) or the mean +/-SD (B, D). Dotted lines represent the limit of detection (LoD) of CFU quantification (B, D). Significance was determined by a two-tailed unpaired t-test (B, D) and significance is denoted as **P < 0.01. Survival significance was determined by the Mantel-Cox log-rank test (A, C) and denoted on each graph: A: %, P < 0.001 for Live Dsgl1 → WT or Live Dsgl1/anti-CD4 → WT vs. PBS → WT; C: ns, not significant (P > 0.05) for HK2d Dsgl1/isotype → WT vs. HK2d Dsgl1/anti-CD4 → WT; #, P < 0.001 for HK2d Dsgl1/isotype → WT vs. PBS/anti-CD4 → WT; &, P < 0.001 for HK2d Dsgl1/anti-CD4 → WT vs. PBS/anti-CD4 → WT. WT strain, we then asked if HK C. neoformans Dsgl1 provided the same protection. Immunocompetent or CD4-deficient mice were administered 2 subsequent doses of either HK C. neoformans Dsgl1 or PBS on days -90 and -75 and challenged with the WT strain on day 0. We observed a 90% and 70% survival rate in immunocompetent and CD4-deficient mice, respectively ( Figure 3C). Nonetheless, the difference between the median survival time for immunocompetent mice and CD4deficient mice was not statistically different, the endpoint lung fungal burdens were nearly identical, and no extrapulmonary dissemination of the WT yeast was observed in either group ( Figure 3D). Altogether, these data suggest that vaccination with live or HK C. neoformans Dsgl1 provides long-lived host protection and robust lung containment even after a 3-fold increase of the vaccination window. Vaccination With C. 
neoformans Dsgl1 Confers Complete Host Protection to Multiple WT Challenges Even in the Absence of CD4 + T cells During chronic infections following an antigen encounter, T cells may become tolerant and non-responsive but remain alive for extended periods of time in a hyporesponsive state (58). Because we observe persistent fungal cells in the lungs post WT challenge, T cell anergy may potentially occur after the contraction phase post WT challenge. Since we have now shown that vaccination with either live or HK C. neoformans Dsgl1 provides long term immunity, we then wanted to investigate the possibility of induced T cell anergy in our model since T cell-mediated immunity is an essential facet to host vaccine protection in our model (52) To test for induced T cell anergy, immunocompetent or CD4deficient C. neoformans Dsgl1-vaccinated mice underwent multiple WT challenges, monitored for survival, and lung fungal burden was assessed at the end of each WT challenge period (experimental design schematic: Figure 4A). Very interestingly, C. neoformans Dsgl1-vaccinated mice completely survived for a total of 105 days after three subsequent lethal WT challenges on days 0, 45, and 75 ( Figure 4B). Endpoint lung fungal burden analysis showed that there was a significant decrease of persistent WT yeast in the lungs of mice that were WT challenged a second time ( Figure 4C). This decrease in lung burden did not further decrease after a third challenge. In addition, the decrease in the persistent lung fungal burden from the subsequent WT challenge resulted in no statistical difference between the lung burden in isotype-treated and CD4deficient mice ( Figure 4C). Overall, these data indicate that In those studies, we showed that the lung fungal burden at days 45, 75, and 105 post WT challenge was nearly identical at all timepoints, and histopathology at these timepoints displayed a decreased percentage of inflamed lung tissue and increased formation isolated nodules of contained yeast cells (52). In this study, we found for the first time that vaccination with two subsequent doses of HK C. neoformans Dsgl1 results in a significant decrease of lung fungal burden compared to live C. neoformans Dsgl1-vaccination for both immunocompetent and CD4-deficient mice ( Figure 2D). Moreover, vaccinated mice that received more than one WT challenge displayed a significant reduction in the lung fungal burden compared to mice that were received only one WT challenge ( Figure 4C). Because subsequent WT challenges decreased the lung fungal burden, we asked if administration of our vaccine could be used as a therapeutic strategy and administered after the WT challenge. To investigate the therapeutic potential of HK C. neoformans Dsgl1 administration in C. neoformans Dsgl1-vaccinated mice, immunocompetent and CD4-deficient mice were challenged with the WT strain first and then received either 1 or 2 subsequent administrations of HK C. neoformans Dsgl1 (experimental design schematic: Figure 5A). We found a significant decrease in the lung fungal burden after therapeutic administration of HK C. neoformans Dsgl1 in both immunocompetent and CD4-deficient mice ( Figure 5B). From the baseline lung fungal burden on day 30 post challenge, there was a significantly greater lung burden in CD4-deficient mice compared to the isotype-treated as we have seen previously. 
In addition, the lung fungal burden in mice that were treated with either 1 or 2 administrations of PBS (control groups) on days 30 and 45, respectively, was nearly identical to the baseline lung fungal burden ( Figure 5B). Interestingly, there was a significant decrease in the lung burden in mice that received 1 or 2 administrations of HK C. neoformans Dsgl1 post WT challenge compared to the PBS-treated groups at those timepoints as well A B FIGURE 5 | Administration of heat-killed (HK) C. neoformans Dsgl1 post wild-type (WT) challenge significantly reduces persistent WT yeast from the lungs of vaccinated mice. (A) Experimental design schematic. CBA/J mice (n=9 mice/group) were administered either isotype or anti-CD4 antibody prior to vaccination with two identical doses of 5x10 7 HK Dsgl1 on days -30 and -15, and all mice were challenged with 5x10 5 C. neoformans WT on day 0. After 30 days post WT challenge, 3 mice/ group (HK2d Dsgl1/isotype → WT and HK2d Dsgl1/anti-CD4 → WT) were sacrificed for lung fungal burden determination, and the remainder of the mice were then administered either 5x10 7 HK Dsgl1 or PBS. Fifteen days later on day 45, 3 mice/group were sacrificed for lung fungal burden determination (HK2d Dsgl1/isotype → WT/PBS-1, HK2d Dsgl1/isotype → WT/HK-1, HK2d Dsgl1/anti-CD4 → WT/PBS-1, and HK2d Dsgl1/anti-CD4 → WT/HK-1), and the remainder of the mice were then administered either 5x10 7 HK Dsgl1 or PBS. Fifteen days later on day 60, 3 mice/group were sacrificed for lung fungal burden determination (HK2d Dsgl1/isotype → WT/PBS-2, HK2d Dsgl1/isotype → WT/HK-2, HK2d Dsgl1/anti-CD4 → WT/PBS-2, and HK2d Dsgl1/anti-CD4 → WT/HK-2). (B) Endpoint organ fungal burden was quantified in the lungs for mice sacrificed on days 30, 45, and 60 (n=3 mice/group/timepoint). Dotted line represents the limit of detection (LoD) of CFU quantification. Graphed data represent the mean +/-SD (B). Significance was determined by an Ordinary one-way ANOVA using Tukey's multiple comparisons test for P value adjustment and is denoted as *P < 0.05; **P < 0.01; ***P < 0.005. as in the lung burdens between mice that received 1 or 2 administrations of HK C. neoformans Dsgl1 ( Figure 5B). Together, these data suggest that therapeutic administration of HK C. neoformans Dsgl1 post WT challenge significantly reduces the number of persistent WT yeast in the lungs of vaccinated mice. Therapeutic Administration of Live or HK C. neoformans Dsgl1 Post WT Challenge Significantly Prolongs Survival in Unvaccinated Mice Because we observed the efficacious therapeutic potential of HK C. neoformans Dsgl1 administration post WT challenge in vaccinated mice, we then asked if therapeutic administration of HK C. neoformans Dsgl1 post WT challenge was useful in naïve, unvaccinated mice. We tested this hypothesis by challenging naive mice with the WT strain and then administered HK C. neoformans Dsgl1, live C. neoformans Dsgl1, or PBS on either day 3 or day 7 and monitored for survival (experimental design schematic: Figure 6A). While all mice administered PBS fatally succumbed to infection, all mice administered HK C. neoformans Dsgl1 or live C. neoformans Dsgl1 on day 3 post WT challenge survived to the experimental endpoint ( Figure 6B). In addition, mice administered HK C. neoformans Dsgl1 or live C. neoformans Dsgl1 on day 7 post WT challenge exhibited a 70% and 60% survival rate at the experimental endpoint, respectively. 
Nevertheless, there were no differences in the lung fungal burden between any of the surviving groups ( Figure 6B). Of note, all surviving mice displayed extrapulmonary dissemination of the WT strain to the brain. Interestingly, mice that received therapeutic administration of live or HK C. neoformans Dsgl1 on day 3 had fewer brain CFU compared to mice administered on day 7 ( Figure 6C). Overall, these data suggest that live or HK C. neoformans Dsgl1 aids to significantly prolong the survival of mice from fatal WT infection. Vaccination With Live or HK C. neoformans Dsgl1 Protects Mice From Fatal Infection by Reactivation of Latent Cryptococcosis via Immunosuppression We have now shown that live or HK C. neoformans Dsgl1 can be effectively used both preventatively (Figures 2-4) and therapeutically ( Figures 5, 6) to elicit robust host protection. However, C. neoformans is not only a primary pathogen since fungal cells can be contained within lung granulomas in immunocompromised hosts for extensive periods of time but immunosuppressive conditions, such as CD4-lymphopenic HIV/ AIDS patients, can cause granuloma breakdown, latent fungal cell proliferation, and brain dissemination potentially resulting in fatal meningoencephalitis (11,31). Thus, we investigated the ability of C. neoformans Dsgl1 to protect mice from cryptococcal reactivation from a lung granuloma. To test this, mice were intranasally inoculated with C. neoformans Dgcs1, a mutant strain lacking glucosylceramide synthase, that has been previously reported to induce pulmonary granuloma formation in mice over 30 days. At 30 days post Dgcs1 administration, we administered live C. neoformans Dsgl1, HK C. neoformans Dsgl1, or PBS. After another 30 days, all groups of mice underwent either corticosteroid-induced immunosuppression to induce leukopenia or CD4 + T cell depletion to induce CD4 lymphopenia, and mice were monitored for survival (simplified experimental design schematic: Figure 7A; detailed experimental design schematic: Supplementary Figure 1). Extraordinarily, we observed that mice administered live C. neoformans Dsgl1 or HK C. neoformans Dsgl1 exhibited a 75% and 62.5% survival rate, respectively, at the experimental endpoint post corticosteroidinduced immunosuppression, while all PBS-treated mice fully succumbed to fatal reactivation ( Figure 7B). Similarly, mice administered live C. neoformans Dsgl1 or HK C. neoformans Dsgl1 exhibited a 100% and 87.5% survival rate, respectively, at the experimental endpoint post CD4 + T cell depletion, which were significantly greater than the PBS-treated mice that displayed a 37.5% survival rate ( Figure 7C). These data suggest that vaccination with live or HK C. neoformans Dsgl1 can be used to protect the host from cryptococcal reactivation from a lung granuloma in the event that they become immunocompromised. To examine the efficacy of our vaccine strategy in the experimental reactivation model, the endpoint lung fungal burden in mice pre-immunosuppression (day 0) was compared to the fungal burden in the lungs of mice that survived until the experimental endpoint post-immunosuppression (day 30). We first observed there were no differences in the lung fungal burdens between any of the groups pre-immunosuppression ( Figure 7D). Interestingly, there were no statistical differences between the Figure S1). (B, C). CBA/J mice were infected with 5x10 5 C. neoformans Dgcs1 on day -60 to induce lung granuloma formation. 
After 30 days, mice were administered either 5x10 5 Live Dsgl1 or PBS on day -30 or 5x10 7 HK Dsgl1 on days -30 and -15. Finally on day 0, all groups of mice underwent continuous immunosuppressive treatment with either the corticosteroid cortisone acetate (CA) (B) or anti-CD4 antibody (C) to cause reactivation of the latent C. neoformans Dgcs1 yeast contained within the lung granulomas and assessed for survival over 30 days. (D) Endpoint lung fungal burden comparison in mice pre-immunosuppression on day 0 (Dgcs1 → PBS and Dgcs1 → Dsgl1) (n=3 mice/group) and post-immunization on day 30 for CA-treated mice (Dsgl1/CA and HK2d Dsgl1/CA) (n=6-7 mice/group) and anti-CD4-treated mice (PBS/anti-CD4, Dsgl1/anti-CD4, and HK2d Dsgl1/anti-CD4) (n=9-10 mice/group). Dotted line represents the limit of detection (LoD) of CFU quantification. Graphed data represent the survival percentage of mice (B, C) and the mean +/-SD (D). Significance was determined by an Ordinary oneway ANOVA using Tukey's multiple comparisons test for P value adjustment (D) and is denoted as **P < 0.01. The Mantel-Cox log-rank test was used to determine survival significance (B, C) and denoted on each graph: B: #, P < 0.001 for either Dsgl1/CA or HK Dsgl1/CA vs. PBS/CA; C: %, P < 0.01 for Dsgl1/anti-CD4 or HK Dsgl1/anti-CD4 vs. PBS/anti-CD4. endpoint lung fungal burdens post-immunosuppression in C. neoformans Dsgl1-vaccinated mice or between the endpoint lung fungal burdens in mice pre-immunosuppression compared to in mice post-immunosuppression. In fact, the only observed statistically significant difference was between the surviving PBStreated CD4-deficient mice and live C. neoformans Dsgl1vaccinated mice, which further supports that vaccination with C. neoformans Dsgl1 protects mice from lethal reactivation upon immunosuppression ( Figure 7D). Comparably, all surviving PBS-treated mice displayed significantly greater brain dissemination compared to vaccinated mice, which were almost fully absent of any brain fungal burden (Supplementary Figure 2). Overall, these data suggest that administration of live or HK C. neoformans Dsgl1 inhibits the proliferation of dormant fungal cell in the lung granuloma and protects from extrapulmonary dissemination upon immunosuppression. DISCUSSION In the current study, we have presented ample evidence on the highly efficacious use of HK C. neoformans Dsgl1 in conferring robust host protection in three separate models of vaccination against cryptococcosis during immunosuppression. More specifically, we have shown that: i) preventative vaccination with 2 doses of HK C. neoformans Dsgl1 conferred complete host protection to lethal challenge with decreased endpoint lung fungal burden compared to live cell vaccination; ii) therapeutic administration of HK C. neoformans Dsgl1 post WT challenge resulted in a continual decrease in the lung fungal burden with each subsequent vaccine administration, conferring significantly increased survival rate; and iii) our vaccination strategy prevented cryptococcal reactivation from a lung granuloma, by inhibiting proliferation of latent fungal cells and improving survival upon immunosuppression. Host protection was both concentration-and dose-dependent requiring 2 subsequent administrations of 5x10 7 HK C. neoformans Dsgl1. The requirement for multiple doses has been seen with other HK vaccine-inducing mutant strains (56,57), while others required only one dose (59,60). 
Because all these studies including our current work use a similar vaccine concentration between 1x10 7 and 5x10 7 , the only difference for the single dose requirement was the use of the KN99 WT strain compared to the H99 WT strain. Nevertheless, all studies on HK vaccine-inducing mutants, including this current work, report 100% protection to the lethal WT challenge. However, the true standout characteristic for a clinically relevant vaccine formulation is the ability to induce protection in a model most associated with a disease, which is CD4-deficiency for cryptococcosis (36,40). In comparison to our present findings where we report 100% protection in CD4-deficiency with HK C. neoformans Dsgl1 vaccination, the only other HK vaccine-inducing mutant to demonstrate protection during CD4-deficiency was from Wang and colleagues using a HK F-box protein (Dfbp1) mutant strain (56). Similarly, both HK C. neoformans Dsgl1 and the HK Dfbp1 mutants demonstrated complete protection in both immunocompetent and CD4-deficient CBA/J mice, although differences between the two are noteworthy. First, our present work with HK C. neoformans Dsgl1 resulted in a~1 log lower endpoint lung fungal burden for isotype-treated mice compared to isotype-treated mice in the report by Wang and colleagues (although the endpoint lung fungal burden for CD4-deficient mice was nearly identical) (56) (Figure 2). Second, we observed a complete lack of any extrapulmonary dissemination in mice vaccinated with HK C. neoformans Dsgl1, while several mice displayed fungal CFU in the brain and spleen in the study by Wang and colleagues (56). Finally, the WT challenge dose used in our work was 15x greater than used by Wang and colleagues. With regards to our vaccine, we aimed to test the rigor and robustness of C. neoformans Dsgl1 in the preventative model of vaccination via functional alterations to our experimental design. Since T cell mediated immunity is a well-established keystone of anti-cryptococcal immunity (61,62), the need for either CD4 + or CD8 + T cells in C. neoformans Dsgl1 host protection (52), and our recent findings that show memory T cell recall responses of IFNgand IL-17A-producing subsets in C. neoformans Dsgl1-vaccinated mice to the WT strain (T.G. Normile, T.H. Chu, B.S. Sheridan, and M. Del Poeta, submitted for publication), these functional alterations focused upon memory T cells. The first alteration involved a 3-fold increase in the time between vaccination and WT challenge, where vaccination began 90 days prior to WT challenge for both live and HK C. neoformans Dsgl1. All immunocompetent and CD4-deficient mice vaccinated with live C. neoformans Dsgl1 survived the lethal WT challenge, and a respective 90% and 70% survival was observed in mice vaccinated with HK C. neoformans Dsgl1 (Figures 3A, C). The protection observed in the extended rest period suggests the induction of longlived memory T cells post vaccination with C. neoformans Dsgl1. Future immunophenotyping assays will be aimed to define the type of circulating memory T cells, such as central memory, tissueresident memory, or effector memory. Complete host protection was not observed in 100% of the CD4 + deficient mice when they were vaccinated with HK C. neoformans Dsgl1 90 days prior to WT challenge. This suggests that the immunological memory induced was either less robust or shorter-lived compared to vaccination with the live cell strain. 
It is noteworthy to mention that the WT challenge dose was doubled in this experimental design due to the increased age of the mice at the time of challenge. However, the decreased length of antigen encounter using HK mutant strains may have potentially resulted in less robust naïve T cell stimulation and fewer memory T cells following the contraction phase (55,63). Optimization of the dosing regimen will be required in future studies. Potential adjustments could include increasing the number of doses, increasing the time between the first and second dose, or altering the concentrations to induce more robust immunity with a lower first dose and a greater second dose. The second functional alteration to the preventative vaccination model experimental design was to increase the number of WT challenges administered to vaccinated mice. During chronic infections, such as when fungal cells are persisting in the lungs, T cells may become tolerized to antigens remaining alive for extended periods of time in a hyporesponsive state known as T cell anergy (58,64). Mice vaccinated with C. neoformans Dsgl1 exhibited the opposite, however. First, all mice that received two or three subsequent WT challenges exhibited 100% survival even during CD4 + T cell deficiency ( Figure 4B). Second, the endpoint lung fungal burden in mice that received at least 2 WT challenges displayed a~2 log decrease compared to mice that received only 1 WT challenge ( Figure 4C). This suggests the efficacy observed with the functional alterations in the preventative model during vaccination with C. neoformans Dsgl1 elicits long-lived, non-exhaustive T cell memory with increasing clonal functionality upon subsequent WT encounters. Future work will address a phenotypic and functional characterization comparing T cells from mice administered one WT challenge with mice administered more than one challenge. Persistent fungal cells remaining in the lungs post WT challenge in C. neoformans Dsgl1-vaccinated mice have been an observable facet in all experimental variations in this study and previous work from our lab. Moreover, fungal cell persistence post WT challenge in vaccinated mice has been reported in other cryptococcal vaccine studies as well (56,57,59,60,65,66). Because we observed a decrease in the lung fungal burden after a second WT challenge ( Figure 4C), we investigated the immunotherapeutic ability of HK C. neoformans Dsgl1 to decrease further WT fungal cells remaining in the lungs. The first administration of HK C. neoformans Dsgl1 significantly decreased the persistent fungal burden to a similar degree as mice that received a second WT challenge ( Figure 5B and Figure 4C). Interestingly, mice that received a second administration of HK C. neoformans Dsgl1 significantly decreased the remaining fungal cells to an even further extent compared to mice that were administered PBS or mice that received only 1 therapeutic dose of HK C. neoformans Dsgl1 ( Figure 5B). In fact, 2 of the 3 mice fully cleared the WT fungal cells from the lungs. Thus, HK C. neoformans Dsgl1 exhibits robust immunotherapeutic potential in previously vaccinated immunocompetent mice. Collectively, the therapeutic potential of HK C. neoformans Dsgl1 administration has demonstrated highly efficacious host protection in both previously vaccinated ( Figure 5) and unvaccinated mice ( Figure 6). While this adds an entirely new dimension to our vaccine, immunotherapeutic administration is scarce in the literature with only a few other reports. 
The first immunotherapeutic study utilized P13, an antigenic peptide mimotope of the cryptococcal capsular GXM conjugated to either tetanus toxoid or diphtheria toxoid (67,68). Immunization with P13 after an otherwise lethal challenge significantly prolonged survival, yet all mice soon succumbed to fatal infection (67). Similarly, Datta and colleagues established a model of chronic infection in mice, and administration of P13 significantly prolonged host survival compared to control mice, but again all mice soon succumbed to fatal infection (68). In addition to the P13 conjugate vaccine, a TNFa-expressing adenoviral vector was also utilized post lethal WT challenge (69). Although survival was not assessed, the authors reported a significant decrease in lung fungal burden, increased IFNg levels, and a significant increase in macrophage and neutrophil recruitment to the lungs. Overall, in addition to the robust efficacy in the preventative model of vaccination, C. neoformans Dsgl1 has now been shown to possess unrivaled immunotherapeutic potential, adding to the clinical significance of our vaccine. Beyond the preventative and therapeutic models, which themselves increase the novelty and translational potential of our vaccine, we have demonstrated vaccine-induced host protection against lethal infection due to reactivation of latent fungal cells upon immunosuppressive treatments (Figures 7B, C). To our knowledge, this is the first time a vaccine against reactivation infection has been reported in the literature. Previous work in our lab had shown that treatment of mice with FTY720, a prescribed treatment for relapsing-remitting multiple sclerosis, was linked to granuloma breakdown, with disorganization of the peripheral macrophages and a shift towards an M2 polarized state (11). In addition, our findings also validate the reactivation model, as they showed that the C. neoformans Dgcs1-induced granuloma in mice can lose integrity upon immunosuppression, resulting in fungal proliferation in the lungs, brain dissemination, and ultimately death. In fact, clinical cases can occur due to the reactivation of granuloma-contained fungal cells from either immunosuppression or comorbidities (HIV/AIDS progression) (27,70). Because of this, we tested our vaccination strategy in this mouse model during prolonged corticosteroid-induced immunosuppression as well as CD4-deficiency. We observed a 70% and 60% survival rate in mice vaccinated with live or HK C. neoformans Dsgl1, respectively, at the endpoint after corticosteroid-induced immunosuppression with cortisone acetate (Figure 7B), and a 100% and 90% survival rate in mice vaccinated with live or HK C. neoformans Dsgl1 at the endpoint after depletion of CD4 + T cells ( Figure 7C). Interestingly, the corticosteroid-induced immunosuppression was more lethal than the depletion of CD4 + T cells, which may be attributed to the mechanism of immunosuppression. Corticosteroid-induced immunosuppression induces leukopenia, inhibits phagocytosis, and decreases antigen presentation capabilities (71,72), while depletion of CD4 + T cells ablates circulating CD4 + lymphocytes. We therefore speculate that the difference in lethality of the infection may be the speed at which the immunosuppression took effect. Although there was an observed difference in survival between the two modes of immunosuppression, the endpoint lung fungal burdens were nearly identical ( Figure 7D).
In fact, there were also no differences between the endpoint lung fungal burden of mice preimmunosuppression and C. neoformans Dsgl1-vaccinated mice post-immunosuppression. This suggests that vaccination with either live or HK C. neoformans Dsgl1 controls the proliferation of the latent fungal cells in the lungs even after the immunosuppressive regime. This is further supported from the endpoint lung fungal burden in unvaccinated CD4-deficient mice being significantly greater than the lung fungal burden in the vaccinated mice, which indicate that fungal cells extensively proliferate in unvaccinated mice upon immunosuppression. The same was observed for extrapulmonary dissemination to the brain (Supplementary Figure 2). While there were only 1-2 C. neoformans Dsgl1vaccinated mice that displayed fungal dissemination, all the surviving unvaccinated mice had significant fungal burden in the brain. Overall, vaccination with either live or HK C. neoformans Dsgl1 demonstrated remarkable efficacy in this cryptococcal model of reactivation. In conclusion, we have shown here that HK C. neoformans Dsgl1 demonstrates a highly efficacious vaccine candidate that goes beyond the canonical preventative model of primary disease prevention. We have expanded not only to a more clinically relevant HK formulation but also to additional models of vaccine strategies to protect against cryptococcosis during CD4-deficiency, including using our vaccine as a therapeutic mean and using our vaccine to prevent reactivation of a latent infection upon immunodepression. Here forth, the tools for investigation into the protective immunity against fungal reactivation from pulmonary granulomas in mice are now available, which greatly opens future possibilities to significantly add to this completely absent portion of the literature. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author. ETHICS STATEMENT The animal study was reviewed and approved by Stony Brook University Institutional Animal Care and Use Committee (protocol no. 341588). AUTHOR CONTRIBUTIONS TGN and MDP took part in the conceptualization of this study as well as the writing and finalization of the manuscript. TGN performed all animal and experimental procedures, statistical analysis, and figure generation. All authors contributed to the article and approved the submitted version.
Lung metastases treated with stereotactic body radiotherapy: the RSSearch® patient Registry’s experience To report overall survival and local control for patients identified in the RSSearch® Patient Registry with metastatic cancer to the lung treated with SBRT. Seven hundred two patients were identified with lung metastases in the RSSearch® Registry. Of these patients, 577 patients had SBRT dose and fractionation information available. Patients were excluded if they received prior surgery, radiation, or radiofrequency ablation to the SBRT treated area. Between April 2004-July 2015, 447 patients treated with SBRT at 30 academic and community-based centers were evaluable for overall survival (OS). Three hundred four patients with 327 lesions were evaluable for local control (LC). All doses were converted to Monte Carlo equivalents and subsequent BED Gy10 for dose response analysis. Median age was 69 years (range, 18–93 years). Median Karnofsky performance status (KPS) was 90 (range 25/75% 80–100). 49.2% of patients had prior systemic therapy. Median metastasis volume was 10.58 cc (range 25/75% 3.7–25.54 cc). Site of primary tumor included colorectal (25.7%), lung (16.6%), head and neck (11.4%), breast (9.2%), kidney (8.1%), skin (6.5%) and other (22.1%). Median dose was 50 Gy (range 25/75% 48–54) delivered in 3 fractions (range 25/75% 3–5) with a median BED of 100Gy10 (range 25/75% 81–136). Median OS for the entire group was 26 months, with actuarial 1-, 3-, and 5-year OS of 74.1%, 33.3, and 21.8%, respectively. Patients with head and neck and breast cancers had longer median OS of 37 and 32 months respectively, compared to colorectal (30 months) and lung (26 months) which corresponded to 3-year actuarial OS of 51.8 and 47.9% for head and neck and breast respectively, compared to 35.8% for colorectal and 31.2% for lung. The median LC for all patients was 53 months, with actuarial 1-, 3-, and 5-year LC rates of 80.4, 58.9, and 46.3%, respectively. There was no difference in LC by primary histologic type (p = 0.49). Improved LC was observed for lung metastases that received SBRT doses of BED ≥100Gy10 with 3-year LC rate of 77.1% compared to 45% for lung metastases treated with BED < 100Gy10 (p = 0.01). Smaller tumor volumes (<11 cc) had improved LC compared to tumor volumes > 11 cc. (p = 0.005) Two-year LC rates for tumor volumes < 11 cc, 11–27 cc and > 27 cc were 72.9, 64.2 and 45.6%, respectively. This correlated with improved OS with 2-year OS rates of 62.4, 60.9 and 46.2% for tumor volumes < 11 cc, 11–27 cc and > 27 cc, respectively (p = 0.0023). In a subset of patients who received BED ≥100Gy10, 2-year LC rates for tumor volumes < 11 cc, 11–27 cc and > 27 cc were 82.8, 58.9 and 68.6%, respectively (p = 0.0244), and 2-year OS rates were 66.0, 58.8 and 28.5%, respectively (p = 0.0081). Excellent OS and LC is achievable with SBRT utilizing BED ≥100Gy10 for lung metastases according to the RSSearch® Registry data. Patients with small lung metastases (volumes < 11 cc) had better LC and OS when using SBRT doses of BED ≥100Gy10. Further studies to evaluate a difference, if any, between various tumor types will require a larger number of patients. Introduction Pulmonary metastases are a very frequent occurrence in patients with cancer. One series of a thousand patients found that 50% who suffered a malignancy-related death had the presence of pulmonary metastases at the time of autopsy [1]. 
A large surgical series of cancer patients with lung metastases treated with metastasectomy revealed a 15-year survival of 22%, an unexpected outcome for patients with stage 4 disease [2]. Researchers have found genomic differences in microRNA expression of these limited metastatic tumors compared to their widely metastatic counterparts, lending credence to the idea that our binary system of local or metastatic disease might be incorrect [3,4]. Hellman et al. coined a limited metastatic state titled oligometastases where aggressive surgical and ablative therapies could potentially lead to long disease free intervals [5]. Metastasectomy for lung metastases has been the standard of care but is often not possible due to medical comorbidities, extrathoracic disease, unresectable metastases, or short disease free intervals. An ablative therapy such as stereotactic body radiation therapy (SBRT) has been reported in many retrospective reports for lung metastases however, with limited sample size [6]. We present a large series of metastatic lung tumors treated with SBRT using the RSSearch® Registry. Materials and methods The RSSearch® Registry is an international, web-based registry designed for SBRT and stereotactic radiosurgery (SRS) research with currently over 18,000 patients enrolled (www.clinicaltrials.gov/NCT01885299) [7]. RSSearch® was designed to standardize the collection of patient screening, treatment and outcome data for patients treated with SBRT and SRS with the goal of conducting research outcomes analysis to identify the most effective and appropriate clinical uses of SRS/SBRT. RSSearch® is managed by the Radiosurgery Society, a non-profit, professional medical society (www.therss.org) and adheres to the Health Insurance Portability and Accountability Act (HIPAA) in all domains including database security, data transmission, and confidentiality. The database is contracted and maintained by Advertek (Nashville, TN). An audit was performed by the study investigators of sites participating in this study which outlined missing data points. Centers were asked to provide missing data which was generally successful in recapturing this data; however we are unable to quantify its success rate. Data collected in RSSearch® includes the following categories: patient demographics, treated lesion (size, volume, location), treatment plan including use of surgery or chemotherapy, information on SBRT delivery including dose and fractionation, toxicity, symptom control, lesion response, survival data, and progression data. Aggregate de-identified data is accessible by RSSearch® administrators. Requests for retrospective data analysis are sent to the RSSearch® Review Committee, which approves or denies all requests for data. Lesion locations and SBRT treatment sites are described using the World Health Organization (WHO) International Classification of Diseases (ICD), version 9 codes. Toxicity data is coded using the Common Toxicity Criteria for Adverse Event Reporting, version 3. The majority of patients were treated with the CyberKnife™ Robotic Radiosurgery System (Accuray Inc., Sunnyvale, CA) and 2 patients were treated with Truebeam (Varian Medical Systems, Palo Alto, CA). Due to the nature of the current study using registry data, no pre-defined treatment planning criteria were enforced and instead relied upon individual institutional guidelines. 
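For illustration only, the per-lesion fields collected in the registry (demographics, lesion size and volume, dose and fractionation, toxicity, response, and survival data) could be represented with a simple record type such as the sketch below; the field names and types are hypothetical and do not reflect the actual RSSearch® database schema.

```python
# Hypothetical sketch of a per-lesion registry record; not the real RSSearch schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SbrtLesionRecord:
    primary_site: str                      # e.g., "colorectal"; coded with WHO ICD-9 in the registry
    lesion_volume_cc: float                # treated lesion volume in cc
    prescribed_dose_gy: float              # total prescribed dose in Gy
    fractions: int                         # number of fractions delivered
    toxicity_grades: List[int] = field(default_factory=list)  # CTCAE v3 grades
    survival_months: Optional[float] = None        # first SBRT date to death or last follow-up
    local_failure_months: Optional[float] = None   # last SBRT date to reported local failure
```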
After consensus review of the physicians from the treatment sites, the majority of patients were simulated in the supine position using computed tomography (CT) scanning above and below the region of interest during inspiration, expiration, and free breathing. One millimeter slice reconstruction of the treatment planning area was transferred to the treatment planning station. Positron emission tomography (PET) scans were used to aid in target volume delineation via image fusion to the CT scan. Target volumes were delineated by physician (radiation oncologist, pulmonologist, or surgeon) using CT and PET scans. Gross tumor volume (GTV) was often used as the clinical target volume (CTV), with a 3-10 mm margin added circumferentially to define the planning target volume (PTV). Real time tumor tracking was incorporated for patients treated with CyberKnife using the Synchrony® Respiratory Motion Tracking System. Radiation dosimetry on patients treated on the CyberKnife system was planned using the MultiPlan® System (Accuray Incorporated, Sunnyvale, CA) which incorporated non-isocentric and non-coplanar radiation delivery using Monte Carlo or Ray Tracing algorithms. Ray Tracing generally overestimates tumor dose in the lung due to lack of capacity to account for the lung-tumor density heterogeneity. Therefore Ray Tracing dose was converted into a more accurate Monte Carlo equivalent dose using an equation based on the tumor size. Patients who did not have tumor size information available within the RSSearch® Registry were excluded from BED analysis (n = 20). In comparing various fractionation schema and doses, biologically effective dose (BED Gy 10 ) was calculated using the linear quadratic model. All centers performing SBRT/SRS are able to participate in RSSearch®. No compensation is given to patient participants or participating centers. Institutional Review Board (IRB) approval is required at each participating center, and patients must give informed consent. Data is entered into RSSearch® usually in a prospective fashion however retrospective data entry is allowed and coded as such. Patient follow-up was performed per institutional guidelines and the date of last follow up used for actuarial analysis with all time intervals considered. Patients were censored for survival at time of death and for local control at time of local failure. All participating centers reported follow-up clinical and imaging data. Local progression was evaluated independently for each lesion at the participating institution following a modified RECIST (Response Evaluation and Criteria in Solid Tumors) criteria which defined local progression as at least a 20% increase in the size of lesions and/or appearance of one or more lesions in target treatment location and local control was defined as disappearance of, decrease in, or no increase in size of the treated lesions. Statistical analysis and Kaplan-Meier survival curves were performed using GraphPad and Instat Software, La Jolla, CA. Overall survival was calculated for each patient using the first date of SBRT to date of death or date of last follow up. Specific cause of death was not reported for all patients in RSSearch® and therefore not evaluated in this study. Local failure was determined for each treated tumor using last date of SBRT to date of physician reported failure. Subgroups were compared using X 2 , log-rank tests and Gehan-Breslow Wilcoxon tests. Values of p < 0.05 were considered statistically significant. 
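As a point of reference, the BED Gy10 values used for the dose-response analysis follow directly from the linear quadratic model, BED = n·d·(1 + d/(α/β)) with α/β = 10 Gy. The short sketch below is a generic illustration of that arithmetic rather than the registry's analysis code; note that the worked values use nominal prescription doses, whereas the study converted Ray Tracing doses to Monte Carlo equivalents before computing BED, which lowers the resulting values.

```python
# Generic illustration of the linear-quadratic BED calculation with alpha/beta = 10 Gy.
def bed_gy10(total_dose_gy: float, fractions: int, alpha_beta: float = 10.0) -> float:
    dose_per_fraction = total_dose_gy / fractions
    return total_dose_gy * (1.0 + dose_per_fraction / alpha_beta)

if __name__ == "__main__":
    print(round(bed_gy10(50.0, 3), 1))  # ~133.3 Gy10 for a nominal 50 Gy in 3 fractions
    print(round(bed_gy10(48.0, 4), 1))  # ~105.6 Gy10
    print(bed_gy10(50.0, 5) >= 100.0)   # True: the 100 Gy10 threshold used in the dose-response analysis
```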
For more information on the background of the RSSearch® Registry, please see the previous descriptive papers on its creation and use [7,8]. Patient characteristics Between April 2004 and July 2015, 702 patients with lung metastases from 28 centers in the US, one center from Germany and one center from Australia were identified in the RSSearch® Patient Registry. Of those patients, 577 had dose and fractionation information available. One hundred thirty patients were excluded from the study because of previous surgery, SBRT, or radiofrequency ablation (RFA) to the SBRT-treated area. This resulted in 447 patients with lung metastases treated with SBRT evaluable for survival. We are unable to say with certainty that these patients had no other metastatic sites due to the nature of our data collection. There were 304 of these patients with 327 lesions evaluable for LC. Median age of the group was 69 years (range 18-93). Additional patient demographics and characteristics are found in Table 1. Outcome data Median overall survival (OS) for the entire group was 26 months. The 1-, 3-, and 5-year OS rates were 74.1, 33.3, and 21.8%, respectively (Fig. 1). Median LC for the entire group was 53 months. The 1-, 3-, and 5-year LC rates were 80.4, 58.9, and 46.2%, respectively (Fig. 2). In the survival curves, tick marks indicate censored patients. We investigated whether primary tumor type (breast, lung, colorectal cancer (CRC), head and neck or other) had an effect on LC and OS following SBRT treatment. There was no statistical difference in LC rates by primary tumor type (p = 0.485 by log-rank test; p = 0.181 by Gehan-Breslow Wilcoxon test; Fig. 3a). Median OS did differ by primary tumor type, with longer median OS for head and neck (37 months) and breast (32 months) primaries compared to colorectal (30 months) and lung (26 months) primaries. We next investigated whether lung metastasis tumor volume was associated with LC and/or OS. Because median and average metastasis volumes were 10.58 cc and 26.72 cc, respectively, lesions were stratified into three groups: tumor volume <11 cc, 11-27 cc, and >27 cc. A statistical difference was noted between the three groups, with improved LC for smaller tumors. Two-year LC was 72.9, 64.2 and 45.6% for tumor volumes < 11 cc, 11-27 cc and > 27 cc, respectively (p = 0.0005 by log-rank test; p = 0.0011 by Gehan-Breslow Wilcoxon test; Fig. 4a). This translated into improved OS, with 2-year OS of 62.4, 60.9, and 46.1% for tumor volumes < 11 cc, 11-27 cc, and > 27 cc, respectively, and median OS for lesions <11 cc, 11-27 cc, and >27 cc was 29, 31, and 21 months, respectively (p = 0.0023 by log-rank test; p = 0.0011 by Gehan-Breslow Wilcoxon test, Fig. 4b). Other factors that were evaluated during statistical analysis included age, KPS, gender, and use of chemotherapy, none of which was statistically significant for LC or OS. Discussion This RSSearch® Patient Registry analysis represents a report of a large cohort of patients treated with SBRT for lung metastases (n = 447 patients). In comparison, another large study includes 217 patients from a single-institutional prospective series and 95 patients analyzed in a single-institution retrospective study [6,9,10]. A meta-analysis reported by Ashworth et al. included 757 patients, all with lung cancer; however, this study only included 88 patients treated with SBRT and included metastases to all body sites and not only lung metastases [11]. The current study includes a heterogeneous population of patients including large sample sizes of various primary sites, tumor sizes, doses, and patient populations from across the United States, Germany and Australia.
The use of SBRT for lung metastases allows for high ablative doses of radiation with the potential for extended local control and survival as shown in the current analysis. Multiple studies since have found LC rates of isolated or few lung metastases to be 70-100% at 1 year [12][13][14][15]. Our LC rate compares favorably at 80.37%. The two-year weighted OS rate compiled in an article review by Alongi et al. was 54% (range 39-84%) [16]. In comparison, the 2-year OS rate for all patients reported in our study compares similarly at 53.02%. In assessing LC, most studies support using a BED of at least 100Gy 10 to have LC comparable to metastasectomy of pulmonary metastases [17,18]. Doses of less than 100 Gy 10 however are still used in clinical practice with high LC rates and minimal toxicity [13]. In our current study, we saw a statistically significant difference with higher LC rates in the BED ≥ 100Gy 10 group, adding further evidence for its use. This LC rate did not translate into improved OS for all size lesions, but when stratifying by metastasis volume there was a trend for improved OS in lesions smaller than 11 cc. In the current study, we saw differences in OS between primary histology types, favoring improved OS for H&N, breast, and CRC but without difference in local control by primary histology. Takeda et al. found CRC lung oligometastases to have poorer LC than other histologies (lung, H&N) [10]. These results do not corroborate with our results, as oligometastatic tumors from lung primaries fared worse in the present study compared to Takeda et al. where other histologies (including lung primary oligometastatic tumors) fared better. Takeda et al. was limited to a small sample size of patients with lung oligometastases (n = 44) of which half were from colorectal cancer compared with our study with a much larger cohort. In addition, other studies have found no relationship between oligometastatic lung tumors from CRC primary vs other primary oligometastatic lung tumors on multivariate analysis (MVA) [14,19]. Our study also compares oligometastatic lung tumors by primary individually instead of grouping primaries together for comparison. Our study also found differences in LC and OS rates based on tumor volume, with significantly higher LC and longer OS for smaller volumes. McCammon et al. had similar findings on univariate (UVA) and multivariate (MVA) analysis, finding significant differences in rates of LC comparing values above and below their median tumor volume (8.9 cc), but this study did not see differences in OS [19]. In comparison, two previous studies treated large volume metastases with SBRT, both averaging a median volume of 41 cc per lesion treated, and found a LC rate comparable to small volume lesions, again without reporting OS [20,21]. Approximately 25% of patients undergoing metastasectomy have long term survival [2,22,23]. The remaining patients usually see progression of their disease and development of new metastases within 6 months of ablative therapy [24,25]. A greater time interval between locoregional disease and metastatic disease portends better prognosis and longer disease-free survival [2,22]. Other factors portending to longer disease-free survival include having fewer metastases, non-synchronous metastases, stable disease before ablative therapies, estrogen-positive receptor breast cancer primaries, and complete ablations [24][25][26]. 
Because this represents a registry for patients treated with SBRT for lung metastases, many of these parameters could not be evaluated. It is difficult to properly evaluate OS using SBRT for lung metastases in context of comparing it to metastasectomy. There is an absence of phase III randomized controlled trials, and the phase I/II trials that have been completed have patients with widely variable clinical characteristics [16]. Only 1 retrospective study by Yu et al. compared SBRT to metastasectomy in 58 patients with osteosarcoma, with OS at 40% in both groups [27]. In addition, there is a bias in selection of patients for SBRT -they are generally judged to be inoperable due to their medical comorbidities which could significantly affect OS rates [16]. We did not find significant survival differences by KPS however our dataset lacks more rigorous comorbidity scores such as the Charlson/Deyo score needed to provide survival for potentially operable patients [28]. There are weaknesses to the current study including the short median follow-up of 13 months however this follow-up is comparable to other single institution series [29,30]. Our study includes no predefined treatment planning criteria, defined individually by the participating centers, with variability in dose and fractionation. This variability however allowed for dose response analysis due to wide ranges in dose but may have lowered the local control and survival rates possible if more uniform high dose was utilized in all patients. Our study also has the standard limitations intrinsic to registry studies: allocation of patients is not random and data collection is less robust than randomized clinical trials. There are ongoing prospective trials that will hopefully answer if a survival benefit is found for patients treated with SBRT for oligometastases. Only one prospective trial from MD Anderson reported in abstract form found a median PFS advantage with three or less sites of oligometastases treated with SBRT, conventional external beam radiation or surgery for non-small cell lung cancer from 3.9 to 11.9 months (p = 0.005) [31]. In addition, other prospective trials continue to accrue including the SABR-COMET, NRG BR002, and the UK CORE trials. Until results of those trials are reported, we will have to rely on prospective registry series like the current study to guide treatment decisions. Conclusions SBRT provides extended survival in patients with lung metastases, with the current study providing a 5-year actuarial survival of 21.8%. The 1-, 3-, and 5-year LC rates were 80.4%, 58.9%, and 46.2%, respectively. Smaller tumor size and the primary tumor type (H&N/colon/ breast) were associated with prolonged survival. High dose BED (≥100Gy 10 ) and smaller tumor size were associated with prolonged local control.
Role of the University in city agglomerations sustainable development It is shown that existing models of sustainable city development and of entrepreneurial university development do not take their mutual influence into consideration. In this work, approaches to models of interconnected city and university development are proposed on the basis of the «win-win» principle and the principle of accounting for scale; the dynamics of the influence of university activity results on sustainable city development is substantiated, and systems of conjugated markers and indicators are elaborated. Introduction The chain of crisis phenomena shaking the global economy during the last 15 years, the accelerated development of technics and technologies, and the change of social processes point to the emergence of an economic formation that is more and more often called postindustrial society. The most vivid characterization of this new period of civilizational development is given by E. Toffler [1], who described it as "The Third Wave": "The Third Wave brings with it a genuinely new way of life based on diversified, renewable energy sources; on methods of production that make most factory assembly lines obsolete; on a novel institution that might be called the "electronic cottage"; and on radically changed schools and corporations of the future." The transition to a new formation goes through an acute aggravation of the demographic, ecological, technological and social-cultural problems of society. One of the answers to these challenges is the idea of «sustainable development», which was first stated as a concept in 1987 in the UNO Committee report on environment and development [2] and was formalized as 27 principles of sustainable development in 1992 in the UNO Declaration on environment and development [3]. At its core, the concept of sustainable development has an anthropocentric character, defining actions in three main directions: ecological, social and economic. In Russia, the concept of sustainable development was adopted in the form of the Decree of the President of the RF «Main statements of RF strategies on environmental protection and sustainable development» [4]. Principles of sustainable development are successfully applied to elaborate strategic documents for countries, separate territories and cities [5][6][7], and management tools to implement the development programs are being intensively worked out. These include, as a rule, structured lists of indicators for strategic and operating planning and reporting [8][9][10]. In the conditions of postindustrial society, a special role is played by knowledge as the source of unique competitive advantages for economic agents, territories and states [11]. In this case, the role of universities as generators of knowledge, human capital and key elements of national innovation systems rises sharply. The transformation of universities from a «cathedral of science» into a significant factor of competitive struggle at the global level is reflected in the concept of the entrepreneurial university [12,13].
From the viewpoint of the economic model, the University of Entrepreneurial Type is a source of competitive growth and a flow of incomes and taxes, rather than a center of state and business expenditure on science and education. The presence of a university in an area should be a driver of its social and economic development. In [14], four directions of transformation are proposed for the entrepreneurial university. They are connected with the availability of competitive development strategies, an entrepreneurial type of university management, the formation of an entrepreneurial culture among students and teachers, and the diversification of income sources. In essence, we are talking about the presence of an entrepreneurial university strategy connected with the commercialization of its key competences in education, scientific research, and innovative products and services, with the university acting not only as a vendor but also as an integrator of large-scale development projects for business, the state, and society. This task is stated in the definition of «the third mission», within the framework of which universities become «active participants in the development of economic and cultural processes; they convert into organizations closely connected with industry and society as a whole» [15].

Analysis of the indicator systems characterizing universities' entrepreneurial potential, presented in a series of works [16,17], shows that they are focused mainly on revealing the internal factors of the development of the entrepreneurial environment, whereas a detailed analysis of the influence of university activity results on the economy and society is absent.

The historical path of university development shows that universities are predominantly localized in the urban environment. Nevertheless, in the concepts and strategies of sustainable city development, universities are positioned only in their classical role as a source of higher education for the population. On the other hand, entrepreneurial universities are basically aimed at achieving competitiveness in global markets, and little attention is paid to the tasks of developing local territories and cities. To some extent, the university's influence on the social and economic development of a territory happens automatically, proceeding from the natural «congruence» of the university as a part of the city. However, increasing the effectiveness of the entrepreneurial university as a driver of city development requires a transition to strategic management, for which an inclusive model of city and university sustainable development needs to be elaborated. The aim of the present work is to form approaches to creating associated indicative models of city and entrepreneurial university sustainable development.

Analysis of approaches for measurement of city and entrepreneurial university sustainable development

To assess sustainable city development, mainly an indicative approach is used. It is based either on the indicators offered by the UN Commission on Sustainable Development or on the original concepts of the developers of social-economic development strategies (see, for example, [7]).
In the RF, a system of sustainable development indicators has been elaborated [18]; it includes 4 groups, 18 subgroups, and 132 indicators:
• social indicators (anti-poverty activity, demography, health protection, education, transportation);
• economic indicators (revenues, consumption, finance);
• ecological indicators (water, land and other resources, waste, atmosphere);
• institutions (presence of a sustainable development strategy and population involvement in its implementation).
For rating assessments of sustainable city development, indicator systems prepared by ranking agencies are used. In the McKinsey model [19], 5 directions, 18 factors, and 18 indicators are presented:
• main needs of the population (water supply, accommodation, health protection, education);
• resource utilization efficiency (power, water, waste, industrial load);
• environmental cleanness (pollution, purification, waste recycling);
• city infrastructure (transportation, population density, planting, buildings' power efficiency);
• focus on sustainable future development (investments in environmental protection and ecologically clean workplaces).
In the rating of sustainable city development worked out by the company SGM [20], there are 4 groups which include 31 indicators (Table 1):
• demography and population;
• social infrastructure;
• city infrastructure;
• economic development;
• ecology.
Analysis of the sustainable development indicator systems shows that universities are present, as an element of the social subsystem, only in the model of the SGM company (number of higher education students per 10,000 inhabitants). In all other models, attention is focused on the availability of secondary education. Thus, the role of universities as drivers of sustainable city development is not reflected in the majority of existing models, although expectations connected with them are present in the social-economic development strategies of Russian Federation regions (see, for example, [21]).

Let us consider systems of indicators of entrepreneurial university development. Most models of entrepreneurial universities are founded on the 5 principles of B. Clark. According to [22], a model including 7 branches was used within the framework of research on entrepreneurial universities in Europe. These branches are:
• leadership and management;
• organizational potential, people, incentives;
• development of entrepreneurship in teaching and learning;
• development opportunities for entrepreneurs;
• collaboration between university and business;
• international communications;
• level of university influence on the development of the external environment.

In [16], within the framework of research on Russian universities, a modified model of entrepreneurial potential [22] was used, which consists of 6 groups of characteristics.

Strategy:
• understanding of entrepreneurship development as a strategic aim of the higher education establishment;
• support for the growth of entrepreneurial activity;
• incentives and awards for the teachers and researchers who actively support students' entrepreneurship.

Finance resources:
• long-term financial support of student enterprises as an approved part of the university budget;
• self-sufficiency of the university in the sphere of internal entrepreneurship support.
Human resources:
• recruitment and career development of higher-education teaching personnel with consideration of their stance on entrepreneurship, the presence of corresponding behavior and experience, and also support for entrepreneurial activity;
• development of the human resources necessary for the initial support of entrepreneurship projects led by students and employees.

Entrepreneurship and start-up support:
• presence of a special entrepreneurship support infrastructure, conditions for business incubation on campus or help in providing access to external facilities;
• close collaboration with development institutes;
• integration of entrepreneurship education and support processes;
• inclusion of the university in channels of access to private financing;
• mentoring by teachers and entrepreneurs;
• integration of entrepreneurship support processes inside and outside the university.

Entrepreneurship education:
• inclusion of entrepreneurship education in curriculum programs;
• a wide offering of different forms of fast and effective education;
• creative teaching methods that take account of the requirements of students and post-graduate students;
• separate offerings for budding entrepreneurs and owners of growing businesses;
• support for student and graduate businesses as a key component of education;
• integration into entrepreneurship training of the results of research on its condition and growth.

Assessment:
• regular inventories and audits of the efficiency of entrepreneurial activity;
• formalized assessment of entrepreneurial activity, including long-term monitoring of the development of graduates' entrepreneurial activity.

As can be seen from the described approaches to entrepreneurial university assessment, the main focus is on entrepreneurship training and on support for the entrepreneurial activity of students and teachers, expressed in the creation and development of small enterprises on the basis of university developments. In the broader understanding of entrepreneurial strategy offered in [14], it is proposed to assess entrepreneurial behavior as the drive to commercialize key competences not only in the sphere of innovation but also in education and scientific research.

Analysis of the main models for assessing university entrepreneurial strategy shows that the university is viewed as an open system collaborating with the external environment through the results of its activity and influencing it in a significant way. In the analysis of external environment development in [22], it is assessed how the university's entrepreneurial activity has influenced its surroundings and graduates; in [16] it is recommended to carry out regular monitoring of graduates' entrepreneurial activity. In evaluating universities' entrepreneurial potential, the main tool is an expert method realized via questionnaires for target groups, with subsequent processing of the results with the help of grade-rating assessment. In Russia, an indicative method of assessing entrepreneurial potential is almost impractical, because the activity of entrepreneurial universities is weakly reflected in the governmental system of indicators for monitoring their efficiency. An interesting model for rating the level of entrepreneurial and innovation activity of universities is offered by the Interfax agency [24], which includes 7 indicative markers:
1. Level of development of innovation entrepreneurship in the higher educational establishment.
2. Volume of the patent portfolio (national and international) supported by the higher educational establishment by the end of 2014.
3. Participation of the higher educational establishment in the development of technological platforms and in the innovation development programs of high-technology companies.
4. Participation of the higher educational establishment in technological entrepreneurship development programs.
5. Educational programs (mainstream and supplementary education) delivered by the higher educational establishment in the sphere of entrepreneurship.
6. Volume of economic-contractual works in the budget of the higher educational establishment in 2014.
7. Participation of academic staff and learners of the higher educational establishment in objects of innovation infrastructure.
Ranking is performed via questionnaires (expert assessments) and the application of state statistics data (indicative rates).

As can be seen from the provided analysis, entrepreneurial universities do aim to measure their influence on their surroundings, although the development of the city and the territory as a living environment is not emphasized as a priority strategy in the indicator systems of entrepreneurial universities. Unfortunately, neither cities nor universities view each other as natural partners who create competitive advantages for each other through the elaboration and realization of congruent sustainable development strategies. Working out a mutually connected system of indicators of university and city sustainable development is an important task, the solution of which will allow the degree of mutual influence to be assessed and documents of a strategic character, defining the forms, mechanisms, and results of such collaboration, to be compiled.

Opportunities for creating congruent indicative models of sustainable city and entrepreneurial university development

In this work, an analysis was performed of the congruence of the activity results of an entrepreneurial university with the main indicators of sustainable city development. The model offered in [20] and presented in Table 1 was taken as a basis. The authors carried out a qualitative expert evaluation of the influence of university activity both on the current level of the indicators and on their rates of change. The assessment was made using the example of Ural Federal University (UrFU has more than 30 thousand students and 7 thousand employees and took 9th place among the universities of Russia in the Interfax ranking «Innovations and entrepreneurship» in 2014) and the corresponding indicators of Ekaterinburg. The following scale was used for the assessment (0: influence is absent, 1: weak influence, 2: average influence, 3: strong influence). Explanations of the influence of UrFU activity results are given next to the indicators in Table 1. The average scores of UrFU for the current state of the indicators and for the rates of their change are 1.29 and 1.51, respectively. The greatest impact is made by the university on the development of the education system and the city economy. As can be seen from Table 1, this list of indicators does not reflect all of the opportunities for university influence on the urban environment. It is suggested to update the model with 3 new groups of indicators:
- development of small and medium businesses (share of small and medium enterprises (SME), volume of products issued by SME per unit of the economically active population (EAP));
- innovation activity of enterprises (share of innovation-active enterprises, share of innovation production in manufacturing volume, level of application of new technologies in the production of goods and services);
- youth policy (relative share of employed youth, level of youth access to entrepreneurship, level of youth activity in healthy lifestyle and ecological initiatives).
Assessment of the influence of the proposed indicators within the system of Table 1 increases the scores of the influence of university activity results on sustainable city development to 1.53 and 1.90, respectively. Further development of the indicator systems for cities in the context of collaboration with the university will allow congruent pictures of competitiveness to be shaped, with reference to the «win-win» principle.

In the models of the entrepreneurial university considered in this article, there is a block of indicators connected with the assessment of university influence on the economy and society. It is proposed to supplement them with the following groups and elements:
- scale of university influence on the social, economic, and ecological development of the city, measured in terms of employment, the volume of goods and services produced, growth of the taxation base, salary growth, and improvement of demographic indicators;
- level of impact on indicators of innovation development, connected with the transfer of new technologies to industry and services, development of high-technology clusters, increase in the output and export of innovative products, increase in the research intensity of the economy, growth of labour productivity, and application of intellectual property objects;
- level of influence on the entrepreneurial climate of the city, defined by the number of small and medium enterprises created with the help of the university, the development of technology parks and business incubators, and the raising of the level of entrepreneurial culture and education;
- level of impact on the social and ecological activity of the population, connected with an active youth policy of the university, influence on the work of public organizations and the expert community, and the holding of cultural and ecological events.
An expert assessment (on a 5-grade scale) of UrFU's entrepreneurial potential in 2015 within the framework of the model of [16], taking into consideration the proposed influence indicators, is shown in Figure 1; it is based on a canvass of three target expert groups: representatives of the university, business, and the authorities.

Conclusions

Existing models of sustainable city development and entrepreneurial universities do not take their mutual influence into consideration and do not allow inclusive strategies for shaping competitive advantages to be worked out with account taken of mutual opportunities.

In view of the growing role of universities in city development, this work offers approaches to the creation of congruent city and university models on the basis of the «win-win» principle and the principle of accounting for the scale and dynamics of the influence of university activity on sustainable city development; systems of conjugated markers and indicators have been worked out.

For further application of this approach, it is necessary to develop a system of quantitative indicators reflecting entrepreneurial university development, methods for their accounting and analysis, and studies of the indicators of those cities where entrepreneurial universities are present.

Fig. 1. UrFU entrepreneurial potential assessment. The model of sustainable city development with the new indicator groups offered in the article and the model of the entrepreneurial university with the block of indicators of university impact on the economy and society are congruent, i.e., they account for the scale and dynamics of mutual influence. A substantial disadvantage is the different nature of the indicators in these models (indicative for cities and expert-based for universities), which requires working out indicative markers of the development of entrepreneurial university potential.

Table 1. Entrepreneurial university influence on sustainable city development indicators (on the example of UrFU and the city of Ekaterinburg).
v3-fos-license
2022-05-20T15:11:04.110Z
2022-05-01T00:00:00.000
248904963
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2304-8158/11/10/1468/pdf?version=1653311833", "pdf_hash": "08e0163bb0562dd148b27df7b3c231f3357b1c36", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42521", "s2fieldsofstudy": [ "Economics" ], "sha1": "9e24e1fd2ace566e67cecdca880f29765cc82d77", "year": 2022 }
pes2o/s2orc
Characterizing the Structural Evolution of Cereal Trade Networks in the Belt and Road Regions: A Network Analysis Approach

Cereal trade is essential for economic and commercial cooperation among countries along the "Belt and Road" (BRI). It helps ensure food security and contributes to building a community of interests and destinies for the BRI countries. Based on the UN Comtrade database, this study, using a network analysis approach, investigates the structural characteristics and spatiotemporal dynamics of cereal trade networks among the "Belt and Road" countries. Results show that: (1) The cereal trade among the BRI countries has formed well-connected and complex trade networks, and the "Belt and Road" initiative has significantly promoted cereal trade networks among the BRI countries. (2) The backbone structures of cereal trade networks along the BRI are in geographical proximity. India, Russia, and Ukraine are the most important trading partners and absolute core nodes in the trade networks, influencing the entire cereal trade networks. (3) The BRI cereal trade networks exhibit significant core-periphery structures, with considerable power asymmetries between the countries reflecting food supply and demand differences. In general, the BRI cereal trade networks have developed from relatively diversified to polarized. Supply chains in the cereal trade network are dominated by a few large countries and are fragile, with weak resilience and low resistance to risk. Therefore, governments should continue to strengthen regional cooperation, optimize cereal trade network structure, enhance their reserve capacity, and build a stronger system to guarantee food security and prevent risk. All these measures will support the food security of the "Belt and Road" countries.

Introduction

Food is fundamental to human survival and development, and its abundance, or lack thereof, directly affects a country's social stability and economic development. Therefore, topics related to food security, such as food production, distribution, trade, and storage, have long been a central concern of government departments and academics worldwide. Food production is influenced by soil and water resources, climatic conditions, and production technology, and has deep regional roots [1][2][3]. Globally, the distribution of food production and per capita food production in different countries is highly uneven. However, the spread of economic globalization and the free trade of cereal have led to a worldwide redistribution of the agricultural resources on which cereal production depends, providing a meaningful way to regulate the geographical imbalance between food supply and demand [4][5][6][7][8]. The volume of cereal traded internationally has more than doubled over the past three decades, with a rising number of countries now importing or exporting cereal [9,10]. As a result, the food security status of a particular country or region is interlinked with many others. Therefore, maintaining the stability of the international cereal trade network has an important role in promoting the security and stability of countries around the world and building a community of human destiny.

Social network analyses can quantitatively assess complex linkages in socio-economic development [11]. They align with research approaches that explore changes in the patterns and processes of geographical phenomena. Recently, network analysis tools have been used widely to explore the increasing connectivity of global food trade networks [12][13][14], with the study of agricultural trade networks as the most common research object. For instance, Cai and Song applied a complex network to analyze the relationship between agricultural and national trade and found that agricultural trade facilitates international commerce [15]. Wang et al. studied 57 agricultural products from the six categories of cereal, oilseeds, fiber, sugar, fruits and vegetables, and meat. They noted the rising density of global agricultural trade networks and their increasing diversification [16]. Shutters and Muneepeerakul applied social network analysis to investigate international agricultural trade networks and their relationship to patterns of international development [17]. As research continues, further trade network studies of various food types have emerged, including those focusing on seafood [18], meat [19,20], cereal [21], and soybean [22]. Research into the growing segmentation of trade networks involving wheat [23,24], maize [25], and rice [26] provides important theoretical support for developing more beneficial, evidence-based food security strategies. In terms of research methods, many studies have used quantitative measures of network characteristics, such as node strength, network density, betweenness centrality [27][28][29], community detection [30,31], and core-periphery structure [32,33]. In addition to these fixed research methods, researchers have applied invariant subnetwork structures [34], complex network research frameworks that combine trade efficiency and elasticity trade-offs, and other combinatorial research methods [35] that tap into specific network structures.

The "Belt and Road" initiative (BRI) aims to build a new platform for international cooperation and exchange that is open, inclusive, equal, and mutually beneficial [36,37]. After nearly 10 years of promotion and construction, the BRI has achieved remarkable results in promoting economic and trade exchanges and open cooperation among countries along the route [38,39]. As an important element of these ties, the cereal trade along the BRI has also captured the attention of scholars. Successive studies have investigated the utilization of cereal markets and resources in the BRI region [40], cereal trade and its implied virtual water and soil resources [41], and spatiotemporal patterns of cereal production and consumption [42]. The agricultural development of BRI countries is characterized by limited use of technology, land-use efficiency, and production organization. Some of these countries struggle to feed their populations as a result, strengthening the demand for trade in cereal between them [43,44]. Therefore, further research on transnational agricultural cooperation and food trade networks is important to meet the BRI countries' objective needs and to provide insights on integration points for countries along the route to building a community of interests and destiny together.

Existing research has made various advances that enrich and expand the knowledge of international cereal trade networks. However, a synthesis of the available studies exposes several deficiencies. First, compared with research output on BRI trade networks in general, studies focusing on cereal are relatively scarce, and relevant knowledge is limited. Earlier studies of cereal trade networks explored the soil and water resources that are implied by the pattern of trade. Moreover, they analyzed spatiotemporal food production and consumption patterns, while network studies of structural changes in regional cereal trade along the BRI have yet to be supplemented. Second, while the characteristic pattern and topological relationships of cereal trade networks have been examined, the unevenness of cereal trade network structures and the large differences in trade volume and linkages between different countries have been neglected. Furthermore, few studies have employed methods that reveal the inner structural characteristics of cereal trade networks and their evolutionary processes. An in-depth understanding of the structure of the cereal trade network will provide important practical guidance for enhancing and expanding trade and agricultural cooperation among the BRI countries. Finally, despite the importance of the cereal trade to preserving regional food security, studies emphasized the portrayal of patterns rather than trade network issues and risk assessment. This research was theoretical rather than practical, limiting its ability to guide users faced with the dynamics and instabilities of the food security situation.

To fill this gap, this paper describes an approach that integrates a top network method, centrality analysis, and core-periphery profile to portray the patterns of cereal trade networks among countries along the BRI since 2001. It analyzes the main structures of the cereal trade networks between these countries, identifies the evolution of their core-periphery structures, and details their structural evolution. The research findings are expected to enhance scientific understanding of the structural evolutionary characteristics of the BRI cereal trade network and inform cereal trade cooperation, building a community of cereal production and consumption, and ensuring the food security of countries along the BRI.

Study Area

The BRI is committed to building an open and inclusive platform for cooperation and is not limited to specific regions [38,39,45]. However, according to the actual research needs and with reference to the geographical scope of previous related studies, the BRI countries in this paper refer to the traditional 65 countries along the BRI. Table 1 lists the specific countries and regions.
Analytical Framework

The BRI plays an essential and positive role in promoting regional trade integration, while agricultural cooperation, especially food trade, has been a critical element of trade cooperation among countries along the Silk Road since ancient times. This paper employed top network, centrality, and core-periphery profile algorithms to quantitatively evaluate the structural characteristics and spatiotemporal dynamics of cereal trade networks in countries along the BRI. First, we constructed a trade network matrix of 65 countries along the BRI, visualized the spatial network of cereal trade in countries along the BRI, and revealed the overall pattern evolution of the cereal trade network from a geographical perspective. Second, we extracted the top network of the BRI cereal trade network to capture the backbone trade structure constituted among the largest trading countries and visualized it using Gephi software. Moreover, we measured the trend of centrality evolution of the BRI cereal trade top network nodes by applying characteristic metrics from network science, such as degree centrality, betweenness centrality, and eigenvector centrality. Finally, we used the core-periphery profile algorithm to analyze the core and peripheral structures of the full cereal trade networks and identify the most influential and peripheral countries and regions in the BRI cereal trade network. Based on the above-described analytical framework, we comprehensively explore the spatiotemporal dynamics of the cereal trade network structure along the BRI.

Data Processing

As the primary food source for humans, cereal plays a fundamental role in feeding populations and accounts for the largest share of all agricultural trade. Therefore, in this paper, we adopt the narrow sense of food, i.e., cereal. Consequently, we select "cereal" (HS10) as specified in the International Convention for the Harmonized Commodity Description and Coding System, which includes wheat and meslin (HS1001); rye (HS1002); barley (HS1003); oats (HS1004); maize (HS1005); rice (HS1006); grain sorghum (HS1007); and buckwheat, millet, and canary seed (HS1008). The trade flow data were derived from the "cereal" data in the UN Comtrade database. We considered countries as nodes in the network and trade flows between countries as edges. First, we constructed a 65 × 65 trade network matrix based on the spatial scope of the BRI. Second, we matched the trade flows between countries to the network matrices and constructed directed and weighted trade network matrices. Finally, we transformed the directed and weighted matrices into undirected and weighted matrices, and then constructed the BRI cereal trade network dataset covering the period since 2001.
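As a concrete illustration of the data-processing steps just described, the sketch below (Python with pandas and networkx) folds bilateral flows into an undirected, weighted yearly network. The file name and column names are placeholders rather than the actual UN Comtrade export format, so treat this as a minimal sketch of the construction, not the authors' code.

```python
import pandas as pd
import networkx as nx

# Hypothetical extract of UN Comtrade "cereal" (HS10) flows between BRI countries.
# Assumed columns: year, reporter, partner, trade_value_usd.
flows = pd.read_csv("bri_cereal_flows.csv")

def build_trade_network(flows: pd.DataFrame, year: int) -> nx.Graph:
    """Build an undirected, weighted cereal trade network for one year.

    Directed flows i->j and j->i are summed into a single edge weight,
    mirroring the conversion of the directed matrix into an undirected one.
    """
    g = nx.Graph()
    yearly = flows[flows["year"] == year]
    for _, row in yearly.iterrows():
        i, j, v = row["reporter"], row["partner"], float(row["trade_value_usd"])
        if i == j or v <= 0:
            continue  # skip self-loops and empty flows
        if g.has_edge(i, j):
            g[i][j]["weight"] += v  # accumulate both directions of trade
        else:
            g.add_edge(i, j, weight=v)
    return g

# One network per representative year examined in the paper.
networks = {y: build_trade_network(flows, y) for y in (2001, 2008, 2013, 2019)}
```

The same construction can equally be expressed in matrix form as W = D + Dᵀ, where D is the 65 × 65 matrix of directed flows and W is the symmetric weight matrix of the undirected network.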
Considering year-on-year variation in international cereal trade, this paper selects and compares four representative years highlighting how the BRI cereal trade networks have evolved in different contexts. China's accession to the World Trade Organization in 2001 dramatically altered global trade patterns and was, therefore, the starting point of the study. Furthermore, the 2008 global financial crisis and China's proposal of the BRI in 2013 profoundly affected the global economy and the division of labor, significantly impacting the cereal trade network across these countries. Finally, the COVID-19 pandemic that began at the end of 2019 exerted a considerable impact on the world economy and global trade [46]. Therefore, 2019 was taken as the end year of this study. The selection of these four representative years is, on the one hand, conducive to revealing the key characteristics of the international cereal trade system prior to the COVID-19 pandemic and summarizing general patterns, and on the other hand, guarantees the timeliness of the study and provides a comparison and reference for subsequent studies on the impact of global crisis events on trade networks. For these reasons, we focused on 2001, 2008, 2013, and 2019 as the representative years for the study.

Top Network

The top network is a simple and effective method for extracting the backbone structure of a network. It is a subnetwork consisting of the strongest connections of each node in the complete network, which reduces the extent to which many weak connections mask the overall information on the network structure [38,47]. The top1 network retains, for each country, only the trade link with its single largest trading partner in the entire trade network; the top2 network retains, for each country, only the trade links with its two largest partners. Although the construction of the top network omits certain trade connections, it retains the backbone connections of the whole trade network and can portray most of the global trade network structure with far fewer trade connections. Based on the analysis of the structural characteristics of the cereal trade network, we selected the top network to characterize the evolution of the cereal trade network of the BRI countries in the years selected for this study.

Network Centrality

(1) Degree Centrality

Degree centrality refers to the number of nodes directly connected to a particular node in a network. As an indicator, it portrays the strength of a node's connection to other nodes [21]. In general, the greater the degree centrality, the more connections are established with that node in the network and the greater the importance of that node within the network.

(2) Betweenness Centrality

Betweenness centrality measures the importance of nodes as intermediaries in a trade network. The higher the betweenness centrality, the stronger a node's role as a bridge in a network and the greater its potential ability to control other nodes. In a network with N nodes, the shortest path between nodes j and k passes through certain nodes; if node i lies on many shortest paths, the node is important in the network. This importance can be expressed in terms of the betweenness centrality $BC(i)$, which is calculated as [27]:

$$BC(i) = \sum_{j \neq i \neq k} \frac{n_{jk}(i)}{n_{jk}}$$

where $n_{jk}$ is the number of shortest paths between nodes j and k, and $n_{jk}(i)$ is the number of those shortest paths that pass through node i.

(3) Eigenvector Centrality

The importance of a node in a network depends not only on its own centrality, but also on the number and centralities of its neighboring nodes. Eigenvector centrality measures the centrality of a destination node in terms of the centrality of the nodes connected to it. It is an indicator of the connectivity of the nodes in a network. A node with a high eigenvector value is connected to many nodes that themselves have high eigenvector values. With reference to Ge et al.'s [48] study, eigenvector centrality is defined as:

$$C_e(i) = \frac{1}{\lambda}\sum_{j=1}^{n} a_{ij} x_j$$

where $A$ is an $n \times n$ adjacency matrix composed of $a_{ij}$, $X = (x_1, x_2, x_3, \ldots, x_n)^T$ denotes the centrality values of the nodes, $\lambda$ is the largest eigenvalue of $A$, $a_{ij}$ denotes the contribution of node i to the status of node j, and $C_e(i)$ denotes the eigenvector centrality of node i.

Core-Periphery Profile

The portrait of a network as divided into a dense core and a sparse periphery, referred to as a core-periphery structure, originated with scholars in the social sciences in the 1990s, and the paradigm has since been extended to other disciplines [49]. To identify the core-periphery structures in networks, various algorithms have been successively proposed, including block-modelling [50], k-shell decomposition [51], and centrality [52]. However, most of the proposed algorithms are incapable of dealing with weighted networks, and their robustness still needs to be verified. Against this background, Della Rossa et al. recently proposed the core-periphery profile algorithm [53], disclosing the overall network structures and the peculiar roles of specific nodes.

In a network with an ideal core-periphery structure, peripheral nodes (p-nodes) are allowed to link to core nodes only. In other words, no connectivity exists among p-nodes. However, in most real-world networks, the structure is not ideal, although the core-periphery structure is evident: a weak (but not null) connectivity exists among the peripheral nodes. This calls for the generalized definition of the α-periphery, which denotes the largest subnetwork S with persistence probability $\alpha_S \leq \alpha$. We define the core-periphery profile $\alpha_k$, $k = 1, 2, \ldots, n$, of the network using the following algorithm [13]: we start with the node i with the weakest connectivity and generate a sequence of sets $\{i\} = P_1 \subset P_2 \subset \ldots \subset P_n = N$ by adding, at each step, the node that attains the minimal increase in the persistence probability. Correspondingly, we obtain the core-periphery profile, that is, the sequence $0 = \alpha_1 \leq \alpha_2 \leq \ldots \leq \alpha_n = 1$ of the persistence probabilities of the sets $P_k$.
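To make the methodology above concrete, the following sketch (Python with the networkx library; the graph `g` is assumed to be one of the undirected, weighted yearly networks built in the data-processing step) extracts a top-2 backbone, computes the three centrality indicators, and implements a simplified core-periphery profile. The persistence probability of a node set is computed here as its internal edge weight divided by the total strength of its nodes, following the random-walk definition of Della Rossa et al. [53]; this is an illustrative approximation of the procedure described in the text, not the authors' actual code.

```python
import networkx as nx

def top_k_backbone(g: nx.Graph, k: int = 2) -> nx.Graph:
    """Keep, for every country, only the links to its k largest trading partners."""
    backbone = nx.Graph()
    backbone.add_nodes_from(g.nodes)
    for node in g.nodes:
        partners = sorted(g[node].items(),
                          key=lambda kv: kv[1]["weight"], reverse=True)[:k]
        for partner, attrs in partners:
            backbone.add_edge(node, partner, weight=attrs["weight"])
    return backbone

def centralities(g: nx.Graph) -> dict:
    """Degree, betweenness, and eigenvector centrality of a weighted graph."""
    # Betweenness relies on shortest paths; larger trade flows should make
    # countries "closer", so distance is taken as the inverse of the weight.
    dist = {(u, v): 1.0 / d["weight"] for u, v, d in g.edges(data=True)}
    nx.set_edge_attributes(g, dist, "distance")
    return {
        "degree": dict(g.degree()),  # number of trading partners
        "betweenness": nx.betweenness_centrality(g, weight="distance"),
        "eigenvector": nx.eigenvector_centrality(g, weight="weight",
                                                 max_iter=1000),
    }

def persistence_probability(g: nx.Graph, subset: set) -> float:
    """Probability that a random walker currently inside `subset` stays there."""
    strength = sum(d for _, d in g.degree(subset, weight="weight"))
    if strength == 0:
        return 0.0
    internal = 2 * sum(d["weight"] for u, v, d in g.edges(subset, data=True)
                       if u in subset and v in subset)
    return internal / strength

def core_periphery_profile(g: nx.Graph):
    """Greedy profile: start from the weakest node, then repeatedly add the
    node that gives the smallest increase in persistence probability."""
    remaining = set(g.nodes)
    start = min(remaining, key=lambda n: g.degree(n, weight="weight"))
    included, order, alphas = {start}, [start], [0.0]
    remaining.remove(start)
    while remaining:
        best = min(remaining,
                   key=lambda n: persistence_probability(g, included | {n}))
        included.add(best)
        remaining.remove(best)
        order.append(best)
        alphas.append(persistence_probability(g, included))
    # alphas[i] is the coreness of order[i]; the last countries added are the core.
    return order, alphas
```

For a 65-node network this greedy procedure runs in well under a second, and the countries added last (those with the highest α values) correspond to the core identified in the results below.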
The above algorithm provides, as byproducts, two other important tools of analysis: centralization and coreness. We define the centralization C of a core-periphery profile $\alpha_k$ as the complement to 1 of the normalized area under the profile [13]. In practice, C is obtained by measuring the area between the $\alpha_k$ curve of a given network and that of the star network, and normalizing so as to assign C = 1 to the star network itself (maximal centralization) and C = 0 to the complete network (no centralization). If a network displays a definite core-periphery structure (large C), then the sequence $\alpha_k$ naturally provides a measure of the coreness of each node. We have $\alpha_k = 0$ for all p-nodes (the periphery in the strict sense), whereas the coreness of the last inserted node is maximal and equal to $\alpha_n = 1$.

Cereal Trade Patterns of the BRI Region

Considering countries as nodes and the cereal trade flows of each country to other countries as edges, we constructed complete undirected and weighted cereal trade networks among 65 countries along the BRI during 2001-2019. Network weights refer to the trade flows between countries, reflecting the scale of trade between pairs of countries; the thickness of the edge lines represents the weights. The cereal trade networks between countries along the BRI are visualized in Figure 1. Overall, the cereal trade flows among countries along the BRI formed dense and complex trade networks over the period, and the BRI significantly promoted these linkages from 2013.

The cereal trade in the BRI region continued to grow in value, from $3.809 billion in 2001 to $28.51 billion in 2019. The most remarkable growth occurred between 2001 and 2008, when trade increased nearly four-fold. More moderate growth of only $1.028 billion was recorded between 2008 and 2013. However, after 2013, the growth of the cereal trade in the BRI region accelerated, increasing by $5.084 billion in 6 years, which was almost five times the increase recorded between 2008 and 2013. Therefore, the density and complexity of the cereal trade network across the BRI countries grew enormously between 2001 and 2019, reflecting the vastly increased volume and flow of trade in these commodities. Since 2013, in particular, the interdependence of cereal trade between the countries deepened, backbone connections grew, and network structures developed significantly. To further reveal the scale characteristics of cereal trade along the BRI, we now explore two aspects in detail: total trade volume, which refers to the total import and export of cereals within one country, and trade flow, which denotes the total trade of cereals between two countries where cereal trading occurs. There has been a strengthening of the trade connections between countries along the BRI, leading to a significant increase in trade flow. In 2001, the largest trade link was the cereal trade between India and Saudi Arabia, with $241 million. Cereal trade flows in the BRI countries have increased in line with the deepening of economic globalization and the increasing openness of countries. Indeed, by 2019, the largest cereal trade link between countries was established between Turkey and Russia, with a trade value of $1.658 billion. Furthermore, the backbone trade linkages have evolved from a single structure to a diversified one, increasing the cereal trade interdependence between countries. The largest trade connection in 2001 had a value of less than $300 million; this was also significantly greater than the trade links between other countries along the
route. However, by 2008, there were three connections with more than $1 billion in trade. In 2013, the number of connections with more than $1 billion in trade remained the same, while the size of the trade connections at different levels had increased. Since the BRI was proposed, the intensity of the BRI cereal trade links has increased significantly; six trade links have a flow of more than $1 billion, namely Turkey and Russia, Russia and Egypt, Iran and India, Ukraine and Egypt, Saudi Arabia and India, and Ukraine and China, with a significant trend toward the diversification of backbone links.

Structural Evolution of the Top Networks

Based on the cereal trade networks, we calculate the top1 and top2 networks from the original trade network. Across the study years, the top1 and top2 networks account for only 1.5% and 3.1% of all network links, but their trade values account for 50% and 70%, respectively. Therefore, we select the top2 network to analyze the main features of the whole network. Furthermore, the top2 networks are then visualized using Gephi software to portray the structural evolution of the backbone structures formed among the largest trading partners. In the backbone structure (Figure 2), nodes denote individual countries, and edges denote trade links between countries. The size of the nodes is proportional to the number of external trade relations of the countries; the larger the nodes, the more external links the country has in the backbone network.

First, the results show that the backbone structures of the cereal trade networks in the BRI region were characterized by geographical proximity throughout the period. The five Central Asian countries, Mongolia, and Russia have formed regional trade blocs with Kazakhstan and Russia as trade centers. Similar regional blocs were formed by the Central and Eastern European countries, with Hungary and Ukraine as centers. Due to its weight and volume, cereal is a freight-sensitive commodity, and geographical distance continues to play a vital role in regional trade interactions. Second, India, Russia, and Ukraine were the most crucial cereal trading partners among the BRI countries, and as time progresses, Russia's trade position is becoming increasingly important. Over the 18-year period, the above three countries have become the top two trading partners of nearly 10 countries, and their number of partner countries in the top2 trade network has been growing. Russia's partner countries in the top2 trade network have grown from 9 in 2001 to 19 in 2019, topping all countries. In addition, the main trading partners of some countries changed throughout the period, indicating the overall dynamism and competitiveness of the cereal trade market along the BRI. As Figure 2 shows, the core trading countries evolved from relatively pluralistic to polarized, except for some core nodes that held leadership positions for a long time. However, countries such as Thailand and Hungary have gradually lost their original influence within the trade networks.

Centrality Characteristics of the Trade Networks

In this section, we identify the hierarchical structures of the top2 networks of the BRI cereal trade with the help of three indicators: degree centrality, betweenness centrality, and eigenvector centrality, along with the specific attributes of the network structures that each indicator can tap.
Table 2 shows the top 10 countries and their values for each centrality indicator in 2001, 2008, 2013, and 2019. The most significant centrality characteristics of individual countries are now described. First, Russia, India, and Ukraine were the absolute centers of the cereal trade networks in the BRI region. These three countries consistently recorded the highest values for the three types of centrality indicators, and thus controlled the entire trade network structures. Second, China and Kazakhstan were also important to the BRI cereal trade networks due to their high and stable degree and betweenness centrality indexes. They exerted significant influence across the entire network and played an important bridging role in the cereal trade between countries. Third, two countries in Central and Eastern Europe, Romania and Hungary, assumed the important role of intermediaries or gatekeepers. Although their degree centrality and influence within the whole trade network were limited, their betweenness centrality ranked among the highest of all countries along the route, strengthening these countries' potential for controlling the cereal trade of other nodes. Finally, Pakistan gained a more prominent influence within the entire trade network, while its degree centrality was among the highest of the BRI countries. However, the dynamics of Pakistan's betweenness centrality changed frequently, and it played a more restricted role in the trade network.

Over time, the positions of some countries in the cereal trade networks along the BRI shifted significantly. First, the influence of Southeast Asian countries gradually decreased, with Thailand's degree centrality decreasing from 14 in 2001 to 7 in 2019. Its ranking among the BRI countries also dropped from first to seventh position, while its betweenness centrality fell from first to ninth. Thailand's declining trade status can be explained by the influence of the rice pledge policy under the Yingluck government, which sought to take advantage of Thailand's monopoly in rice exports to raise the export price and earn more revenue. However, Thailand's place in the international rice market was quickly occupied by other major cereal trading countries. As a result, it lost many export markets and its central position in the cereal trade. Similarly, Vietnam's degree centrality fluctuated, dropping from the seventh position among the BRI countries in 2001 to fifth in 2019 as its trade influence gradually declined. Second, Turkey's trade centrality shifted over the period, with a significant increase in its trading status after 2013. Its degree centrality ranking decreased from four in 2001 to two in 2008 before rapidly increasing after 2013, reaching the seventh-highest ranking in 2019. During this time, Turkey's betweenness and eigenvector centrality both rose, ranking it third among 65 BRI countries by the end of the period. Therefore, Turkey's influence and connectivity within the trade network significantly increased, and its bridging role grew considerably. Turkey has seen a significant increase in its level of economic development since 2000 due to its government's "neoliberal" economic policy, and the demand for imported cereal has grown in both scale and diversity. Russia is Turkey's main source of cereal imports, and the scale of cereal trade between the two countries has increased in recent years. Trade flows between the two countries reached $900 million in 2013, the fourth-highest among the BRI countries.
By 2019, they had risen to $1.6 billion, the highest bilateral volume among all 65 countries, underlining Turkey's position as a trade hub.

Core-Periphery Structures of the BRI Region

Based on the full cereal trade networks, we further employed the core-periphery profile algorithm to measure the polarization effect of the core-periphery structures in the BRI cereal trade networks. In addition, the coreness of each country reflects the position and role of nodes in the network. Since 2001, the coreness of the cereal trade networks along the BRI has fluctuated from 0.79 to 0.84, indicative of a significant core-periphery structure in which some node countries occupied essential network positions. Over time, the trade agglomeration effect of the core countries within the trade network strengthened considerably. Figure 3 demonstrates a gradual divergence in the nodes' coreness around the point where their ranks reach 40. The curves are also "J"-shaped, i.e., the nodes with a high rank have extremely high coreness and small numbers, while the nodes with a low rank generally have low coreness and large numbers. The top part of the "J" curve tends to expand outward over time, indicating a decrease in the coreness of some nodes in the high order. To further analyze the core-periphery structures of the BRI countries in the cereal trade networks, we classify the position of countries in the network into four levels based on their coreness. Countries with a coreness greater than 0.3 are classified as the core structure, countries with a coreness between 0.1 and 0.3 are classified as the sub-core structure, countries with a coreness between 0.01 and 0.1 are classified as the sub-periphery structure, and countries with a coreness less than 0.01 are classified as the periphery structure (Figure 4). Based on these results, there are five points we wish to highlight. First, the number of nodes in the different hierarchical structures changed little over the period. The number of nodes in the core structure decreased from 9 in 2001 to 7 in 2019, while the number of countries in the periphery structure increased slightly from 37 to 39. Second, Ukraine, Kazakhstan, India, Thailand, and Vietnam were permanently part of the cereal trade network's core structure and export core. Ukraine is a major global cereal exporter, with its extensive black soil, flat and open terrain, and well-developed commercial cereal agriculture making it the world's leading exporter of wheat. Kazakhstan's agricultural sector centers on cereal production, and the country also ranks among the world's top 10 wheat exporters. In contrast, India, Thailand, and Vietnam are the world's leading rice exporters and control more relationships and resources in the trade network, maintaining their connectedness. Third, Russia's trade position strengthened to make it the absolute core of the trade network, with its coreness climbing from 0.27 in 2001 to 1 in 2019, bringing it from tenth to first place in the rankings. After Russia joined the World Trade Organization at the end of 2011, the scale of its cereal trade increased, and it became the world's largest wheat exporter. Fourth, China's trade position shows a weakening trend and a falling dependence on imports. While China has traditionally been a cereal importing country, its self-sufficiency in rice and wheat has grown in recent years: cereals are imported to transfer surplus and enrich domestic cereal consumption options. Finally, most cereal import-dominated countries were within sub-periphery structures, with
Saudi Arabia, UAE, Egypt, Iran, and Indonesia the leading cereal importers along the BRI. Due to their spatial concentration and large trade volume with the major cereal trading countries, these countries enjoyed fewer and less diverse links with other countries, and thus occupied peripheral positions within the trade network.

Understanding of Cereal Trade Network Structures

Network structures are characterized by complex systems with highly diverse connections, structural complexity, and dynamic evolution. Adopting appropriate methods to clarify the network, extract the backbone structure, and identify the inner order allows the network structure to be fully understood and optimized, improving its performance and enhancing its resilience. To counter the masking of comprehensive information on the network structure by the minor weighted connections, this study applied the concept of the top network structure. We extracted the top2 structures of the cereal trade network of the BRI countries, identifying the main trade network formed by each country's two most important trading partners. The top network approach clearly and intuitively reveals the structural characteristics of the main trade connections, the factors involved at each level of nodes, and their spatiotemporal dynamics. We found that the BRI facilitated the development of interdependence in regional cereal trade networks, with Russia, India, and Ukraine at their cores, while China's trade position weakened and its external dependence decreased. These results effectively validate Chen's findings of a complete non-directional trade network based on the BRI, whether from a global, local, or individual country network dynamics perspective [32]. Furthermore, our top network analysis indicated that the main trading networks of some low-volume countries evolved dynamically. To the present time, BRI nations do not have fixed trading partners, and market competitiveness is strong. This finding [32] fills a niche left by earlier trade network studies, which previously overlooked relations between countries with smaller trade volumes. Overall, the top network structure demonstrates that the most trading value is produced through trade links with smaller national partners. As an approach, top network identification offers a comprehensive means of capturing the connectivity, high weights, and complexity of cereal trade networks.
Similar to the findings for global cereal trade networks [21], the cereal trade networks of the BRI countries have a significant core-periphery structure. However, there are some differences between the core countries. The core countries of the BRI cereal trade network are Russia, India, and Ukraine, but these countries are not prominent in studies of global cereal trade networks, where they are replaced by the United States, the United Kingdom, France, and Canada. This reflects the spatial scale dependence effect of international trade relations [54,55], i.e., the influence of different levels of network cores is differentiated at the global and regional scales. The characteristics of geographical proximity [56] in the cereal trade are confirmed by this paper. Regional cooperation and the degree of regional integration profoundly affect international trade relations, especially in the cereal trade [21]. Food security requires a global vision, but it is also crucial to attend to regional cooperation. By identifying the regional industrial chain and the core forces influencing regional trade, it is possible to construct a more precise control system that minimizes risk and guarantees the resilience of food security.

Implications for Promoting BRI Cereal Security

Economic globalization has enabled the international cereal trade and agricultural cooperation to develop. This has guaranteed the global food supply, met the diverse nutritional needs of populations, and promoted national food security [57][58][59][60]. However, the distribution of power in food trade networks and the stability and resilience of supply chains profoundly affect the coherent relationship between trade flows and food security. Significant power asymmetries exist in the actual development of countries due to differences in food supply and demand [61]. Food-exporting countries occupy the central positions in trade networks by participating autonomously in trade transactions and controlling more relationships and resources, while import-dependent countries assume more passive positions due to their constant need to ensure the security of their food supply. Therefore, the presence of fewer and more polarized core countries in the food trade network leads to a greater concentration of resources in the trade network, more asymmetrical power relations, and weaker network resilience [62]. However, our study of the BRI cereal trade networks has found that the trade cores have evolved from a relatively diversified to a polarized state. A few core cereal-exporting countries dominate these networks, such as Russia, India, and Ukraine, while the sub-periphery is occupied by cereal-importing countries. This finding aligns with the earlier research into global food trade networks carried out by Wang [9], who found that asymmetrical power relations in trade networks were significant, both globally and among BRI countries. In this type of trade network structure, any instability that affects food production or exports within the core countries, such as natural disasters, political unrest, major public health events, or even the outbreak of war, may affect the food supply security of the entire network [30,63].
The UN's Sustainable Development Goals (SDGs) emphasize the need to eliminate hunger and achieve food security [63][64][65]. The global recession triggered by the COVID-19 pandemic at the end of 2019 has caused the international cereal trade to contract, and prices are already at historically high levels. The current war between Russia and Ukraine involves two of the world's leading cereal producers and exporters. War profoundly affects the production and export of cereal, causing large short-term price fluctuations, affecting the long-term organization of the industry, and spreading the crisis to additional countries and populations through trade. From the network perspective, the Russia-Ukraine war has disturbed the stability of the two core node countries and will disrupt or even rupture the local food industry and supply chain. Furthermore, the current war will severely threaten the food security of countries that are highly dependent on food imports and increase the vulnerability of the entire trade network, further worsening regional food insecurity.

Food security is a fundamental issue for the survival of humankind, and the destiny of all countries is closely intertwined. Ensuring global and regional food security is fully aligned with the development concept of a community of human destiny advocated by the BRI. Cereal trade networks play a crucial role in ensuring the resilience of trade among the BRI countries and should be promoted as a model of global cereal trade partnership in achieving food security. With that in mind, the following policy recommendations are made to strengthen the cereal trade within the BRI. First, the trade network structure should be optimized, and the diversification of the cereal trade should be promoted. More countries should be encouraged to participate fully in the BRI cereal trade networks, thus increasing the abundance of the network system. Countries with high cereal import dependence or concentrations should seek to expand the scope of their trade partnerships and moderately diversify their sources of cereal supply. Second, the capacity of countries to increase food production should be enhanced, and food reserve capacity should be strengthened. This entails a gradual change from promoting cereal production to supporting the construction of the whole cereal supply chain in multiple directions in order to build a comprehensive supply chain system. Third, the concept of inclusive cooperation should be upheld, and a regional food security community should be built. The BRI countries are advised to implement the concept of inclusive globalization actively, promote policies and measures to facilitate smooth trade flows, and call for the establishment of multilateral commitments among countries to refrain from imposing restrictions on the export of agricultural raw materials, such as food and fertilizers. They should also improve the structural connectivity of regional cereal trade networks, strengthen special cooperation mechanisms between countries in food-related fields, and build a close-knit community of destiny. Finally, risk prediction and assessment mechanisms should be established to improve the risk resistance of trade networks. The academic community should conduct additional studies on the cereal trade networks of countries in the BRI region. The cereal production and circulation systems and regional political, economic, and security situations should also be closely monitored to prevent and mitigate the impact of unexpected situations on the trade network
system. Conclusions The BRI plays an essential and positive role in promoting regional trade integration.Since ancient times, agricultural cooperation and trade have been key to trade cooperation among countries along the Silk Road.Based on the cereal trade networks database, we have examined the structural evolution of cereal trade networks in the BRI region using a systematic network research approach and measured changes in the relative position of different countries between 2001 and 2019. Our findings suggest that the cereal trade of the BRI countries continued to grow throughout the period, from $3.809 billion in 2001 to $28.508 billion in 2019.As a result, the cereal trade in the BRI region formed closely linked and complex trade networks.The BRI significantly boosted cereal trade in participating countries, deepening the interdependence of cereal trade between these countries and strengthening their backbone connections.Geographical proximity is significant to the backbone network structure of cereal trade along the BRI.India, Russia, and Ukraine were the most important cereal trading partners and the absolute centers of the cereal trade network along the BRI, influencing the entire cereal trade network structure.During the period, the main trade partners of certain countries shifted, indicating that fixed international cereal trading partnerships had not yet been established, and competition among suppliers was apparent. Cereal trade networks along the BRI possess significant core-periphery structures, and the trade agglomeration effect of the core countries has increased over time.Since 2001, the core has developed from relatively diversified to polarized, with significant power asymmetries among countries due to the relationship between cereal supply and demand.Major cereal exporters have long been at the core, controlling more relationships and resources in the trade network, while cereal import-dominated countries generally occupy a less significant trading position within the sub-periphery structures. Since the COVID-19 pandemic at the end of 2019 severely disrupted the international food trade system and continues at the time of writing, any subsequent changes are outside the scope of this study.Therefore, we only conducted cereal trade studies up to 2020 and did not explore the time period of 2020 and beyond.Moreover, the ongoing Russia-Ukraine war will profoundly impact the international cereal trade system.Current cereal trade patterns within and beyond the BRI are likely to undergo dynamic adjustments and reconstruction.In future research, we will conduct studies to assess the impacts of the crisis, such as the COVID-19 pandemic and the Russia-Ukraine war on global and regional food security and develop the corresponding strategies for risk prevention and control, crisis management, and resilience assessment of the BRI cereal trade networks in different contexts. Figure 1 . 
Figure 1.The cereal trade networks in the BRI region (a) in 2001, (b) in 2008, (c) in 2013, and (d) in 2019.Data source: https://comtrade.un.org/data/ (accessed on 14 December 2021), illustrated by the authors.The trade value of the BRI countries has significantly increased, and core nodes have great influence.From 2001 to 2019, the cereal trade of the first node grew in value from $729 million to $6683 million, while the number of nodes with total trade in excess of $100 million in 65 countries grew from 23 to 52.India, Russia, and Ukraine are the core nodes of cereal trade along the BRI, and the scale of their trade has long been among the top countries along the route.Moreover, the clustering characteristics of cereal trade in countries along the BRI are evident, and the primacy of the network system is increasing.The trade value share of the top node fluctuated from 19.14% in 2001 to 23.44% in 2019, and the cumulative share of the trade value of the top five nodes has continued to grow from 69.31% to 87.73%, with significant polarization characteristics of cereal trade. Figure 2 . Figure 2. The top network structures of cereal trade networks in the BRI region (a) in 2001, (b) in 2008, (c) in 2013, and (d) in 2019.Data source: https://comtrade.un.org/data/ (accessed on 14 December 2021), illustrated by the authors. Figure 3 . Figure 3.The relationship between coreness and countries' ranking in the BRI cereal trade networks.Data source: https://comtrade.un.org/data/ (accessed on 14 December 2021), illustrated by the authors. Table 1 . The scope of the BRI in this study. Table 2 . The centrality indicators of international cereal trade of the BRI countries.
A comparison of Normalised Difference Snow Index ( NDSI ) and Normalised Difference Principal Component Snow Index ( NDPCSI ) techniques in distinguishing snow from related land cover types Snow is a common global meteorological phenomenon known to be a critical component of the hydrological cycle and an environmental hazard. In South Africa, snow is commonly limited to the country’s higher grounds and is considered one of the most destructive natural hazards. As a result, mapping of snow cover is an important process in catchment management and hazard mitigation. However, generating snow maps using survey techniques is often expensive, tedious and time consuming. Within the South African context, field surveys are therefore not ideal for the often highly dynamic snow covers. As an alternative, thematic cover–types based on remotely sensed data-sets are becoming popular. In this study we hypothesise that the reduced dimensionality using Principal Components Analysis (PCA) in concert Normalized Difference Snow Index (NDSI) is valuable for improving the accuracy of snow cover maps. Using the recently launched 11 spectral band Landsat 8 dataset, we propose a new technique that combines the principal component imager generated using PCA with commonly used NDSI, referred to as Normalised Difference Principal Component Snow Index (NDPCSI) to improve snow mapping accuracy. Results show that both NDPCSI and NDSI with high classification accuracies of 84.9% and 76.8% respectively, were effective in mapping snow. Results from the study also indicate that NDSI was sensitive to water bodies found on lower grounds within the study area while the PCA was able to de-correlate snow from water bodies and shadows. Although the NDSI and NDPCSI produced comparable results, the NDPCSI was capable of mapping snow from other related land covers with better accuracy. The superiority of the NDPCSI can particularly be attributed to the ability of principal component analysis to de-correlate snow from water bodies and shadows. The accuracy of both techniques was evaluated using a higher spatial resolution Landsat 8 panchromatic band and Moderate Resolution Imaging Spectroradiometer (MODIS) data acquired on the same day. The findings suggest that NDPCSI is a viable alternative in mapping snow especially in heterogeneous landscape that includes water bodies. Introduction Snow is a common global meteorological phenomenon.On some of the earth's higher grounds, snow is known to be a valuable source of fresh water and therefore regarded as an important component of the hydrological cycle (Brown 2000;Yang et al., 2003;Zhou and Li 2003;Tong and Velicogna 2010).According to Bonan (2002), snow plays a significant role in influencing heat regimes and local, regional and even global radiation balance.Snow is also known to strongly influence regional soil characteristics, plant composition and plants community structure (Darmody et al., 2004;Löffler 2005).At local levels, snow cover is known to affect several soil parameters such as permeability, temperature, moisture, microbial activity and carbon sequestration (Monson et al., 2006;Isard et al., 2007) According to Lu et al., (2005), snow's distinct high surface reflectance and low thermal conductivity are believed to influence biological, chemical and geological processes.A number of studies (Chinn 2008;Kargel et al., 2005 among others) note that snow is a sensitive indicator to climate change. 
In South Africa, snow is mainly experienced during winter months (June to August) and is common in the Western Cape mountains, western parts of the Northern Cape Province, interior high grounds of the Eastern Cape and the Drakensburg mountains in KwaZulu-Natal.Whereas the prevalence of snow in South Africa is not as high as the northern hemisphere's mid-latitudes and Polar Regions, the effects of annual snow is well documented.Like in other parts of the world, in South Africa, snow causes substantial ground transport disruptions, particularly on the country's high ground mountain passes, agricultural damage, overload on utilities like electricity and solar power output among others (Cheshire 1997;Andrews and Pearce 2012).Miller (1998) suggests that snow is a valuable physical process that promotes soil water infiltration that re-invigorates grasslands and other natural vegetation.In this regard, mapping snow is critical for sustainable utilisation of catchments, water points as well as planning and mitigation of associated disasters. Traditionally, field surveys have been used to generate snow maps (Brown and Braaten 1998).However, generating snow maps using survey techniques is often expensive, tedious and time consuming (Kavzoglu and Colkesen 2009).Field surveys are therefore not ideal for the often quick melting snow covers ( Lu et al., 2005).Therefore, thematic cover-types based on remotely sensed data-sets are becoming popular (Foody, 2002;Gillanders et al., 2008).Remotely sensed data-sets are particularly well suited for measurement of snow cover due to their uniquely high incident radiation, which contrasts with most natural and artificial surface types (Stroeve et al., 2005).The suitability of remotely sensed datasets in snow cover mapping is further facilitated by repetitive temporal coverage, wide swath width, improved classification algorithms and acquisition of data from remote and inaccessible sites (Foody 2002). To date, a number of techniques have been exploited by scientists to reliably map snow at various scales.Köning et al., (2001), Foppa et al., (2007) and Lu et al., (2005) provide a detailed overview of some the common remote sensing datasets and methods used in snow mapping.One of the most successful image based snow mapping techniques is the Normalized Difference Snow Index (NDSI) proposed Hall et al. (2001).This technique exploits the ratio between snow's high reflectance and strong absorption in the visible and short-wave infrared sections of the electromagnetic spectrum respectively (Hall et al., 2001).Like most ratios, Salomonson and Appel (2004) notes that one of the major advantages of NDSI is its resilience to atmospheric effects and influences caused by viewing geometry.In this regard use of NDSI has been widely adopted by the remote sensing community (see; Tong and Velicogna 2010;Andreassen et al., 2008;Aniya et al., 1996;Sidjak and Wheate, 1999;Corripio 2004 andLu et al., 2005 among others). 
As aforementioned, accurate mapping of snow cover is valuable for planning, management and the mitigation of adverse bio-physical and social process.Whereas the use of NDSI has been widely used in mapping snow, the reliability of such maps is often compromised by its reflectance similarity with other cover types like water, shiny rock surfaces and even vegetation covers.According to Hall et al., (2002), such features are characterised by low reflectance due to their high absorbance ability and low NDSI denominator.Under such circumstances, even a small increase in in the infra-red band may ultimately increase the NDSI and therefore a misclassification of the alternative land-cover's type pixel as snow (Hall et al., 2002).Köning and Sturm (1998) note that there is still a need for techniques that can be used to improve the classification accuracy of snow cover maps.One of the techniques with great potential is the use of imagery principal components, also known as Principal Component Analysis (PCA).The PCA is a multivariate statistical technique used in remote sensing to reduce the number of spectral components to fewer principal components with most of the variance contained in the original multispectral images (Singh, 1989).Typically, remotely sensed image dataset are characterised by multiple bands.However, some of the bands within the dataset are often highly correlated and therefore redundant for land-use-landcover mapping (Muchoney and Haack 1994;Munyati 2004).In PCA, spectral bands are combined into a new set of less correlated eigen images (Jackson 1983).In this study we hypothesise that the reduced dimensionality using PCA in concert NDSI is valuable for improving the accuracy of snow cover maps.Whereas there is a huge body of literature on the use of PCA in land-use-land-cover mapping, there is paucity of literature on the integration of PCA in NDSI for snow cover mapping.Using the recently launched 11 spectral band Landsat 8 dataset, we propose a new technique that combines the imagery principal components generated using PCA with commonly used NDSI, referred to as Normalised Difference Principal Component Snow Index (NDPCSI) to improve snow mapping accuracy. Study Area The Koue Bokkeveld mountain range is located north of Ceres town in the Western Cape Province of South Africa (Figure 1).The Koue Bokkeveld has an elevated escarpment which extends to about 1,600m above sea level.At this altitude, it is one of the coldest places in the Western Cape and experiences snow fall every winter season.The mountain range forms part of the Koue Bokkeveld mountain catchment draining into the Olifants and Doring River networks.The area is regarded as an important natural ecosystem and a critical water source that is used to irrigate approximately 50% of the agricultural fields within the catchment (IWRM Report 2011).This area was selected due to the significance of snow to the catchments hydrological system and other aforementioned reasons. 
Data and Methods
The area's Landsat 8 Level 1A dataset was acquired on 9 June 2013 from the South African National Space Agency (SANSA). This image was captured on the earliest cloud-free day after a snowfall. The dataset is characterised by 11 spectral bands operating in the visible, near-infrared, shortwave infrared and thermal infrared spectral regions. The imagery is further characterised by a 30 m spatial resolution for the visible and shortwave infrared bands, 100 m for the thermal infrared bands and 15 m for the panchromatic band. For this study we only used the 30 m spatial resolution bands for deriving the snow indices and the 15 m panchromatic band for validation. A detailed description of the characteristics of Landsat 8 imagery is available on the United States Geological Survey (USGS) Landsat Missions website at http://landsat.usgs.gov.

The Level 1A Landsat image acquired for this study consisted of quantized and calibrated scaled digital numbers representing the multispectral image (USGS 2013). The digital numbers of the images were converted to Top of Atmosphere (TOA) reflectance as suggested by Smith et al. (2013). To exploit the high reflectance of snow in the visible and its strong absorption in the shortwave infrared wavelength regions, and to accentuate the presence of snow in the study area, we employed the NDSI. This technique was preferred over other snow identification methods such as the Relative Spectral Mixture Analysis (RMSA) and the Relative Multiple Endmember Spectral Mixture Analysis (RMESMA); Shreve et al. (2009) noted that, unlike NDSI, RMSA and RMESMA yield inferior snow classification accuracy. In this study, NDSI was computed by dividing the difference in reflectance observed in the Landsat 8 green band (0.53–0.59 µm) and the shortwave infrared band (1.57–1.65 µm) by the sum of the two bands, as per Equation 1:

NDSI = (ρGreen − ρSWIR) / (ρGreen + ρSWIR)   [1]

where ρGreen and ρSWIR are the TOA reflectances of the green and shortwave infrared bands, respectively. As recommended by Hall et al. (1995), Kulkarni et al. (2006) and Xiao et al. (2002), we used an NDSI threshold of greater than 0.4 to highlight the presence of snow. As suggested by Kulkarni et al. (2006) and Xiao et al. (2002), a near-infrared reflectance value greater than 0.11 was used to mask out water pixels so as to improve NDSI classification accuracy.

To accentuate tonal variations and to reduce dimensionality and correlation between bands, we applied the PCA technique to all of Landsat 8's visible, near-infrared and shortwave infrared bands. This procedure generated eight spectrally independent principal components. Each principal component was then independently analysed to detect snow. A Normalised Difference Principal Component Snow Index (NDPCSI) was then computed to accentuate the presence of snow using principal components 1 and 2, as illustrated in Equation 2.
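The indexing and PCA steps described above can be sketched in a few lines of code. The snippet below is a minimal illustration rather than the processing chain used in this study: it assumes the TOA reflectance bands are already loaded as NumPy arrays, uses scikit-learn for the PCA, and, because Equation 2 is not reproduced here, assumes that the NDPCSI takes the same normalised-difference form as Equation 1 applied to the first two principal components. Band names, thresholds and the array layout are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def ndsi_snow(green, swir, nir, snow_thr=0.4, water_nir_thr=0.11):
    """NDSI snow map (Equation 1) with the NIR-based water mask.

    green, swir, nir: 2-D TOA reflectance arrays (Landsat 8 bands 3, 6, 5).
    Pixels are flagged as snow when NDSI > 0.4 and NIR reflectance > 0.11.
    """
    ndsi = (green - swir) / (green + swir + 1e-10)
    return ndsi, (ndsi > snow_thr) & (nir > water_nir_thr)

def ndpcsi_snow(band_stack, snow_thr=0.4):
    """Assumed NDPCSI: normalised difference of the first two principal
    components of the visible/NIR/SWIR band stack.

    band_stack: array of shape (n_bands, rows, cols).
    The sign and scaling of the PCs may need checking before thresholding.
    """
    n_bands, rows, cols = band_stack.shape
    pixels = band_stack.reshape(n_bands, -1).T            # pixels x bands
    pcs = PCA(n_components=2).fit_transform(pixels)
    pc1 = pcs[:, 0].reshape(rows, cols)
    pc2 = pcs[:, 1].reshape(rows, cols)
    ndpcsi = (pc1 - pc2) / (np.abs(pc1) + np.abs(pc2) + 1e-10)
    return ndpcsi, ndpcsi > snow_thr
```

In a full workflow the bands would be read with a raster library, and the eigenvalues would be inspected to confirm that the first two components carry most of the variance before the index is thresholded and compared against the reference data described in the next section.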
Validation and evaluation
Given the challenge of obtaining same-day snow ground reference data for validation in inaccessible mountain terrain, MODIS data were used as the reference dataset for accuracy assessment. The MODIS sensor consists of 36 spectral bands spanning 0.4–14 µm of the electromagnetic spectrum, with spatial resolutions of 250, 500 and 1000 m at nadir (Hall et al., 1995). With such a wide spectral range, MODIS is ideal for mapping snow cover using the visible, infrared and shortwave infrared regions (Hall et al., 1995). Detailed characteristics of MODIS can be found in Huang et al. (2011). Ideally the validation process should be done using higher spatial resolution data. However, it was impractical to obtain a same-day reference image and to perform field verification within the limited time frame, as the snow melted before a much higher resolution sensor overpass. Therefore, to complement the MODIS data for accuracy assessment, the 15 metre higher resolution Landsat 8 panchromatic band was also used in evaluating the classification accuracy. Stroeve et al. (2005) and Lydolph (1985) noted that snow reflects more than 80% of the incident radiation. Consistent with these observations, in this study it was relatively easy to discern snow covers in the coarse resolution MODIS validation imagery. Consequently, the accuracy of the snow covers based on NDSI and NDPCSI was validated using MODIS and the Landsat 8 panchromatic band classified as "snow" and "no snow".

Results and discussions
Consistent with findings in the literature (see Hall et al., 1995, 2001, 2002; Kulkarni et al., 2006 and Xiao et al., 2002 among others), at a threshold greater than 0.4 the NDSI analysis was successful in distinguishing snow from other land cover types. However, results in this study also indicated that NDSI was sensitive to water bodies found on lower grounds within the study area (Figure 3a). This finding is consistent with Hall et al. (2002), who note such spectral confusion as the major weakness of NDSI in mapping snow in areas with reflective water bodies (see Figure 4). As aforementioned, to improve the snow cover classification accuracy, it was therefore necessary to mask out the visible water bodies from the NDSI image (Figure 3b). The resultant image showed a significant improvement in delineating areas of spectral confusion between snow and water bodies (Figure 3c). Figure 3d shows NDSI's snow cover using a snow/no snow Boolean mask. The eight principal components contained different amounts of information, with the greatest amount contained in principal components (PC) 1 and 2.
Generally, there was a higher variation in surface cover types in PC1 than in the rest of the eight PCs. As shown in Figure 4-PC1, the high reflectance and high absorption for snow and water bodies respectively made them easily distinguishable. However, whereas PC2 had more information than the rest of the lower order PCs, it was difficult to distinguish snow from most of the other surfaces. In PC2, the absorption of both snow and water reduced the contrast between the two surfaces (Figure 5 - PC 1 and PC 2). PC 1 and PC 2 were therefore chosen for the calculation of the NDPCSI, as they showed high reflectance and absorption for snow covers respectively. The eigenvalues showed that the two PCs contained over 88% of the meaningful data (Table 1). The contrast between the two PCs, and therefore the validity of this choice for mapping snow, was further corroborated using a correlation scatterplot; the two PCs were found to be spectrally unrelated (Figure 6). Less than 12% of the information was contained in the six higher order PCs (Table 1). These PCs indicated high variances of noise and were therefore not considered for the calculation of the NDPCSI. The resulting NDPCSI indicated that there was a substantial difference between snow cover and other land cover types in the imagery, such as water bodies, agricultural fields, roads, and built-up areas. Unlike the NDSI, the NDPCSI clearly differentiated snow from water bodies without the need for a water mask. Ultimately, an extract of thick snow covering an area of 7 km2 was delineated from the rest of the classes (Figure 7). As aforementioned, to assess the accuracy of the mapped snow, the validation of NDSI and NDPCSI was done using MODIS data and the Landsat 8 panchromatic band. Figure 8 shows the same-day acquisition of snow cover in the study area from both MODIS and Landsat 8 spectral and panchromatic imagery. Although MODIS had a coarser resolution, areas covered by snow were visible. Tables 2 to 5 present the results of the confusion matrices obtained from validating the NDSI and NDPCSI methods. Validation of these methods was performed using MODIS data, which produced overall accuracies of 76.8% and 84.9% for NDSI and NDPCSI respectively. On the other hand, the panchromatic band produced significantly higher overall accuracy for both methods, with 93.7% for NDSI and 94.9% for NDPCSI. These results indicate how significant these two sets of imagery were in testing the NDSI and NDPCSI methods. The results obtained in this study indicate that the accuracy of the snow mapped using NDSI is comparable to that of NDPCSI. However, whereas the NDPCSI performed better than the NDSI, the computation of NDSI is simple and can be automated. Therefore, the choice between the two methods will be determined by the volume of the images to be processed and the level of classification accuracy required.

Conclusions
This study demonstrated that snow cover can be mapped from Landsat 8 imagery using the NDSI and NDPCSI techniques. Although the NDSI and NDPCSI produced comparable results, the NDPCSI produced a higher classification accuracy. The superiority of the NDPCSI can be attributed to the ability of principal component analysis to de-correlate snow from water bodies and shadows. The NDSI results confirmed the threshold of 0.4 widely used in the literature as ideal for mapping snow. Although it has a relatively lower accuracy, a major advantage of NDSI over NDPCSI is the possibility for automation. However, the use of NDPCSI offers great potential to achieve higher accuracy snow maps if the PCs with the greatest de-correlation can be
automated. Furthermore, the use of NDPCSI eliminates the need for masking features with spectral characteristics similar to snow. This study demonstrates that NDPCSI can be used successfully to map snow and is an alternative to the commonly used NDSI.

Figure 1. Location of the study area.
Figure 2. Processing flow diagram of the study.
Figure 3. NDSI misclassification of snow with water bodies - rectangles (A), water bodies mask (B), NDSI with water masked out (C) and binary image of areas with and without snow (D).
Figure 4. Scatter plot showing snow and water overlap using NDSI.
Figure 5. Principal components 1 to 8 - PC 1 and PC 2 show snow reflectance and absorption respectively.
Figure 8. MODIS and Landsat 8 multispectral and panchromatic imagery showing snow cover.
Table 1. Eigenvalues representing information from the 8 principal components.
Table 2. Confusion error matrix of the NDSI using MODIS.
Table 3. Confusion error matrix of the NDPCSI using MODIS.
Table 4. Confusion error matrix of the NDSI using Landsat 8 panchromatic band.
Table 5. Confusion error matrix of the NDPCSI using Landsat 8 panchromatic band.
Oil droplet breakup during pressure swirl atomization of food emulsions: Influence of atomization pressure and initial oil droplet size Correspondence Martha L. Taboada, Institute of Process Engineering in Life Sciences, Chair of Food Process Engineering, Karlsruhe Institute of Technology, Kaiserstrasse 12, 76131 Karlsruhe, Germany. Email: martha.taboada@kit.edu Abstract Atomization of emulsions with pressure swirl atomizers is a common task in food process engineering. Especially in spray drying processes for food materials like dairy products, it is the technology of choice. During atomization, emulsions are subjected to high stresses, which can lead to deformation and breakup of the dispersed droplets. In this study, the influence of atomization pressure (5–20 MPa) and initial oil droplet size (0.26, 3.1, and 20.8 μm) on the oil droplet breakup during atomization of food based oil-in-water emulsions with pressure swirl atomizers was investigated. It was shown that a significant oil droplet breakup takes place upon atomization. The size of oil droplets with an initial value of 3.1 and 20 μm was reduced up to 0.36 μm. No breakup of oil droplets with an initial value of 0.26 μm was observed. The breakup was highly dependent on the atomization pressure. The results were analyzed based on existing knowledge on droplet breakup in laminar flow. A concept to estimate capillary numbers during atomization was developed based on common models from different applications. The results of this study can be used to control the resulting oil droplet size after atomization with pressure swirl atomizers. | INTRODUCTION Spray drying of emulsions is a common task in food engineering for the production of products with encapsulated oily components. Typical examples include products such as coffee creamers, infant formula, and the encapsulation of active ingredients, aroma, and coloring compounds (Reineccius, 2004). The process of spray drying starts with the atomization, by which the liquid emulsion is dispersed into small spray droplets. These droplets are subsequently dried to powder by contact with a hot air stream. Pressure swirl nozzles are widely used as atomization devices in the food industry (Barbosa-Cánovas et al., 2005). After drying, the oil droplets should be encapsulated in a matrix material, which acts like a barrier, providing protection against oxidation or losses. During atomization, emulsions are subjected to intense stresses, which do not only deform and atomize the feed, but can also lead to deformation and breakup of the dispersed droplets therein. A breakup of the dispersed oil droplets results in a modification of a previously adjusted oil droplet size distribution (ODSD). The ODSD affects the stability of the powder and of the reconstituted emulsion, as well as the functional properties of the product. For example, the release and bioavailability of active compounds are directly related to the oil droplet size (McClements & Li, 2010). In addition, the oil droplet size determines the color impression of food coloring powders (Haas et al., 2019). Furthermore, oil droplet breakup during atomization has been correlated to a reduced encapsulation efficiency in the powder (Jafari et al., 2008). Therefore, it is of upmost importance to control oil droplet breakup during atomization. Breakup of oil droplets during atomization of oil-in-water (O/W) emulsions has been already studied for different types of atomizers: Schröder et al. (2012) and Kleinhans et al. 
(2016) studied oil droplet breakup for atomization with effervescent atomizers and an air core liquid ring atomizer. Munoz-Ibanez et al. (2015) studied oil droplet breakup during atomization with rotary and external mixing pneumatic atomizers. In these studies, the breakup was shown to depend on the energy input of atomization, as well as on the initial oil droplet size and on the viscosity ratio of the emulsions. Few studies using pressure swirl atomizers have also reported breakup of the disperse phase during atomization of emulsions (Bolszo et al., 2010). However, in spite of their wide industrial use, this aspect has not been systematically studied yet for pressure swirl atomizers. Most of the studies found in literature on these atomizers focus on the spray characteristics and not on the changes of the disperse phase (Davanlou et al., 2015;Tratnig et al., 2009). The atomization principle of pressure swirl atomizers is based on the conversion of pressure to kinetic energy. In this type of atomizers the liquid flows through tangential holes or slots into a swirl chamber, and then to a discharge orifice (Walzel, 2003). Due to swirling motion of the liquid, an air core is created that extends from the rear of the swirl chamber to the discharge orifice. In the orifice, a thin liquid film is formed, which then leaves the atomizer in the form of an annular sheet that spreads to a conical hollow spray (Lefebvre & McDonell, 2017). A schematic view of a pressure swirl atomizer is depicted in Figure 1. Acceleration of the liquid due to the diameter contraction is expected to result in elongational stresses in both radial and axial directions. In the liquid film at the atomizer orifice, high shear stresses are expected due to the high velocities and the proximity to the wall. By means of numerical simulations of the internal flow in commercial pressure swirl atomizers, Renze et al. (2011) demonstrated that shear rates up to $100,000 s −1 and elongational rates up to 50,000 s −1 occur in the liquid film close to the nozzle exit for pressures of 0.2 MPa. These stresses can also lead to deformation and breakup of the disperse droplets in emulsions. In order to estimate the stresses in the liquid film at the atomizer orifice, knowledge of the film thickness t is required. This information is not readily available and is not easy to determine experimentally at relevant industrial conditions. Several analytical and empirical correlations are available in the literature to estimate this value. A widely used theoretical model for the estimation of t is given by Suyari & Lefebvre (1986) in the form of Equation (1), where r o corresponds to the nozzle orifice radius, _ m to the mass flow rate, μ to the liquid viscosity, ρ to the liquid density and Δp L to the pressure differential during atomization. According to the original correlation by Rizk & Lefebvre (1985), the constant C is 3.66. The constant was corrected by Suyari & Lefebvre (1986) to 2.7 to better match experimental data. The correlation predicted the film thickness with high accuracy up to pressures of 3 MPa. Other recent studies with different atomizer geometries, pressures, and liquid properties have found that the expression estimates the liquid film thickness fairly well (Laurila et al., 2019;Wimmer & Brenn, 2013). From the theory on breakup of dispersed droplets it is known that for droplet breakup the external forces imparted by the surrounding fluid must overcome the droplet capillary pressure (Karbstein & Schubert, 1995). 
When the external stresses are of simple shear nature, the breakup is characterized by the capillary number Ca:

Ca = (μ_c · γ̇ · x) / σ   (2)

where μ_c is the viscosity of the continuous phase, γ̇ the shear rate, x the droplet radius, and σ the interfacial tension between the continuous and the disperse phase. In the case of elongational flow, the shear rate is replaced by the elongational rate ε̇ in Equation (2). For breakup to occur, a critical value of the capillary number Ca_cri has to be exceeded (Grace, 1982). This value depends on the viscosity ratio between the disperse and the continuous phase μ_d/μ_c. The denominator is replaced with the emulsion viscosity for emulsions with high disperse phase fractions (Armbruster, 1990). The correlation of Ca_cri with the viscosity ratio depends on the type of flow acting on the droplet interface; in this study, the values of Ca_cri for shear and elongational flow are taken from Grace (1982). For droplet breakup, it is also necessary that the droplet deformation time τ_def exceeds a critical value τ_def,cri, which correlates with the droplet viscosity μ_d divided by the deformation stress, see Equation (3) (Walstra & Smulders, 1998). In emulsions with high phase content, the resulting droplet size is not only a function of droplet breakup, but also of coalescence. In this study, the effect of coalescence was excluded by working at very low disperse phase fractions.

FIGURE 1 Schematic representation of a pressure swirl atomizer

The aim of the present work was to investigate the impact of pressure swirl atomization on the oil droplet size of food emulsions. Specifically, the influence of the atomization pressure and the initial oil droplet size were investigated. For this purpose, the ODSD of emulsions before and after atomization were compared. Additionally, stresses in the atomizer and capillary numbers were estimated in order to analyze the results based on the theory of droplet breakup in laminar flow.

In the inspected shear rate range the viscosity of the emulsions and of the oil were found to be independent of the shear rate. The viscosity of the MCT oil at 1000 s−1 was 28.8 ± 0.2 mPa·s, while the viscosity of the emulsions was 32.3 ± 1.3 mPa·s. The viscosity of the emulsion was used to calculate the viscosity ratio, as the viscosity of the emulsion and of the continuous phase are virtually the same. Thus, the viscosity ratio of the model system was 0.9, which is in the optimal region for droplet breakup in shear flow (Grace, 1982). To ensure that the filter and the pump periphery have no effect on the initial ODSD, emulsion samples were taken right before the nozzle entry. No significant difference was observed between the ODSD of these samples and of the initial emulsions. During atomization, a sample of the spray was taken with a beaker approximately 25 cm below the nozzle exit. The oil droplet size of the emulsion was measured offline with laser diffraction spectroscopy (HORIBA LA950, Retsch Technology). After atomization, the size distribution of the spray droplets was measured. Spray droplets are atomized emulsion droplets in which the oil droplets are dispersed (see Figure 1). Spray droplet size distributions were measured inline using a similar setup as in previous studies (Kleinhans et al., 2016). The spray rig was equipped with a laser diffraction spectrometer; the measurements were processed according to the Fraunhofer theory and a time-averaged mean value was calculated.
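Written out as code, the breakup criterion above reduces to computing Ca and comparing it with the critical value read from the Grace (1982) curves. The sketch below only illustrates that comparison: the interfacial tension, the shear rate and the critical capillary number are placeholder values, not quantities reported in this study.

```python
def capillary_number(mu_c, rate, droplet_radius, sigma):
    """Capillary number Ca = mu_c * rate * x / sigma (Equation 2).

    mu_c: continuous-phase (here: emulsion) viscosity in Pa*s
    rate: shear or elongational rate in 1/s
    droplet_radius: oil droplet radius x in m
    sigma: interfacial tension in N/m
    """
    return mu_c * rate * droplet_radius / sigma

# Placeholder numbers for illustration; Ca_cri must be read from the
# Grace (1982) curve for the actual viscosity ratio and flow type.
mu_emulsion = 32.3e-3     # Pa*s, emulsion viscosity reported in the text
sigma = 10e-3             # N/m, assumed interfacial tension (not from the text)
radius = 1.55e-6          # m, radius corresponding to an SMD of 3.1 um
ca = capillary_number(mu_emulsion, rate=1.0e6, droplet_radius=radius, sigma=sigma)
ca_cri_shear = 0.5        # placeholder critical value near a viscosity ratio of 0.9
print("breakup possible" if ca > ca_cri_shear else "no breakup expected")
```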
| Oil droplet size after atomization
To assess the influence of the atomization pressure on the oil droplet breakup, emulsions were atomized at pressures between 5 and 20 MPa; the stresses that lead to oil droplet breakup are expected to increase with increasing atomization pressure. It can also be noted from the results shown in Figure 3 that very similar ODSD resulted after atomization of emulsions with SMD_i of 3.1 and 20.8 μm at each studied pressure. From these results it is clear that the oil droplets are broken up to the same value independently of their initial droplet size. This effect can be further seen in Figure 4, where the resulting SMD for these emulsions are depicted. Analysis of variance (ANOVA) was carried out to compare the resulting SMD for each pressure, and no significant difference (p < .05) between the SMD of emulsions with different initial droplet size was observed. To further investigate the influence of the initial oil droplet size on the breakup behavior during atomization, emulsions with SMD_i of 0.26 μm were also atomized. The resulting SMD are also depicted in Figure 4. In the case of these submicron droplets, the SMD remained unchanged at all studied atomization pressures, indicating that no breakup of the oil droplets took place during atomization. The results imply that in the case of the small, submicron droplets, the capillary pressure is large enough to overcome the external stresses during atomization.

In the emulsification literature, the SMD of the disperse phase correlates with the energy input for emulsification according to the expression in Equation (4). In this equation, C is a constant that depends on the viscosity, and the exponent b gives insights on the breakup mechanisms of the disperse phase: for breakup due to laminar stresses, b takes a value close to one. For breakup due to inertial (turbulent) stresses, b takes values between 0.2 and 0.4 (Karbstein, 1994). To evaluate the breakup mechanisms of the oil droplets during atomization, the resulting oil SMD were correlated to the expression in Equation (4). In this study, the energy input for emulsification corresponds to the atomization pressure. The resulting constants C and b for the emulsions with different SMD_i, as well as the coefficient of determination R2, are summarized in Table 3. The resulting fit is also depicted in Figure 4.

| Spray droplet size
The resulting SMD of spray droplet size distributions at the different atomization pressures are also depicted in Figure 4 for emulsions with SMD_i of 3.1 μm. No significant difference in the spray SMD was observed for emulsions with different SMD_i at the same pressure (data not shown). A reduction of the spray SMD with increasing atomization pressure is observed. However, it is noticeable that the effect of increasing pressure on the spray droplet size is, in relative terms, much lower than on the oil droplet size. In fact, an increase in the atomization pressure from 5 to 20 MPa resulted in a reduction of the oil SMD by 74%, while the SMD of the spray droplets was reduced only by about 26%. To evaluate the breakup mechanism of the spray droplets, the resulting spray SMDs were also correlated to the expression in Equation (4). This expression has been widely used to correlate the SMD of spray droplets with the atomization pressure in pressure nozzles (Lefebvre & McDonell, 2017; Stähle et al., 2017). In this case, b usually takes values between 0.27 and 0.4 (Lefebvre & McDonell, 2017).
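Because Equation (4) itself is not reproduced in the text above, the sketch below assumes the common power-law form SMD = C · Δp^(−b) and shows how C and b can be obtained by a straight-line fit in log-log space. The pressure-SMD pairs are illustrative values chosen to be consistent with the reported 74% reduction, not the measured data behind Table 3.

```python
import numpy as np

# Illustrative pressure (MPa) and oil SMD (um) pairs, not the measured data.
pressure = np.array([5.0, 10.0, 15.0, 20.0])
oil_smd = np.array([1.40, 0.80, 0.55, 0.36])

# Assumed form of Equation (4): SMD = C * pressure**(-b)
# Taking logs gives log(SMD) = log(C) - b*log(pressure), a straight line.
slope, intercept = np.polyfit(np.log(pressure), np.log(oil_smd), deg=1)
b, C = -slope, np.exp(intercept)
print(f"C = {C:.2f}, b = {b:.2f}")  # b close to 1 would point to laminar breakup
```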
The resulting constants C and b for the spray droplets, as well as the coefficient of determination R2, are listed in Table 3. The resulting fit is depicted in Figure 4. In the case of the spray droplets in this study, b takes a value of 0.22, which is indicative of breakup in turbulent flow (Karbstein, 1994). The results imply that different mechanisms underlie the breakup of oil and spray droplets. From the literature on pressure swirl atomization it is known that spray droplets are generated due to the high relative velocity between the liquid and the gas outside of the atomizer. The liquid leaves the atomizer as a conical sheet and disintegrates into spray droplets by Kelvin-Helmholtz instabilities or by turbulence (Walzel, 2003). In the case of oil droplets, laminar shear stresses inside the atomizer, and specifically in the thin liquid film before the atomizer outlet, are expected to dominate the droplet breakup.

TABLE 3 Constants C and b as well as coefficients of determination R2 for oil and spray droplets for the fit using Equation (4)

| Estimation of stresses and capillary numbers
To explain the observed dependences of the oil droplet breakup on the atomization pressure and on the initial oil droplet size, the laminar stresses in the atomizer and the capillary numbers are estimated. The dominant stresses leading to oil droplet breakup are expected to occur in the thin liquid film at the atomizer outlet. Shear and elongational stresses can also occur in the slots of the slotted core, as well as in the swirl chamber. However, the named stresses are expected to be much lower in comparison to the stresses in the liquid film close to the atomizer exit (Nonnenmacher & Piesche, 2000; Rezaeimoghaddam et al., 2010). At this point, the small thickness of the liquid film leads to very high liquid velocities and velocity gradients. In the liquid film, shear stresses are expected to dominate. A study of the flow inside similar pressure swirl nozzles confirms this: Renze et al. (2011) showed that the elongational rates in the liquid film at the nozzle outlet had a magnitude of half of the shear rates. It should be noted, however, that the critical capillary numbers for elongational flow are much lower than in shear flow (Grace, 1982). Therefore, oil droplet breakup in elongational flow requires reduced stresses compared to shear flow.

A schematic drawing of the liquid film in the orifice of the pressure swirl nozzle (as in Figure 1) is depicted in Figure 5. A model of the flow profile for the estimation of the shear rate in the liquid film is also depicted.

FIGURE 5 Model for the flow profile in the liquid film in the atomizer orifice. r_o: nozzle inner radius; t: liquid film thickness; u_a: velocity at the interface with the air core; u_w: velocity at the wall

A simplified linear flow profile is assumed, in which the liquid velocity at the wall u_w is zero and the maximum velocity u_a occurs at the air-liquid interface. In reality, the maximum liquid velocity probably occurs somewhere before the interface, as the liquid is slowed down by the air. This discrepancy is, however, not expected to change the rough magnitude of the shear rate estimation. Another important assumption is that, due to the high magnitude of the axial velocity, the radial velocity in the liquid film can be neglected (Rezaeimoghaddam et al., 2010). The definition of the shear rate γ̇ is presented in Equation (5), while the solution for our model system is presented in Equations (6) and (7).
In these equations u is the axial velocity and y is the coordinate perpendicular to the flow direction. t corresponds to the thickness of the liquid film. To solve Equation (6), the following boundary conditions are applied: the velocity at the wall u w is equal to zero, while the velocity at the interface with the air core u a is twice the average velocity u of the liquid. The average velocity u is calculated according to Equation (8), in which Q L corresponds to the experimentally measured volume flow and A L is the flow area of the liquid. The flow area is calculated from the area of the nozzle orifice minus the area of the air core (Equation (9)), in which r o corresponds to the radius of the nozzle Solving Equation (9) requires knowledge of the liquid film thickness inside the atomizer. Equation (1) has been used to estimate the liquid film thickness in this study with a constant C value of 2.7. Unfortunately, no study has been found in literature, which validates the use of this correlation in the pressure range of this study. However, the correlation is based on a theoretical analysis of the flow conditions in the atomizer, which are expected to be valid at high pressures too. The estimated film thickness, mean velocity, and shear rate in dependence of the atomization pressure are summarized in Table 4. As expected, the film thickness decreases and the velocity of the liquid increases with increasing pressure. By this, an increase in the calculated shear rate with increasing atomization pressure is observed. The relatively high values of velocities are expected for atomizers with small orifices, as in this study (Wimmer & Brenn, 2013). The corresponding capillary numbers for shear flow for the different initial oil droplet sizes and atomization pressures were calculated by means of Equation (2) and are summarized in Table 5. These values are compared to the critical capillary number from Grace (1982) for droplet breakup in shear flow, also shown in Table 5. The estimated capillary numbers for the emulsions with a SMD of 3.1 and 20.8 μm are, for all atomization pressures, well above the critical capillary number. Therefore, oil droplet breakup in shear flow is possible for these emulsions at the studied atomization conditions. In the case of emulsions with a SMD of 0.26 μm, the capillary numbers are below the limits of droplet breakup in shear flow calculated by Grace (1982). In the case of the smaller oil droplets, the shear stresses during atomization are not high enough to overcome the capillary pressure and no oil droplet breakup is possible. According to the work of Renze et al. (2011), the elongational stresses in the liquid film at the outlet of the atomizer are estimated to have a magnitude of half of the shear stresses. Based on this information and on the estimated shear rates listed in Table 4, the capillary numbers for elongational flow were also estimated and are listed in Table 5. Similar to the previous analysis on shear flow, the capillary numbers are compared to the critical capillary number from Grace (1982) for elongational flow. From Table 5 it can be seen, that for emulsions with a SMD i of 3.1 and 20.8 μm the estimated capillary numbers are well above the critical capillary number for all atomization pressures. Therefore, in spite of the lower magnitude of the elongational stresses, oil droplet breakup due to elongational flow is also possible for these emulsions. 
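The film-thickness, velocity, and shear-rate estimates summarized in Table 4 can be reproduced in structure, though not in the exact numbers, with a short script. The sketch below assumes the Rizk and Lefebvre (1985) form of Equation (1), t = 2.7·[d_o·ṁ·μ/(ρ·Δp_L)]^0.25, since the equation itself is not reproduced in the extracted text, and all operating values (orifice radius, mass flow, interfacial tension) are illustrative assumptions rather than the conditions used in this study.

```python
import math

def film_thickness(d_o, m_dot, mu, rho, dp, C=2.7):
    """Assumed Rizk & Lefebvre / Suyari & Lefebvre correlation for the liquid
    film thickness at the orifice: t = C * [d_o * m_dot * mu / (rho * dp)]**0.25."""
    return C * (d_o * m_dot * mu / (rho * dp)) ** 0.25

def film_shear_rate(q_l, r_o, t):
    """Linear-profile shear rate gamma = u_a/t = 2*u_mean/t (Equations 6-7),
    with the mean velocity from the annular flow area (Equations 8-9)."""
    r_air = r_o - t                                  # air core radius
    area = math.pi * (r_o ** 2 - r_air ** 2)         # Equation (9)
    u_mean = q_l / area                              # Equation (8)
    return 2.0 * u_mean / t                          # Equation (7)

# Illustrative operating point (assumed values, not the paper's conditions)
rho, mu_e, sigma = 1000.0, 32.3e-3, 10e-3   # kg/m3, Pa*s, N/m (sigma assumed)
r_o, dp = 0.2e-3, 10e6                      # 0.2 mm orifice radius, 10 MPa
m_dot = 5.0e-3                              # kg/s (about 18 kg/h feed)

t = film_thickness(2 * r_o, m_dot, mu_e, rho, dp)
gamma = film_shear_rate(m_dot / rho, r_o, t)
ca = mu_e * gamma * 1.55e-6 / sigma         # Equation (2) for an SMD of 3.1 um
print(f"t = {t*1e6:.0f} um, shear rate = {gamma:.2e} 1/s, Ca = {ca:.1f}")
```

Running the same calculation at increasing pressure differentials reproduces the qualitative trend described above: thinner films, higher mean velocities, and therefore higher shear rates and capillary numbers.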
The lower values of critical capillary numbers for elongational flow, compared to shear flow (Grace, 1982), explain this. In the case of emulsions with a SMD of 0.26 μm, no oil droplet breakup was observed under the studied conditions. These results may be explained by several reasons; first of all, the critical capillary numbers from Grace (1982) were determined for single droplets in well-defined, steady flows, which differ from the short, transient stresses acting in the atomizer.

To explain the fact that the same oil droplet size after atomization was achieved with emulsions with SMD_i of 3.1 and 20.8 μm, the residence time in the high stress area was estimated. From emulsion theory it is known that larger droplets require longer stressing times for reaching the deformation state that corresponds to the stress applied (Walstra, 1993). Therefore, to achieve the equilibrium value of oil droplet size, the residence time in high stress areas must be long enough to allow deformation and breakup of the large droplets. By means of Equation (3), the deformation times were compared with the residence time in the liquid film.

| CONCLUSION
The experimental study of atomization of O/W emulsions with pressure swirl atomizers showed that a significant oil droplet breakup takes place during atomization. The oil droplet breakup is highly dependent on the atomization pressure, as the stresses in the liquid film of the atomizer orifice correlate with the atomization pressure. The impact of the pressure on the spray droplet size is relatively low compared with the impact on the oil droplet size. These results have the practical implication that an increase in the atomization pressure to achieve an adequate spray droplet size for the spray drying process will necessarily lead to a reduction of a previously adjusted oil droplet size in emulsions. The results also suggest that oil droplet breakup occurs under laminar flow conditions, whereas spray droplet breakup is dominated by turbulent flow. In addition, the results suggest that the stresses in the atomizer and the residence time of the droplets are large enough to reduce the SMD to submicron values, even when emulsions with large initial oil droplet sizes are atomized. Therefore, the oil droplet size after atomization can only be controlled to a limited extent with the initial oil droplet size.

A theoretical approach for the estimation of stresses and capillary numbers during atomization with pressure swirl atomizers was developed. The good agreement of the experimental results with the theory on droplet breakup in laminar flow supports the hypothesis that the stresses in the liquid film at the atomizer outlet dominate oil droplet breakup during atomization. The results also indicate that both shear and elongational stresses can lead to oil droplet breakup under the studied conditions. To confirm this, detailed stress-time profiles in the atomizer should be analyzed. The concept developed in this study for the estimation of the capillary numbers can be used to control oil droplet breakup during atomization under given process conditions. The findings suggest that to avoid oil droplet breakup, the emulsion properties and operating conditions must be adjusted to obtain capillary numbers below the critical values. This concept could be used as a tool to control oil droplet breakup during the atomization of emulsions.

TABLE 5 Estimated capillary numbers and critical capillary number after Grace (1982) for shear and elongational flows
Fine-Resolution Precipitation Mapping in a Mountainous Watershed : Geostatistical Downscaling of TRMM Products Based on Environmental Variables Accurate precipitation data at a high spatial resolution are essential for hydrological, meteorological, and ecological research at regional scales. This study presented a geostatistical downscaling-calibration procedure to derive the high spatial resolution maps of precipitation over a mountainous watershed affected by a monsoon climate. Based on the relationships between precipitation and other environmental variables, such as the Normalized Difference Vegetation Index (NDVI) and digital elevation model (DEM), a regression model with a residual correction method was applied to downscale the Tropical Rainfall Measuring Mission (TRMM) 3B43 product from coarse resolution (25 km) to fine resolution (1 km). Two methods, geographical difference analysis (GDA) and geographical ratio analysis (GRA), were used to calibrate the downscaled TRMM precipitation data. Monthly 1 km precipitation data were obtained by disaggregating 1 km annual downscaled and calibrated precipitation data using monthly fractions derived from original TRMM data. The downscaled precipitation datasets were validated against ground observations measured by rain gauges. According to the comparison of different regression models and residual interpolation methods, a geographically-weighted regression kriging (GWRK) method was accepted to conduct the downscaling of TRMM data. The downscaled TRMM precipitation data obtained using GWRK described the spatial patterns of precipitation reasonably well at a spatial resolution of 1 km with more detailed information when compared with the original TRMM precipitation. The results of validation indicated that the GRA method provided results with higher accuracy than that of the GDA method. The final annual and monthly downscaled precipitation not only had significant improvement in spatial resolution, but also agreed well with data from the validation rain gauge stations (i.e., R2 = 0.72, RMSE = 161.0 mm, MAE = 127.5 mm, and Bias = 0.050 for annual downscaled precipitation during 2001 to 2015; and R2 = 0.91, RMSE = 22.2 mm, MAE = 13.5 mm, and Bias = 0.048 for monthly downscaled precipitation during 2001 to 2015). In general, the downscaling-calibration procedure is useful for complex mountainous areas with insufficient ground gauges. Introduction Precipitation plays a significant role in hydrological and ecological processes as a key component of mass exchange and energy balance [1,2].The spatial variations of precipitation generally influence vegetation distribution, soil moisture, and surface hydrology [3,4].Accurate high-spatial-resolution precipitation data (HSPD) are crucial for different meteorological and hydrological applications, especially in data-sparse regions.For example, high-resolution estimates of spatial variability in rainfall fields can identify locally intense storms that could lead to floods [5].Meanwhile, the HSPD can capture more spatially detailed information in drought monitoring when compared to rain gauge observations [6].In addition, HSPD provides a valuable data source for hydrologic modeling and water resources planning purposes [7,8]. 
Traditionally, precipitation data are acquired from rain gauges that rely heavily on field observations and might be subject to measurement error and heterogeneities in data collection [9].The high spatial-temporal variability of precipitation and the relatively sparse distribution of rain gauges makes it difficult to provide fine-resolution precipitation data [10][11][12].Various interpolation methods have been applied to map precipitation distribution based on a limited number of gauge data.These methods range from simple techniques such as Thiessen polygons, inverse distance weighting (IDW), or spline [5,13] to complex and computationally intensive approaches, such as geostatistical kriging [14,15].The more complex approaches often consider orographic and/or atmospheric effects and spatial covariance on precipitation occurrences, for example, Parameter-elevation Regressions on Independent Slopes Model (PRISM) [16], Orographic Precipitation Model (OPM) [17], and the Auto-Searched Orographic and Atmospheric Effects Detrended Kriging (ASOADeK) model [18].These studies have revealed that the choice of an appropriate interpolation method depends on the time step used in the analysis, the density of the precipitation network, the precipitation regime, and the catchment's geomorphologic characteristics [19]. With the development of advanced satellites, remote sensing has become the main tool for estimating precipitation over areas lacking rain gauge networks [20].In recent years, a series of rainfall datasets have been developed at both the regional and global scales.For example, datasets have been developed by the Global Precipitation Climatology Project (GPCP) [21], the Global Satellite Mapping of Precipitation (GSMaP) project [22], the Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) [23], the Multi-Source Weighted-Ensemble Precipitation (MSWEP) [24], the Precipitation Estimation from Remotely-Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR) [25], the Tropical Rainfall Measuring Mission (TRMM) [26,27], and the Global Precipitation Measurement (GPM) [28].Among these satellite precipitation datasets, TRMM has had the specific goal of measuring precipitation over the oceans and tropics.This dataset has resulted in the development of a series of rainfall products in the last two decades, and the spatial resolution obtained has reached 0.25 • × 0.25 • .TRMM data have been used extensively for inter-disciplined investigations and applications such as land surface modeling [29], drought monitoring [30], and hydrological simulation [31].However, it is still too coarse when applied to the hydrological study of local basins and regions, which generally need higher-resolution inputs [32].Thus, a robust downscaling procedure below the pixel resolution is needed to meet the requirement of a high spatial resolution. 
Precipitation as a natural phenomenon is related to other environmental variables, such as topography and vegetation.Complex topography causes the perturbation and deformation of atmospheric fluxes that influence precipitation patterns by mechanically altering the wind patterns and microphysical processes, which impact condensation of the precipitable water [33].Precipitation distribution in mountainous regions strongly depends on the terrain altitude and steepness, as well as the orientation of the slopes [34].In theory, an increase of elevation could increase the relative humidity of air masses by expansion and cooling as the air masses rise, resulting in precipitation [35].The precipitation-topography relationship is also largely dependent on the fluctuations of the terrain [36].For example, the barrier effect on the airflow is also closely related to the aspect of mountains, as well as the direction of the airflow.Aspect is linked to the prevalent wind orientation, thus determining the potential relative water excess or deficit [37].These orographic alteration processes yield large spatial variations with, in general, enhanced precipitation in the windward side of the mountain barriers and reduced precipitation in the lee side [38].As for slope, a gradient in the speed of vertical air movements may control the intensity and area of precipitation [39].Therefore, topographic information is useful to downscale the precipitation over mountainous regions.Jia et al. [2] improved downscaling results after the inclusion of elevation data.Guan et al. [40] downscaled the resolution of Next Generation Radar (NEXRAD) from 16 km to four km by considering the orographic effects on precipitation distribution.Fang et al. [11] developed a statistical spatial downscaling scheme based on the relationships between precipitation and environmental factors, such as slope, aspect, and terrain roughness extracted from a digital elevation model (DEM), as well as humidity and temperature data from meteorological conditions. The vegetation response on an annual scale has been found to have a positive relationship with precipitation.Many studies used Normalized Difference Vegetation Index (NDVI) as an independent variable to downscale the coarse TRMM precipitation data [32,36,[41][42][43].Although the response of NDVI to precipitation usually lagged by two or three months [41,44], it was also feasible to downscale monthly, weekly, or daily satellite precipitation datasets by considering lag time.For example, Quiroz et al. [45] applied wavelet transform analysis using NDVI data to improve daily rainfall estimates at meteorological stations located on the Andean Plateau.Hunink et al. [46] assumed a lag time of one week in the regression models to mimic the response of vegetation to precipitation as well as to estimate spatial distributions of precipitation at a high spatial resolution with a weekly time step in a tropical mountainous region in Ecuador. Statistical downscaling is a recently developed approach in obtaining the high spatial resolution of variables based on correlations between the variable at a coarse resolution and geospatial predictors at a finer resolution [41].Great efforts have been made to advance the spatial downscaling algorithms of TRMM precipitation products based on the relationships described above.Immerzeel et al. [32] proposed an algorithm for downscaling TRMM datasets using the regression between precipitation and an NDVI based on the exponential function.Jia et al. 
Jia et al. [2] developed an algorithm using a multiple linear regression (MLR) model and introduced both NDVI and a DEM as independent variables. Furthermore, Alexakis [47] implemented a machine learning algorithm, an artificial neural network (ANN), to downscale the TRMM 3B42 precipitation fields from 0.25° to 1-km spatial resolution. Chen et al. [48] and Xu et al. [49] constructed geographically weighted regression (GWR) models to obtain higher-resolution precipitation datasets, based on the assumption that the rainfall-geospatial factor relationships vary spatially but are similar within a region. These studies indicated that the HSPD obtained through downscaling algorithms not only better represented the spatial variability, but also had higher accuracy than the original TRMM datasets.
Previous studies have shown that monthly rainfall estimates from the TRMM satellite contain errors (±8% to ±12% per month) due to discrete temporal sampling. As the TRMM satellite is a low Earth orbiting satellite, its rain sensors sample the regional atmosphere only at discrete time intervals, sometimes missing short-duration storms [50]. Moreover, regional sampling errors (approximately 30%) can be expected in TRMM rainfall estimates [51,52]. Since all downscaling procedures are based on the original TRMM 3B43 data, these errors would inevitably be introduced into the downscaled precipitation data. Reliable calibration methods combining satellite rainfall data with rainfall data from rain gauge stations (RGS) can greatly improve the accuracy of rainfall estimates [53]. For example, better agreement between downscaled precipitation data and observations from RGS was achieved by using geographical difference analysis (GDA) in the study of Duan and Bastiaanssen [41].
Although a large number of algorithms have been developed and applied for the downscaling of satellite-based precipitation data and have improved accuracy [36], it remains challenging to generate accurate precipitation estimates in mountainous watersheds due to sparse gauge networks and the high spatial-temporal variability of precipitation. In this study, our main goal was to map annual and monthly precipitation at a high spatial resolution over a mountainous, monsoon-driven watershed. More specifically, we had four sub-objectives: (1) to test the applicability of the original TRMM 3B43 V7 precipitation data; (2) to compare three different regression techniques (MLR, ANN, and GWR) for estimating precipitation at a coarse resolution and five different interpolation methods for the regression residuals, choosing the method with the best performance to downscale the TRMM precipitation data; (3) to map the annual precipitation at a 1-km resolution based on the integrated downscaling-calibration procedure; and (4) to produce maps of monthly precipitation by disaggregating the annual downscaled precipitation data using a simple fraction disaggregation method.
Study Area
The Red River Basin is located in Southwest China, with a geographical extent of 22°21′-25°36′N and 100°07′-105°42′E (Figure 1). The main stream is 677 km long and the basin area is approximately 7.6 × 10^4 km². The elevation of the catchment ranges from 79 m to 3142 m above sea level, decreasing from the Northwest to the Southeast. The variability of the terrain, coupled with the influences of the Indian and East Asian monsoons, means that precipitation over the Red River Basin has complex spatial and temporal variation [54]. The annual average precipitation was approximately 1056 mm, ranging from 854 mm to 1303 mm. Additionally, about 85% of the annual precipitation is concentrated in the rainy season (May to October), with less than 15% of the annual precipitation observed in the dry season (November to April). The mean annual temperature ranges from 15 °C to 24 °C, with June being the warmest month and January the coldest. The precipitation presents a decreasing trend with increasing latitude and altitude (Figure 2). The vegetation of the Red River Basin consists of broad-leaved evergreen forests, coniferous forests, grasslands, and shrubs. Strong correlations between NDVI and precipitation were found in a previous study [55]. Therefore, it is feasible to develop a spatial downscaling-calibration procedure for low-resolution satellite-based precipitation datasets based on NDVI and terrain factors in this area.
TRMM Precipitation Data
TRMM is a joint project of the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA) launched on 27 November 1997, with the aim of monitoring and studying rainfall in tropical and subtropical regions [26]. TRMM is a research satellite designed to improve our understanding of the distribution and variability of precipitation covering the global region between 50°N and 50°S, and it has provided valuable information on rainfall and its associated heat release [27]. The TRMM 3B43 Version 7 dataset used in this study is one of several TRMM precipitation products, containing data from January 2001 to December 2015, with a calendar-month temporal resolution and a 0.25° × 0.25° spatial resolution. The monthly precipitation data were accumulated to obtain the TRMM annual precipitation.
NDVI Data
The Terra Moderate Resolution Imaging Spectroradiometer (MODIS) (NASA, Washington, DC, USA) monthly composite NDVI data of 1-km resolution (MOD13A3, collection v006) spanning the period from January 2001 to December 2015 were downloaded from the NASA Land Processes Distributed Active Archive Center (https://lpdaac.usgs.gov/dataset_discovery/modis). We aggregated the MOD13A3 monthly data into the annual NDVI for 2001-2015.
It should be noted that since vegetation growth is suppressed or promoted by some land covers (e.g., rivers, lakes, snow and ice, and urban areas), the vegetation data of these land covers were excluded. Additionally, due to the strong influence of farming activities (e.g., irrigation, fertilization, and harvest) on crop growth, vegetation data of farmland were also excluded [56]. As such, we identified anomalous pixels simply by land use type: pixels categorized as water, wetland, urban, cropland, snow/ice, and barren were identified as anomalies. The detected anomalous pixels were excluded from the original NDVI dataset and then filled with interpolated values using the IDW method to generate an optimized NDVI dataset.
Land Use Data
In this study, the land use dataset MCD12Q1 Version 051 (MODIS/Terra+Aqua Land Cover Type Yearly L3 Global 500 m SIN Grid V051) for the period from 2001 to 2013 was used, which has a yearly temporal resolution and a 500 m × 500 m spatial resolution. MCD12Q1 contains five classification schemes that describe the properties of surface coverage derived from the observations of the MODIS sensor. The first surface coverage scheme identifies 17 classes defined by the International Geosphere Biosphere Program (IGBP), which was adopted in this study for its wide applications. The IGBP classification includes 11 natural vegetation classes, three classes of urbanized lots, and three classes of vegetated ground [57].
The land use dataset was mainly used in this study to identify the outliers of NDVI caused by factors other than precipitation. Since the available MCD12Q1 data spanned 2001 to 2013, we used the year 2013 instead of 2014 and 2015 for identifying outliers of MOD13A3. In addition, due to a mismatch in spatial resolution between the MOD13A3 and MCD12Q1 datasets, the MCD12Q1 dataset was up-scaled to a 1-km resolution.
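The masking and gap-filling of anomalous NDVI pixels described above can be sketched as follows. This is a minimal Python/numpy illustration, not the processing chain actually used in the study: the IGBP class codes listed are the standard MCD12Q1 legend values for the excluded covers and should be checked against the product documentation, and the pixel-based IDW fill is a simplified stand-in for the interpolation applied in practice.

```python
import numpy as np

# IGBP legend codes treated as NDVI anomalies (water, wetland, cropland,
# urban, snow/ice, barren); verify against the MCD12Q1 documentation.
ANOMALY_CLASSES = {0, 11, 12, 13, 15, 16}

def optimize_ndvi(ndvi, landuse, power=2.0, max_neighbors=12):
    """Mask NDVI pixels falling on anomalous land-cover classes and refill
    them by inverse-distance weighting from the remaining valid pixels.
    Distances are measured in pixel units on the 1-km grid."""
    ndvi = ndvi.astype(float).copy()
    anomalous = np.isin(landuse, list(ANOMALY_CLASSES))
    rows, cols = np.indices(ndvi.shape)
    valid = ~anomalous & np.isfinite(ndvi)

    vr, vc, vv = rows[valid], cols[valid], ndvi[valid]
    for r, c in zip(rows[anomalous], cols[anomalous]):
        d = np.hypot(vr - r, vc - c)
        nearest = np.argsort(d)[:max_neighbors]
        w = 1.0 / np.maximum(d[nearest], 1e-6) ** power
        ndvi[r, c] = np.sum(w * vv[nearest]) / np.sum(w)
    return ndvi
```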
DEM Data
The DEM data were obtained from the NASA Shuttle Radar Topographic Mission (SRTM) [58]. The 90-m DEM was re-sampled to a resolution of 1-km using the pixel averaging method. The variables of slope, aspect, and geolocation (longitude and latitude) were further extracted from the DEM data.
Rain Gauge Data
Monthly precipitation observations from 40 RGS during 2001-2015 were obtained from the Meteorological Agency of Yunnan Province. The locations of the 40 rain gauges are shown in Figure 1. As depicted in Figure 1, the distribution of stations in the study area is dense in the East and relatively sparse in the West. These stations have altitudes from 137 m to 2003 m, with 85% between 1000 m and 2000 m. Figure 3 shows the variation of annual rainfall from 2001 to 2015. We selected three typical years, 2005, 2001, and 2009, to represent normal, wet, and dry conditions, respectively. These three reference years, as well as the average annual precipitation of 2001-2015, were selected for the downscaling and calibration procedure to evaluate the performance under different climatic conditions.
The rain gauge data were divided into two sample sets. The first set was used to calibrate the downscaled annual precipitation data using the calibration methods, and the second set was used for validation. The separation of the RGS for calibration and validation was done as follows: the fifteen-year (2001-2015) average annual precipitation of all 40 RGS was first calculated and sorted in a lowest-to-highest sequence 1-40; the 20 RGS with odd ranks from one to 39, plus number 40, were taken as calibration RGS; the others were validation RGS. This separation allowed the calibration to cover the whole range of precipitation, including the lowest and highest values [41].
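The calibration/validation split just described can be written compactly. This is a minimal sketch, assuming the fifteen-year mean annual precipitation per gauge is available as a one-dimensional array; the array and function names are illustrative.

```python
import numpy as np

def split_gauges(mean_annual_p):
    """Sort gauges by mean annual precipitation (lowest to highest),
    take the odd ranks 1..39 plus rank 40 for calibration (21 gauges for
    n = 40), and keep the remaining gauges for validation (19 gauges)."""
    order = np.argsort(mean_annual_p)              # gauge indices, rank 1..n
    n = len(order)
    calib_ranks = set(range(0, n, 2)) | {n - 1}    # 0-based odd ranks + last
    calibration = [int(order[i]) for i in sorted(calib_ranks)]
    validation = [int(order[i]) for i in range(n) if i not in calib_ranks]
    return calibration, validation
```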
Methods
A flowchart illustrating the main steps of the downscaling-calibration and monthly fraction disaggregation algorithms is shown in Figure 4.
Downscaling of Original TRMM 3B43 Precipitation
We established a hybrid downscaling method using a regression model with residual correction, on the basis of previous studies [2,47,48,59,60]. Within the hybrid downscaling method, the TRMM precipitation data were decomposed into trend and residual components. First, quantitative relationships between the TRMM data and environmental variables at the coarse scale were estimated via regression analysis and used to derive the trend component at a fine scale. Second, the residual component, which is the difference between the trend component and the original TRMM data, was downscaled to the target fine scale via spatial interpolation. The trend and residual components were finally added to generate fine-scale precipitation estimates.
The downscaling method was based on two assumptions: (1) precipitation has a spatial relationship with environmental variables, and this relationship can be captured by the established models; and (2) the models established at low spatial resolution can also be used to predict precipitation at a fine resolution with the higher-resolution environmental variables dataset [36]. The specific steps used for downscaling in this study are as follows:
(1) The original NDVI, elevation, slope, and aspect of 1-km resolution were re-sampled to a resolution of 0.25° using a pixel averaging method, and the geographical coordinates of the center of each 0.25° grid cell were also extracted;
(2) The relationships between the re-sampled independent variables and the TRMM 3B43 V7 precipitation data were established using regression models (i.e., MLR, ANN, GWR), from which we obtained the estimated annual precipitation at the 0.25° scale;
(3) Geolocations along with the 1-km spatial resolution variables were entered into the model established in Step 2, and a downscaled precipitation of 1-km resolution was obtained;
(4) Residual correction can adjust for the precipitation that cannot be predicted by the models. This adjustment was generated by computing the difference between the estimated precipitation values at the 0.25° scale and the original TRMM 3B43 data, followed by spatial interpolation of the residual to a 1-km resolution. Five interpolation methods (IDW, spline regularized, spline tension, ordinary kriging, and simple kriging) were applied to the residuals, and cross-validation was used to choose the best interpolation method; and
(5) The corrected downscaled precipitation results were obtained by adding the residual correction term of 1-km resolution to the downscaled precipitation of 1-km resolution.
Regression Models
(1) Multiple Linear Regression Model
The method of analysis used in the multiple linear regression models was the method of least squares, which minimizes the sum of the squares of the deviations of the observed response from the fitted response [61]. With precipitation as the dependent variable, we constructed a model that involves both the environmental variables (NDVI, elevation, slope, aspect, longitude, latitude) and their corresponding parameters. The general form of the model is written as follows:

P = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 x_4 + a_5 x_5 + a_6 x_6, (1)

where P is precipitation (mm/year); x_1-x_6 are the NDVI, elevation, slope, aspect, longitude, and latitude, respectively; and a_0-a_6 are the corresponding parameters of the regression function.
We applied stepwise regression to avoid multicollinearity and overfitting of the linear regression equation. For the three reference years and the entire time period, significant correlations between TRMM precipitation and latitude or NDVI were found at the 0.05 significance level, while the other four factors were not significant at the 0.1 level. Therefore, the regression relationship was established based on TRMM precipitation, latitude, and NDVI.
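The hybrid trend-plus-residual procedure in steps (1)-(5), with a least-squares linear trend such as the MLR just described, can be sketched as follows. The function and argument names are hypothetical, and the residual interpolation is abstracted into a callable so that any of the five interpolators can be plugged in; this is a schematic of the workflow, not the code used in the study.

```python
import numpy as np

def hybrid_downscale(trmm_coarse, X_coarse, X_fine, interpolate_residual):
    """Hybrid downscaling sketch: fit a linear trend P ~ X at 0.25 deg,
    predict the trend at 1 km, interpolate the coarse residuals to 1 km,
    and add the two components (steps 2-5 in the text).

    trmm_coarse : (n_coarse,) annual TRMM precipitation per coarse cell
    X_coarse    : (n_coarse, k) predictors averaged to 0.25 deg
    X_fine      : (n_fine, k) predictors at 1 km
    interpolate_residual : callable mapping coarse residuals to the fine grid
    """
    A = np.column_stack([np.ones(len(X_coarse)), X_coarse])
    coeffs, *_ = np.linalg.lstsq(A, trmm_coarse, rcond=None)

    trend_coarse = A @ coeffs
    residual_coarse = trmm_coarse - trend_coarse            # step 4

    A_fine = np.column_stack([np.ones(len(X_fine)), X_fine])
    trend_fine = A_fine @ coeffs                             # step 3
    return trend_fine + interpolate_residual(residual_coarse)  # step 5
```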
(2) Artificial Neural Network Model
The theory behind neural networks is based on an attempt to reproduce human learning processes [62]. An ANN is an attractive and powerful numerical methodology for mapping complex relationships between different sets of observed variables [63]. It has been used in different studies to downscale coarse TRMM precipitation data [47,64]. An ANN consists of a collection of different neurons connected to each other. A connection occurs when the status of a neuron, i, is one of the inputs for another neuron, j, by means of a weight, W_ij. Individual neurons are often called nodes of the network. The architecture of an ANN is defined by establishing how the individual neurons of the network are connected to each other. To estimate the number of hidden layer nodes, the equation proposed by Hecht-Nielsen [65] was used as a reference:

N_g = 2 N_i + 1, (2)

where N_g is the number of hidden nodes and N_i is the number of input nodes. The multi-layer perceptron (MLP) neural network was used for the application of the ANN. A three-layer feed-forward network consisting of an input layer, one hidden layer, and one output layer was used as the network structure. For example, the network structure for 2001 was 6-16-1. The number of hidden layer nodes was finally determined through many simulation experiments: according to Equation (2), we first obtained a reference value of 13, then tried different numbers around 13, and finally set it to 16, as the network performed best with 16 hidden nodes. The input layer had six neurons, comprising NDVI, elevation, slope, aspect, longitude, and latitude. The output layer was the precipitation. For the training procedure, 3000 iterations were set as a threshold to terminate the procedure. The tangent sigmoid function was used for transferring data from one layer to another in the hidden layer, and the purelin function was used for transferring data in the output layer. The training and learning functions were trainlm and learngdm, respectively. The functions used above are part of the Neural Network Toolbox in Matlab (The MathWorks, Natick, MA, USA) [66]. Of the samples, approximately two thirds were used as training samples, and the remaining third was used as validation samples. The training process of the ANN consisted of the determination of all the weights of the network based on the observed input/output patterns [47].
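For readers without access to the Matlab toolbox, a rough analogue of the 6-16-1 network can be set up with scikit-learn. This is only an approximation of the configuration described above: scikit-learn offers no Levenberg-Marquardt (trainlm) solver, so 'lbfgs' is used as a stand-in, and the variable names are illustrative assumptions.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Rough analogue of the 6-16-1 network: six predictors (NDVI, elevation,
# slope, aspect, longitude, latitude), one hidden layer of 16 tanh units,
# a linear output, and an iteration cap of 3000.
ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), activation="tanh",
                 solver="lbfgs", max_iter=3000, random_state=0),
)
# ann.fit(X_train, p_train)      # roughly two thirds of the coarse cells
# p_fine = ann.predict(X_fine)   # trend component at 1-km resolution
```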
(3) Geographically-Weighted Regression Model
GWR is a local regression method that can be used to investigate spatially-varying relationships between dependent and explanatory variables [67]. In other words, GWR runs a regression for each location instead of a single regression for the entire study area [68], attempting to capture the spatial variation of the relationship by adding geographical location information into the regression process. The regression model can be written as Equation (3):

y_i = β_0(u_i, v_i) + Σ_k β_k(u_i, v_i) x_ik + ε_i, (3)

where y_i is the ith observation of the dependent variable; x_ik is the ith observation of the kth independent variable; (u_i, v_i) represents the geographical coordinates of the ith regression point; and β_k(u_i, v_i) is the kth regression parameter at the ith regression point, which is a function of geographical location. Furthermore, ε_i represents the independent, normally distributed error term with zero mean. The parameters can be estimated by solving the following matrix equation:

β(u_i, v_i) = (X^T W(u_i, v_i) X)^(-1) X^T W(u_i, v_i) y, (4)

In this equation, β(u_i, v_i) is the local coefficient to be estimated at location (u_i, v_i); X and y are the explanatory-variable matrix and the dependent-variable vector, respectively; and W(u_i, v_i) is the weight matrix, which ensures that observations closer to the ith point carry more weight than observations farther away [48,69]. In this study, the GWR method was conducted using GWR 4.0 software (https://gwrtools.github.io/category/gwr.html) [68].
Calibration of Downscaled Precipitation
The GDA calibration method developed by Cheema and Bastiaanssen [53] was used to minimize the difference between the satellite precipitation and the measurements from RGS. In addition to GDA, geographical ratio analysis (GRA) was developed to calibrate the downscaled precipitation. In this process, we compared GDA and GRA using measurements from validation RGS to choose the method that provided results with higher accuracy. The processing steps of GDA/GRA are as follows: (1) the differences/ratios between the downscaled precipitation values and the measurements from RGS were computed; (2) the differences/ratios were interpolated to a resolution of 1-km with the interpolation technique; and (3) the downscaled precipitation was corrected to obtain the final calibrated precipitation by adding/multiplying the difference/ratio term of 1-km resolution.
Monthly Fraction Disaggregation from Annual Precipitation
A simple fraction method derived from original TRMM 3B43 monthly data, developed by Duan and Bastiaanssen [41], was used to disaggregate the downscaled-calibrated annual 1-km precipitation maps into maps at monthly time steps. The procedure was as follows: (1) the monthly fractions at 0.25°, which were used to disaggregate the annual precipitation, were defined as:

fraction_i = TRMMo_i / Σ_{j=1}^{12} TRMMo_j, (5)

where TRMMo_i represents the precipitation that occurs during the ith month as estimated from the original TRMM 3B43 product, and the denominator is the annual total value; (2) the 0.25° fractions were further interpolated to a spatial resolution of 1-km, consistent with the downscaled-calibrated annual precipitation, using an interpolation method; and (3) the annual downscaled precipitation values at 1-km resolution were disaggregated into monthly downscaled precipitation values by multiplying by the fraction values of 1-km resolution.
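A minimal sketch of the fraction disaggregation in Equation (5) and steps (1)-(3) above is given below. The interpolation of the coarse fractions to 1 km is abstracted into a callable, and the array and function names are assumptions rather than code from the study.

```python
import numpy as np

def disaggregate_monthly(annual_1km, trmm_monthly_coarse, interpolate_to_1km):
    """Disaggregate a downscaled-calibrated annual map into 12 monthly maps.

    annual_1km          : annual precipitation on the 1-km grid
    trmm_monthly_coarse : (12, n_coarse) original TRMM 3B43 monthly values
    interpolate_to_1km  : callable mapping a coarse field to the 1-km grid,
                          returning an array shaped like annual_1km
    """
    annual_coarse = trmm_monthly_coarse.sum(axis=0)
    fractions = trmm_monthly_coarse / annual_coarse          # Equation (5)
    monthly_1km = [annual_1km * interpolate_to_1km(frac) for frac in fractions]
    return np.stack(monthly_1km)                             # 12 monthly maps
```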
Validation
The validation in this study included three steps. First, the predicted annual precipitation obtained using the three regression models was compared with the original TRMM estimates at 0.25° resolution. Four metrics, the coefficient of determination (R²), the root mean square error (RMSE), the mean absolute error (MAE), and the Bias, were used to evaluate the results. They were defined as follows:

R² = [ Σ_{i=1}^{n} (O_i − Ō)(P_i − P̄) ]² / [ Σ_{i=1}^{n} (O_i − Ō)² · Σ_{i=1}^{n} (P_i − P̄)² ]
RMSE = sqrt[ (1/n) Σ_{i=1}^{n} (P_i − O_i)² ]
MAE = (1/n) Σ_{i=1}^{n} |P_i − O_i|
Bias = Σ_{i=1}^{n} P_i / Σ_{i=1}^{n} O_i − 1

where O_i and P_i represent the original TRMM precipitation and the precipitation predicted by the models at 0.25° resolution, respectively; Ō and P̄ are their means; and n is the number of grid cells. Second, leave-one-out cross validation (LOOCV) was used to compare the residual results of different interpolation methods. The LOOCV method removes one point each time and then predicts the value of this point using the remaining points, until all the points have been iterated [70]. Two indicators (RMSE and MAE) were calculated based on the predicted and original values of all the points to evaluate the interpolation methods. Third, the accuracy of the final downscaled precipitation was further validated against the measurements from RGS, based on the R², RMSE, MAE, and Bias. The rain gauge data not used for calibration were used to validate the final downscaled results.
Comparison between TRMM and Station-Based Observed Precipitation
The applicability of the original TRMM 3B43 V7 was tested using measurements from 34 RGS that are not included in the Global Precipitation Climatology Centre (GPCC) gauge network used in the 3B43 ground calibration. Figure 5 shows plots of the original TRMM precipitation against rain gauge measurements at multi-time scales from 2001 to 2015. From Figure 5, a significant correlation was found between the two datasets during the dry season, the wet season, and the whole year. The correlation coefficients reached 0.87 (dry season), 0.84 (wet season), and 0.91 (whole year), respectively, all passing the 0.01 level of significance test. These results indicate an obvious linear correlation and consistency between TRMM precipitation and rain gauge data. In terms of Bias, the values for the dry season, wet season, and whole year were 0.091, 0.057, and 0.062, respectively, which means that the precipitation was overestimated by TRMM 3B43 when compared with observations from RGS. On the whole, the original TRMM 3B43 data showed strong applicability for the estimation of precipitation over the Red River Basin.
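The four indicators defined above, together with the bootstrap confidence-interval procedure used for the model comparison in the next subsection, can be implemented compactly. The Bias is written in the relative form implied by the reported values (e.g., 0.062 corresponding to a 6.2% overestimate); this is a sketch rather than the exact code used in the study.

```python
import numpy as np

def r2(obs, pred):
    o, p = np.asarray(obs, float), np.asarray(pred, float)
    num = np.sum((o - o.mean()) * (p - p.mean())) ** 2
    den = np.sum((o - o.mean()) ** 2) * np.sum((p - p.mean()) ** 2)
    return num / den

def rmse(obs, pred):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def mae(obs, pred):
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(obs))))

def bias(obs, pred):
    # Relative bias: positive values mean the estimate exceeds the gauges.
    return float(np.sum(pred) / np.sum(obs) - 1.0)

def bootstrap_metric(obs, pred, metric, n_boot=1000, seed=0):
    """Resample (obs, pred) pairs with replacement and return the mean and
    95% confidence interval of a performance metric, as used for Table 1."""
    rng = np.random.default_rng(seed)
    obs, pred = np.asarray(obs), np.asarray(pred)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(obs), len(obs))
        stats[b] = metric(obs[idx], pred[idx])
    return stats.mean(), np.percentile(stats, [2.5, 97.5])
```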
Performance of the Different Downscaling Models
Obtaining the regression relationship between the TRMM precipitation and environmental variables is a vital premise of the downscaling procedure, which directly affects the final result. Therefore, the predictive performances of the three regression models (MLR, ANN, and GWR) were evaluated by comparing the predicted annual precipitation by each model with the original TRMM annual precipitation at 0.25° resolution. To better compare the performance of the three models, a bootstrapping procedure was employed to create samples of the same size as the original data; a total of 1000 tests were conducted, and the performance metrics (i.e., R², RMSE, MAE, and Bias) were calculated accordingly. Table 1 shows the mean values and 95% confidence intervals of R², RMSE, MAE, and Bias estimated by the three models for the three reference years and the entire time period. The box plots of R², RMSE, MAE, and Bias are shown in Figure 6. From Table 1 and Figure 6, it can be seen that both ANN and GWR produced a higher R² and lower RMSE and MAE than MLR under different climatic conditions. In terms of Bias, the Bias of GWR was lower than that of ANN in the three reference years and the entire time period. GWR is a regression algorithm which adds the geographical information of each location into the regression process, potentially addressing the spatially heterogeneous relationships between the various environmental factors. Compared with ANN, GWR can capture more specific, local details. Thus, GWR was chosen as the downscaling model to generate HSPD in subsequent analyses.
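To make the selected GWR model concrete, a transparent version of Equations (3) and (4) with a Gaussian distance-decay kernel is sketched below. The fixed bandwidth and kernel choice are illustrative assumptions; the study itself used the GWR 4.0 software.

```python
import numpy as np

def gwr_predict(coords, X, y, coords_new, X_new, bandwidth):
    """Minimal GWR in the spirit of Equations (3)-(4): for each prediction
    point, solve a weighted least-squares problem with Gaussian weights
    w = exp(-(d / bandwidth)^2) and apply the local coefficients."""
    X1 = np.column_stack([np.ones(len(X)), X])
    X1_new = np.column_stack([np.ones(len(X_new)), X_new])
    preds = np.empty(len(X_new))
    for i, (c, x_row) in enumerate(zip(coords_new, X1_new)):
        d = np.linalg.norm(coords - c, axis=1)
        w = np.exp(-(d / bandwidth) ** 2)
        XtW = X1.T * w                                 # X^T W(u_i, v_i)
        beta = np.linalg.solve(XtW @ X1, XtW @ y)      # Equation (4)
        preds[i] = x_row @ beta                        # Equation (3), no error term
    return preds
```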
Downscaling Analysis of TRMM Annual Precipitation
The estimated precipitation of the three reference years and the entire time period was obtained based on the GWR model. The following takes the average annual precipitation of 2001-2015 as an example to analyze the downscaled results. A comparison of Figure 7a,b shows similar spatial patterns of the estimated TRMM precipitation when compared with the original TRMM precipitation at a spatial resolution of 0.25°. However, the estimated TRMM precipitation was slightly lower than the original TRMM precipitation. In addition, the spatial variability of the estimated TRMM precipitation was higher than that of the original TRMM precipitation, indicating a difference between the estimated and original TRMM precipitation at 0.25° spatial resolution in terms of precipitation value and spatial distribution range. Figure 7c shows the spatial distribution of the residual of annual average TRMM precipitation with a spatial resolution of 0.25° × 0.25°. The map of residuals represents the part of the precipitation that could not be explained by the GWR model; positive values of the residual indicate that the precipitation was underestimated by the GWR model.
Precipitation residuals with a spatial resolution of 0.25° × 0.25° (Figure 8a) between TRMM precipitation and estimated TRMM precipitation were interpolated to a higher spatial resolution of 1-km using different spatial interpolation methods, i.e., the IDW method in Figure 8b, the simple spline regularized interpolation in Figure 8c, the simple spline tension interpolation in Figure 8d, the ordinary kriging method in Figure 8e, and the simple kriging method in Figure 8f. Considering the strong influence of different interpolation methods on the residual results, we conducted a comparison using LOOCV. The indicators (RMSE and MAE) of the five interpolation methods were compared to select the method with the best performance. According to the results in Table 2, the simple kriging method was accepted for the spatial interpolation of precipitation residuals from lower to higher spatial resolution (Figure 7d). It should be noted that the simple kriging interpolation was also used in subsequent analyses such as calibration and monthly fraction disaggregation.
Figure 7e shows the spatial patterns of the estimated annual average TRMM precipitation with 1-km resolution. Figure 7f shows the spatial distribution of the downscaled annual average TRMM precipitation, which was obtained by adding the high-spatial-resolution predicted annual average precipitation data to the high-spatial-resolution residual obtained above. From Figure 7f, the downscaled TRMM precipitation adequately represented real-world precipitation patterns. In particular, high precipitation was observed mainly in the southern region, and less precipitation was found in the northeast region. The downscaled TRMM precipitation using environmental factors better described the spatial patterns of precipitation, with more detail at the spatial resolution of 1-km, when compared with the original TRMM precipitation.
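The LOOCV comparison of residual interpolators can be sketched as follows, with IDW as the example interpolator; the spline and kriging variants evaluated in the study would be swapped in through the same interface, so the code below is a stand-in rather than the procedure actually run in GIS software.

```python
import numpy as np

def idw_point(xy_known, z_known, xy_query, power=2.0):
    """Predict one point by inverse-distance weighting of known residuals."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return np.sum(w * z_known) / np.sum(w)

def loocv_rmse_mae(xy, residuals, interpolator=idw_point):
    """Leave one residual point out, predict it from the rest, and
    accumulate RMSE and MAE over all points (as in Table 2)."""
    errors = []
    for i in range(len(xy)):
        mask = np.arange(len(xy)) != i
        z_hat = interpolator(xy[mask], residuals[mask], xy[i])
        errors.append(z_hat - residuals[i])
    errors = np.asarray(errors)
    return np.sqrt(np.mean(errors ** 2)), np.mean(np.abs(errors))
```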
Calibrating Analysis of Downscaled TRMM Annual Precipitation
Calibration with rain gauge data is an essential step in the downscaling-calibration procedure, as some bias may be introduced into the downscaled precipitation by the inherent error in the original TRMM precipitation during downscaling [32]. In this study, we employed two calibration methods (GDA and GRA) to correct the bias of the downscaled precipitation data using rainfall data from RGS. The rainfall data from 21 calibration RGS were used to calibrate the downscaled precipitation, and the remaining 19 RGS were used for validation. It should be noted that there is the potential for substantial uncertainty in precipitation inferred from gauge observations, as gauges in mountainous terrain often suffer significant undercatch errors due to strong winds and accessibility difficulties that can obstruct routine maintenance [71]. In this section, the original TRMM precipitation is labeled V7 and the above-mentioned downscaled precipitation is labeled DS. The precipitation after GDA and GRA calibration based on the downscaled precipitation (DS) is labeled DSGDA and DSGRA, respectively.
The statistical results of V7, DS, DSGDA, and DSGRA using the 19 validation RGS measurements for the three reference years and the entire time period are listed in Table 3. Compared with the V7 results, the DS results improved accuracy with reduced RMSE and Bias values, and considerable improvement could be observed in the dry year. In general, both the GDA and GRA calibration methods further improved accuracy, with increased R² and reduced RMSE, MAE, and Bias values. Compared with the GDA calibration method, GRA performed better in terms of the four indicators calculated. Therefore, the GRA method was used to calibrate the downscaled precipitation on an annual scale. The final calibration results of the three reference years and the entire time period were produced based on the 21 calibration RGS and are shown in Figures 9 and 10. The estimated precipitation for 2001, 2005, 2009, and 2001-2015 was 1544 mm, 1320 mm, 1068 mm, and 1309 mm, respectively; the spatial distribution and amount variations of precipitation reflected well the wet, normal, and dry conditions. Note that some outliers were identified in Figure 10, which may be attributed to high precipitation in mountainous areas, and may also be partially due to errors in the downscaling process.
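A minimal sketch of the GDA and GRA corrections follows, with the spatial interpolation of the difference or ratio field abstracted into a callable; function and argument names are illustrative, and gauge values are assumed to be non-zero for the ratio form.

```python
import numpy as np

def calibrate(downscaled_at_gauges, gauge_obs, downscaled_grid,
              interpolate_to_grid, method="GRA"):
    """GDA adds an interpolated difference field to the downscaled map;
    GRA multiplies it by an interpolated ratio field (steps 1-3 in the text).

    interpolate_to_grid : callable mapping gauge-point values to the 1-km grid
    """
    if method == "GDA":
        field = interpolate_to_grid(gauge_obs - downscaled_at_gauges)
        return downscaled_grid + field
    if method == "GRA":
        field = interpolate_to_grid(gauge_obs / downscaled_at_gauges)
        return downscaled_grid * field
    raise ValueError("method must be 'GDA' or 'GRA'")
```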
Monthly Results of Disaggregating Annual Precipitation
The previous section presented the best annual precipitation data, obtained through the downscaling and calibration procedures. The following analysis takes the average annual precipitation of 2001-2015 as an example to map the monthly precipitation. The accumulated 1-km resolution annual average precipitation of 2001-2015 was disaggregated into monthly time steps using the fraction specified in Equation (5). The disaggregated 1-km monthly precipitation is labeled DSdis. The calibration of DSdis was not conducted in this study, given that duplicated calibration processes on a monthly scale are no longer necessary after calibration on an annual scale and given the scarcity of available RGS [41].
Figure 11 shows the final downscaled monthly precipitation with a 1-km resolution. The spatial distribution changes of the downscaled monthly precipitation clearly reflect the influence of monsoon activities (both onset and decline) on precipitation. Meanwhile, the monthly precipitation variations during the wet season (May to October) and the dry season (November to April) are depicted in Figure 12.
The measurements from the validation RGS were also used to assess the accuracy of the downscaled monthly data; the 19 validation RGS were the same as those mentioned in Section 3.3.2. Figure 13a,b compares the monthly precipitation from RGS with the corresponding values from the original TRMM 3B43 data and the downscaled data. The original TRMM 3B43 data estimated the monthly precipitation with R² = 0.88, RMSE = 26.5 mm, MAE = 13.7 mm, and Bias = −0.023, while the monthly DSdis data agreed well with the measured data, with an R² of 0.91 and an RMSE of 22.2 mm.
Considering the strong seasonality of precipitation in this region, the seasonal bias in the original TRMM data may be introduced into the downscaled monthly precipitation through the monthly fraction disaggregation method. To explore the effect of the seasonal bias on the downscaled monthly precipitation, we compared the monthly precipitation from RGS with the original TRMM data and the downscaled TRMM data in the dry and wet seasons, respectively. Figure 13c,e show an underestimation by the original TRMM data in both the dry and wet seasons, with Bias values of −0.006 and −0.026, respectively. The Bias of the downscaled monthly precipitation was 0.110 in the dry season and 0.037 in the wet season (Figure 13d,f), which indicates that the downscaled monthly precipitation data in turn overestimated the precipitation in both seasons.
Sources of Errors and Limitations in the Downscaled Satellite Precipitation Datasets
The downscaling method is based on the assumption that the regression models established at low spatial resolution can also be used to predict precipitation at a fine resolution with a higher-resolution environmental variables dataset. However, the TRMM product has considerable regional and seasonal errors [72,73]. The errors in TRMM products can seriously disturb the relationship between precipitation and environmental variables at low spatial resolution, making it difficult to accurately predict precipitation with environmental variables at a fine resolution. This limitation can lead to some uncertainty in the downscaled satellite precipitation datasets. Immerzeel et al. [32] pointed out that downscaled satellite precipitation may suffer from inherent errors in the satellite precipitation datasets and NDVI caused by orbital drift, atmospheric conditions, and imperfect retrieval algorithms. For example, the purpose of the 3B43 algorithm is to produce the best precipitation rate (mm/h) estimates from TRMM and other data sources. However, there are some deficiencies, such as the discontinuity in the data record and the bias introduced by the AMSU-B algorithm of TRMM 3B43, which potentially reduce the accuracy of TRMM 3B43 [2].
Therefore, calibration with rain gauge data is an essential step to improve the accuracy of the downscaled precipitation data. The GDA and GRA methods have proven to be simple and effective methods to correct the errors in the downscaled data [41,74,75]. In this study, calibration using the GDA and GRA methods gave better annual precipitation in terms of the statistical indicators for the three reference years and the entire time period. GRA was preferred over GDA because of its better statistical indicators and its multiplicative form: over regions with high spatial variation in precipitation, downscaled precipitation corrected by ratio-based calibration provided more accurate results than difference-based calibration. However, the Red River Basin is a poorly gauged area, with 21 rain gauges available for calibration over an approximately 76,000 km² area; hence, one gauge represents approximately 3619 km². The scarcity of rain gauge stations inevitably limits the capabilities of the GDA and GRA calibration methods.
Precipitation-NDVI Relationships and Precipitation-Topography Relationships
The capability of NDVI and DEM data for downscaling TRMM precipitation datasets has been widely investigated. The responses of vegetation to precipitation are well documented, and NDVI is generally considered a powerful predictor of precipitation [76,77]. The basic assumption of the regression model is that a significant correlation exists between precipitation and NDVI. The impacts of the NDVI datasets on the downscaling results therefore include two aspects. First, a higher NDVI does not always represent more precipitation in humid zones because of NDVI saturation [75,78]. This saturation effect weakens the positive relationship between NDVI and precipitation, which may introduce errors into the downscaled precipitation datasets. Second, in some local areas, NDVI anomalies are induced by particular land uses (such as water, wetland, snow, urban, cropland, and barren) [2,79], which disrupt the distribution pattern of NDVI controlled by precipitation and reduce the predictability of precipitation through NDVI. In this study, these NDVI anomaly pixels were detected and eliminated during data processing. However, the NDVI of these pixels was estimated by interpolating the NDVI of the surrounding pixels, which may also introduce errors into the downscaled precipitation datasets. Generally, unlike topography, vegetation activity cannot be seen as a factor influencing rainfall amounts, as precipitation is the driver of vegetation development. Therefore, using NDVI as a predictor in the downscaling process needs to be studied further. For example, substantial pre-processing was required (i.e., excluding various environments) that considered the vegetation condition of this specific region in order to make NDVI useful in our analysis.
The orographic effect is believed to be an important mechanism that controls the spatial variability of precipitation in mountainous areas [11]. Many studies have highlighted the orographic effect on the spatial variability of rainfall and incorporated topographical features into the downscaling scheme [2,40,80]. The key questions in physical process-oriented precipitation mapping are the scale at which topography controls the spatial distribution of precipitation and how this information can be captured. Daly et al. [81] explored this question using six resolutions with differing smoothing levels in the application of PRISM, which is based on station data density and local terrain complexity. Meersmans et al. [34] presented a mean total annual precipitation mapping technique that combined topographical information (i.e., elevation and slope orientation) with average total annual rain gauge data from Belgium, in which 19 directions and 80 resolutions were considered to investigate the scale of topographic control at the regional scale. In our study, only a resolution of 1-km (using a pixel averaging method) was taken into consideration, which may have ignored this scale effect. A wider range of resolutions (or smoothing levels) needs to be considered to detect the scale of topographic control at the regional scale.
The Downscaling Procedure
Many different geostatistical techniques are used to interpolate precipitation data [82,83]. Recently, several mixed interpolation techniques have been developed that combine kriging with secondary information [84,85]. For example, regression kriging (RK) combines the trend fitted by a global regression with the kriged residuals, which makes better use of the available data and thereby improves the accuracy of the estimates compared to ordinary kriging [86]. The advantage of RK is the ability to extend the method to a broader range of regression techniques and to allow separate interpretation of the two interpolated components [87].
The RK technique has been found useful for downscaling low-resolution precipitation datasets. For example, Wagner et al. [86] considered two covariates (i.e., distance in wind direction from the main orographic barrier and TRMM precipitation) in regression-based interpolation approaches to evaluate different spatial interpolation schemes for daily rainfall in data-scarce regions. Teng et al. [59] established MLR models and interpolated the residuals in RK, where elevation and TRMM were used as auxiliary variables to predict daily rainfall at a 1 km × 1 km scale. Park et al. [60] quantified the relationships between the TRMM data and environmental variables based on MLR and interpolated the residuals using area-to-point kriging to downscale the TRMM precipitation.
Although great efforts have been made to advance downscaling algorithms based on environmental variables, a potential source of error remains in these algorithms. The satellite precipitation datasets were predicted using global regression, which does not thoroughly account for the fact that the relationship between precipitation and environmental variables is spatially varying and scale-dependent [49]. For a spatially varying relationship, it may be more appropriate to apply local regression rather than global regression.
Geographically-weighted regression kriging (GWRK) is a hybrid technique and an extension of the GWR approach. GWRK is composed of two components, deterministic and stochastic, which are modeled separately. The deterministic component is modeled with GWR and uses the available covariate information to predict the trend of a target variable. Similar to universal kriging, GWR is a local regression approach that extends the traditional regression framework by allowing the estimation of local, rather than global, parameters. GWR has the advantage of investigating the non-stationary and scale-dependent characteristics of the relationship between the variables [49,67]. The stochastic component (residuals) can be interpolated with kriging and added to the estimated trend. Residuals are considered as errors, and it is possible that the errors have some spatial correlation structure that can be modeled. The errors can be regarded as the component of the model that cannot be explained by the deterministic part, but adding them is important because they help explain the variation of the target variable across space [88]. GWRK has been proven to be an effective method for spatial prediction [88-90]. However, few studies have investigated this approach for downscaling low-resolution satellite-based precipitation datasets.
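A GWRK-style composition can be expressed as a thin wrapper that adds kriged residuals to a GWR trend, for example reusing the gwr_predict sketch shown earlier. The kriging step is abstracted into a callable, and the function below is only a schematic of the decomposition described above, not the implementation used in the study.

```python
def gwrk(p_coarse, gwr_trend_coarse, gwr_trend_fine, krige_to_fine):
    """GWRK composition: local-regression trend plus kriged residuals.

    gwr_trend_coarse / gwr_trend_fine : GWR predictions at the coarse and
        fine resolutions (e.g., from the gwr_predict sketch above)
    krige_to_fine : callable interpolating the coarse residual field
        (e.g., by simple kriging) onto the 1-km grid
    """
    residual_coarse = p_coarse - gwr_trend_coarse   # stochastic component
    return gwr_trend_fine + krige_to_fine(residual_coarse)
```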
This study applied the GWRK downscaling algorithm to the TRMM 3B43 precipitation product in the Red River Basin, Southwest China, providing a fresh case study for similar research in other mountainous areas. GWRK can provide an alternative to global models when estimating non-stationary relationships, especially for precipitation, which is highly variable in space. Despite several limitations of the GWR approach (i.e., a smoothing effect) found in a previous study [49], GWRK presented a realistic picture of the spatial distribution of precipitation. Overall, GWRK explicitly addresses spatial dependency and represents a new and accurate way of mapping precipitation. Our work also provides a basis for water resources management using a remote sensing data source in a region with a considerable range of altitudes and an uneven spatial distribution of precipitation. However, this approach cannot be used to downscale satellite precipitation datasets over regions where precipitation has no relationship with NDVI and DEM (such as evergreen vegetation, snow-covered areas, deserts, and water bodies). The common characteristic of these regions is that there is no vegetation response to precipitation and the precipitation is usually not correlated with the topography. A very weak relationship between the dependent variable and the independent variables would be detrimental to the explanatory and predictive ability of the built regression model [49].
Conclusions
This paper investigated a geostatistical downscaling-calibration procedure for the TRMM 3B43 product, in conjunction with rain gauge data and environmental variables, to map annual and monthly gridded precipitation data at a fine spatial resolution in a mountainous watershed affected by a monsoonal climate. The following conclusions were drawn from this work:
1. A comparison between the TRMM precipitation and measurements from RGS indicated good agreement between the two datasets. Moreover, TRMM precipitation was 6.2% higher than observations from RGS at a yearly scale.
2. In the process of downscaling satellite precipitation datasets with environmental variables, it is critical to select a suitable downscaling procedure based on the relationship between satellite precipitation and the NDVI and DEM. In this study, we established a hybrid downscaling method using a regression model with residual correction. According to the comparison of different regression models and residual interpolation methods, the GWRK method was accepted for the downscaling of the TRMM data. This indicates that the non-stationary nature of the precipitation-NDVI and precipitation-DEM relationships and the spatial correlation structure of the regression residuals need to be considered when downscaling satellite precipitation datasets.
3. Calibration with rain gauge data is an essential step in the downscaling-calibration procedure. Both the GDA and GRA calibrations provided better annual precipitation when validated against rain gauge data. The GRA outperformed the GDA method in terms of the validation metrics calculated.
Downscaled TRMM precipitation using environmental factors better described the spatial patterns of precipitation with more details at the spatial resolution of 1-km when compared with the original TRMM precipitation.Moreover, the simple disaggregation procedure based on the monthly fractions was practically used to disaggregate annual precipitation to monthly precipitation.The disaggregated 1-km monthly precipitation not only improved the spatial resolution, but also agreed well with rain gauge data (i.e., R This study demonstrated that through integrated downscaling-calibration of satellite precipitation, it is feasible to map fine-resolution annual and monthly precipitation for a mountainous and monsoon-driven watershed.Since the atmospheric variables (e.g., humidity, wind, temperature, etc.) are closely related to precipitation, these variables should be considered to downscale satellite precipitation datasets in the future.In addition, the Integrated Multi-satellitE Retrievals for GPM (IMERG) product is available at finer spatial and temporal scales, which have also been proven to be better than the TRMM product [91,92].Further investigations should be conducted to examine the potential of the IMERG product for downscaling. Figure 1 . Figure 1.Elevation and distribution of meteorological stations in the Red River Basin.Figure 1. Elevation and distribution of meteorological stations in the Red River Basin. Figure 1 . Figure 1.Elevation and distribution of meteorological stations in the Red River Basin.Figure 1. Elevation and distribution of meteorological stations in the Red River Basin. 2. 2 . 5 . Rain Gauge Data Monthly precipitation observations obtained from 40 RGS during 2001-2015 were from the Meteorological Agency of Yunnan Province.The locations of the 40 rain gauges are shown in Figure 1.As depicted in Figure 1, the distribution of stations in the study area is dense in the East and relatively sparse in the West.These stations have altitudes from 137 m to 2003 m, with 85% between 1000 m and 2000 m. Figure 3 shows the variation of annual rainfall from 2001 to 2015.We selected three typical years: 2005, 2001, and 2009 to represent normal, wet, and dry conditions, respectively.These three reference years, as well as the average annual precipitation of 2001-2015, were selected for the downscaling and calibration procedure to evaluate the performance in different climatic conditions. 2. 2 . 5 . Rain Gauge Data Monthly precipitation observations obtained from 40 RGS during 2001-2015 were from the Meteorological Agency of Yunnan Province.The locations of the 40 rain gauges are shown in Figure 1.As depicted in Figure 1, the distribution of stations in the study area is dense in the East and relatively sparse in the West.These stations have altitudes from 137 m to 2003 m, with 85% between 1000 m and 2000 m. Figure 3 shows the variation of annual rainfall from 2001 to 2015.We selected three typical years: 2005, 2001, and 2009 to represent normal, wet, and dry conditions, respectively.These three reference years, as well as the average annual precipitation of 2001-2015, were selected for the downscaling and calibration procedure to evaluate the performance in different climatic conditions. Figure 3 . Figure 3. Variation of annual precipitation from 2001 to 2015 over the Red River Basin. Figure 3 . Figure 3. Variation of annual precipitation from 2001 to 2015 over the Red River Basin. Figure 4 . Figure 4. Flowchart of the downscaling-calibration algorithm used in the study. Figure 4 . 
Figure 4. Flowchart of the downscaling-calibration algorithm used in the study.

Figure 5. Comparison of precipitation measured by RGS with the original TRMM 3B43 data at multi-time scales: (a) the dry season; (b) the wet season; and (c) the whole year from 2001 to 2015. The red line indicates a 1:1 correspondence.

Figure 7. Downscaled results of annual average TRMM precipitation covering the period 2001-2015: (a) the original TRMM 3B43 precipitation at 0.25° resolution; (b) the estimated TRMM 3B43 precipitation at 0.25° resolution by GWR; (c) residual at 0.25° resolution; (d) residual at 1-km resolution; (e) the estimated TRMM 3B43 precipitation at 1-km resolution; and (f) the final downscaled precipitation map.

Figure 8. Spatial patterns of precipitation residuals between estimated precipitation by GWR and the TRMM-based precipitation of 0.25° spatial resolution using different spatial interpolation methods: (a) original grid residual map of 0.25° resolution; (b) the IDW interpolation method; (c) the simple spline regularized interpolation; (d) the simple spline tension interpolation; (e) the ordinary kriging interpolation method; and (f) the simple kriging interpolation method.

The estimated precipitation for 2001, 2005, 2009, and 2001-2015 were 1544 mm, 1320 mm, 1068 mm, and 1309 mm, respectively. The spatial distribution and variations in precipitation amount reflected well the wet, normal, and dry conditions.

Figure 9. Final calibrated downscaled annual precipitation with 1-km resolution for three reference years and the entire time period.

Figure 10. Box plots of downscaled and calibrated annual precipitation for three reference years and the entire time period.

Figure 12. Box plots of downscaled average monthly precipitation during 2001-2015.

The measurements from the validation RGS were also used to assess the accuracy of the downscaled monthly data. The 19 validation RGS were the same as those mentioned in Section 3.3.2. Figure 13a,b compares the monthly precipitation from RGS with the corresponding values from the original TRMM 3B43 data and the downscaled data. The original TRMM 3B43 data estimated the monthly precipitation with R² = 0.88, RMSE = 26.5 mm, MAE = 13.7 mm, and Bias = −0.023. The monthly DSdis data agreed well with the measured data, with an R² of 0.91 and an RMSE of 22.2 mm. The Bias of the original TRMM data in the dry and wet seasons was −0.006 and −0.026, respectively, whereas the Bias of the downscaled monthly precipitation was 0.110 in the dry season and 0.037 in the wet season (Figure 13d,f), which indicates that the downscaled monthly precipitation in turn overestimated precipitation in both the dry and wet seasons.

Figure 13. Comparison of monthly precipitation measured by RGS with (a) original monthly TRMM data; (b) downscaled monthly data; (c) original monthly TRMM data in the dry season; (d) downscaled monthly data in the dry season; (e) original monthly TRMM data in the wet season; and (f) downscaled monthly data in the wet season. The red line indicates a 1:1 correspondence and the blue line gives the linear regression best fit.

Table 1. Comparison of annual precipitation data from TRMM 3B43 and estimates based on MLR, ANN, and GWR models (95% confidence intervals are in parentheses).

Table 2. Evaluation of different interpolation methods for residuals based on the GWR model's output (95% confidence intervals are in parentheses).

Table 3. Statistics of validation results using 19 validation RGS on an annual scale for three reference years and the entire time period (95% confidence intervals are in parentheses).
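To make the hybrid "regression plus residual correction" procedure summarized in the conclusions above more concrete, the sketch below walks through its three steps: fit a regression of coarse-resolution precipitation on NDVI and DEM, add interpolated residuals back at the fine resolution, and disaggregate the annual field into monthly fields using monthly fractions. It is only an illustration under simplifying assumptions: a single global least-squares fit stands in for GWR/GWRK, block replication stands in for kriging of the residuals, and all array names, shapes, and the scale factor are hypothetical rather than taken from the study.

```python
import numpy as np

def fit_regression(precip_coarse, ndvi_coarse, dem_coarse):
    """Fit P ~ b0 + b1*NDVI + b2*DEM at the coarse (e.g., 0.25 degree) resolution."""
    X = np.column_stack([np.ones(precip_coarse.size),
                         ndvi_coarse.ravel(), dem_coarse.ravel()])
    beta, *_ = np.linalg.lstsq(X, precip_coarse.ravel(), rcond=None)
    return beta

def predict(beta, ndvi, dem):
    """Apply the fitted regression to NDVI/DEM grids at any resolution."""
    return beta[0] + beta[1] * ndvi + beta[2] * dem

def resample_residual(residual_coarse, factor):
    """Crude stand-in for residual kriging: replicate each coarse cell onto the fine grid."""
    return np.kron(residual_coarse, np.ones((factor, factor)))

def downscale_annual(precip_coarse, ndvi_coarse, dem_coarse, ndvi_fine, dem_fine, factor):
    """Regression trend at fine resolution plus interpolated coarse-scale residuals."""
    beta = fit_regression(precip_coarse, ndvi_coarse, dem_coarse)
    residual_coarse = precip_coarse - predict(beta, ndvi_coarse, dem_coarse)
    return predict(beta, ndvi_fine, dem_fine) + resample_residual(residual_coarse, factor)

def disaggregate_monthly(annual_fine, monthly_fractions):
    """Split an annual grid into 12 monthly grids; monthly_fractions has shape (12, H, W)
    and gives each month's share of the annual total in every fine-resolution cell."""
    return annual_fine[None, :, :] * monthly_fractions
```

In the study itself the trend is fitted locally (GWR), the residuals are interpolated by kriging before being added back, and the result is further calibrated against rain gauge data (GDA/GRA); the sketch only mirrors the overall structure of that workflow.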
v3-fos-license
2016-06-18T00:34:11.619Z
2016-02-18T00:00:00.000
15685626
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2016.00173/pdf", "pdf_hash": "e4f721c3a23f70f3966856092d83fb23bca250a6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42529", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "sha1": "e4f721c3a23f70f3966856092d83fb23bca250a6", "year": 2016 }
pes2o/s2orc
Global Metabolic Profiling of Arabidopsis Polyamine Oxidase 4 (AtPAO4) Loss-of-Function Mutants Exhibiting Delayed Dark-Induced Senescence Early and more recent studies have suggested that some polyamines (PAs), and particularly spermine (Spm), exhibit anti-senescence properties in plants. In this work, we have investigated the role of Arabidopsis Polyamine Oxidase 4 (PAO4), encoding a PA back-conversion oxidase, during dark-induced senescence. Two independent PAO4 (pao4-1 and pao4-2) loss-of-function mutants have been found that accumulate 10-fold higher Spm, and this associated with delayed entry into senescence under dark conditions. Mechanisms underlying pao4 delayed senescence have been studied using global metabolic profiling by GC-TOF/MS. pao4 mutants exhibit constitutively higher levels of important metabolites involved in redox regulation, central metabolism and signaling that support a priming status against oxidative stress. During senescence, interactions between PAs and oxidative, sugar and nitrogen metabolism have been detected that additively contribute to delayed entry into senescence. Our results indicate the occurrence of metabolic interactions between PAs, particularly Spm, with cell oxidative balance and transport/biosynthesis of amino acids as a strategy to cope with oxidative damage produced during senescence. INTRODUCTION Polyamines (PAs) putrescine (Put), spermidine (Spd), and spermine (Spm) are nitrogen-containing compounds of low molecular weight known to participate in stress responses (Alcázar et al., 2010;Takahashi and Kakehi, 2010;Minocha et al., 2014;Tiburcio et al., 2014). The polycationic nature of PAs enables their participation in the modulation of cell ion balance as well as in the interaction with negatively charged molecules such as membrane lipids, proteins, and nucleic acids (Schuber, 1989;Cai et al., 2014). Protection of plant cell membranes by PAs has been documented and this might underlie some of the anti-senescence properties reported (He et al., 2002;Liu et al., 2007;Del Duca et al., 2014). However, PAs cannot only be considered as mere polycations stabilizing macromolecules. Evidence indicate that PAs have intrinsic properties and some act as signaling molecules (Minocha et al., 2014;Moschou and Roubelakis-Angelakis, 2014;Tiburcio et al., 2014). Some of the reported anti-senescent effects of PAs have been associated with their ability to act as free radical scavengers and inhibitors of lipid peroxidation (Stoynova et al., 1999;Navakoudis et al., 2007;Yaakoubi et al., 2014). Therefore, the mechanisms of action of PAs seem multiple and additive. As such, the use of omic approaches might be useful for unraveling PA mechanistic processes, and to integrate PAs in the context of global metabolic networks. Polyamine levels mostly depend on the balance between PA biosynthesis and catabolism. PA catabolism is mediated by two types of amine oxidases: copper-containing amine oxidases (CuAO) and FAD-containing PA oxidases (PAO) (Cona et al., 2006;Angelini et al., 2010). Spd, Spm, and thermospermine (tSpm) are preferential substrates of PAO activity Fincato et al., 2011;Tavladoraki et al., 2012). PAOs are classified depending on whether they terminally oxidize PAs or catalyze their back-conversion (Angelini et al., 2010;Moschou et al., 2012). PAOs catalyzing PA back-conversion oxidize the carbon at the exo side of the N 4 of Spd and Spm, producing Put and Spd, respectively. 
Arabidopsis thaliana (thereafter referred to as Arabidopsis), carries five genes coding for PAOs (AtPAO1-5; Fincato et al., 2011). Tissue-and organ-specific expression studies of AtPAO gene family members have shown some overlapping patterns but also contrasted differences. This, together with their different substrate specificity, suggests a functional evolutionary diversification of the AtPAO gene members . The different subcellular localization of AtPAO proteins may further support this view. AtPAO2-4 are peroxisomal proteins, whereas AtPAO1 and AtPAO5 are predicted to be cytosolic Fincato et al., 2011Fincato et al., , 2012Kim et al., 2014). Oxidation of PAs by amine oxidases not only contributes to the regulation of PA homeostasis but also generates products linked to different biological functions (Angelini et al., 2010;Tiburcio et al., 2014). PAs are metabolically linked to reactive oxygen species (ROS) through the production of H 2 O 2 via PA catabolism (Moschou et al., 2008;Fincato et al., 2011;Ono et al., 2012). Indeed, H 2 O 2 generated by amine oxidase activity has been shown to contribute to stomatal opening , trigger programmed cell death (PCD; Tisi et al., 2011) and γ-aminobutyric acid (GABA) accumulation (Bhatnagar et al., 2002;Mohapatra et al., 2010), which is thought to participate in stress signaling (Bouché et al., 2003;Shelp et al., 2012). Peroxisomes constitute a very important source of ROS and reactive nitrogen species (RNS). Current data suggest a link between PAs and ROS/RNS in stress signaling (Molassiotis and Fotopoulos, 2011;Filippou et al., 2013;Tanou et al., 2014). However, the relationship between PAs, ROS, and RNS, and their integrated effects in plant physiology are not completely established. PAO4 exhibits high affinity for Spm oxidation, and transforms via back-conversion Spm into Spd, but not Spd into Put (Kamada-Nobusada et al., 2008;Fincato et al., 2011). Previously, Arabidopsis pao4 loss-of-function mutants were found to display high Spm and low Spd levels in roots (Kamada-Nobusada et al., 2008). From a signaling perspective, Spm can modify the expression of several genes encoding redox components (Kamada-Nobusada et al., 2008;Mitsuya et al., 2009). Blockage of Spm oxidation by exogenous inhibitors suppressed this transcriptional response, thus suggesting that H 2 O 2 derived from Spm oxidation underlies this response (Mitsuya et al., 2009). Even though a potential signaling role has been recognized for Spm through transcriptional approaches, global metabolite profiling in engineered genotypes in which Spm levels are endogenously affected are, to our knowledge, not yet reported. Such studies might provide clearer associations between genotypes and stresstolerance phenotypes, as well as a better integration of PAs in the context of global metabolic networks (Bitrián et al., 2012). In this work, we have studied the involvement of AtPAO4 in Arabidopsis during dark-induced senescence, through the phenotypic analysis of two independent pao4 loss-of-function mutant alleles (pao4-1 and pao4-2). We demonstrate that pao4 mutation leads to delayed dark-induced senescence. Global metabolic profiling of pao4 mutants and wild-type plants was carried out to investigate mechanisms linked to primary metabolism that underlie the anti-senescent properties. We found that pao4 mutation promotes the accumulation of hub metabolites in central metabolism and phytohormone biosynthesis, which are known to protect plants against abiotic stress. 
We also found interactions between PAs and oxidative, sugar, lipid, and nitrogen metabolism. Our results indicate that Spm accumulation modifies the metabolic profile of Arabidopsis plants, thus delaying dark-induced senescence. Plant Material and Growth Conditions Arabidopsis thaliana accession Columbia-0 (Col-0) was used as wild type (WT) in this study. Seeds were stratified for 3 days in the dark at 4 • C and sown in pots containing a mixture of soil and vermiculite (1:1 [v/v]), irrigated with water and Hoagland-based mineral solution and grown at 21 • C under long-day photoperiod (16 h of white fluorescent light, photon flux of 70-90 mmol m −2 s −1 ). Dark-induced senescence was carried out on adult plants. Fully expanded leaves from 4-week-old plants were used for all analyses. Dark-induced senescence was established essentially as described (Fotopoulos and Kanellis, 2013). In brief, leaves were floated on water in 25 mm-diameter Petri dishes and incubated in the dark at ambient temperature for a period of 4 days. I (RNase-free; Promega USA) and reverse-transcribed using the SUPERSCRIPT First-Strand Synthesis kit (Invitrogen) following manufacturer's instructions. PCR from equal amounts of cDNA was performed using AtPAO4-specific primers and TaKaRa Ex Taq TM . Amplification of the Arabidopsis Actin 2 gene (AT3G18780.2) (forward primer, 5 -TCACCACAACA GCAGAGCGGGA -3 and reverse primer, 5 -GAAGATGCCCA GAAGTCT -3 ) was used for normalization. The PCR conditions were as follows: 96 • C 5 min, followed by 35 cycles (5 s at 96 • C, 10 s at 64 • C, and 40 s at 72 • C). PCR products were separated on a 1.0% agarose gel. The analysis was repeated three times with identical results. Polyamine Analyses Polyamines were analyzed by high-performance liquid chromatography (HPLC) separation of dansyl chloride derivatives. The extraction and determination methods have been previously described (Marcé et al., 1995). The analyses were performed in triplicates from three or more independent experiments. Pigments Content Leaf pigments were extracted from 12 mm leaf disks in dimethyl sulfoxide as described by Richardson (Richardson et al., 2002). Chlorophyll concentrations were determined using the equations described by Sims and Gamon (2002). Protein Extraction Total protein was extracted with phenol, as previously described (Wang et al., 2006). Protein concentration was determined by Bradford (Bio-Rad), diluted to a final concentration of 20 µg/µl, and stored at −20 • C. 20 µg of total protein extracts were separated by SDS-PAGE in 12.5% acrylamide gels. Bands were resolved using Colloidal Comassie Brilliant Blue G-250 stain. Hydrogen Peroxide and Nitric Oxide Quantification Hydrogen peroxide was quantified using the KI method, as described by Velikova et al. (2000). Nitrite-derived NO content was measured using the Griess reagent in homogenates prepared with Na-acetate buffer (pH 3.6) as described by Zhou et al. (2005). NO content was calculated by comparison to a standard curve of NaNO 2 . Lipid Peroxidation Lipid peroxidation was determined measuring malondialdehyde (MDA) content resulting from the thiobarbituric acid (TBA) reaction using an extinction coefficient of 155 mM −1 cm −1 as described by Hodges et al. (1999). Metabolite Profiling Metabolite profiling by GC-time of flight (TOF)-MS was performed as previously described (Lisec et al., 2006;Erban et al., 2007). 
110 mg of frozen ground homogenized material from rosette leaves was extracted in 360 µL of methanol including internal standard ([ 13 C 6 ] -sorbitol) at 70 • C for 15 min and with 200 µL of chloroform at 37 • C for 5 min. The polar fraction was prepared by liquid partitioning with 400 µL of water. An aliquot of 80 µL from the upper polar phase was dried in a Speed Vacuum Concentrator for derivatization by methoxyamination in pyridine (40 mg/mL) and subsequent trimethylsilylation in a final volume of 80 µL. Alkanes were added to pyridine for use as retention index standards. Samples were measured using GC-TOF-MS (LECO Instrumente GmbH, Mönchengladbach, Germany). Chromatograms and mass spectra were processed and evaluated using TagFinder software (Luedemann et al., 2008). Metabolite identification was manually supervised using the mass spectral and retention index collection of the Golm Metabolome Database (Kopka et al., 2005;Hummel et al., 2010). Peak heights of the mass fragments were normalized based on sample fresh weight and internal standard [ 13 C 6 ]-sorbitol. Metabolic implication of reported altered metabolites in this work, further classification and simplified metabolic maps were made by the use of public database KEGG (Kanehisa and Goto, 1999;Kanehisa et al., 2014) and AraCyc developed by Plant Metabolic Network project (PMN; Mueller et al., 2003;Chae et al., 2012). Statistical Analyses Statistical analyses were performed using IBM R SPSS R Statistics V.22. Biochemical and physiological damage measurements were subjected to ANOVA. Significant differences between individual means were determined using Tukey's HSD (Honestly significant difference) pairwise comparison test at the 5% confidence level. Data from metabolomics were analyzed and heat maps obtained from MeV: MultiExperiment Viewer v.4.9 (Saeed et al., 2003). PA Levels in pao4 Mutants The levels of free Put, Spd, and Spm levels were analyzed in 4 weeks-old pao4-1, pao4-2 and wild-type plants. PA analyses indicated that both pao4 mutants accumulated up to 10-fold higher levels of Spm than the wild-type, consistent with Spm being the preferential substrate of PAO4 activity. Conversely, both pao4-1 and pao4-2 mutants exhibited lower Spd levels than the wild-type ( Figure 1B). The levels of Put were only increased in pao4-2 and not in pao4-1, probably as result of the residual PAO4 expression in the latter. We concluded that accumulation of Spm and dampening of Spd levels are common metabolic hallmarks of pao4-1 and pao4-2. Dark-Induced Senescence in pao4 Mutants We investigated the differential response of pao4 mutants and wild-type plants to early senescence induced by dark treatment. For this, detached mature leaves from 4 week-old pao4 mutants and wild-type plants grown under optimal conditions were used. No differences in size, senescence status (determined by total chlorophyll and protein levels) or turgor were visible between leaves of the wild-type and pao4 mutant before the dark-induced treatments (data not shown). Interestingly, both pao4-1 and pao4-2 mutants evidenced signs of delayed senescence after 4 days of continuous dark treatment (Figure 2A). Total protein levels were measured to quantify the extent of senescence delay induced by PAO4 mutation. Protein levels were significantly higher in pao4-1 and pao4-2 than the wild type, thus suggesting a lower rate of protein degradation consistent with delayed senescence (Figure 2B). 
Quantification of chlorophylls in pao4-1 and pao4-2 further supported these observations (Figure 2C), suggesting that pao4 mutation leads to delayed dark-induced senescence. Polyamine levels were determined during senescence in pao4 mutants and wild-type. Levels remained constant for most PAs throughout the induced senescence, except for Spd levels, which dropped in pao4 from fivefold lower than the wild-type under basal conditions to 10-fold lower than the wild-type after senescence treatment ( Figure 2D). H 2 O 2 , MDA, and NO Levels in pao4 Mutants During Dark-Induced Senescence Reactive oxygen species and RNS are important players of the oxidative and nitrosative response that exhibit contrasted effects on senescence. While ROS generally promote senescence (Khanna-Chopra, 2012), RNS might underlie anti-senescence effects (Niu and Guo, 2012;Liu and Guo, 2013). We measured H 2 O 2 and NO levels in pao4-1, pao4-2 and wildtype plants after dark-induced senescence (Figure 3). Both pao4 mutants exhibited lower H 2 O 2 levels than the wildtype plant after the senescence treatment, thus suggesting the enhancement of the antioxidative machinery in pao4 ( Figure 3A). Consistent with these observations, the levels of MDA (a measurement of membrane damage by lipid peroxidation) were significantly lower in pao4 than the wild-type ( Figure 3A). Interestingly, the levels of NO exhibited an opposite pattern and accumulated in pao4 compared with the wild-type ( Figure 3B). We concluded that ROS production induced by senescence is restricted in pao4 mutants, whereas NO production is stimulated. Metabolomic Profiling of pao4 Mutants Under Basal Conditions In order to analyze the metabolic consequences of PAO4 lossof-function on primary metabolism, we performed GC-TOF/MS metabolomic profiling (Erban et al., 2007;Allwood et al., 2011) in 4-week-old pao4 mutants and wild-type plants grown under optimal conditions in the absence of stress, referred to as 'basal' conditions. Primary metabolite profiling identified a total of 75 metabolites, 37 of which did not show significant differences respect to the wild-type (Supplementary Table S1). From the remaining 38 metabolites, 28 were increased (Figure 4) and 10 decreased in pao4 compared to the wild-type (Supplementary Table S2). Most down-regulated metabolites could not be classified into metabolic groups, because their chemical structure is unknown (Supplementary Table S2). Up-regulated metabolites could be sorted into four major metabolic categories belonging to oxidative and nitrogen metabolism, sugars and lipids. However, many metabolites were shared between categories ( Figure 4A). Increased metabolites in pao4 included sugars (galactose), sugar alcohols (myo-Inositol, erythritol), ethanolamine and many amino acids (Ser; aromatic amino acids Phe and Tyr; precursors of PAs Orn and Met; branched-chain amino acids Ile and Val). Indeed, amino acids represented the largest group of up-regulated metabolites in pao4 under basal conditions ( Figure 4A). Other important upregulated metabolites included pyruvate, which is a crucial hub metabolite, GABA, which is suggested to participate in stress responses, and ascorbate/dehydroascorbate (ASC/DHA), which are important metabolites involved in antioxidant defense pathways. Pearson's correlation analyses indicated the occurrence of strong positive correlations between Spm and up-regulated metabolites, but negative correlations with Spd (P < 0.05; Figure 4B). 
Based on these analyses, we conclude that pao4 mutants exhibit constitutive accumulation of several amino acids and important stress-protection metabolites, and this associates with higher Spm levels and/or Spm/Spd ratios.

Metabolomic Profiling of pao4 Mutants After Dark-Induced Senescence

Metabolomic profiling after dark-induced senescence in pao4 and wild-type leaves identified a total of 103 metabolites (Figure 5A and Supplementary Table S3), 28 of which exhibited significant differences between pao4 and wild-type senescent leaves (Figure 5A). Among these, 13 metabolites were up-regulated and 15 down-regulated in pao4 compared to the wild-type (Figure 5A). Eight of the 13 up-regulated metabolites were already increased in pao4 compared to the wild-type under basal conditions (Figures 4A and 5A). Such constitutively up-regulated metabolites were the PAs Put and Spm, the antioxidative metabolites ASC/DHA, myo-Inositol, GABA and the amino acids Thr and Phe. Among up-regulated metabolites exclusively induced after the senescence treatment in pao4, and not in the wild-type, we identified sugars (glucose and xylose) and the TCA cycle intermediate 2-oxoglutarate (Figure 5A). Down-regulated metabolites in senescent pao4 leaves were amino acids involved in senescence signaling such as Glu, pyroglutamate, Trp, Asn, and 3-cyanoalanine (Figure 5A). The decrease in Glu and Asn is associated with late senescence partly because Asn and 3-cyanoalanine are products of the cyanide detoxification pathway induced by ethylene biosynthesis (Diaz et al., 2005). Other molecules involved in glucose biosynthesis/degradation, such as α,α-trehalose, were down-regulated in pao4. A strong positive correlation was found between up-regulated metabolites in senescent pao4 leaves and Spm levels, but negative correlations with Spd (P < 0.05; Figure 5B). Conversely, down-regulated metabolites showed an opposite pattern of strong positive correlation with Spd but negative with Spm, suggesting that homeostasis of these PAs may be relevant in the response to senescence (P < 0.05; Figure 5B).

FIGURE 5 | Heat map of pao4 mutants' altered metabolite pools after dark-induced senescence. (A) Metabolite levels were determined from detached leaves of 4 week-old plants grown as described (see Materials and Methods). Values represent log2-transformed fold-changes relative to the wild-type. The first group (top) represents up-regulated metabolites in pao4 compared with wild-type, and the second group (bottom) down-regulated metabolites in the same comparison. Altered metabolites were detected with the MeV tool V.4.9 by the rank product statistical test (P < 0.05). (B) Pearson correlation values (r) related to Spm and Spd; values were obtained from at least four independent biological experiments (P < 0.05).

DISCUSSION

The identification of metabolic networks in which PAs are integrated is a necessary step to elucidate potential mechanisms underlying PA-triggered stress protection (Shi and Chan, 2014). Here, we report that loss-of-function mutations in PAO4, one of the five members of the Arabidopsis AtPAO gene family, lead to delayed dark-induced senescence, and this associates with higher Spm and/or lower Spd/Spm ratios. Accumulation of Spm in pao4 mutants (Figure 1B) is consistent with the reported higher affinity of the PAO4 enzyme toward Spm (Kamada-Nobusada et al., 2008; Fincato et al., 2011). Given the previously reported anti-senescence properties of Spm in plants and animals (Pandey et al., 2000; Serafini-Fracassini et al., 2010; Del Duca et al.,
2014; Moschou and Roubelakis-Angelakis, 2014), and the high Spm levels in pao4-1 and pao4-2 mutants, current findings suggest that the delayed pao4 senescence may be associated with the endogenous Spm levels. However, because pao4 mutants also exhibit lower Spd levels, it cannot be completely ruled out that the Spd/Spm ratio may modulate this response. In any case, global metabolic analyses in both pao4 mutants indicated that primary metabolism is intricately connected with PA metabolism, and this is differentially regulated in pao4 under senescence conditions. Our results indicate that loss of PAO4 functionality is beneficial to prevent senescence under darkinductive conditions. Global metabolite analyses in pao4 mutants under basal conditions (Figure 4) identified amino acids as the largest group of metabolites which were up-regulated, compared with wildtype plants. Up-regulated amino acids included PA precursors (Met and Orn), branched-chain amino acids, aromatic and polar uncharged, which are essential for post-translational modifications. In addition, most altered amino acids were either involved in day/night cycle transitions (Gibon et al., 2006) or adaptation to extended dark conditions (Gibon et al., 2006(Gibon et al., , 2009). Spm has previously been shown to reprogram the oxidative status of citrus plants exposed to salt stress, and to increase the ASC redox state (Tanou et al., 2014). In this study, the metabolic profile FIGURE 6 | Metabolic interactions between PAs and primary metabolism, and the observed increases/decreases in the comparisons between pao4 and wild-type under dark-induced senescence. 2-HG, 2-Hydroxyglutarate; 2HGDH, 2-Hydroxyglutarate Dehydrogenase; ETF, Electron-transfer flavoprotein; ETFQO, Electron-transfer flavoprotein: Ubiquinone oxidoreductase; 2-OG, 2-Oxo-glutarate; SAM, S-Adenosylmethionine; dcSAM, Decarboxilated S-Adenosylmethionine; ACC, Aminocyclopropane Carboxilic Acid; CNH, Hydrogen Cyanide; IAA, Indole-3-Acetic Acid; SA, Salycilic Acid; JA, Jasmonic acid N-Ac-Glu; N-Acetyl-L-Glutamate; Ac-Orn, Acetylornithine; L-Glu-Cys, L-Glutamylcysteine. of pao4 suggests the constitutive enhancement of anti-oxidative mechanisms, mainly through the accumulation of ASC/DHA, nicotinate and sinapate, which are essential metabolites in the maintenance of anti-oxidative capacity (Hashida et al., 2010;Wang et al., 2010;Foyer and Noctor, 2011;Gallie, 2012). pao4-1 and pao4-2 exhibited accumulation of metabolites in central metabolism and signaling hubs under basal conditions. Such metabolites included pyruvate and myo-Inositol, which is involved in sugar and phospholipid signaling (Gillaspy, 2011;Williams et al., 2015). AtPAO4 loss-of-function also led to the up-regulation of nitrogen-mobilization molecules, such as GABA (Bitrián et al., 2012;Shelp et al., 2012). The role of GABA during stress remains unclear. However, GABA has been proposed to act as a signaling molecule that coordinates the C:N balance in challenging environments, such as prolonged dark conditions (Buchanan-Wollaston et al., 2005). GABA also serves as nitrogenstorage molecule during nitro-oxidative stress (Tanou et al., 2012). Overall, the metabolic profile of pao4 mutants under basal conditions is consistent with a prime-like status, in which the antioxidant machinery is pre-activated and GABA accumulates. It is therefore suggested that Spm and/or low Spd/Spm ratio triggers pre-acclimation to stress in Arabidopsis. 
Subsequently, mechanisms underlying the pao4 antisenescence phenotype from a metabolic perspective were investigated. The levels of H 2 O 2 and NO were determined in wild-type and pao4 mutants after dark treatment. Interestingly, delayed senescence in pao4 correlated with significant increases in NO levels ( Figure 3B), which is a pattern consistent with previous observations (Niu and Guo, 2012;Liu and Guo, 2013). Conversely, the levels of H 2 O 2 were lower in pao4 than wild-type plants (Figure 3A), which is in agreement with promotion of the ASC/DHA cycle in pao4 ( Figure 4A) and supports previous findings in which ROS inhibition leads to delayed senescence in tobacco and wheat (Hui et al., 2012;Fotopoulos and Kanellis, 2013;Tian et al., 2013). NO might be an inductive element of the oxidative response after stress imposition (Linka and Theodoulou, 2013;Corpas and Barroso, 2014). It can be hypothesized that priming by Spm confer a more intense dark-induced stress response involving NO signaling. Compared with the wild-type, most metabolites altered by dark in pao4 were related to oxidative and nitrogen metabolism (Figure 5). Down-regulated metabolites in dark-treated pao4 were amino acids and compounds involved in their metabolism ( Figure 5A). This pattern is consistent with high nitrogen mobilization in pao4 induced by senescence (Soudry et al., 2005). Indeed, interactions have been observed between PA and amino acid metabolism during senescence in Arabidopsis (Mattoo et al., 2010;Watanabe et al., 2013). NO is also known to be involved in the regulation of free amino acid levels during the stress response by induction of the γ-glutamyl cycle for GSH biosynthesis (Innocenti et al., 2007), and through modulation of proteolytic mechanisms such as autophagy or the TOR pathway in Arabidopsis and other species (López-Berges et al., 2010;Tripathi et al., 2013). Some down-regulated amino acids by dark-induced senescence in pao4 have important implications in senescence signaling. As such, Glu influences adaptation to dark periods in Arabidopsis (Gibon et al., 2009). Glu is also a product of glutathione catabolism along with pyroglutamate (Ohkama-Ohtsu et al., 2007, which is also involved in mitochondrial reassembly during oxidative stress (Obata et al., 2011) and GABA formation (Soudry et al., 2005;Watanabe et al., 2013). Recent evidence also indicates that increases in nitrogen assimilation favors GSH biosynthesis with concomitant decreases in pyroglutamate and Glu levels (Paulose et al., 2013). The above data suggest the potential modulation of GSH homeostasis by PAO4 activity, which conditions Spm or Spd/Spm ratio. Metabolite profiling suggests the occurrence of a Spmtriggered oxidative response involved in the maintenance of the redox status throughout modulation of amino acid transport and recycling. Trp is a main precursor of the phytohormone indole-3-acetic acid (IAA; Zhao, 2014), and it participates in plant development and dark-induced senescence signaling (Van der Graaff et al., 2006). Asn and 3-cyanoalanine are products of cyanide detoxification pathway (Piotrowski et al., 2001), which is activated after the final biosynthetic reaction of ethylene (Yamagami et al., 2003). Both Asn and 3-cyanoalanine are considered as senescence markers (Van der Graaff et al., 2006;Watanabe et al., 2013). Cross-talk between PAs and hormones such as ethylene and IAA has been reported, but the molecular nature of such interactions remains elusive (Bitrián et al., 2012). 
Because pao4 mutants display lower levels of 3-cyanoalanine, Asn and Trp, it is suggested that high Spm levels might promote delayed entry into dark-induced senescence through inhibition of ethylene biosynthesis, although this requires further investigation. Aromatic and branched-chain amino acids have been shown to act as alternative electron donors for mitochondrial respiration during the stress response, in a process whereby the hydrolysis of 2-hydroxyglutarate (2-HG) produces 2-oxoglutarate (2-OG) with concomitant release of electrons donated to ubiquinol via the ETFQO complex (Ishizaki et al., 2005;Araújo et al., 2010Araújo et al., , 2011Obata et al., 2011). Interestingly, Phe, 2-HG, and 2-OG were increased in pao4 mutants compared with wild-type plants, thus suggesting that Spm promotes the alternative electron donor pathway for mitochondrial respiration (Figure 5). In support to this view, an Spm-induced signaling pathway leading to mitochondrial dysfunction has previously been reported during biotic stress in tobacco and Arabidopsis (Takahashi et al., 2004;Mitsuya et al., 2009). Therefore, it seems reasonable that increases in Spm and NO might enhance mitochondrial energy production after dark-induced senescence. Other molecules involved in glucose biosynthesis/degradation and enhancement of oxidative burst were also identified, such as α,α, Trehalose (O'Hara et al., 2013). This metabolite has emerged as a redox signaling molecule with a proposed role during stress and senescence (Fernandez et al., 2010;Krasensky et al., 2014). Trehalose degradation confers drought tolerance by producing glucose (Van Houtte et al., 2013), a pattern which has also been observed during dark-induced senescence (Buchanan-Wollaston et al., 2005;Gibon et al., 2006), and is consistent with the increase in glucose levels observed in pao4 after dark treatment ( Figure 5). Furthermore, increases in xylose observed in dark-treated pao4 plants suggest activation of the phosphate-pentose pathway, which is reported to be up-regulated in Arabidopsis roots after oxidative stress imposition (Lehmann et al., 2009) as a source of reducing equivalents in peroxisomes for GSH biosynthesis (Corpas et al., 2009). Increased lactate was also found, which is consistent with a link between sugar and pyruvate-related amino acid metabolism. Overall, we provide a global view of metabolic changes affected by PAO4 mutation in Arabidopsis, which are associated with delayed entry into dark-induced senescence (Figure 6). Current findings suggest that the delayed pao4 senescence may be associated with high Spm levels, reduced ROS production and increased NO levels. Furthermore, our results point to an important role of Spm as a 'signaling' metabolite promoting stress protection through metabolic connections involving ASC/GSH redox state modifications, changes in sugar and nitrogen metabolism, cross-talk with ethylene biosynthesis and mitochondrial electron transport chain modulation, all of which are involved in the nitro-oxidative response after stress imposition.
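As a toy illustration of the data-reduction steps described in the Materials and Methods above (peak heights normalized to sample fresh weight and the [13C6]-sorbitol internal standard, log2 fold-changes relative to the wild-type, and Pearson correlations of metabolite levels with Spm or Spd as in Figures 4B and 5B), the sketch below shows how such quantities could be computed. It is not the authors' pipeline (the study used TagFinder, MeV, and SPSS); the function names and the toy replicate values are hypothetical.

```python
import numpy as np

def normalize(peak_height, fresh_weight_mg, internal_standard_height):
    """Normalize a raw mass-fragment peak height to sample fresh weight and the
    internal standard signal, as described in the Methods."""
    return peak_height / (fresh_weight_mg * internal_standard_height)

def log2_fold_change(mutant_levels, wildtype_levels):
    """log2 of the mutant mean relative to the wild-type mean for one metabolite."""
    return np.log2(np.mean(mutant_levels) / np.mean(wildtype_levels))

def pearson_r(metabolite_levels, polyamine_levels):
    """Pearson correlation between one metabolite and Spm (or Spd) across samples."""
    return np.corrcoef(metabolite_levels, polyamine_levels)[0, 1]

# Hypothetical normalized values for four biological replicates of one metabolite:
pao4_levels = np.array([3.2, 3.0, 3.6, 3.3])
wt_levels   = np.array([1.9, 2.1, 2.0, 1.8])
spm_levels  = np.array([12.0, 11.5, 13.1, 12.4])

print(round(log2_fold_change(pao4_levels, wt_levels), 2))  # positive value: up-regulated in pao4
print(round(pearson_r(pao4_levels, spm_levels), 2))        # correlation with Spm across samples
```

Group comparisons in the study itself were made by ANOVA with Tukey's HSD in SPSS and by the rank product test in MeV; this sketch only covers the normalization, fold-change, and correlation arithmetic.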
v3-fos-license
2024-06-17T06:17:23.970Z
2024-06-15T00:00:00.000
270520594
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00431-024-05643-y.pdf", "pdf_hash": "0e92558bc05fc2869f64233a35c98e32a707cbe2", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42531", "s2fieldsofstudy": [ "Medicine" ], "sha1": "a26fa34d8f17a5f4edb0d9b87e86529e34eeb567", "year": 2024 }
pes2o/s2orc
Review of patients with achondroplasia: a single-center's experience with follow-up and associated morbidities

Achondroplasia (ACH; MIM #100,800), caused by a heterozygous gain-of-function pathogenic variant in the fibroblast growth factor receptor 3 gene (FGFR3; MIM*134,934), is the most prevalent and most readily identifiable cause of disproportionate short stature that is compatible with life. In addition, individuals with achondroplasia face significant medical, functional, and psychosocial challenges throughout their lives. This study assessed associated morbidities in patients with achondroplasia at a single center in Turkey: the clinical findings and associated morbidities of a group of patients with achondroplasia (n = 68) with clinical multidisciplinary follow-up at a single center between the years 2005-2023 were evaluated retrospectively. A total of 68 patients, 30 male (44.1%) and 38 female (55.9%), were evaluated. In the majority (84.2%) of patients, shortness of the extremities was detected in the prenatal period, at an average of 28.7 gestational weeks (± 3.6 SD), with the aid of ultrasonography. More than half (n = 34/63, 54%) of the patients had a father of advanced paternal age (≥ 35 years). Among the complications, respiratory manifestations including obstructive sleep apnea (70%), ear-nose-throat manifestations including adenoid hypertrophy (56.6%) and otitis media (54.7%), neurological manifestations due to foramen magnum stenosis (53.2%), and skeletal manifestations including scoliosis (28.8%) were among the most common. The mortality rate was 7.3% (n = 5/68). Conclusion: This study not only represents the first retrospective analysis of the associated morbidities of patients with achondroplasia from a single center in Turkey but also provides a reference point for future studies.

Introduction

Achondroplasia (ACH; MIM#100,800) is the most common cause of disproportionate short stature worldwide. According to the best estimates, it occurs in 1:25,000-1:30,000 live births [1]. The most characteristic clinical findings include rhizomelic shortening of the limbs, macrocephaly, and characteristic facial features with a low nasal bridge, frontal bossing, and midface hypoplasia. Hypotonia is typical in infancy and early childhood, and the acquisition of motor developmental milestones is often delayed [2]. Nevertheless, intelligence is generally normal. ACH is caused by heterozygous pathogenic variants in the fibroblast growth factor receptor 3 gene (FGFR3, MIM*134934), a transmembrane receptor tyrosine kinase [3,4]. About 98% of patients with ACH have the variant c.1138G > A, while only 1% have c.1138G > C, both of which cause a glycine-to-arginine substitution at amino acid position 380, p.Gly380Arg. In 80% of cases, ACH is sporadic, and the disease is due to a de novo pathogenic variant [5]. Such de novo variants occur exclusively in the father's germline and increase in frequency with advanced paternal age (> 35 years) [6]. The diagnosis of ACH can be established on the basis of clinical and radiographic features (short tubular bones, generalized metaphyseal changes, proximal femoral radiolucency, narrowing of the interpedicular distance of the caudal spine, and narrow sacrosciatic notch) (Fig. 1).
ACH requires multidisciplinary follow-up in order to prevent life-threatening complications and increase quality of life [7][8][9]. Complications such as craniocervical junction compression, central and obstructive sleep apnea, recurrent otitis media, conductive hearing loss, bowing of the lower legs, kyphosis, spinal stenosis, and obesity may occur [7,10] (Fig. 2). The overall life expectancy is decreased by approximately 10 years in the present adult ACH population [11,12]. In this study, we aimed to evaluate the associated morbidities of patients with ACH who are under follow-up at a single center in Turkey.

Fig. 1 The typical radiographic findings in ACH. Please note: 1a) generalized metaphyseal changes and proximal femoral radiolucency; 1b) short tubular bones, rhizomelic shortening of the arms, and trident hand; 1c) proximal femoral radiolucency and narrow sacrosciatic notch; 1d) narrowing of the interpedicular distance of the caudal spine in the infancy period; 2a-2d) progressive scoliosis in a patient aged 6, 9, 16, and 19 years; 3a-3b) genu varum in a patient and postoperative radiograph aged 8-9 years; 4a-4b) genu valgum in a patient and postoperative radiograph aged 15-17 years

Materials and methods

Patients with ACH, regardless of molecular confirmation, who were under clinical follow-up at Hacettepe University Faculty of Medicine, Department of Pediatric Genetics between the years 2005-2023, were included in the study. The prevalence of developmental and behavioral, neurologic, respiratory, ear and hearing, musculoskeletal, and dental findings was recorded and analyzed. Neuroimaging findings were evaluated when magnetic resonance imaging (MRI) scans were available. Data were gathered retrospectively from hospital records. Information regarding the age at diagnosis, parental ages, body mass index, developmental assessment (formal/verbal), molecular tests, and history of recurrent otitis media and/or ventilation tube insertion, adenoid hypertrophy and/or adenoidectomy, hearing loss, snoring, obstructive sleep apnea, cardiac anomalies, skeletal and neurological symptoms, surgeries (neurosurgery/orthopedic), and mortality was recorded. Associated morbidities detected in patients were divided into two age groups, childhood and adult (> 18 years), according to the current age of the patient at the time of this study.

The Denver II developmental screening test had been used for the standard assessment of development in patients with ACH younger than 6 years at our institution. Patients older than 6 years of age were assessed cognitively by the use of the WISC-R (Wechsler Intelligence Scale for Children-Revised).
The diagnosis of hydrocephalus was established through the evaluation of various radiographic parameters, including ventricular dilation (Evans' index > 0.3), narrow sulci, distension of the third ventricular recesses, bulging third ventricular floor, elevated corpus callosum, reduced mamillopontine distance, widening of the lateral ventricular horns, narrowing of the ventricular angle, and aqueductal flow void on MRI examinations [13].Scoliosis examination was performed by specifically measuring the Cobb angle on X-rays to determine the degree of spinal curvature.A diagnosis of scoliosis was made when a coronal curvature assessed on a posterior-anterior radiograph exceeded 10 degrees.Foramen magnum stenosis was evaluated based on the presence of MRI findings, including narrowing of the craniocervical junction, loss of cerebrospinal fluid space, cord compression, and increased T2 cord signal [14]. Body mass index (BMI) was assessed according to the Centers for Disease Control and Prevention (CDC) guidelines for those aged 2 to 20 years [15].When compared to the normal population, 5-85 percentile were considered healthy weight, 85-95 percentile overweight, and ≥ 95 percentile obese in the 2-20 age group.For patients over 20 years, a BMI between 25 and 30 was defined as overweight, and a BMI of 30 or higher as obese.Patients classified as overweight or obese according to normal population data were then re-evaluated according to the BMI charts for patients aged 0-16 years with ACH, as detailed in the study by Hoover-Fong et al. [16]. Statistical analyses were performed using SPSS for Windows Version 22.0.Numerical variables were summarized by mean ± standard deviation, discrete numeric variables were expressed as medians (minimum-maximum), and qualitative variables were summarized by number and percentage. Results The data of 68 patients with ACH were analyzed retrospectively.Clinical findings, demographic features, and associated morbidities of the patients are presented in Table 1 and Fig. 1-2. Demographic findings Thirty patients were male (44.1%), and 38 were female (55.9%), with a male-to-female ratio of 0.79.Advanced paternal age (≥ 35 years) was observed in 34 patients (54%).Parental consanguinity was present in 15 patients (22%).Upon examining the last recorded ages of our patients, the youngest was 5 months old, and the oldest was 28 years old. Prenatal findings The finding of shortness of extremities was detected in 84.2% of the patients during the prenatal period, at an average of 28.7 weeks (± 3.6 SD) with the aid of ultrasound.Macrocephaly was found in 10 patients, a narrow thorax in one patient, and polyhydramnios in two patients with shortness of extremities.Premature birth occurred in 10 patients (between 30-35 weeks). Diagnostic findings While the median age at clinical diagnosis was 2 months (range: 0-153 months), the median age at molecular diagnosis was 12 months (range: 0-336 months).One patient had a molecular diagnosis with a prenatal test.Sanger sequencing results for 31 patients (45.6%) were available.Almost all (n = 30/31, 96.8%) had a heterozygous pathogenic variant in FGFR3 c.1138G > A;p.Gly380Arg.One patient had a maternally inherited heterozygous, and previously reported FGFR3 c.1031C > G; p.Ser344Cys pathogenic variant [4].The majority of cases were sporadic.A family history of ACH was positive in only four patients (n = 4/68, 5.9%). 
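The BMI classification rules stated in the Methods above (percentile cut-offs against reference charts for ages 2-20 years, absolute BMI thresholds for adults) can be summarized in a few lines of code. The sketch below is only an illustration of those stated cut-offs: the percentile itself has to come from the CDC or ACH-specific reference charts and is therefore passed in as a precomputed argument, and the function name and the second example's values are hypothetical (the first example reuses a patient age and BMI quoted later in the Results).

```python
def classify_bmi(age_years, bmi, bmi_percentile=None):
    """Classify weight status using the cut-offs described in the Methods.

    Ages 2-20 years are classified by BMI percentile against reference charts
    (5th-85th healthy, 85th-95th overweight, >=95th obese); adults over 20 years
    by absolute BMI (25-30 overweight, >=30 obese). A category below the 5th
    percentile is not defined in the text and is not handled here.
    """
    if 2 <= age_years <= 20:
        if bmi_percentile is None:
            raise ValueError("ages 2-20 require a BMI percentile from reference charts")
        if bmi_percentile >= 95:
            return "obese"
        if bmi_percentile >= 85:
            return "overweight"
        return "healthy weight"
    if bmi >= 30:
        return "obese"
    if bmi >= 25:
        return "overweight"
    return "healthy weight"

print(classify_bmi(27, 30.6))                     # adult threshold -> "obese"
print(classify_bmi(10, 21.0, bmi_percentile=90))  # percentile-based -> "overweight"
```

The same two-step logic mirrors how overweight or obese patients in the study were first flagged against general-population references and then re-evaluated against ACH-specific BMI charts.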
Ears-hearing and respiratory findings (n = 53) Twenty-nine patients with ACH (54.7%, n = 29/53) had recurrent otitis media, and a ventilation tube was inserted in 16 (55.2%,n = 16/29).Thirty patients exhibited adenoid hypertrophy and 19 of them underwent adenoidectomy (63.3%, n = 19/30).Five patients experienced hearing deficiency (9.4%, n = 5/53).There were no patients presenting with ear, nose, and throat findings for the first time in adulthood.In terms of symptoms, 50 patients (94.3%) reported snoring, and 16 experienced apnea (30.2%).Obstructive sleep apnea was diagnosed in 14 of the 20 patients (70%) who underwent polysomnography (PSG) as part of routine follow-up.The families of two of these patients did not report any instances of apnea.Nine patients had been using CPAP (continuous positive airway pressure) therapy.Three patients persisted in snoring throughout adulthood; however, they did not describe apnea clinically, and PSG was not performed in adulthood. Growth and developmental findings Anthropometric measurements of the patients were evaluated according to ACH-specific growth curves [17].Among our patients, the maximum height was 135 cm for men and 123 cm for women.Of the patients who were over 2 years at the last check-up and for whom height and weight data were available (n = 37/68, 54.4%), thirteen were overweight and eighteen were obese.However, when 28 of these 31 patients aged 0-16 years were evaluated according to the ACH BMI curve, only 2 patients (5.4%) had a BMI above the 95th percentile (Fig. 3).The other three patients were 19, 19, and 27 years old at last visit with BMIs of 32.4,27.4, and 30.6, respectively.Of the 68 patients, 20 underwent a formal assessment of development (n = 20/68, 29.4%).When assessing the developmental status of patients with ACH, a specific chart was not used, but developmental delay based on physical differences was taken into account in patients with ACH compared to similar age matched groups.Denver II developmental screening test results were available for 18 patients.Among these 18 patients, seven (38.9%) exhibited developmental delays in at least two of the motor, language and social domains, five (27.8%) had gross motor delay, two (11.1%)experienced language delay, one (5.6%)showed social delay, and three (16.7%)had normal development. During the follow-up period, the WISC-R was administered to eight of these patients.In total, the WISC-R was performed on 10 patients (14.7%, n = 10/68), specifically in those with a suspicion of learning or intellectual disability after reaching school age.Of these 10 patients, two had normal results, four were found to have borderline intelligence, one had mild intellectual disability, one had moderate intellectual disability, and two had learning disabilities.It is also important to mention that the WISC-R may underestimate performance at patients with ACH because it is based on motor skills. In the informal assessment of developmental stages, which involved 49 of the 68 patients, 36.7% were found to have developmental delays in at least two areas, primarily motor and language; 36.7% had isolated motor delay, 4.1% had isolated language delay, and 22.4% showed normal development. 
Neurological and neurosurgical findings (n = 47) Central nervous system imaging revealed foramen magnum stenosis in 25 patients (53.2%), hydrocephalus in 13 patients (27.7%), ventriculomegaly/large lateral ventricles in seven patients (14.9%), cervical stenosis and/or myelopathy in six patients (12.8%), lomber stenosis and/or myelopathy in two patients (4.2%), a thin corpus callosum in four patients (8.5%), cerebral atrophy in four patients (8.5%), and a cervical intramedullary tumor in one patient (2.1%).Five patients (7.3%, n = 5/68) underwent surgical intervention; two underwent cervical laminectomy (one had severe foramen magnum stenosis, and the other had a cervical intramedullary mass), two had cervical decompression surgery (one had severe foramen magnum stenosis and apnea, and the other had a narrow craniocervical junction and hydrocephalus), and one patient had a thoracolumbar laminectomy due to spinal canal stenosis at the T11-L1 level.Among these patients, only one required VP shunt intervention.In adulthood, only one patient who presented with leg pain and paresthesia had lumbar stenosis and myelopathy and did not require surgery. Musculoskeletal findings (n = 52) Kyphosis persisting after infancy was observed in six patients (11.5%), genu varum in 12 (23.1%),genu valgum in six (11.5%), scoliosis in 15 (28.8%), and pectus excavatum/carinatum in one patient (1.9%).Fixation osteotomy was performed on one patient for genu valgum, and on two patients for genu varum.Lengthening surgery was performed on six of the 52 patients (11.5%).Some of the radiographic findings are shown in Fig. 1.Upon evaluation in adulthood, one patient had genu varum and one patient had scoliosis.We did not have any patients with a history of orthopedic surgery during this period. Mortality In our study, five of the 68 patients died, yielding a mortality rate of 7.3%.These patients died at 5 months, 12 months, 16 months, 24 months, and 36 months, all with a diagnosis of pneumonia.One individual had both ACH and Down syndrome, experienced obstructive apnea, pulmonary hypertension and other sequelae.The lethal episode arose secondary to aspiration.Another patient, died at 12 months of age; surgery for a cervical intramedullary mass was followed by postoperative pneumonia.The other three patients also had histories of recurrent pneumonia; additionally, one of them had severe sleep apnea, and another had both pulmonary hypertension and difficulty swallowing.It is not clear why in our population life-threatening pneumonia appears to be so much more frequent than is typically reported in ACH.We did not have any patients who were followed into advanced ages and died in adulthood. Discussion ACH is the leading cause of disproportionate short stature, and individuals with ACH encounter various medical, functional, and psychosocial challenges over their lifetimes.Early intervention and regular follow-up are essential for effectively managing potential complications in individuals with ACH.This study aimed to identify the most common clinical features and associated morbidities among 68 patients with ACH diagnosed in our clinic over the past 25 years. 
Short stature is one of the main concerns in ACH.Moderate to marked short stature is present in all affected individuals.Obesity is another concern in ACH [18].However, excessive weight gain usually becomes noticeable during early childhood, and in adulthood, obesity can exacerbate morbidity associated with lumbar stenosis, cause generalized joint issues, and contribute to the onset of cardiovascular complications [12].Childhood obesity rates in ACH have been estimated to range from 0 to 10%, which is significantly lower than the rates observed in adults.Among adolescents, the prevalence of overweight and obesity has been reported to be as high as 56% [19,20].In our study, we evaluated the body mass index of 37 patients with ACH during the most recent follow-up.In the age group of 0-16 years, only two patients were obese according to the standards of ACH. However, three patients for whom BMI data were available in adulthood, one was overweight and two were obese.Although this suggests that the prevalence of adult obesity is increasing, the number of adult patients was insufficient for a clear interpretation. Individuals with ACH generally have normal cognitive development and function [21].However, when compared with controls, they have specific differences in development, including delayed and unusual motor development and language-related problems [22].Mild to moderate hypotonia is typical during infancy, making it difficult for infants to support their heads.This, along with variations in body habitus, contributes to motor development delays [23][24][25].Except in cases of hydrocephalus or other central nervous system problems, intelligence is normal [26].Nevertheless, a small minority of children with ACH will be more seriously delayed, demonstrate significant learning disabilities, and may have autism spectrum disorders and/ or a cognitive disability [22].Although the frequency of such problems has not yet been well documented, it is suggested that it accounts no greater than 10% [22].In this study, the majority of patients initially had a delay in gross motor skills during infancy, as noted in informal assessments.However, they eventually caught up with their peers.Formal evaluations, using either the Denver II Developmental Screening Test or WISC-R, indicated that four patients had borderline intelligence, one had mild intellectual disability, one had moderate intellectual disability, and two (a twin pair) had learning disabilities and attention deficit hyperactivity disorder (ADHD).Notably, two additional patients (n = 2/68, 2.9%) were also found to have serious delays.The brain MRIs of these patients showed foramen magnum stenosis with hydrocephalus in four patients, cervical stenosis in two patients, cervical stenosis and cerebral atrophy in one patient, and thin corpus collosum in one patient. Population-based studies indicate that without evaluation and treatment, the excess risk of death for infants with ACH could reach 7.5% in the first year of life due to issues with the craniocervical junction [11].This risk seems to be related to central apnea caused by damage to the respiratory control centers.This risk could be reduced to as little as 0.3% with evaluation and neurosurgical treatment [33].In the present study, following a neurological evaluation and cranial MRI, a total of five patients (n = 5/47, 10.6%) with ACH underwent intervention. 
Foramen magnum stenosis is a well-recognized, serious, and potentially life-threatening complication in ACH.It might clinically present with sleep-disordered breathing, hypotonia, or hypertonia with increased reflexes and extensor plantar responses; however, children and infants with foramen magnum stenosis can also be asymptomatic [7].To ensure optimal monitoring of this potentially life-threatening complication, guiding principles have been developed [34].These include routine clinical monitoring of infants and young children, scheduled magnetic resonance imaging screenings, referral of suspected cases to a neurosurgeon, combined assessments to inform decompression decisions, collaborative decision-making on proceeding with decompression, and management strategies for older children with previously undetected foramen magnum stenosis [34].In the present study, foramen magnum stenosis was detected in more than half (53.2%) of the patients, yet six of them (24%) had headache, nine had apnea (36%), and three (12%) required surgical intervention.Risks for apnea-related death, as well as high cervical myelopathy and paralysis, result from the foramen magnum's growth being out of phase with that of the spinal cord.High cervical myelopathy can also be caused by compression of the cervicomedullary cord, typically manifesting in young children as disproportionate and persistent hypotonia, weakness, asymmetric reflexes, and hyperreflexia [35].Therefore, from the time of diagnosis, every patient should undergo a thorough neurological examination, neuroimaging, and PSG evaluation.Symptomatic spinal stenosis, affecting L1-L4, is the most prevalent medical condition in adults [36].In our ACH group, headache was noted in six patients; arm pain, weakness, and paresthesia in one; and leg pain, weakness, and paresthesia in two patients.These symptoms frequently manifested in late adolescence and adulthood. Thoracolumbar junction kyphosis affects 89% of infants with ACH, but it usually resolves spontaneously over time [37].The prevalence of kyphosis is lower, ranging from 19 to 35%, in children of walking age (over 3 years) and adolescents [38].In our study, six patients (11.5%) exhibited kyphosis at ages over 3 years.Scoliosis is another common skeletal finding, affecting 60% of patients at an average age of 18 years [39].In the present study, scoliosis was the most common musculoskeletal finding, observed in 28.8% of patients, yet none of the patients required surgical intervention.Only three patients had been using a corset for scoliosis.Scoliosis in patients was determined by considering both clinical and radiographic findings, however, the lower incidence of scoliosis among our patients was thought to be due to their younger average age (median: 6 years; range: 0-28 years) and missing follow-up data. 
Otitis media and hearing loss are common problems in ACH [22]. Otitis media affects about 80% of all children with ACH at some point in their lives. In the present study, more than half (54.7%) of the patients had otitis media, and 9.4% experienced hearing loss. Previous studies reported higher rates of otitis media (80%) and hearing loss (37%) than were seen in our patients, which was thought to reflect missing data for patients who did not continue regular follow-up after diagnosis [40]. Of the patients with otitis media, about one-third (30.2%) had a history of ventilation tube placement. Snoring, mouth breathing, sleep apnea, and obstructive sleep apnea are common breathing disorders in ACH. Among our patients, snoring was a very common symptom (94.3%). In their clinical histories, apnea accompanying snoring was observed in 16 patients (30.2%). Obstructive sleep apnea (OSA) was detected in 14 of the 20 patients for whom PSG evaluation data were available. PSG should be performed on all patients with ACH following diagnosis, since clinical history is a poor predictor of apnea [41]. Of note, despite the absence of symptoms such as snoring or apnea, OSA was detected with PSG in two of our patients.

Approximately half of the patients (n = 31; 45.6%) had undergone at least one surgery related to ACH, such as ventilation tube insertion, adenoidectomy, neurosurgery, or orthopedic surgery. This rate was 80% in the United States, 75.7% in the Japanese, and 72% in the European ACH cohorts [42][43][44]. It has been suggested that inadequate patient data may be the reason for the lower rate of surgery in this study. Middle ear procedures were the most prevalent type of surgery, as in previous studies [42][43][44]. Lengthening surgery was performed in six patients (11.5%) in our cohort. The rate of lengthening surgery in patients with ACH varies considerably among countries, ranging from 1.2% in the United States to ~60% in Japan and up to ~90% in Spain [42]. Besides medical approaches, societal perspectives on disease and cultural values are likely to play a role in the development of such marked differences. For the same reasons, the rate of growth hormone use in a study in Japan was 75.7%, whereas in this study three patients (4.4%) received growth hormone therapy [44].

Interestingly, a few unusual findings in some of our patients caught our attention. In one of the twins, headache, learning difficulties, and ADHD were present, and a brain MRI displayed foramen magnum stenosis and craniocervical junction compression. The fundus examination revealed papilledema, but the neurological examination was completely normal, and an increase in intracranial pressure was not evident. Nevertheless, after medical treatment with acetazolamide, the papilledema improved. Patients with ACH with papilledema have previously been reported; however, unlike the patient discussed here, hydrocephalus and high intracranial pressure were generally present in those patients [45].
The other interesting finding was a patient with hypergonadotropic hypogonadism and streak gonads with a 46,XX karyotype. To the best of our knowledge, gonadal dysgenesis has not previously been reported in ACH. Our patient exhibited hypogammaglobulinemia and severe intellectual disability as well. Because of these atypical findings, advanced molecular analysis, including chromosomal microarray analysis and exome sequencing, was performed in addition to FGFR3 sequence analysis. Apart from the heterozygous FGFR3 c.1138G > A variant, no other pathogenic or clinically significant variant was identified through these tests.

Lastly, during follow-up, one of the patients, who showed both phenotypic and radiological findings consistent with ACH, displayed acanthosis nigricans and global developmental delay, which suggested a diagnosis of severe ACH with developmental delay and acanthosis nigricans (SADDAN; MIM#616482). Physical examination revealed diffuse hyperpigmentation on the neck and trunk with dryness of the skin. There was no evidence of insulin resistance or adrenal insufficiency. Therefore, the most common SADDAN variant, c.1949A > T (p.Lys650Met), was checked first and was not detected. A heterozygous FGFR3 pathogenic variant, c.1138G > A, was detected in this patient.

Acanthosis nigricans in patients with ACH has rarely been reported in the literature; however, a recent study showed that it is present in 10% of patients. It is more likely to occur in the non-white population and has typically been observed to first appear in the prepubertal or adolescent years [46]. In our patient group, only one patient (1.5%) had acanthosis nigricans, which may be related to the patients' ages and the fact that some of the patients admitted were from the non-white population. At the last examination, 54 patients (79.4%) were preadolescents (under 10 years old), and we plan to continue monitoring these patients for acanthosis nigricans.

In previous studies, mortality in children under 4 years of age was most often sudden death associated with acute brainstem compression [12,33]. In contrast, the deaths of the five patients (7.3%) in our study were all due to pneumonia. One of them had Down syndrome and died of aspiration pneumonia, and another died of postoperative pneumonia. Although recurrent pneumonia and serious infections are an expected finding in Down syndrome, life-threatening pneumonia in patients with ACH is not usually as common as it was in our study. These patients had symptoms such as severe sleep apnea and difficulty swallowing, so we thought they might have had recurrent infections associated with complications secondary to foramen magnum stenosis. However, we did not have data to support this, because two of the five patients who died did not have a brain MRI before death and the other three did not have foramen magnum stenosis.

This study has several limitations, one of which is the missing follow-up data for some of our patients. Although this study includes patients of various ages, we were unable to plot the distribution of height SD scores due to missing data. Secondly, formal assessments for clinical findings, including intellectual disability and apnea, were not available for some patients. Thirdly, information on multidisciplinary and adulthood follow-up is missing.
In conclusion, ACH is commonly viewed as a mild condition, yet its associated health risks and potential for mortality are often underestimated. Early and continuous monitoring for serious complications from diagnosis onwards is essential to prevent early death, handle future health issues, and enhance the overall quality of life. Despite being the most prevalent form of non-lethal skeletal dysplasia, ACH's complications are not limited to orthopedic problems alone. This highlights the need for a comprehensive approach by a multidisciplinary healthcare team to cater to the varied needs of individuals with ACH.

Fig. 3 BMI curves (5th, 50th, and 95th percentiles) of 28 patients from 0 to 16 years of age, in boys and girls with achondroplasia considered overweight or obese, compared to the normal population.

Table 1 The demographic and clinical features of all patients with ACH
Eco-towns, New Labour and sustainable residential development

The three-legged stool of sustainability is embodied within the eco-towns debate as a popularist and politically mobilised concept. This is most distinguishable when considered within a housing-orientated framework, which reveals significant economic, social and environmental risks to the eco-towns programme. This creates a complicated position for various actors, with ramifications which extend across the entire development industry. These are entirely surmountable through shrewd policy decisions at every level, although they require continued enthusiasm from each professional, political and public participant if an effective mediation of our built environment is to be achieved. Predominantly this must be underpinned by a rational and critical decision process which builds upon the stronger elements of the eco-towns programme and incorporates them within a wider policy perspective, irrespective of the political gains that reside through emphasis elsewhere.

Introduction

Over the last two decades the sustainability concept has moved from the fringes of academic and policy debates and into mainstream culture. Individuals grow ever more aware of the excesses of modern society, governments are increasingly aware of the need to be seen to react, and business is left to exploit the 'green gold' which exists at the centre of the debate. It is therefore unsurprising that the 'S' word has now become synonymous with the development industry and the mediation of our existing and future environments. Yet in an era when seemingly every new policy, product and service is prefixed 'eco-' it becomes ever more necessary to distinguish rhetoric from reality, communicating what critical analysis exists to professionals and the general public alike. Eco-towns potentially represent one of the most important contemporary manifestations of this need in English urban policy.

As the sustainability notion encroaches daily further into mainstream consciousness it adopts greater media significance; an important factor in an era denoted by the customisation of politics towards popularist policies (Mildlarsky, 1997: 323). Public Choice Theory understands such customisation through the supposition that rational politicians lack the incentive to fight influential lobbyists but hold every incentive to garner support. Concurrently, perceived marginal value net of opportunity costs stimulates voter behaviour in spite of the inconsequentiality of their democratic participation. Awareness of this trend has led to accusations of urban planning as 'a trashcan for any buzzword doing the rounds', but this does not appear to have diminished political support for the eco-towns programme (Jenkins, 2008: 31). In March 2007 the then Housing Minister Yvette Cooper announced the development of five new eco-towns in England, initially to be of 10,000 homes each (DCLG, 2007a; Cracknell, 2007). Publication of the Eco-towns Prospectus in July expanded this figure to a range of between 5,000 and 20,000 homes, and September saw the number of proposed eco-towns doubled by Gordon Brown to ten (DCLG, 2007b; Milne, 2007). In March 2008 Birmingham City Council alone released plans to develop five eco-towns itself (Energy Saving Trust, 2008).
New Labour politics and the English eco-towns programme are thus inextricably bound in their fortunes, founded upon the triple-bottom-line approach to urban development identified in PPS1: Delivering Sustainable Development (DCLG, 2005c; 2007). Focussing on the economic, environmental and social remits of planning policy, this 'three-legged stool' of sustainability was assessed in The Eco-towns: Scoping Report (2007), produced by the Town and Country Planning Association (TCPA), with the centrality of each leg emphasised by TCPA Chairman David Lock, who noted that '[m]iss out any one of these three aspects, and the project will very likely fall over' (TCPA, 2007a: 1). The Eco-towns Prospectus (2007) requires eco-towns to meet several criteria to achieve these ends. They must have a distinct identity with well-linked transport infrastructure and a wide range of facilities serving the health and education needs of their inhabitants. Of these developments, 30-50% should be mixed-tenure affordable housing, and the process itself should be overseen by a delivery organisation, achieving zero carbon status and demonstrating the towns as exemplars of sustainable living (DCLG, 2007b: 4). What remains for evaluation within this context is therefore not the sustainability of eco-towns per se so much as their political mobilisation, something henceforth expanded upon within a housing-oriented discussion which will follow the triple-bottom-line approach, assessing deliverability against governmentally purported aims.

Eco-towns and the economy

Economic efficiency supposes, on the most basic level, that rational operatives equate the marginal benefit of consumption against the marginal cost, with supply equalling demand at the equilibrium. This perception of supply and demand is widely appreciated by both practitioner and layman and underpins crucial flaws in the eco-towns programme. Kate Barker's Review of Housing Supply highlighted the harmful effects of poor housing supply on the UK economy, noting how poor supply 'hinders labour market flexibility, constraining economic growth' (Barker, 2005: 1). It noted the 'increasingly unaffordable' nature of homes polarising our communities and restricting our aspirations (ibid: 1). There is, however, little chance that the eco-town developments, producing a maximum of 200,000 additional homes, will make much genuine impact upon housing supply, which requires three million additional homes by 2020 (CPRE, 2008: 1). Moreover, the impact of low supply on the housing market at each level would doubtlessly reduce the opportunity for affordability gains at a national, regional or local level. A government-driven delivery system could have the theoretical ability to instigate an affordability gain through the regulatory system. Affordability, a key criterion for eco-towns, is set to deliver between 30-50 per cent of the development (DCLG, 2007b: 13). Nonetheless, whilst being higher than the national average, this is in line with many policies which already exist for Growth Areas and Growth Points, making the eco-towns affordability contribution somewhat short of radical. In these macroeconomic respects it seems unlikely that the eco-towns programme will deliver the 'huge opportunities for sustainability gains' claimed by the Housing and Growth Programmes Team at the Department for Communities and Local Government (Cleary, 2007: 1).
Beyond the affordability discussion it is necessary to fit the issues of housing supply into the wider economy and Kate Barker's Review of Land Use Planning highlighted the positive role that a plan led system can have upon delivering economic growth (Barker, 2006).The Unpopular Housing Report supportively concluded that '[m]any of the causes of unpopular housing are linked to wider, regional and sub-regional economic and housing market changes' (DCLG, 2006c: 35).The correctional behaviour of government in resolving housing market failures may therefore be assumed justifiable on economic or equity grounds (ODPM, 2006a: 1).Housing Market Renewal (HMR) Pathfinders provide one such example of policies designed to address the 'considerable body of evidence' regarding the inter-connectivity of effects such as displacement (ODPM, 2006c: 6).This has led the government to resolve that '[t]he costs involved in correcting or managing badly designed development are much greater than getting it right in the first place'; making the economic benefits of a well-planned eco-town seem naturally preferable to haphazard sprawl of urban-fringe estates, a point reinforced by the Stern report on The Economics of Climate Change (Stern, 2007: 153).Their role as forms of exemplar urban development may consequently hold some benefits, upon the assumption that future failures will be reduced or removed.Indeed, new urban development will doubtlessly benefit from aspirational targets.Meanwhile programmes such as HMR Pathfinders enable the re-incorporation of unsuccessful current stock into the system, negate the environmental detriment of new provision and prevent the continued decline of that which already exists. Local economic benefits from the development of eco-towns seem to be equally as problematic.Not in fact new towns, their scale would create niche developments equivalent to new settlements or smaller.Contemporary global examples, as at Dongtan (Peoples Republic of China), indicate self-containment of eco-development as under-pinning their sustainability claims.The UK plans take a different approach, commonly alluding to the Garden City concept of Ebenezer Howard, whose heirs profess that the proposed 'polycentric form of networked urban places of interaction is the ultimate realisation of Howards Social City' (TCPA, 2007b: 44).Moreover, the Town and Country Planning Association imply that considerations of self-containment need 'to be tempered by an understanding of the benefits of inter-operability' (TCPA, 2007d: 293).This is despite the fact that inter-operability of design is likely to encourage commuting between eco-towns and larger local and regional settlements both for work and shopping.The logical result is indicative of economic reliance upon other settlements.The TCPAs Best Practice in Urban Extensions and New Settlements emphasises that 'the further distant from the central major city, the greater probability of self-containment', but green credentials are undermined by out-commuting arising when communities fail to achieve critical mass (TCPA, 2007b: 45).This may not however be socially detrimental, and The Commission for Architecture and the Built Environment (CABE) have emphasised that '[g]rowth can be positive for existing communities, creating a critical mass of people to support more services' (CABE, 2007: 7).Current plans are not yet advanced enough to clearly demonstrate whether specific levels of economic provision will be sustainable, although significant opportunities exist for local 
economic empowerment.Land purchasing issues could be pursued in accordance with Building on Strong Foundations (2008).This document perceives asset management as needing to 'underpin, and contribute to, delivery of the local vision', and argues that transfer of assets to community management can be shown by both experience and the Quirk Report (2007) to work efficiently (DCLG, 2008a: 6: ibid, 19).The Prospectus appears to support such community management, indicating that 'community ownership of assets' may be an adopted approach (DCLG, 2007b: 16). Whilst the intentions to achieve a step-change may exist, Smith stressed that 'the key challenge for governance is translating this' into common practice (Smith, 2004: 2).This challenge arises because green builders currently 'operate in a very different socio-technical context to mainstream volume house builders' (ibid: 2).Problems are exemplified through the associated economies of scale, such as at Beddington Zero Energy Development (BedZED), stimulating the Calcutt Review to emphasise that '[t]he zero carbon standard is not free' (DCLG, 2007e: 98).BedZED, for example, carried £5,000 premiums per unit compared to conventional equivalent dwellings and despite this only achieved carbon-neutrality to Code for Sustainable Homes Level 4 (The Peabody Trust, 2008: 1).The government will attempt to give some relief to these costs for developers by way of a 'time-limited relief' from Stamp Duty Land Tax (DCLG, 2007b: 11).Irrespective of these temporary benefits, New Labour's eco-aspirations will not arise without incentivisation of the building industry, inevitably at the cost of the planner.Such incentives may, for example, involve reductions in affordable housing or Code levels, section 106 trade-offs and a greater level of state investment in infrastructure and facilities.Moreover, such actions appear required in spite of the delivery schedule indicaticating eco-town completion at least four years after the 2016 Code for Sustainable Homes Level 6 standard is (theoretically) compulsorily implemented nationwide.Hence it seems fair to suggest that, assuming the building industry can meet this deadline, the supposed step-change will seem somewhat short of remarkable. 
Eco-Towns and the Environment

Environmental considerations are central to notions of eco-development, and one purpose of the English eco-towns will be to act as exemplar 'green developments' and prove the viability of sustainable living (DCLG, 2007f: 16). One crucial role of planning is perceived to be the impact it can have to 'help speed up the shift to renewable and low carbon forms of energy' (HMSO, 2007: 11). This is something that it is already doing. The Department for Communities and Local Government has stated clearly that the 'key goal is to achieve zero carbon new homes within a decade', and it has committed to this through the Code for Sustainable Homes (DCLG, 2006: 2). The Code requires that all homes built after 2016 will reach Level 6 and as such be of zero-carbon design (DCLG, 2007b: 27). Level 6 is however defined as being an 'aspirational standard based on zero carbon emissions for the dwelling and high performance across all environmental categories', therefore leaving doubts as to how the target may realistically be achieved (DCLG, 2007a: 18). The Code aims to increase the overall environmental sustainability of design through the establishment of a framework of national standards within which the building industry can operate (DCLG, 2007d: 5). This policy not only drives engineering and planning considerations, but beyond this may be understood as responding to political need and social desire.

Environmental awareness is an increasingly powerful policy driver. Climate Change has been identified by the Urban Task Force as 'the greatest threat to our planet's future' (Urban Task Force, 2005: 12). In September 2007 Gordon Brown reiterated this, describing it as 'the most urgent challenge to humankind' (No. 10 Press Release, 2007). That said, it is a relatively new agenda, which prior to the year 2000 was largely not identified as a policy driver (ODPM, 2005). Eco-towns overtly show New Labour's government as addressing this threat, a threat which its own actions have identified and promoted. With environmental awareness embraced by the general public, property developers are equally keen to stress their environmental credentials. Mitigation of environmental impact does not however overcome the detrimental influence of the mass housing provision planned for England prior to 2020, and the impact must not be underestimated. Barker has not been alone in noting that increased housing provision 'raises concerns' about the environment, and it is important to stress that eco-towns cannot address the fundamental impact of housing provision and urban development upon the landscape (Barker, 2005: 1). Infill projects and regeneration programmes do not present this challenge to our green space. Furthermore, the retrofitting of existing stock avoids the energy consumption of new-build housing.
Existing research on embodied energy lacks consensus, partly due to regional sourcing variation. The embodied energy values of virgin structural steel vary from 24 MJ/kg in the UK (Hammond and Jones, 2006) to 59 MJ/kg in New Zealand (Buchanan & Honey, 1994). Similar variations are found in terms of embodied emissions, and such discrepancies are even greater amongst sources of timber. Whilst eco-towns account for rising awareness of embodied energy impacts through the need for sustainable design, there appears to have been little attention paid to the lifecycle of this energy. No reference has ostensibly been made to this issue and academic research has been limited. Research has shown, however, that material sourcing, selection, and waste handling at the end of building life are the most important stages of development, whilst labour transport and construction/demolition processes are relatively insignificant. The implication is that locally sourced materials, labour and equipment should be sought (Vukotic, 2008). As such, the energy values are much higher when providing new stock than when upgrading that which already exists. Retro-fitting of existing redundant housing stock, if combined with investment in resolving the market failures that created the redundancy, may therefore be understood as more environmentally sustainable through its requirement of fewer materials. Environmental gains could consequently be maximised by reducing the required scale of new property provision. Moreover, a retro-fitting programme would inevitably result in a wider and more equitable impact than a restricted series of eco-towns.

Eco-Towns and Society

Barker's Review of Housing Supply lays the foundation for an understanding of the Housing Green Paper by emphasising how '[h]omes are more than shelter', intrinsic to the creation of communities (Barker, 2005: 1). This is essentially the same perception of housing supply as commented upon by TCPA Chief Executive Gideon Amos when espousing the opinion that '[w]e must ensure that real communities are created - not just soulless housing estates' (TCPA, 2007c). Calcutt's Review sagely noted that Code for Sustainable Homes Level 6 is 'not achievable by energy efficiency alone' (DCLG, 2007g: 89). Together they emphasise the ability of architecture and urban design to encourage and foster inclusivity. This will become increasingly important in an era of demographic change. Increased levels of divorce and, in particular, the UK's ageing population mean that accommodation considerations 'must become a mainstream consideration in planning new communities' (DCLG, 2008b: 107).
Avoiding sprawl amid these demands for increased provision will inevitably raise density issues which must be addressed. To avoid the sprawl of previous decades, eco-towns will have to boast much higher densities in order to make the most of available efficiency gains. The results remain to be seen. The era of Corbusian high-rise is at an end. Even in areas of seeming revival, such as Manchester, problems of vacancy and inadequate service provision (amongst others) are already evident. More relevant to the debate, however, is the consideration that there may be detrimental effects emerging from the application of increased densities to English society, where such residential styles are non-traditional. Historically speaking, British flirtations with high-density development have produced varied results. Unlike other nations attempting to develop eco-towns, living densities in the UK are generally much lower. Taking China or Japan as examples, mid-rise developments are commonplace and sought after, whereas that demand does not readily exist in the same form in England.

A Way Forward?

In almost every respect, national planning guidance seems most appropriately to emphasise brownfield infill urban development. Even in the light of the sustainability debate this emphasis appears entirely appropriate. Eco-towns should not lead us to lose focus on the need to provide live-work quarters that contribute to the improvement of existing housing stock and increasing standards of that yet to come. Only through emphasis upon both can truly sustainable communities be achieved, and a genuine housing step change stimulated. Housing demand in the UK is subject to such pressures that we must recognise the need for new development beyond our existing urban areas. New towns, settlements and urban extensions are inevitable to some degree, and eco-towns hold the potential to pioneer these developments in a more successful manner than has historically been achieved. The pitfalls may be too numerous for a genuine step change, but conceptually they are nonetheless worthy.

The planning profession was born from housing issues. It has addressed the slums of Victorian Britain, aimed to provide 'Homes for Heroes' from each of the Twentieth Century's global conflicts, and must seize the opportunity for reflection. Planning has always been political, and as Lang noted, '[p]lans are policies and policies, in a democracy at any rate, spell politics' (Lang, 1959: 168). With the rising Climate Change agenda at a global level, pioneered nationally by the New Labour regime, it is now more essential than ever that planning re-establishes certain central tenets. Howard's vision of development, mediating urban and rural conditions, was already being set in bricks and mortar when J. S. Nettlefold coined the term 'town planning' in 1905. Yet the resurgence of Garden City rhetoric more than a century later seems as fresh and relevant as ever before. Why then should perceptions of urban development, pivotal to the embryonic evolution of the discipline, now be heralded as a step change? Why should sustainability, a neat re-packaging and surmisal of existing objectives, trumpet eco-towns more than any other development?
It is with regret that one is forced to conclude planning to be a discipline left romantically yearning for that which its very essence prevents it from ever achieving. The notion of 'mediation of space' inevitably results in the concept of mediation being twisted to equate with compromise, invariably decreasing the influence of the planner to that of a wishful spectator. Eco-towns, for a short while at least, offer the planner a window of opportunity to achieve more. That window overlooks further opportunities to create sustainable economies, environments and communities. Without the immediate realisation of the challenges, and immediate support from government for real change as opposed to popularist policies, the planning profession risks its tenuous position. This is a position which disappointingly appears to be eroded daily, to the point whereby a century from now we may once again find ourselves ensnared in our own rhetorical fantasies, still seeking a sustainable future, albeit hopefully not too late.
Clinical and morphofunctional aspects of pterygoid hamulus: literature review

Abstract

Understanding the pterygoid hamulus (PH) and its anatomical and functional relationships with neighboring structures is extremely important for conducting an assertive differential diagnosis and for the prevention of diseases in the region. This study aimed to review the literature on the anatomical and functional characteristics of the PH, a bony structure sometimes neglected by the dentist. In addition, the main pathologies that occur in the PH are reported. The PH is related to many anatomical elements of the skull and face and is involved in a variety of physiological processes such as speech and swallowing. Changes in the PH may be reflected in alterations of neighboring structures, producing symptoms similar to those of other craniofacial disorders and making correct diagnosis difficult. The PH may be involved in pathologies that should be included in the differential diagnosis of any craniofacial disorder.

Introduction

The dentist's knowledge of the pterygoid hamulus (PH) and understanding of its anatomical and functional relationship with neighboring structures is of utmost importance to conduct an assertive, differential diagnosis and for the prevention of diseases in the oropharyngeal region (Figures 1 and 2). The PH is a process located in the medial plate of the pterygoid process of the sphenoid bone, serving mainly as the support point of the tensor muscle of the soft palate, functioning as a reflection pulley, and as the fixation point of the pterygomandibular ligament [1]. In this ligament, which extends to the retromolar triangle, the buccinator muscle and the superior constrictor muscle of the pharynx are fixed [1]. A portion of the palatopharyngeus muscle also has its origin in the PH [2]. The tensor muscle of the palatine veil tightens the palate and opens the Eustachian tube during speech, swallowing, chewing, breathing, sneezing and yawning [2].

Changes of the PH or related structures can cause symptoms common to other diseases, for example pain when chewing or swallowing, and edema and erythema in the posterior region of the palate [3,4], as well as ear pain, hearing loss and autophonia [5]. These lesions can be mistaken for temporomandibular disorders or glossopharyngeal neuralgia [6]. Other symptoms may also include snoring and sleep apnea [7].

Thus, this study aimed to review the literature, reporting the morphology of the PH and the structures to which it relates, as well as addressing the function of these elements and emphasizing the changes in which they can be involved. Therefore, the inclusion of such changes in the differential diagnosis of diseases that may affect the PH region is highlighted.

Anatomical and functional aspects of the pterygoid hamulus and related structures

The pterygoid hamulus (PH) is a tapered process located at the lower end of the posterior edge of the medial plate of the pterygoid process of the sphenoid bone [1,8,9]. The cartilaginous part of the Eustachian tube, the pharyngobasilar fascia and the superior constrictor muscle of the pharynx are inserted on the posterior edge of that plate [9].
Despite the variable morphology, PH usually has a spherical end, denominated head, attached to the rest of the process by a narrow neck.A shallow groove can be seen on the PH lateral side extending from the base toward the cervix.In adult skulls, the average length of 7.2 mm, the average width of 1.4 mm in the sagittal plane and 2.3 mm in the transverse plane were found.The distance between the hamulus ranged from 26 to 36.9 mm.In children, the PH features have the same morphology, although smaller [8]. Krmpotic-Nemanic et al. [7] studying PH morphology found an average length of 6. 9 mm, results similar to those of Putz and Kroyer [8].The authors noted that in children PH is smaller (3.6 mm), as well as in the elderly, although less significantly, but in these patients, the mean length of PH was 5 mm, concluding that PH morphology changes with age.Orhan et al. [6] evaluating the PH morphology through CBCT scans and their results showed that the average length of PH was 5.48 mm on the right and 5.40 mm on the left side. PH serves as fixation for pterygomandibular ligament and for several muscles, such as the tensor muscle of the palatine veil that in addition to using PH as reflection pulley, also comes from this process and the superior constrictor muscle of the pharynx, among others [8]. PH has primarily served as support point of the palatine veil of the tensor muscle functioning as a reflection pulley.The tensor muscle of the palatine veil comes from the scaphoid fossa extending to the soft palate.During its path, it bypasses the pterygoid hamulus, narrowing at that point (Figure 3).To bypass it changes its direction, initially vertical, to become a horizontal tendon plate that meets the opposite side constituting the palatine aponeurosis [1]. Certain anatomical features of the tensor muscle of the palatine veil are not yet defined and have been investigated [10].In relation to its origin, the authors cite the scaphoid fossa [1,10], the spine of the sphenoid [7,10] and the Eustachian tube [5,7,10,11].Some authors observed a fixation of the tensor muscle of the palatine veil in PH [8,12].This fixation has not been found in other studies that claim to be the PH an exclusively reflection pulley [10,11]. The Eustachian tube communicates the nasopharynx to the tympanic cavity.Its function is to equalize the air pressure of the external environment and the air contained within the tympanic cavity.The two anterior-medial thirds are cartilaginous and the posterolateral third is bone.The cartilaginous portion is located in a groove between the petrous part of the temporal bone and the greater wing of the sphenoid.The Eustachian tube is laterally related to the tensor muscle of the palatine veil, mandibular nerve and the middle meningeal artery.Laterally, is related to the elevator muscle of the palatine veil and the pharyngeal recess.The cartilaginous portion opens during swallowing and yawning, preventing a pressure increase in the middle ear.The opening mechanism of the tube, if either passive or muscular, is not yet fully understood.It is believed that if the mechanism is muscular, the muscle activity is the tensor muscle of the palatine veil [9].So the complacency of the Eustachian tube depends on the tensor muscle of the palatine veil, whose peculiarity is its fixation as a "spider's web" in the skull base, between the pterygoid process and the spine of the sphenoid, having PH as one of its support points [13]. 
Between PH and the tensor muscle of the palatine veil, sometimes a synovial bursa can be found [8].This synovial bursa, located inside the PH groove, allows the sliding of the tendon of the tensor muscle of the palatine veil [7].Such synovial bursa can be observed in human embryos at eight weeks [10].However, Abe et al. [11] reported not finding any saccular structure covered by epithelium or a space compatible with the bursa around the tensor muscle of the palatine veil.The same results were found by Barsoumian et al. [12] which found no evidence of the presence of the bursa in their study.Putz and Kroyer [8] observed the bursa occasionally.Kronman et al. [14] observed the presence of a bursa during the surgical treatment of a patient with osteophyte in PH.Initially Gray [15] described a synovial membrane surrounding the tendon of the tensor muscle of the palatine veil which was later associated with a bursa [12].The pterygomandibular ligament, also called oropharyngeal raphe, extends laterally and downward from PH to the temporal crest, behind the third molar.It represents a tendon insertion separating the buccinator muscle from the superior constrictor muscle of the pharynx [16].The pterygomandibular ligament is of fundamental importance for the movement of the pharynx.It is believed that this ligament fixation associated with the impact of mechanical stresses due to chewing and swallowing has been related to the increase in PH length in adults.The opposite was observed in infants and the elderly, whose PHs feature is short and massive, especially in edentulous elderly in whom the reduction of the chewing effort promotes a PH shortening [7]. The medial plate of the pterygoid process, the PH, the pterygomandibular ligament and the insertion of the ligament in the mandible form a network for insertion of the muscles that perform the constriction and elevation of the pharynx.Only after PH reaches the proper length and the pterygomandibular ligament is firmly adhered to it and to the mandible, the superior constrictor muscle of the pharynx may act accordingly. The position, length and inclination of PH exerts an important influence on the function of several muscles as the tensor muscle of the palatine veil, the palate-pharyngeal and the top of the superior constrictor muscle of the pharynx (pterygoid-pharynx fascia).These muscles are responsible for separation of the nasal cavity and the mouth during sucking and swallowing.This separation is performed by raising the soft palate through the constriction of the elevator muscle of the palatine veil, the tubal part of the tensor muscle of the palatine veil, the contraction of the pterygoid-pharynx fascia of the superior constrictor muscle of the pharynx and by increasing the palatine aponeurosis by the tensor muscle of the palatine veil [7]. The superior constrictor muscle of the pharynx originates in PH, and fix in the pterygomandibular raphe, posterior ending of the milo-hyoid line and the lateral side of the tongue [9,17,18]. According to Putz and Kroyer [8] the buccinator muscle and the medial pterygoid muscle also use PH for fixing their fibers. 
Clinical Considerations The PH is reference in the differential diagnosis of palatine injuries.Professionals should include it in clinical cases with pain in the palate region without conclusive diagnosis [19].The most common symptom associated with changes in PH is local pain [4,14,20], when chewing or swallowing, associated with digital pressure or when the tongue is applied in this area.Clinically, the presence of edema and erythema occurs at the site [4].According to Ramirez et al. [2] PH palpation is done manually and orally or through a blunt instrument, by palpating carefully until reaching the posterior and medial part of the maxillary tuberosity. In addition to local pain, signs and symptoms include pain in the palate and jaw, ear and throat pain and difficulty in swallowing [21].Still, earache, autophonia, sore throat, taste changes, jaw pain, toothache, retro-orbital pain, headache and sensitivity changes such as burning, paresthesia and hypoesthesia have been cited [2].Kronman et al. [14] reported a case in which the patient had an osteophyte in the PH region and pain in the mandible, maxilla, and neck, headache, autophonia feeling, pressure in the eyes and TMJ disorders, among others.The authors do not disagree on the symptoms, since all remain linked.Among the diseases that can involve PH, the bursitis of the pterygoid hamulus is highlighted.The bursa located on the PH groove, responsible for the sliding of the tensor muscle of the palatine veil [7], can be affected by an inflammatory process so-called bursitis of the pterygoid hamulus [21] or bursitis of the tensor muscle of the palatine veil [14].It is a pathological condition that is often mistaken by temporomandibular disorders (TMD), impacted tooth, trigeminal or glossopharyngeal neuralgia, calcification of stylohyoid ligament, inflammation of the stylomandibular ligament, tumors, cysts, herpes simplex infections and otitis [22].For differential diagnosis, the clinician must carefully examine the palate and pharyngeal region and consider the diagnosis of hamular bursitis aiming to provide the patients with the appropriate treatment [23]. The etiology of this disease is diverse and is not fully understood, which can be caused by trauma in the region [2,21], ill-fitting dentures, undergoing intubation, swallowing of large masses [21], bulimic patients, those who suffered sex abuse [2,21] and related to the presence of osteophytes in PH [14].Elongated PH may predispose to this condition [2,20,24]. The PH stretching can interfere with the contraction function of the tensor muscle of the palatine veil causing bursitis or fibrosis due to excessive pressure in the palatal aponeurosis.This condition may stimulate greater and lesser palatine nerves, glossopharyngeal and facial, resulting in painful sensation on the palate, pharynx and areas of the face, head and neck [20].Sasaki et al. [20] reported a case in which the patient had pain in the oropharynx and burning sensation in the soft and hard palate, as well as bilateral swellings of the soft palate in the posterior region of the maxillary tuberosity.In computed tomography, PH was elongated.No other changes in tissue were noted.After anesthesia and resection of HP, the pain ceased, confirming the diagnosis of the syndrome of elongated PH. 
Hertz [25] reported the case of an edentulous patient with elongated PH and painful condition, ulceration, edema and erythema at the distal surface of the tuberosity of the maxilla.The adopted treatment was surgical resection and osteotomy of 4 mm from PH. In addition to the elongated PH, other anatomical features may be related to the pathogenesis of PH injuries such as: a) medial plate of the pterygoid process, consequently PH, may have a lower location than expected in relation to palate b) the mucosa of the soft palate can be located closer to PH than normal; c) even the mucosa of the soft palate can be thinner than normal [4]. The average PH length is 7.2 mm [8].Among the cases of elongated PH reported in the literature, Orhan et al. [6] cite a PH of 10.9 mm in length, while Sasaki et al. [20] reported a case which PH measured 13 mm. The treatment of this condition may be conservative or surgical [2,14,20,21].In the conservative treatment, the trauma source must be removed associated with a soft diet and local anti-inflammatory infiltration with prior anesthesia.The post-infiltration anti-inflammatory systemic medication should be prescribed.After two weeks the local infiltration can be repeated if the pain persists, however, according to the authors, this situation is not frequent.When conservative treatment is not effective, surgery may be used to remove bone spurs or fibrosis in the bursa.If no such changes occur, careful partial resection of PH should be carried out, since the tensor muscle of the palatine veil is the structure involved.[20]. Orhan et al. [6] reported the case of a patient with pain, swelling and burning sensation in the soft palate and pharynx, related to a traumatic extraction of right maxillary third molar.After the confirmation of elongated PH by cone beam computed tomography, anesthetic infiltration was performed and by the ending of the symptoms, the final diagnosis was hamular pain.Kronman et al. [14] reported the case of a patient with history of aggression in the face and head for twenty years and a recent fall trauma.Clinical examination revealed a mass in the PH region, which, when pressed caused burning sensation in the hard palate and tingling on the cheek beneath the eye on the same side.The surgical removal of the fibrous tissue and osteophytes from PH was performed. The authors agree regarding to the technique used for the diagnosis, based on the anesthetic infiltration on the PH site.The disappearance of symptoms confirms the diagnosis of hamular bursitis [2,14,20,21]. DuPont and Brown [3] evaluated the comorbidity of temporomandibular joint disorders (TMD) and painful conditions in the PH area, once these conditions have common symptoms.The tactile and visual inspection of PH area can be useful for diagnostic conclusion. Snoring and sleep apnea are symptoms that may also be associated with PH changes.The distance between the ending of PH, the slope of these processes and the hard palate width difference are responsible for the degree of the palatine aponeurosis tension.If the PH remains short, as in the newborns, the superior constrictor muscle of the pharynx will not have adequate support and the contraction leads to a narrowing of the upper part of the pharynx, causing the snoring and sleep apnea [7]. 
During surgical interventions on third molars, movement with distal pressure can cause PH fracture, resulting in looseness of the soft palate on the side of the fracture and consequent dysphonia and dysphagia, due to the path of the tensor muscle of the soft palate that surrounds the PH [1] (Figure 1). Shah and Bridgman [26] reported failure of the opening of the Eustachian tube as a sequel of maxillary tuberosity fracture, with consequent disruption of the PH and the tensor muscle of the palatine veil, and trismus due to injury of the pterygoid muscles and ligaments.

The study of the effects of craniomaxillofacial surgeries on the auditory tube shows that chronic disorders of the middle ear of patients with clefts depend on the integrity of the PH and the tensor muscle of the palatine veil after the surgical repair of the cleft. Thus, the PH and the tensor muscle of the palatine veil must be preserved during veloplasty [13].

In individuals with cleft palate, the tensor muscle of the palatine veil may present variations in its insertion sites, usually fixing on points lateral to the cleft, such as the maxillary tuberosity and the lower portion of the pterygoid process, including the PH. The hamulotomy is a surgical procedure used to obtain enough tissue to close the cleft palate. Therefore, in these patients the tendon of the tensor muscle of the palatine veil can become loose. Thus, one seeks to preserve the maxillary insertions so that the function of expanding the Eustachian tube is maintained [11].

During palate repair surgery in patients with cleft lip and palate, some surgeons perform the fracture of the PH and move the tensor muscle of the palatine veil, aiming at decreasing the stress on the region and avoiding the occurrence of dehiscence. However, PH fracture as a surgical maneuver in palatoplasty is not recommended by Kane et al. [27], since no advantage was demonstrated, although no deleterious effect was observed either. Chaudhuri and Bowen-Jones [28], however, demonstrated that palatoplasty with PH fracture did not influence deafness in children, but improved the quality of speech, since it reduces nasal air escape through better palatine sealing.

Knowledge of PH morphology, as well as its functional importance and the possible pathologies involved, is of fundamental relevance to the dentist and to other health professionals such as otorhinolaryngologists. In order to relieve the painful symptoms of the patient, the clinician must include PH changes among the diagnostic hypotheses for craniofacial disorders.

Conclusions

Based on the literature, it can be concluded that the PH is anatomically and functionally related to relevant structures of the stomatognathic system and participates in various processes such as speech, swallowing and the opening of the Eustachian tube. The PH and associated structures may be involved in predominantly inflammatory or traumatic changes, and knowledge of the morphology is essential for the correct diagnosis of these pathologies.

Nayara Fernanda Barchetta (corresponding address) R. Mal. Deodoro da Fonseca, 316 - Jardim Boa Vista, Pindamonhangaba - SP, 12401-010 Email: nayara_barchetta2@hotmail.com

Figure 1 - Lower view of the skull showing the pterygoid hamulus (arrow).
Figure 2 - Lateral view of the pterygoid hamulus and its proximity to the posterior region of the maxilla.
Figure 3 - Posterior view of the tensor (1) and elevator (2) muscles of the palatine veil and their relationship with the PH (arrow).
Wear of Media during Ultra-fine Grinding†

Contamination during fine grinding is an important problem for raw powders of newly developed materials. The wear behaviors of grinding media made of steel, alumina and zirconia were examined during fine grinding using a vibration ball mill. Ball wear was nearly proportional to the specific surface area of the ground products. It was supposed that wear in wet grinding is greater than in dry grinding, because wet grinding is more effective for fine grinding. Wear was affected by the material of the grinding media: of the three kinds of balls used, the wear of the zirconia balls was the least. Liner wear was not as great as ball wear.

† This report was originally printed in J. Soc. Powder Technology, Japan, 26, 411-416 (1989) in Japanese, before being translated into English with the permission of the editorial committee of the Soc. Powder Technology, Japan.

Introduction

Powder is rarely a final product, but is handled, much more frequently, as an intermediate product, as is the case with the starting material for newly developed materials. It is increasingly required that powders be finer as the starting material for the new materials, and their particle and surface characteristics, chemical composition and crystalline phase have been attracting more attention. Grinding is a common procedure to reduce particle size in the preparation of powders, which are increasingly required to be finer. In the grinding operation, physical and mechanical energy is supplied to the powder. It decreases the particle size and at the same time produces new surface. However, it also wears down the grinding media because of the frequently applied contact force. The grinding procedure is invariably accompanied by this type of problem and causes the powder to be contaminated with fine solids from the grinding components. The extent of contamination tends to increase as the size of the ground particles decreases. This problem is more pronounced in the fine grinding operation.

It is accepted that the wear mechanisms fall into four general categories 1): (1) adhesive, (2) abrasion, (3) corrosive wear and (4) surface fatigue. Each mechanism rarely appears by itself but in combination with the others. Furthermore, the extent of the actual wear varies greatly and depends on many factors, such as the atmosphere and operating conditions to which it is exposed. An abrasive is one type of powder. Abrasion is distinguished from wear, but these may be regarded as the same phenomenon in that an abrasive causes volume reduction (or dimensional change) in the materials it contacts. An abrasive consists of hard particles which efficiently grind something or remove foreign matter therefrom. Extensive wear may result if the ground particles are harder than the grinding medium. Wear caused by ball milling was extensively studied in the 1940s, from the perspective of controlling operations 2). As a result, the mechanisms of ball wear are considered to fall into three general categories 3): the first and the second are based on the concepts that the wear rate is proportional to the ball surface area (d²) and the ball mass (d³), respectively, and the third is intermediate between the two, assuming that the wear rate is proportional to dⁿ (2 < n < 3), where d stands for the ball diameter.
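Before returning to those earlier studies, a rough numerical illustration of the three classical wear laws may help; the sketch below (not part of the original report, written in Python, with an arbitrary rate constant and an assumed media density chosen purely for demonstration) integrates the diameter loss of a single ball under each assumed exponent, so that the d², d³ and dⁿ models can be compared directly.

# Minimal sketch of the three classical ball-wear laws (illustrative only).
# Assumption: wear rate dm/dt is proportional to d**k with k = 2, 2.5 or 3;
# the rate constant c and the density below are not taken from the paper.

import math

RHO = 5.68e-3          # g/mm^3, density assumed for the example (roughly zirconia)

def diameter_after(d0_mm, hours, k, c=1e-4):
    """Integrate dm/dt = -c * d**k for a sphere and return the diameter in mm.

    Since m = (pi/6) * rho * d**3, it follows that dd/dt = -(2*c/(pi*rho)) * d**(k-2).
    A simple explicit Euler loop is sufficient for an illustration.
    """
    d, dt = d0_mm, 0.01                      # time step of 0.01 h
    for _ in range(int(hours / dt)):
        d -= (2.0 * c / (math.pi * RHO)) * d ** (k - 2) * dt
        if d <= 0:
            return 0.0
    return d

for k in (2.0, 2.5, 3.0):
    print(f"k = {k}: 5 mm ball -> {diameter_after(5.0, 50, k):.3f} mm after 50 h")

Under these made-up constants the d² law removes the same thickness from every ball regardless of size, whereas the d³ law shrinks the diameter exponentially; the intermediate exponent falls between the two, which is the distinction the three categories above are meant to capture.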
These studies were carried out with steel balls and were mainly directed at controlling the grinding speed, in order to understand the relationships between ball size reduction and ball make-up methods. The wear phenomena were analyzed on the assumption that abrasive wear played the predominant role, but contamination of the powder was not discussed. The present study investigated the wear of media for both dry and wet grinding using a vibration ball mill, with emphasis on powder contamination. Silica sand served as the sample powder and three different materials were used for the grinding media. Experimental Procedure and Sample The grinding apparatus used in this study was a high-vibration, high-amplitude mill (nano-VMILL with two containers, Chuo Kakoki Co., Ltd.), whose amplitude (total amplitude) could be varied from 4 to 19 mm and whose frequency could be varied up to 3,400 c.p.m. by an inverter. In this study, the amplitude and frequency were set at 10 mm and 1,200 c.p.m., respectively, giving a vibration strength of 8.06 relative to the acceleration of gravity. The grinding media (balls) were of HD alumina (3, 5 and 10 mm in diameter, Nippon Kagaku Togyo Co., Ltd.), YTZ zirconia (3, 5 and 10 mm in diameter, Tosoh Co., Ltd. - Nippon Kagaku Togyo Co., Ltd.), and carbon steel (3 mm (1/8") and 5 mm (3/16") in diameter, Tsubakimoto Seiki Co., Ltd.). The charged quantity of each grinding medium was set at J = 0.7. Two types of mill containers were used; one was lined with HD alumina (Nippon Kagaku Togyo Co., Ltd.) for the ceramic balls, and the other was of steel for the steel balls. These were 170 mmφ × 200 mm (4.5 L) and 200 mmφ × 195.6 mm (6.1 L), respectively, in size. The sample powder was silica sand (SiO2: 99.85%, #200, Kyoritsu Yougyo Genryo Co., Ltd.), ground by both the dry and the wet process, the latter being performed in the presence of water. The charged quantity of the sample powder was set at U = 1.14. For the wet grinding, the concentration and charged quantity of slurry were set at 40 wt% and 114%, respectively. These conditions are summarized in Table 1. The grinding time was up to 100 hours for wet grinding and 50 hours for dry grinding. A small quantity was sampled at given time intervals to monitor how the wear rate and specific surface area changed with time. In wet grinding, the same quantity of water as that removed by sampling was added to keep the charge constant. The wear rate was followed in two ways: chemical analysis of the sampled powder and weight loss of the grinding balls. The balls were weighed before and after the test; after the test they were thoroughly washed with pure water to recover the ground product and dried before being weighed. The chemical analysis was performed by means of an ICP emission analyzer (ICPA-100S, Nippon Jarrell Ash Co., Ltd.), with Al as the trace element for the alumina balls, Zr for the zirconia balls (Al was analyzed simultaneously for the liner), and Fe for the steel balls. Figure 1 shows the sample preparation procedure for the ICP analysis. The analytical conditions were: The specific surface area was determined by the N2 gas adsorption BET method (Carlo Erba, Sorptomatic 1800). Results and Discussions Figure 2 shows the parity plot, where the wear debris fraction in the SiO2 powder determined by the chemical analysis is plotted against that determined from the weight balance of the balls on a log-log graph. The solid line represents the 1 : 1 relationship between them.
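As an illustration of the kind of cross-check shown in Figure 2, the short sketch below compares hypothetical debris-fraction estimates from chemical (ICP) analysis and from ball weight loss on a log-log parity plot; the numbers are invented for demonstration and do not reproduce the paper's data.

```r
# Hypothetical parity-plot check of two wear-debris estimates (illustrative only)
debris_icp     <- c(0.02, 0.08, 0.30, 1.1, 4.0)   # wt% from chemical (ICP) analysis
debris_balance <- c(0.018, 0.09, 0.28, 1.2, 3.8)  # wt% from ball weight loss

plot(debris_balance, debris_icp, log = "xy",
     xlab = "Debris fraction from ball weight loss [wt%]",
     ylab = "Debris fraction from ICP analysis [wt%]")
abline(0, 1)   # 1:1 reference line (slope 1, intercept 0 in log coordinates)

# Relative agreement between the two estimates
summary(debris_icp / debris_balance)
```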
It is only natural that the two values coincide for the zirconia ball case, because only the ZrO2 debris from the balls contributes to the analyzed value. For the steel and alumina ball cases, however, the chemically analyzed value should be higher, owing to the wear debris from the container walls. Nevertheless, the two values coincided well with each other, suggesting that the quantities of debris from the walls were very small and, at the same time, that the chemical analysis results were reliable. Figure 3 shows the Fe, Al2O3 and ZrO2 concentrations in the ground powder over time for the dry grinding case. The broken line presents the Fe2O3 concentration in the powder ground by the steel balls. It serves as a comparison for the wet grinding results presented later, which are discussed in terms of Fe2O3 concentration. The wear of the steel balls is presented as the Fe quantity, which, however, is in all likelihood oxidized into Fe2O3 when present in the ground powder. Of the 5 mm-diameter balls tested, the steel balls produced the largest quantity of wear debris, followed by the alumina and then the zirconia balls. Wear is often associated with the relative hardness of the materials involved. In general, the hardnesses of steel, alumina and zirconia are Hv 400, 2000 and 1300, respectively, while that of silica sand is Hv 800. Thus, steel satisfies the condition under which abrasive wear is caused by the ground silica sand powder. It was observed, however, that the wear debris quantity was not proportional to the grinding time. This means that the wear, even if abrasive wear predominated, was affected by the characteristics of the ground powder (the extent of size reduction during the process) in addition to the ball size (surface area or mass), and that the wear could not be explained by abrasive wear alone, because the vibration ball mill used in this study grinds mainly by impact force. The relationship between wear rate and hardness was reversed for the results with the alumina and zirconia balls. Figure 4 presents the wet grinding results, where the balls had the same diameters as those used for the dry grinding and where the debris from the steel balls is represented by iron oxide (Fe2O3), estimated from the elementary Fe concentration in the powder, because the Fe was considered to be oxidized. In the wet grinding as well, the zirconia balls demonstrated the smallest wear rate. Comparing the results shown in Fig. 4 with those in Fig. 3 revealed that the debris produced by the wet grinding was approximately one order of magnitude greater than that produced by the dry grinding. As described earlier, grinding with the zirconia balls was performed in the alumina-lined container. Analysis of the Zr and Al concentrations in the ground powder therefore gave the wear of the balls and of the liner separately. Figure 5 shows the analytical results, indicating that the lining was worn quite differently during the dry and the wet grinding processes. During wet grinding, the impacts of the balls were applied directly and continuously to the alumina walls, because the powder was dispersed in the grinding fluid, with the result that the quantity of worn alumina increased with grinding time. During dry grinding, on the other hand, the extent of wear of the alumina walls was limited, except during the first grinding stage before the walls became coated with a layer of ground powder; an increase in the worn quantity with time was not observed. It is possible that the ground powder coating the walls largely absorbed the impact of the balls and protected them.
The balls were also coated with the powder, but the layer was probably thin, as it repeatedly exfoliated from and reattached to the ball surfaces; the balls were therefore continuously worn, though their wear rate was approximately one order of magnitude lower than that observed during wet grinding. The wear of the alumina and zirconia balls and of the alumina walls during wet grinding was further investigated in an additional test. It was conducted for 50 hours under almost the same conditions, except that no powder was used. Balls of the same diameter (5 mm), in 1.5 L of water, were used to measure the ball wear. During the test, slurry samples containing the worn powder were collected after 5 and 20 hours. The quantity of balls worn during the 50 hours was measured by balance. The quantity of liner worn during the 50 hours of grinding was estimated from the measured slurry density and the quantity of worn balls. The quantities of balls and liner worn by 5 and 20 hours were also estimated. The results are shown in Table 2, where the slurry concentration estimates are based on the assumption that the balls were worn at a constant rate. The estimated results in parentheses are based on the assumption that the ratio of the wear quantity between balls and liner was constant. The densities of the alumina ball and liner are described later. The specific surface areas measured 26.1 (5 hours) to 45.1 (50 hours) m2/g for the powder from the worn alumina balls, and 12.9 (5 hours) to 34.0 (50 hours) m2/g for the powder from the worn zirconia balls (including the worn alumina powder from the liner). This indicates that the worn powders were fairly fine. It was therefore considered that all closed pores in the sintered bodies were exposed in the worn powders and that their densities were slightly higher than those provided by the manufacturer (alumina: 3.60, zirconia: 6.00). The manufacturer's values were nevertheless used in the calculations. As shown in Table 2, the wear rate of the liner was fairly low, approximately 10% of that anticipated from the results shown in Fig. 2. This, together with the results shown in Fig. 2 (except for the zirconia ball results), showed that the liner was worn less than the balls when similar materials were used as the grinding media. When balls of a dissimilar material were used (zirconia balls with the alumina liner in this case), on the other hand, the alumina liner was worn significantly more, and great care must be taken in such a case. It should be noted, however, that the wear in the presence of ground powder (alumina: 193.4 g, zirconia: 21.6 g (alumina: 56.6 g)) was lower in each case, as revealed by the 50-hour grinding tests. It is possible that the ground powder dampened the balls' impact on the liner walls, even during wet grinding. The densities of the alumina ball and liner provided by the manufacturer were 3.60 and 3.50, respectively, which are considerably lower than that of well-densified alumina, 3.98. The alumina ball consisted of sintered Al2O3 (91%), SiO2 (7%) and unknowns (2%), and may have contained 10% or more of glassy and spinel phases in the grain boundaries, as well as a number of pores. Both the glassy and spinel phases are low in hardness and strength, and the presence of large quantities of grain boundary phases diminished the inherent characteristics of alumina, thereby aggravating the wear of the alumina products. The zirconia ball had a density of 6.00, which is very close to the theoretical density of 6.05. Though zirconia is not very strong, its toughness helps reduce its wear.
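The liner-wear estimate described above (worn liner = total worn solids in the slurry minus the measured ball weight loss) can be illustrated with a short sketch. The paper does not state the exact calculation it used, so the mixing rule, densities and masses below are assumptions made only for illustration.

```r
# Illustrative estimate of worn liner mass from slurry density and measured ball wear.
# All numbers are placeholders, not the paper's data.
rho_water  <- 1.00   # g/cm^3
rho_solid  <- 3.60   # g/cm^3, assumed density of the worn alumina
m_water    <- 1500   # g of water charged (1.5 L)
rho_slurry <- 1.02   # g/cm^3, hypothetical measured slurry density

# Mass fraction of solids w_s from the slurry density, via a simple mixing rule:
# 1/rho_slurry = w_s/rho_solid + (1 - w_s)/rho_water
w_s <- (1/rho_water - 1/rho_slurry) / (1/rho_water - 1/rho_solid)

# Total worn solids suspended in the slurry
m_solids <- m_water * w_s / (1 - w_s)

# Worn liner = total worn solids - worn balls (balls weighed before/after the test)
m_balls_worn <- 3.5                     # g, hypothetical ball weight loss
m_liner_worn <- m_solids - m_balls_worn
m_liner_worn
```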
Figures 6 through 8 show the relationships between the specific surface area of the ground powder and the contents of Fe2O3, Al2O3 and ZrO2, respectively, where the specific surface area is that of the ground powder containing the wear debris. Figure 6 shows the steel ball results. The broken line represents the wet grinding results (marked with o's) for the 5 mm-diameter balls, where the samples were treated with an acid to remove the Fe2O3 wear debris. The relationship for the wear debris-containing powder is represented by a straight line on a log-log plot, indicating that the wear debris content was closely related to the specific surface area of the ground powder and that the grinding method, whether dry or wet, had no effect on this relationship, though the two methods produced very different amounts of wear debris. The acid-treated powder had a significantly reduced specific surface area, which means that the steel wear debris was somewhat finer. Figures 7 and 8 present the results with the alumina and zirconia balls, respectively. Unlike the case with the steel balls, dry grinding produced smaller quantities of wear debris and caused the debris content to be less sensitive to the specific surface area of the powder than wet grinding did. In any case, no significant increase in specific surface area is expected from dry grinding. The effects of the ball size were hidden by the effects of the mechanical properties of the different materials. During wet grinding, the wear debris quantity tended to increase as the specific surface area of the ground powder increased. It is therefore believed that the ground powder becomes somewhat contaminated with wear debris when a very fine powder is produced. Conclusions Three types of grinding media were used for both dry and wet grinding in a vibration ball mill, to investigate the extent of wear debris contamination in the ground powder. (1) The extent of contamination was essentially proportional to the specific surface area of the ground powder, regardless of which grinding medium material was used. Wet grinding, which grinds powder at a higher rate than dry grinding, causes more contamination of the ground powder. (2) Of the ball types used in the tests, the zirconia ball produced the least amount of debris, followed by the alumina ball and then the steel ball. (3) The liner was generally worn less than the balls. However, care must be taken if the liner and the balls are of dissimilar materials. (4) The wear of the grinding media in this impact-type grinding operation seems to be affected more by the materials' strength than by the ball size (mass or surface area). Nomenclature J : ball filling of mill (ratio of volume of balls plus space between balls to volume of mill) [-] U : powder fraction (ratio of volume of powder particles plus space between particles to volume of space between balls) [-]
v3-fos-license
2021-02-24T14:21:05.772Z
2021-02-24T00:00:00.000
232022339
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2021.630059/pdf", "pdf_hash": "8e6a52792d2726fdfaaa66bc62a61c240fd400ca", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42535", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "c1151c679b48401824e288a01f3604f257034732", "year": 2021 }
pes2o/s2orc
Functional Hyperspectral Imaging by High-Related Vegetation Indices to Track the Wide-Spectrum Trichoderma Biocontrol Activity Against Soil-Borne Diseases of Baby-Leaf Vegetables Research has been increasingly focusing on the selection of novel and effective biological control agents (BCAs) against soil-borne plant pathogens. The large-scale application of BCAs requires fast and robust screening methods for the evaluation of the efficacy of high numbers of candidates. In this context, the digital technologies can be applied not only for early disease detection but also for rapid performance analyses of BCAs. The present study investigates the ability of different Trichoderma spp. to contain the development of main baby-leaf vegetable pathogens and applies functional plant imaging to select the best performing antagonists against multiple pathosystems. Specifically, sixteen different Trichoderma spp. strains were characterized both in vivo and in vitro for their ability to contain R. solani, S. sclerotiorum and S. rolfsii development. All Trichoderma spp. showed, in vitro significant radial growth inhibition of the target phytopathogens. Furthermore, biocontrol trials were performed on wild rocket, green and red baby lettuces infected, respectively, with R. solani, S. sclerotiorum and S. rolfsii. The plant status was monitored by using hyperspectral imaging. Two strains, Tl35 and Ta56, belonging to T. longibrachiatum and T. atroviride species, significantly reduced disease incidence and severity (DI and DSI) in the three pathosystems. Vegetation indices, calculated on the hyperspectral data extracted from the images of plant-Trichoderma-pathogen interaction, proved to be suitable to refer about the plant health status. Four of them (OSAVI, SAVI, TSAVI and TVI) were found informative for all the pathosystems analyzed, resulting closely correlated to DSI according to significant changes in the spectral signatures among health, infected and bio-protected plants. Findings clearly indicate the possibility to promote sustainable disease management of crops by applying digital plant imaging as large-scale screening method of BCAs' effectiveness and precision biological control support. INTRODUCTION Baby leaf vegetables constitute the major ingredient of readyto-eat salads, very appreciated worldwide by consumers looking for healthy diets rich in fibers and low in calories, with organoleptic and nutraceutical traits particularly enhanced in pigmented varieties. Currently, in Italy, which is among the top European producers of these crops, it is estimated that more than 4,500 hectares are devoted, both in tunnels and, marginally, in open field, to grow baby salads for the high convenience food chain (Morra et al., 2017). A rather large group of different leafy vegetable species are included under this appellation, although by far, wild rocket [Diplotaxis tenuifolia (L.) DC.] and baby lettuce (Lactuca sativa L. var. acephala) are the most extensively cultivated. Because of the intensive exploitation of soils, continuous cropping, cultivars susceptibility to pathogens and reduced use of synthetic fungicides, those crops are dramatically prone to several diseases occurring in the humid and temperate microclimate of the sprinkler-irrigated tunnels/fields (Caruso et al., 2018;Gilardi et al., 2018a,b;Gullino et al., 2019). The soil-borne fungi Rhizoctonia solani Kuhn, Sclerotinia sclerotiorum (Lib.) 
de Bary and Sclerotium rolfsii Sacc., belonging to the Phylum Basidiomycota, are parenchymatic, polyphagous, necrotrophic pathogens of different salad crops, causing huge economic losses and symptoms ranging from the simple rotting of the attacked organs to the damping-off. Their non-chemical counteraction is particularly requested under sustainable management systems pursuing the zero residues goal, while it is mandatory according to the organic farming rules (Giménez et al., 2019). To this scope, the integrated disease management people are exploring alternative approaches to synthetic fungicides, including the implementation of effective microbes able to control phytopathogenic attacks, referred as biological control agents (BCAs). Soil microbiota represents a precious reservoir of biocontrol microorganisms to impact plant health, growth and productivity in agricultural applications. Several fungal species belonging to the genus Trichoderma (Ascomycota) are known to suppress soil-borne and foliar plant diseases directly by mechanisms against the host pathogen (competition for space and nutrients, antibiosis, and mycoparasitism) and indirectly by the induction of a resistance responses in the colonized plants (Howell, 2003). Because of their crucial role as antagonists, Trichoderma spp. are among the most effective and commercialized biological control agents, registered as Plant Protection Products to manage a broad-spectrum of plant pathogens (Sharma et al., 2019). A number of Trichoderma spp. antagonistic strains are sourced from several telluric environments carrying disease controlrelated functions, including suppressive composts, to gain increasing efficacy firstly due to the niche-competence shared with the targeted soil-borne pathogens (Wang et al., 2019). The selection of novel and effective BCAs requires fast and robust screening methods suitable to evaluate high numbers of candidates. In this context, digital technologies, such as remote sensing, could play a pivotal role not only for early disease detection but also for the rapid performance analyses of BCAs and in the prediction of the biocontrol efficacy. Hyperspectral imaging is a non-destructive and powerful digital technology to directly identifying biochemical and physiological shifts occurring in plants in response to external stimuli, including pathological prodding (Thomas et al., 2018). It involves the pixel-by-pixel analysis of an image containing spatially distributed the reflectance spectrum captured in the visible (VIS, spectral range 400-700 nm) and near infrared (NIR 700-1,000 nm) regions as hypercube dataset resulting by the interaction of the canopy with the incident light (Liu H. et al., 2020). Several previous hyperspectral studies pointed up broad/narrow extracted band indices, called vegetation indices (VIs) that have been used to associate the spectral information to several crop characteristics (Thenkabail et al., 2000), including plant health (Xue and Su, 2017). For example, the best known one, Normalized Difference Vegetation Index (NDVI) that is predictive of the vegetative growth and the general plant status (Rouse et al., 1973), recently was also proposed to refer about the Vitis vinifera -Botrytis cinerea interaction (Pañitrur-De la Fuente et al., 2020). The sensitivity of hyperspectral VIs about disease grade of the canopy, was also proposed to automatically evaluate the performances of disease control methods as innovative functional application . 
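As a concrete illustration of such an index, NDVI is computed from the red and near-infrared reflectance bands. The short sketch below uses its standard definition (Rouse et al., 1973) with made-up reflectance values; it does not use data from this study.

```r
# Standard NDVI = (NIR - RED) / (NIR + RED), computed per pixel or per plant.
# Reflectance values below are invented for illustration.
red <- c(0.08, 0.10, 0.25)   # reflectance around ~670 nm
nir <- c(0.55, 0.48, 0.30)   # reflectance around ~800 nm

ndvi <- (nir - red) / (nir + red)
round(ndvi, 2)   # healthy canopies typically give higher NDVI than stressed ones
```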
In this view, hyperspectral imaging may additionally help the fine scouting of new effective microbial antagonists under selection by configuring a standard quantitative analytic method to follow biocontrol dynamics that can be usefully implemented in a perspective definition of precision biological control guidelines. The aim of this work was to select new useful antagonistic strains of Trichoderma able to protect wild rocket and baby lettuce from deleterious soil-borne pathogens. R. solani and S. sclerotiorum infections are very diffuse among these cultivations while S. rolfsii is going emerging importance on baby-leaf because of its attitude to grow at high temperature regime, as under greenhouse. Additionally, computing the reflectance data from the canopy of the bio-treated plants, this study can lead to the identification of high-performing vegetative indices (VIs) functional to the large-scale evaluation of the biocontrol effectiveness and, furthermore, to discriminate between healthy and infected plants. Isolation of Trichoderma Strains The sixteen Trichoderma strains characterized here, were isolated from a high suppressive rocket and fennel-derived compost Scotti et al., 2020) and stored in the fungal collection of CREA-Centro di ricerca Orticoltura e Florovivaismo (Pontecagnano Faiano, Italy CREA-OF). Isolates were subjected to monosporic culturing by serial ten-fold dilution. For the strain characterization, macroscopic features (medium pigmentation, colony color, colony edge shape, smell) were evaluated after 7 days of growth on potato dextrose agar (PDA, Condalab, Madrid, Spain) medium at 25 • C. Microscopic parameters (conidium length, width and shape) were also measured under light microscopy at 40× magnification with the optical microscope (Nikon Eclipse 80i, Nikon, Melville, NY, USA) in 0.05% Tween R 20 considering n = 40 conidia. All the isolates were maintained on PDA at 4 • C and sub-cultured weekly. Identification of Trichoderma Strains Isolates were grown in potato dextrose broth (PDB, Condalab, Madrid, Spain) on a rotary shaker at 120 rpm for 96 h at 25 • C. Fresh mycelium was collected after vacuum filtration through No. 4 Whatman filter paper (Whatman Biosystems Ltd., Maidstone, UK), then frozen in liquid nitrogen, ground to a fine powder and immediately processed. Total genomic DNA was extracted from 100 mg of ground mycelium by using the PureLink R Plant Total DNA Purification Kit (Invitrogen TM , ThermoFisher Scientific, Waltham, MA, USA) according to the manufacturer's protocol. PCR amplification of internal transcribed spacers and translation elongation factor 1α (TEF1) was performed in a Biorad C1000 Thermal Cycler (Bio-Rad, Hercules, CA) following PCR program: denaturation at 96 • C for 2 min; 35 cycles of denaturation at 94 • C for 30 s; annealing at 55 • C for 30 s; extension at 68 • C for 75 s; final extension at 68 • C for 10 min. Primers ITS1 (5 ′ -CTTGGTCATTTAGAGGAAGTAA-3 ′ ) and ITS4 (5 ′ -TCCTCCGCTTATTGATATGC-3 ′ ) were used to amplify a fragment (∼0.6 kb) of rDNA including ITS1 and ITS2 and the 5.8S rDNA gene (White et al., 1990;Gardes and Bruns, 1993) while the 5 ′ portions of translation elongation factor 1α (∼0.8kb) coding region and introns were amplified with primers TEF1-F (5 ′ -ATGGGTAAGGARGACAAGAC-3 ′ ) and TEF1-R (5 ′ -GGARGTACCAGTSATCATGTT-3 ′ ), which prime within conserved exons (O'Donnell et al., 1998). Amplicons were separated by gel electrophoresis in 1% w/v agarose supplemented with SYBR Safe DNA Gel Stain (Invitrogen, Paisley, UK). 
Amplicon sizes were determined against a 100 bp DNA ladder (Invitrogen TM , ThermoFisher Scientific, Waltham, MA, USA). PCR products were purified by PureLink TM PCR Purification Kit (Invitrogen TM , ThermoFisher Scientific, Waltham, MA, USA) following the manufacturer's instructions, quantified with a NanoDrop TM system (NanoDrop Technologies Inc., Wilmington, DE, USA) and sent to Sanger sequencing. Phylogenetic Reconstruction Phylogenetic relationships of the 16 Trichoderma strains were investigated based on ITS and TEF1 sequences. DNA sequences were blasted against the NCBI GenBank database using default parameters and then aligned with the more related Trichoderma isolates by the Clustal W algorithm (Thompson et al., 1994) with MEGA7 software (Kumar et al., 2016). Multiple alignments parameters were gap penalty = 10 and gap length penalty = 10. The default parameters (Ktuple = 2, gap penalty = 5, window = 4, and diagonals saved = 4) were used for the pairwise alignment. Final alignment adjustments were made manually in order to remove artificial gaps, as reported by Ospina-Giraldo et al. (1999). The analysis was conducted on the two gene partial sequences separately. Aligned sequences were then concatamerized to a total length of 1,667 nucleotides. The evolutionary history was inferred using the maximum likelihood method. The evolutionary distances were computed In vitro Dual Confrontation Assay The ability of the sixteen Trichoderma strains to contain the development of R. solani, S. sclerotiorum and S. rolfsii in vitro, was evaluated by the dual culture technique. These phytopathogenic fungi were stored in the fungal collection of CREA-OF, maintained on PDA slants. Mycelial plugs of 5-mm diameter, obtained from the periphery of 7-days old cultures of both pathogen and Trichoderma strains were placed simultaneously on the border of the plate (9 cm diameter), about 0.25 mm from the edges at opposite sides. The Petri dishes containing PDA medium inoculated only with the pathogen were used as reference controls. All plates were incubated at 25 • C and the radial growth was recorded 7-days post-inoculation. The growth inhibition percentage was calculated by using the formula: where C = pathogen radial growth in the control and T = pathogen radial growth of the in the dual culture. In vivo Biocontrol Activity Assays The biocontrol activity of Trichoderma strains was assessed in vivo against R. solani on wild rocket, S. sclerotiorum on green baby lettuce and S. rolfsii on red baby lettuce. One L flasks containing 150 g of common millet seeds were saturated with a 0.1 × PDB (w/w) and autoclaved. Flasks were then inoculated with 15 plugs 5 mm diameter obtained from one-week-old plates of each pathogen maintained on PDA, and incubated for 21 days at 25 • C. At the end of incubation, the inoculum was ground and added to sterilized peat soil at the final concentration of 1% (w/w) for R. solani and S. rolfsii, and 2% (w/w) for S. sclerotiorum, respectively, according to the pathogen virulence. In the uninfected pots, non-inoculated common millet prepared as described above, was added. Trichoderma spp. spore suspensions were obtained from one-week-old cultures maintained on PDA at 25 • C. For each isolate, the conidia were harvested by washing the plates with sterilized water using a sterile brush. The suspension was filtered and collected in a 50 mL Falcon R tube (Falcon, USA). 
The spore suspension concentration was measured with a Bürker chamber (Brand, Germany) and adjusted to 1 × 10^7 spores mL^-1. Seeds of wild rocket cv. Tricia (Enza Zaden, Italy), green baby lettuce cv. 166 (Sementi Dom Dotto, Italy) and red baby lettuce cv. Pamela (Maraldi, Italy) were sown in vermiculite-filled 500 mL bowls, germinated in the dark at 25°C and then maintained in a growth chamber at 22°C with a 12-h photoperiod. Irrigation was performed manually every day and a basal NPK-mix liquid fertilization was applied twice a week. After 15 days, plants were transplanted into plastic pots (7 cm diameter and 100 mL volume capacity) filled with sterile peat, infected as described above. Each treatment consisted of three pots (replicates) containing 5 plants each for the baby lettuces, and 10 plants per pot for rocket. After that, Trichoderma suspension treatments were applied by soil drenching, reaching a final concentration of 1 × 10^6 spores mL^-1. Untreated infected pots and healthy pots were used as reference controls. Pot distribution was arranged randomly in the growth chamber under the same conditions described above. After 7 days of incubation, each pot was assessed for hyperspectral images, disease incidence (DI%) and disease severity index (DSI). DI was calculated as the percentage of plants with disease symptoms out of the total. Disease severity was assessed using a 0-3 scale adapted from Larkin and Honeycutt (2006): 0: no symptom; 1: foliar discoloration; 2: plant withering and visible lesion(s); 3: severe infection and plant dead. DI% and DSI were calculated according to Yang et al. (2009). The experiment was performed twice. Hyperspectral Imaging Hyperspectral images were acquired using the SPECIM IQ camera (Specim, Spectral Imaging Ltd., Oulu, Finland), working in the range of 400-1,000 nm over a total of 204 wavelengths with a spectral resolution of 4 nm. The camera carries a CMOS sensor with a spatial sampling of 512 pixels and an image resolution of 512 × 512 pixels. The pixel size is 17.58 × 17.58 µm. Reflectance values were calculated automatically by the camera software. The images were captured under natural light conditions (irradiance range 800-1,000 W/m^2). One image per replicate (pot) was acquired, each containing all the conditions (treatments) analyzed. The relative reflectance of the hyperspectral images was simultaneously computed by the camera software. White reference, dark frame and raw data were acquired during the measurements. The equation applied for the computation of the raw reflectance was as follows: where White is the white reference, t1 and t2 are integration times (used for a highly reflective white reference), and Dark represents a target with low reflectance. The elaboration of the hyperspectral images was carried out with the R software. The raster R package (Hijmans et al., 2015) was used to visualize and extract the hypercube dataset, which was successively elaborated into a typical spectral graph. The unsupervised classification of the images was performed with the Cluster R package to remove the background, once the objects "X" had been separated into "K" clusters. The k-means clustering algorithm is a partitional, or non-hierarchical, clustering method (MacQueen, 1967; Anderberg, 1973) that here highlighted two clusters, background and plants (Figure 1). FIGURE 1 | Workflow of data processing in hyperspectral imaging.
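As a rough sketch of the kind of processing pipeline described above, the snippet below separates plant pixels from background with k-means on a hyperspectral cube and then computes two of the vegetation indices discussed later (SAVI and TVI) from the mean plant spectrum. The file name, the wavelength grid, the band positions and the soil-adjustment factor L = 0.5 are illustrative assumptions, not values taken from the study.

```r
library(raster)   # hypercube handled as a multi-layer raster object

# Hypothetical file name for one pot image exported from the camera
cube <- brick("pot_image.dat")
wl   <- seq(400, 1000, length.out = nlayers(cube))   # assumed wavelength grid

# Pixels as a matrix (rows = pixels, columns = wavelengths)
pix <- values(cube)
ok  <- complete.cases(pix)

# Unsupervised k-means with two clusters: background vs. plant canopy
set.seed(1)
km <- kmeans(pix[ok, ], centers = 2)

# Take the cluster with the higher near-infrared reflectance as the canopy
nir_b  <- which.min(abs(wl - 800))
canopy <- pix[ok, ][km$cluster == which.max(km$centers[, nir_b]), ]

# Mean plant spectrum and two of the indices discussed in the text
spec   <- colMeans(canopy)
green  <- spec[which.min(abs(wl - 550))]
red    <- spec[which.min(abs(wl - 670))]
nir750 <- spec[which.min(abs(wl - 750))]
nir    <- spec[nir_b]

savi <- (1 + 0.5) * (nir - red) / (nir + red + 0.5)            # SAVI, L = 0.5
tvi  <- 0.5 * (120 * (nir750 - green) - 200 * (red - green))   # TVI (Broge and Leblanc, 2001)
c(SAVI = savi, TVI = tvi)
```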
Then, the background cluster was deleted from the image, while the plant cluster was submitted to the extraction of the 46 hyperspectral VIs by imaging, averaging the pixel values for each replicate per treatment. Hyperspectral Vegetation Indices and Statistical Analysis Measurements of the pathogen growth inhibitions in vitro, disease incidence and disease severity percentages, were subjected to the statistical analysis by GraphPad Prism Software. Ordinary one-way ANOVA was applied to test the effects of the Trichoderma strains on the assessed parameters. In all cases, the statistical analysis of variance was corrected for multiple comparisons by the Bonferroni hypothesis test, considering a p-value ≤ 0.05. Since experiment effect was not observed, data from the repeated experiments were pooled. The same procedure was applied to evaluate the indices calculated on the hyperspectral dataset. Moreover, in order to select the most informative ones, they were analyzed, in relation to the observed disease severity in each pathosystem, by Multiple Variable analysis, applying the Pearson's correlation coefficient. The high-performing VIs that resulted commons to all the three host-pathogen target systems, were filtered on the base of a stringent statistical grid (p-value ≤ 0.05 and R 2 > 0.5) and highlighted by using Venn diagram (http://bioinformatics. psb.ugent.be/webtools/Venn/). The heatmap visualization and the hierarchical clustering analysis of the selected indices were obtained applying ClustVis online software (https://biit.cs.ut. ee/clustvis). Unit variance scaling was applied to rows and columns and they were clustered using correlation distance and average linkage. Furthermore, the Principal Component Analysis (PCA) of vegetative indices / disease index for each pathosystem was performed with the pca function of the R Factoextra package (Kassambara and Mundt, 2017). Data were log-normalized and disease severity index was converted to "factor" by grouping in classes according to the following 0-4 scale: 0 = 0 ≤ DSI ≤ 0.2; 1 = 0.21≤ DSI ≤ 0.4; 2 = 0.41 ≤ DSI ≤ 0.6; 3 = 0.61 ≤ DSI ≤ 0.8, 4 = 0.81 ≤ DSI≤ 1. Then, lm function (R package) was applied to fit linear models. Colony and Conidium Morphological Characteristics The morphological characterization of the sixteen Trichoderma isolates studied in this work was carried out based on the inoculated medium appearance and pigmentation, color and edge of colonies, culture smell, shape and size of the conidia. After 5-days incubation at 25 • C, the growth and sporulation patterns of the Trichoderma isolates showed significant differences. During the growth, due to the release of secondary metabolites, medium pigmentation varied significantly among the Trichoderma isolates, ranging from colorless to bright yellow and yellow-brownish to amber. Some of them showed a profuse production of conidia with coloration ranging from white to dark green (Figure 2). Furthermore, microscopic observations allowed highlighting differences in terms of conidia size and shape. In fact, the conidia of Ta56, Ta117, Ta105, ThRP, and Tat3C1 isolates, showed spherical shape with length-to-width ratio around 1, while the conidia of all the remaining strains, resulted ellipsoidal with length-to-width ratio > 1. The morphological colony and conidium features are summarized in Table 2. Determination of Trichoderma Species The multi-locus sequence analysis is suggested for a better distribution of Trichoderma spp. in a phylogenetic tree (Samuels et al., 2010). 
Therefore, in the present work, concatemers of the ITS-TEF1 genes were used to contract the phylogenetic tree inferred by neighbor-joining method, as reported by Ospina-Giraldo et al. (1999). rDNA region and partial translated elongation factor locus amplifications, yielded products of ∼600 and 800 bp, respectively, as estimated by agarose gel electrophoresis. Loci were analyzed separately, aligned and manually adjusted. Sequences were then grouped in concatamers and subjected to the phylogenetic analysis. This analysis involved 26 nucleotide sequences with a total of 1,667 positions in the final dataset. Based on the bootstrap values, the 16 Trichoderma strains were arranged into three distinct groups, belonging to T. atroviride, T. longibrachiatum and T. harzianum species (Figure 3) In vitro Dual Challenge Assay The dual culture assay was optimized to compare the inhibition activity of the 16 Trichoderma strains against the three soilborne fungal pathogens. Since no significant differences were observed in the timing of growth among Trichoderma strains, S. sclerotiorum, R. solani, and S. rolfsii, the fungi were coinoculated. As reported in Figure 4, all Trichoderma strains determined around 60% inhibition of S. sclerotiorum and R. solani radial growth. Only slight differences were observed among the different Trichoderma strains in inhibiting those phytopathogenic fungi. Furthermore, all the biocontrol strains, except Ta100 and Th23, reached the pathogen in 4-5 days and overgrew it in 9-10 days. On the other hand, most of the Trichoderma strains showed the ability to inhibit S. rolfsii radial growth up to 70%. Additionally, significant differences were observed among the different Trichoderma strains in containing this pathogen. In fact, a profuse overgrowth was observed for Ta116, ThRP, Ta105, Tat11, ThCB, Ta104C, Ta56, TaIC12, and Ta104S after 9 days, while Tl35 and Th23 resulted less effective in reducing the in vitro fungus development. In vivo Biocontrol Activity The ability of the different Trichoderma strains to protect plants was investigated by in vivo assays with R. solani on wild rocket, S. sclerotiorum on green baby lettuce and S. rolfsii on red baby lettuce. On all cases, disease incidence percentages (Figure 5 left) and disease severity index (Figure 5 right) were assessed. Overall, a significant Trichoderma treatment effect was found (p-value < 0.001), as well as the interaction between factor Trichoderma strain × plant/pathogen system (p-value < 0.001). The application of Ta116, Tl35, Ta56, TaIC12, Tat3C1, and Tl41, on wild rocket significantly reduced the percentage of Rhizoctonia disease incidence detected 120 h post-inoculation, in comparison with the infected control. In fact, only the 60% of Tl35 treated plants showed disease symptoms; for all the other treatments, the disease incidence was around 80%. Interestingly, all Trichoderma strains, except for Tat11 and Th23, contained the severity of the disease: the bio-treated plants displayed mild disease symptoms or were almost healthy. On the other hand, the BCAs reduced Sclerotinia disease incidence on green baby lettuce, excepted for Ta104, Ta104S and ThRP; the number of plants with symptoms was significantly lower than that observed in the infected control and a consistent reduction in the disease severity index was also observed. Trichoderma harzianum Th23 resulted the best one in containing Sclerotinia disease development. 
The strains Ta100, Ta104, Ta117, Tl35, Ta56, Tat11, ThCB, ThRP, and Th23, were able to control S. rolfsii on red baby lettuce determining a meaningful reduction of disease incidence. Furthermore, all Trichoderma treated plants, excepted for Ta104S, TaIC12, and Tat11 interactions, showed a significant lower disease severity index than the infected control. Hyperspectral Imaging Plants infected with the three soil-borne pathogens and exposed to the biocontrol treatment with Trichoderma, were subjected to hyperspectral imaging analysis in order to capture the spectral changes that occurred during the plant-pathogenantagonist relation. As reported in Figure 5A, out of the 46 analyzed hyperspectral indices, 13 significantly cross-correlated with Rhizoctonia disease on rocket, 26 with Sclerotinia drop on green baby lettuce and 7 with Sclerotium rotting on red baby lettuce. Interestingly, four indices, OSAVI, SAVI, TSAVI and TVI, resulted shared by the three assayed pathosystems. The Multiple Variable analysis showed the score of their negative cross-correlation with the disease severity index for each plant/pathogen systems, with samples distributing between the two extremes, full healthy and full diseased (Figure 6B), coherently with changes visualized in the spectral signatures among non-inoculated, infected and infected but bio-treated plants ( Figure 7A). Hence, heatmap visualization of the VIs/DSI hierarchical clustering quickly identified the most effective biocontrol agents in relation to the specific pathosystem ( Figure 7B). PCA analysis of VIs detected in the three different pathosystems showed their consistent ability to discriminate among different disease levels (Figure 8). Furthermore, OSAVI, SAVI and TSAVI resulted quite redundant, probably due to they differ only in the algorithm used for combining spectral data, while the distinct contribution in explaining the variance along PC1 (93.9%) was associated to TVI (Figure 8). In order to fit a linear model, DSI data and selected indices were analyzed for multiple regression (Table 3). Based on the PCA results, SAVI indices (OSAVI, SAVI and TSAVI) computed together and TVI were submitted to linear regression analysis. OSAVI index was excluded since OSAVI:TVI interaction was found not significant in the resulting linear model. Results showed that F-statistic was highly significant (< 3.5e-10) meaning that at least, one of the predictor is significantly related to the outcome variable. All the coefficients, including the interaction term coefficients, were statistically significant, suggesting that there is an interaction between the two predictor variables TSAVI + SAVI and TVI. On the other hand, these last are able to provide information about the biological observations although R-squared value was low. Thereby, statistical outputs corroborated the visualization by VIs images of the effects of Trichoderma strains on the disease symptom expressions over the cultivars. Differences between healthy and diseased controls resulted, actually, perceptible on OSAVI, SAVI, TSAVI and TVI images, and the BCA treated plants displayed intermediate collocations (Figures 9-11). However, the correlational analysis identified disease-specific indices as reported in Figure 6A. Indeed, MCARI and SRPI resulted effective to track the R. solani/wild rocket interaction, other 15 indices (ARI, CAR, LRDSI, msr705, NDVI, PRI, PSSRc, R705, RDVI, RGRcn, RVI, RSVI, SIPI, TCARI, VARI-Green) were found significantly correlated to the S. 
sclerotiorum infection degree of green baby lettuce, while LIC3, VOG2, VOG3 were found suitable for following the S. rolfsii/red baby lettuce interaction. A summarization of Pearson's analysis involving all the VIs, is reported in Table 4. DISCUSSION Trichoderma spp. include a plethora of isolates with biocontrol activity against phytopathogens (Kumar et al., 2017) that can also give additional benefits to the plants, such as increase the nutrient uptake, enhance the photosynthetic activity and stimulate different metabolic processes that positively affect yields and quality of the treated crops (El Enshasy et al., 2020). Recently, it has been shown that soil treatment with Trichoderma gave biostimulant effects on wild rocket and baby lettuce, ranging from the increase of leaf yield, fresh and dry weight, to the improvement of leaf nutritional status, resulting in a premium quality of the fresh-cuttings with higher lipophilic antioxidant activity and total ascorbic acid content (Fiorentino et al., 2018;Caruso et al., 2020;Di Mola et al., 2020;Rouphael et al., 2020). However, expressing the full biocontrol potential in these contexts, Trichoderma-based formulates can successfully integrate disease management protocols for producing baby leaf vegetables with high added value in terms of sustainability, decreasing the dependence on synthetic fungicides. This study recruited sixteen new Trichoderma antagonistic strains assigned, on the base of the variations of the rRNA ITS and translation elongation factor 1-α gene partial sequences, to three different species, T. longibrachiatum, T. atroviride, and T. harzianum. Several stains of these species are well-known as BCAs of many pathogens affecting vegetables, including our targets (Bastakoti et al., 2017): they are proposed alone, being part of complex microbial consortia or activating suppressive organic amendments (Kareem et al., 2016;Wang et al., 2019;Chilosi et al., 2020). The macroscopic and microscopic examination of the selected strains showed interesting characters such as the profuse sporulation, the ability to secrete secondary metabolites in the medium changing its pigmentation and the capability of some to produce a volatile compound with the typical coconut-like aroma. This last specific character was detected in the strains Ta56, TaIC12, and Tat11 and could be putatively associated to the production of 6-pentyl-α-pyrone, a bioactive unsaturated δ-lactone with interesting properties involved in the microbial antagonism (Bonnarme et al., 1997;Serrano-Carreón et al., 2004;Longo and Sanromán, 2006;Ramos et al., 2008;Penha et al., 2012;Pascale et al., 2017). However, to clarify these aspects, further metabolomic investigations are necessary. All the new identified antagonists significantly inhibited the mycelial growth of the pathogens in the dual culture assay. The main mechanism of control resulted to be the mycoparasitism, highlighted by the overgrowth of the BCAs onto the pathogen mycelia, observed already after 9-10 days of incubation. Mycoparasitism is one of the major weapons displayed by Trichoderma spp. against phytopathogens (Sachdev and Singh, 2020) allowing them to parasitize and kill the fungal host after the direct contact. During this intimate interaction, the beneficial fungus produces antibiotics and a huge array of cell degrading enzyme (protease, as β-glucanase, chitinase) necessary for the parasitism process (Steyaert et al., 2003). 
In vivo biocontrol assays classified the Trichoderma BCAcandidates for the substantial ability to protect wild rocket, red and green baby lettuces from their most feared telluric fungal pathogens. Contrary to what was observed in in vitro assays, the in planta trials showed meaningful differences in biocontrol intensity among the strains in relation to the target pathosystem. Specifically, Tl35, Ta56, Ta116, TaIC12, and Tat3C1 resulted the most effective strains in controlling Rhizoctonia dampingoff of wild rocket, determining a significant reduction in terms of DSI (roughly 60%) compared with infected control under high disease pressure (100%). Rhizoctonia crown and root rot FIGURE 7 | (A) Spectral signatures of wild rocket (a). green baby lettuce (b) and red baby lettuce (c) assayed with R. solani. S. sclerotiorum and R. rolfsii. respectively. and treated with the Trichoderma strains (blue). compared to the non-inoculated (red) and infected (black) controls. (B) Hierarchical clustering of OSAVI. SAVI. TSAVI. TVI in relation to the observed disease severity index (DSI) in the systems R. solani-wild rocket (a). S. sclerotiorum-green baby-lettuce (b). and S. rolfsii-red baby-lettuce (c). Rows were centered and unit variance scaling was applied. Columns were clustered using correlation distance and average linkage. Analysis was performed by ClustVis software. is a problematic disease of wild rocket for the ready-to-eat produces in the Italian cropping areas (Nicoletti et al., 2004). For their biological control, only the hyperparasite Clonostachys rosea has been noticed in literature (Nicoletti et al., 2007). Genetic resistance to this pathogen is not available yet (Pane et al., 2017), while wild rocket waste meals are proposed as amendments to promote the soil general suppressiveness providing antifungal molecules contained into the grounded plant tissues (Schlatter et al., 2017). Our results suggest that Trichoderma spp. can reduce the incidence and the severity of the disease and earn a chance as effective antagonist in Rhizoctonia damping-off management. Interestingly, T. atroviride strain TA56 and T. longibrachiatum strain TA35 resulted to be multi-suppressive, namely highly effective in containing all the three diseases of the baby-leaf vegetables, demonstrating positive performances both in vitro and in vivo. The ability of these two BCAs to control the main disease of baby leaf make them promising candidates for a wide-spectrum application in preventive and/curative biological control practices in fresh-cut salad cropping, especially under soil sickness conditions. relationships with plant health, as variably modulated by the biological control treatments contemporary in all the three target systems. The indices OSAVI, SAVI, TSAVI and TVI were able to highlight the most effective BCAs in controlling multiple soilborne diseases of baby leaf vegetables. This result confirmed that selected indices can be applied as highly-informative tool for both BCA selections and disease monitoring in the presence of soilborne pathogens generally associated to root and collar rot and, in advancing, leaf withering and plant death. Therefore, the disease progression significantly affects the vegetation vitality and also the chlorophyll content. OSAVI, SAVI and TSAVI are soil adjusted vegetation indices, also defined as soil-line indices descriptive for sparse vegetation covering (Ren et al., 2018) as baby-leaf crops are. 
They have been used for grading wheat powdery mildew disease severity trough satellite-acquired scenes (Gröll et al., 2007;Feng et al., 2016;Ma et al., 2018). Recently, SAVI has been applied for the field estimation of the severity of cotton root rot caused by the fungus Phymatotrichopsis omnivora (Zhao et al., 2020), while OSAVI has been used to sense Fusarium Head Blight on wheat by computing Sentinel-2 multispectral data (Liu L. et al., 2020). Similarly to our findings, OSAVI has been found highly correlated with Rhizoctonia crown and root rot severity on sugar beet assessed with a non-imaging remote sensing approach (Reynolds et al., 2012). On the other hand, TVI is the triangular vegetation index associated to leaf chlorophyll content (Cui et al., 2019) and plant vitality (Broge and Leblanc, 2001). It has been calibrated for the leaf area index estimation (Xing et al., 2020) and is also known for describing spectral variations due to wheat leaf rust symptoms caused by Puccinia triticina (Ashourloo et al., 2014a,b). To the best of our knowledge, this is the first study that retrieved hyperspectral VIs with high discriminatory capability for the biocontrol ability of Trichoderma against developing soilborne diseases of leafy vegetables. Previously, Silva et al. (2018) have tried to apply a laser speckle based on a light signal at 632 nm to assess the efficacy of maize seed treatments with T. harzianum on the germination, vigor and sanitation of seedlings. Instead, Pishchik et al. (2016) have calculated VIs on VIS, RED (red-edge), NIR and MID (middle infrared) spectral information acquired with a field pulse photometer, to tentatively track the synergistic effect of the plant growth promoting bacteria, Bacillus subtilis and a humic fertilizer on lettuce plants quality and vitality. The four indices of this study, each applying its own peculiar algorithm, work in the spectral range 550-800 nm, just on the border between VIS and NIR regions, suggesting that this part of the spectrum could be sensitive to the reflectance shifts occurring at canopy level during the plant-pathogen-antagonist interaction. Marín-Ortiz et al. (2020) have found in the VIS/NIR range 448-995 nm the distinctive spectral response of tomato to the Fusarium oxysporum infection that has been also associated to changes in the leaf concentration of chlorophyll and carotene. Similarly, the soil-borne pathogens studied in our systems could bring to the decline of chlorophyll and other pigments, as-well-as growth reduction conditioning the reflectance reaction. As a matter of fact, decreases in chlorophyll content has been noticed in Rhizoctonia diseased carrot (Ahmad et al., 2019), in cucumber affected both by R. solani and S. rolfsii (Kotasthane et al., 2015) and in soybean attacked by S. sclerotiorum (Vitorino et al., 2020). On the contrary, Trichoderma can enhance the phothosynthetic performances of the colonized plants by increasing their chlorophyll content and, at the same time, determining an improvement of their general physiological status (Singh et al., 2013;Doley et al., 2014;Kotasthane et al., 2015) exerting an antagonistic action with respect to the pathogen in promoting the vitality of the plant. Therefore, according to these inferences the plant functional imaging as applied here may return valuable information about how the biocontrol agents is working. 
Findings of the present study indicate the potential to boost the sustainability of disease management protocols through high-performing hyperspectral VIs that can drive biocontrol practices, such as microbial augmentation, based on the early recognition of a worsening plant state and of a possible reduction in the effectiveness of the adopted plant protection strategy. Functional plant imaging can be used to track plant progression under the effect of biocontrol using a restricted number of bands. Digital imaging has been proposed for the early diagnosis of plant diseases (Lowe et al., 2017), for the real-time field estimation of phytopathological conditions (Golhani et al., 2018) and to provide useful information for pest and disease control. Here, it helped to scout effective biological control agents against baby-leaf salad pathogens, demonstrating the potential to sense the action of biocontrol on developing soil-borne diseases. The association between BCAs and hyperspectral imaging, which together reduce the chemical pressure of fungicides on the environment and avoid crop losses from uncontrolled pathogenic attacks, opens the way to the concept of precision biological control. The availability of digital tools for the automated large-scale evaluation of biocontrol evolution will be useful in field/greenhouse systems to rapidly assess the success of biological measures against phytopathogens, as Susič et al. (2020) have recently pointed out for pest control. CONCLUSIONS The highly effective Trichoderma strains identified in this study are able to protect baby-leaf vegetables from a wide spectrum of soil-borne pathogens, such as R. solani, S. sclerotiorum, and S. rolfsii. Strains belonging to T. longibrachiatum, T. atroviride, and T. harzianum are suitable for large-scale preventive applications in greenhouses that host wild rocket and baby lettuces in succession and/or in rotation, and have the prospect of working in consortia since they were sourced from a single niche. The scenario of applying digital imaging as an innovative scheme to boost biological control, from the high-throughput screening of the microorganisms to their field application, is highlighted. OSAVI, SAVI, TSAVI, and TVI, which were found to be highly correlated with disease severity, are promising and informative hyperspectral VIs to track biological control activity against multiple soil-borne pathogens of baby-leaf vegetables. In future studies, digital imaging can be integrated with metabolomic and transcriptomic analyses, which, supported by machine learning, can contribute to further improving the accuracy of imaging-based forecasting models applied to plant protection practices. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. AUTHOR CONTRIBUTIONS GM and CP conceived and designed the study and wrote the initial manuscript. GM, NN, MC, and CP conducted the experiments. GM and NN analyzed the data. CP assisted in data analysis and interpretation of results. MZ and TC reviewed and edited the final version of the manuscript. All authors have read and agreed to the published version of the manuscript.
v3-fos-license
2018-12-27T02:58:07.578Z
2016-01-01T00:00:00.000
63734426
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://journal.r-project.org/archive/2016/RJ-2016-053/RJ-2016-053.pdf", "pdf_hash": "b7e1546c00ea1f5d038f998363f81ceb3dd195bf", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42536", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "b7e1546c00ea1f5d038f998363f81ceb3dd195bf", "year": 2016 }
pes2o/s2orc
nmfgpu4R: GPU-Accelerated Computation of the Non-Negative Matrix Factorization (NMF) Using CUDA Capable Hardware In this work, a novel package called nmfgpu4R is presented, which offers the computation of Non-negative Matrix Factorization (NMF) on Compute Unified Device Architecture (CUDA) platforms within the R environment. Benchmarks show a remarkable speed-up in terms of time per iteration by utilizing the parallelization capabilities of modern graphics cards. Therefore the application of NMF gets more attractive for real-world sized problems because the time to compute a factorization is reduced by an order of magnitude. Introduction Dimension reduction techniques are commonly used in machine learning and data mining tasks.For instance in text mining a corpora with thousands of words in the vocabulary could be too complex to be learned by Support Vector Machines (SVM) directly.Therefore the most important structure within the data must be extracted prior to the learning process.In the context of text mining new data axes at best represent topics in the corpora, which are used to approximate the original documents.Furthermore by reducing the feature space of the data it is less likely to be influenced by the Curse of Dimensionality (CoD) (Bellman, 1961). There are several methods to reduce the dimension of a data matrix, for example Principal Component Analysis (PCA) (Pearson, 1901) and Latent Dirichlet Allocation (LDA) (Blei et al., 2003).Another powerful technique namely Non-negative Matrix Factorization (NMF) (Lee and Seung, 1999) will be discussed in the first section of this work.Currently available NMF implementations require a prohibitively long computation time, which make the usage for real-world applications impractical.Therefore we present an implementation using the Compute Unified Device Architecture (CUDA) platform with a binding to the R environment.Furthermore the package is developed platform independent and is compatible with all three major platforms for R: Windows, Linux and Mac OS X. Overview of non-negative matrix factorization Let X ∈ R n×m + be a matrix with n attributes and m observations in the dataset, then the data matrix X is approximated by the product of two new matrices W and H (Lee and Seung, 2001): Each column of the matrix W ∈ R n×r + represents a single basis vector, whereas each column of the matrix H ∈ R r×m + represents an encoding vector.Therefore a column of the data matrix can be approximated by the linear combination of all basis vectors with one encoding vector (Lee and Seung, 2001).The importance of each basis vector can be seen by analysing the row sums of matrix H. Row sums with a low value identify basis vectors with very little influence on the dataset and vice versa (Skillicorn, 2007).It is also important to note that the data matrix as well as both matrices W and H contain only non-negative values. Besides the general convention in the context of data mining, NMF expects columns to represent observations of the dataset instead of attributes (Skillicorn, 2007), as visualized in Figure 1.For that reason it is very important to read the data matrix definition in the literature carefully. Contrary to PCA or Singular Value Decomposition (SVD), the basis vectors are not linearly independent and thus the solution is not unique.However the reconstruction of the data matrix is purely additive and yields a more natural parts-based decomposition (Lee and Seung, 1999). 
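To make the dimensions concrete, the toy R sketch below factorizes a small non-negative matrix with the CRAN NMF package (mentioned later in the text) and reconstructs one observation (column) as a non-negative combination of the basis vectors. The data are random and purely illustrative; this is not the nmfgpu4R interface.

```r
library(NMF)   # CRAN package by Gaujoux and Seoighe

set.seed(42)
X <- matrix(runif(50 * 20), nrow = 50, ncol = 20)   # n = 50 attributes, m = 20 observations

r   <- 3                  # number of basis vectors, r much smaller than m
res <- nmf(X, rank = r)

W <- basis(res)   # 50 x 3  : basis vectors in the columns
H <- coef(res)    # 3  x 20 : encoding vectors in the columns

# One observation (column j) is approximated by a non-negative linear
# combination of the basis vectors, weighted by its encoding vector
j <- 1
approx_col <- W %*% H[, j]
max(abs(X[, j] - approx_col))   # reconstruction error for that column
```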
As the factorization should represent a compressed form of the original data matrix, one approach is to choose r depending on the number of rows and columns of the data matrix (Lee and Seung, 2001):

\[ (n + m)\, r < n m \quad \Longleftrightarrow \quad r < \frac{nm}{n+m} \]

In general, one should choose r ≪ m (Shahnaz et al., 2006). However, choosing the right parameter depends on the dataset and the intended usage of the factorization.

Pseudo-code

1. Choose the number of basis vectors r.
2. Initialize the matrices W^(0) and H^(0) with non-negative values and set k = 0.
3. While k < k_max and not converged:
   (a) Fix matrix W^(k) and compute matrix H^(k+1)
   (b) Fix matrix H^(k+1) and compute matrix W^(k+1)
   (c) Evaluate the error function to check for convergence

Initialization of factor matrices

Using a good initialization of the matrices can decrease the required number of iterations and further improve the factorization's quality. Depending on the chosen algorithm, either only matrix W or both matrices need to be initialized.

Several different approaches have been presented to execute step 2 of the pseudo-code, the simplest one by Lee and Seung (1999, 2001), namely initializing both matrices with random values. A more complex initialization uses the SVD of the data matrix (Boutsidis and Gallopoulos, 2008), a very expensive approach which should only be used if the SVD is already available (Langville et al., 2014). However, this initialization yields a unique factorization because the SVD is also unique.

In general, the convergence theory of NMF is not researched enough. For example, Lee and Seung (2001) had shown that the multiplicative update rules converge to a local minimum. However, Gonzalez and Zhang (2005) disproved that and clearly state that the algorithm is only proven to converge at most to a saddle point. In fact, most of the newer algorithms are only guaranteed to converge to a local minimum. This is mainly because NMF is a non-convex optimization problem (Lee and Seung, 2001): in each computation step only one of the two matrices gets updated, independently from the other one. Hence finding a global minimum is unlikely, but multiple local minima do exist. If the execution time of an algorithm is short enough, a Monte-Carlo like approach can be chosen (Berry et al., 2007). That implies executing the algorithm multiple times using different initializations each time and picking the factorization with the best quality.

Error function

In the literature, different error or loss functions are proposed. The most common are the Kullback-Leibler divergence (Lee and Seung, 1999) and the Frobenius norm (Paatero and Tapper, 1994; Lee and Seung, 2001). Since only the Frobenius norm is used in this work, the Kullback-Leibler divergence won't be discussed.

In an abstract sense, the Frobenius norm of a matrix A ∈ R_+^{n×m} is equal to the Euclidean norm of a vector a ∈ R_+^{n·m}. To be more precise, the Frobenius norm is the square root of the sum of all squared matrix elements (Reinhardt et al., 2013):

\[ \|A\|_F = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{m} a_{ij}^2} \]

Besides this general definition there exist alternative representations, among others the representation using the trace of a matrix (Reinhardt et al., 2013):

\[ \|A\|_F = \sqrt{\operatorname{tr}\!\left(A^T A\right)} \]

For optimized computation, the widely used minimization problem is rearranged using this equivalence:

\[ \min_{W,H} \|X - WH\|_F^2 = \min_{W,H} \; \operatorname{tr}\!\left(X^T X\right) - 2\,\operatorname{tr}\!\left(H^T W^T X\right) + \operatorname{tr}\!\left(H^T W^T W H\right) \]

At first sight this form of the error function seems more expensive to compute, but most terms get computed during the algorithm execution anyway (Berry et al., 2007; Langville et al., 2014). Furthermore, the trace of X^T X is constant and can be precomputed.

The following algorithms minimize the Frobenius norm, but can also easily be derived for other error functions.
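As a quick sanity check of the identities above, the following base-R snippet verifies both the trace representation of the Frobenius norm and the rearranged objective on small random matrices. Names and sizes are illustrative only.

# Numerical check (base R) of the Frobenius-norm identities used above.
set.seed(42)
n <- 5; m <- 8; r <- 3
X <- matrix(runif(n * m), n, m)
W <- matrix(runif(n * r), n, r)
H <- matrix(runif(r * m), r, m)

R <- X - W %*% H

# ||R||_F as the square root of the sum of squared elements ...
frob1 <- sqrt(sum(R^2))
# ... equals the trace-based representation sqrt(tr(R^T R)):
frob2 <- sqrt(sum(diag(t(R) %*% R)))

# The rearranged objective: tr(X^T X) - 2 tr(H^T W^T X) + tr(H^T W^T W H)
obj <- sum(diag(t(X) %*% X)) -
  2 * sum(diag(t(H) %*% t(W) %*% X)) +
  sum(diag(t(H) %*% t(W) %*% W %*% H))

all.equal(frob1, frob2)    # TRUE
all.equal(frob1^2, obj)    # TRUE: both equal ||X - WH||_F^2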
Updating with multiplicative update rules

Multiplicative update rules were first described by Lee and Seung (1999, 2001) and are the fastest algorithms in terms of computational cost per iteration. In fact, this type of algorithm is a special case of the gradient-descent algorithm with a specific step size (Lee and Seung, 2001). Both update rules for the matrices W and H are applied in an alternating fashion to solve steps 3a) and 3b) of the NMF pseudo-code:

\[ H^{(k+1)} = H^{(k)} \otimes \left( W^{(k)T} X \right) \oslash \left( W^{(k)T} W^{(k)} H^{(k)} \right) \]

\[ W^{(k+1)} = W^{(k)} \otimes \left( X H^{(k+1)T} \right) \oslash \left( W^{(k)} H^{(k+1)} H^{(k+1)T} \right) \]

where ⊗ denotes the element-wise matrix multiplication and ⊘ the element-wise matrix division. It is advised to add an epsilon to the denominator, e.g. ≈ 10^-9 for double precision floating point values, to avoid divisions by zero (Berry et al., 2007). Referring to Table 3 in the implementation section, multiplicative update rules are used in mu and nsNMF for both matrices, and in gdcls only for matrix W.

Updating with alternating least squares

Alternating Least Squares (ALS) type algorithms are another approach to solve steps 3a) and 3b) of the NMF pseudo-code. The central idea is that for one given matrix the other one can be computed using a least-squares projection (Paatero and Tapper, 1994):

\[ W^{(k)T} W^{(k)} H^{(k+1)} = W^{(k)T} X \qquad (8) \]

\[ H^{(k+1)} H^{(k+1)T} W^{(k+1)T} = H^{(k+1)} X^T \qquad (9) \]

In the first step, Equation 8 gets solved for H^(k+1), whereby the computation of matrix W becomes possible. Equation 9 gets solved for W^(k+1)T, followed by transposing the solution to acquire the matrix W^(k+1). Since solving a linear equation system possibly yields negative values, the non-negativity constraint for both matrices W and H must be ensured after each solving step. One possible solution for this problem is to set all negative values to zero (Langville et al., 2014).

Langville et al. (2014) describe ALS extensions like Alternating Constraint Least Squares (ACLS) and Alternating Hoyer Constraint Least Squares (AHCLS), which use additional parameters to provide a more sparse factorization. For this purpose both the diagonal and the non-diagonal values of the covariance matrices W^T W and HH^T get manipulated. For example, the AHCLS uses the additional parameters λ_W, λ_H, α_W and α_H, where I ∈ R^{r×r} denotes the identity matrix, E ∈ R^{r×r} a matrix of ones, and β_W and β_H are derived from α_W and α_H, which should represent the requested percentage of sparsity. As a head start, all four values should be set to 0.5. Once more referring to Table 3 in the implementation section, ALS update rules are used in als, acls, and ahcls for both matrices, and in gdcls only for matrix H.

The NMF algorithm for R using CUDA: nmfgpu4R

There already exist some approaches to compute NMF in R, for example the NMF (Gaujoux and Seoighe, 2010) and NMFN (Liu, 2012) packages on CRAN. However, both packages use the CPU for the computational process, and even with parallelization of multiple runs the usage for real-world datasets is limited. CUDA-based implementations of NMF are already part of GPUMLib (Lopes and Ribeiro, 2010), which itself contains various machine learning algorithms for CUDA platforms. Currently, as of version 0.3.4, there are two algorithms available, one additive and one multiplicative, for both the Frobenius norm and the Kullback-Leibler divergence. Besides that, no complex initialization strategies or algorithms incorporating constraints are available. Furthermore, the computation of NMF is restricted to single precision format, which might not be suitable for every dataset.
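Before describing the package itself, the update rules derived above can be illustrated with a small CPU-side reference in plain R. This sketch only demonstrates the algebra of the multiplicative updates and a single ALS step; it is not the package's CUDA implementation, and all function and variable names are illustrative.

# Minimal CPU-side R sketch of the multiplicative update rules and of one ALS
# step, for illustration only; the nmfgpu4R package implements these on the
# GPU via CUDA.
nmf_mu <- function(X, r, max_iter = 200, eps = 1e-9) {
  n <- nrow(X); m <- ncol(X)
  W <- matrix(runif(n * r), n, r)
  H <- matrix(runif(r * m), r, m)
  for (k in seq_len(max_iter)) {
    # H <- H * (W^T X) / (W^T W H + eps)
    H <- H * (t(W) %*% X) / (t(W) %*% W %*% H + eps)
    # W <- W * (X H^T) / (W H H^T + eps)
    W <- W * (X %*% t(H)) / (W %*% H %*% t(H) + eps)
  }
  list(W = W, H = H, error = sqrt(sum((X - W %*% H)^2)))
}

# One ALS step: solve the normal equations and clip negative entries to zero.
als_step <- function(X, W) {
  H <- solve(t(W) %*% W, t(W) %*% X)    # Equation (8) solved for H
  H[H < 0] <- 0                         # enforce non-negativity
  Wt <- solve(H %*% t(H), H %*% t(X))   # Equation (9) solved for W^T
  W <- t(Wt); W[W < 0] <- 0
  list(W = W, H = H)
}

X <- matrix(runif(50 * 20), 50, 20)
fit <- nmf_mu(X, r = 4)
fit$error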
In this work we propose a new package called nmfgpu4R, which is a binding to a separate library called nmfgpu written in C++11 using CUDA (version ≥ 7.0) for Nvidia GPUs with compute capability ≥ 3.0 (Kepler). When using CUDA, different build tools must be chosen depending on the platform. This limitation is induced by Nvidia's nvcc compiler, which only supports one host compiler per platform (nvcc itself is built on top of one compiler). By splitting the package and the C++ library into two separate modules, it is possible to provide both nmfgpu4R and nmfgpu for all three major platforms: Windows, Linux, and Mac OS X.

Modern Graphics Processing Units (GPU) can also be used as High Performance Computing (HPC) devices using either OpenCL or CUDA. The latter is restricted to Nvidia hardware only, but is more common and can be integrated directly into C/C++ source code. One advantage of GPU over CPU parallelization is that algorithms have to be developed to be scalable and data parallel. Synchronization and data transfer logic have to be handled by the developer, and therefore these algorithms are able to profit more from new and more powerful hardware generations. For more information about the CUDA platform please visit the Nvidia CUDA website.

Supported data matrix formats

Internally the library computes the algorithms using dense matrices, so one option is to pass in a numeric matrix with proper dimensions. Furthermore, the nmfgpu4R package currently supports S4 classes from the Matrix package, developed by Bates and Maechler (2014), and the SparseM package, developed by Koenker and Ng (2015). A complete reference of the supported S4 classes is listed in Table 1. It is important to note that the sparse matrices get converted into dense matrices on the GPU side; at the moment, a computation using sparse algorithms does not take place at any time.

Table 1: Supported S4 classes as input data matrix to nmfgpu4R.

  Storage Format                   Matrix         SparseM
  Dense                            "dgeMatrix"    -
  Coordinate (COO)                 "dgTMatrix"    "matrix.coo"
  Compressed Sparse Column (CSC)   "dgCMatrix"    "matrix.csc"
  Compressed Sparse Row (CSR)      "dgRMatrix"    "matrix.csr"

However, this feature allows large sparse matrices to be converted much faster into GPU memory. For example, this might be quite useful for Bag-of-Words (BoW) in text mining (Salton and Buckley, 1988) or Bag-of-Visual-Words (BoVW) in image classification/retrieval (Cula and Dana, 2001), where the vocabulary is commonly very large but the frequencies are mostly zero.
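The sparse classes listed in Table 1 can be constructed with the Matrix package and handed to the factorization. The sketch below is illustrative only: the toy term-document matrix is invented, the nmf() call follows the usage shown later in this article, and, as noted above, the sparse input is densified on the GPU side.

# Sketch of passing a sparse Matrix-package object to nmf(); running the
# factorization requires the nmfgpu4R package and a CUDA-capable GPU.
library(Matrix)

# A small sparse term-document matrix (terms x documents), stored as dgCMatrix.
tdm <- sparseMatrix(
  i = c(1, 3, 4, 2, 5, 1),
  j = c(1, 1, 2, 3, 3, 4),
  x = c(2, 1, 4, 3, 1, 1),
  dims = c(5, 4)
)
class(tdm)   # "dgCMatrix" (CSC), one of the supported formats in Table 1

# library(nmfgpu4R)
# result <- nmf(tdm, r = 2)   # the sparse input is densified on the GPU side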
Customizing the initialization

Algorithms for Non-negative Matrix Factorization solve a non-convex optimization problem. Thus, choosing a good initialization can reduce the number of iterations and yield better results. Four different initialization strategies are implemented. There are different approaches to choose an initialization for both matrices W and H. It is important to keep in mind that when an ALS type algorithm is chosen, only matrix W has to be initialized; matrix H will be computed in the first iteration from matrix W and the data matrix alone. All initializations supported by nmfgpu4R are listed in Table 2.

The strategy CopyExisting can be used to provide user-defined initializations for both matrices W and H, which get copied directly into GPU memory. When using AllRandomValues, both matrices W and H get initialized with random values, which is the most common but also the simplest strategy (Pauca et al., 2006). Langville et al. (2014) presented a method called MeanColumns to form initial basis vectors from data columns. The idea behind this initialization is that if the data columns are sparse, then the initial basis vectors should be sparse as well. Furthermore, k-means clustering can be used to find initial basis vectors (Gong and Nandi, 2013). If matrix H has to be initialized in the context of k-means based initializations, then there are different approaches; the most complex is the EIn-NMF initialization, which computes the membership degree of each data column (Gong and Nandi, 2013).

Using different algorithms

There are currently six different algorithms implemented in nmfgpu4R, because NMF models can be computed in different ways and, furthermore, can be restricted by constraints. Those algorithms which do have extra constraints can also be adjusted through parameters. In Table 3 all implemented algorithms and their corresponding publications are listed. A few of these algorithms will be evaluated in the benchmark section, using two different image datasets. In general, the right choice of algorithm depends on the data and the noise within the data. For an overview of all required parameters for a specific algorithm, please have a look at the package documentation.

Adjusting convergence tests

Most NMF implementations only use the number of iterations as a convergence test, as this is a very cheap test. However, for a mathematically correct convergence test an error function has to be computed and observed during the algorithm execution. Four different stopping criteria are implemented, which can also be combined. The nmfgpu4R package uses both the convergence test by observing an error function, as the primary criterion, and an upper limit of iterations, as the secondary criterion.

Setting the threshold value can be done by passing in the parameter threshold. This value is interpreted differently depending on the configured error function. Currently the Frobenius norm and the Root Mean Squared Deviation (RMSD) are supported. One advantage of the RMSD error function is that it is normalized by the number of data matrix elements and is therefore independent of the data matrix dimension. By passing in the parameter maxiter, the maximum number of iterations can be overwritten, which is by default set to 2000. For example, execute the algorithm until the delta error is less than 0.1 regarding the RMSD error function, but for at most 500 iterations:

result <- nmf(data, r, threshold=0.1, thresholdType="rmsd", maxiter=500)

Depending on the dataset, the ALS type algorithms are sometimes not stable and therefore not monotonically decreasing. In such a case the convergence test using the threshold value will not work properly.

Encoding matrix for new unseen data

A simple but effective method to calculate an encoding matrix for unseen data was described by Lopes and Ribeiro (2011), which allows NMF to be used within learning algorithms. Using this method, the training data gets factorized with a normal NMF step. However, the factorization step for the testing data reuses the matrix W and only updates the matrix H. Thus the resulting matrix H is an encoding of the learned basis vectors from the training data. A complete scheme of the process is visualized in Figure 2.
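A minimal sketch of the train/predict workflow discussed in the next paragraphs might look as follows. It assumes the generic nmf()/predict() interface described there; the transposes reflect the columns-as-observations convention, and the shape and column names of the returned encodings (r1, r2) are assumptions that should be checked against the package documentation.

# Sketch of learning basis vectors on a training split of iris and predicting
# encodings for the held-out split; the nmf()/predict() calls require the
# nmfgpu4R package and a CUDA-capable GPU, so they are shown commented out.
set.seed(1)
idx <- sample(nrow(iris), 100)            # training indices
data.train <- as.matrix(iris[idx, 1:4])   # 4 numeric attributes
data.test  <- as.matrix(iris[-idx, 1:4])

# library(nmfgpu4R)
# model <- nmf(t(data.train), r = 2)                  # learn 2 basis vectors
# encoding.train <- t(predict(model))                 # encodings of training data
# encoding.test  <- t(predict(model, t(data.test)))   # encodings of unseen data
# colnames(encoding.train) <- colnames(encoding.test) <- c("r1", "r2")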
As a result, structures between both training and test data are preserved, but the feature dimension in matrix H can be reduced to a fraction of the original dimension. Hence, learning, for example, a Support Vector Machine (SVM) can be sped up and, furthermore, prediction accuracy can be improved.

In the example sketched above, the nmf method is used to train the basis vectors for the training dataset. After that, the generic predict method can be used either to retrieve the encoding matrix H of the training data or to generate an encoding matrix for a new data matrix. The objective here is to reduce the 4 dimensions of the iris dataset (Fischer, 1936) to 2 dimensions. Using the iris dataset is just an example, and it should be replaced with a much larger dataset to fully utilize the GPU. Furthermore, an improvement in speed and possibly in accuracy over non-reduced data is more likely to be observed when the dimension is reduced by a larger magnitude. This example learns basis vectors from a training dataset and predicts the encoding matrix for the test dataset. To visualize the encoding matrices of both datasets and their relationships, a simple scatter plot can be made with the following code:

# Plot encoding matrices
library(ggplot2)
data.plot <- data.frame(rbind(encoding.train, encoding.test),
                        class=unlist(list(iris[idx, 5], iris[-idx, 5])),
                        type=c(rep("Train", nrow(data.train)), rep("Test", nrow(data.test))))
ggplot(data.plot, aes(x=r1, y=r2, color=class, shape=type)) + geom_point()

As shown in Figure 3, both datasets share the same structure. Observations from each of the three classes are predicted to belong to the same area as the training observations.

Issues during development

The nmfgpu4R package provides a binding to an independent C++ library, which uses the latest C++ features from the C++11 standard. In order to support multiple platforms, deploying an extra library is a necessary step, since the Nvidia CUDA compiler nvcc only supports the Microsoft Visual C++ compiler on Windows platforms. But R uses its own compilation tool chain and therefore does not allow the Microsoft Visual C++ compiler.

The main problem is that C++ compilers emit object code which is not compatible with the object code of another compiler. R uses g++ from the MinGW tool chain, and therefore the two compiled binaries are not link-compatible: virtual tables are only compatible in some situations and struct returns simply do not work. Furthermore, since the object code is not link-compatible, one must fall back to an extern "C" interface, which can then be loaded using native system calls like GetProcAddress on Windows or dlsym on Linux/Mac OS. Such issues do not come up on Linux or Mac OS, because on these platforms R uses the default configured compiler, which is also supported by the nvcc compiler.

[Figure 3: Scatter plot of the encoding matrices (r1 vs. r2; classes setosa, versicolor, virginica; Train/Test) for the iris dataset (Fischer, 1936), reduced to 2 dimensions by the nmf method (left) and by the prcomp method (right).]
Benchmarks

In this section multiple benchmarks are described, which were performed on the Yale Face Database (Belhumeur et al., 1997) and the Cropped Extended Yale Face Database B (Lee et al., 2005). As a preprocessing step, all images were scaled to a common height of 64 pixels while preserving the aspect ratio. For testing, a server with CentOS 7.2.1511, an Intel Xeon E5-2687W v3 @ 3.10GHz (10 physical cores), 256GB RAM, an Nvidia GeForce GTX Titan X and two Nvidia Tesla K80 was used. R is a custom build of version 3.3.1 using OpenBLAS as the BLAS back-end.

In this benchmark the nmfgpu4R (version 0.2.5.1) package is compared to the CRAN packages NMF (version 0.20.6) and NMFN (version 2.0), which both provide CPU implementations of common NMF algorithms. The NMF package provides optimized C++ algorithms as well as pure R implementations. According to the package documentation, parallelization is only performed using clusters for parallelizing multiple runs of the same algorithm with different initializations. In order to fully utilize the CPU cores, pure R algorithms were benchmarked using an OpenBLAS back-end with OPENBLAS_NUM_THREADS=10. Algorithms from the NMFN package were modified to accept preinitialized matrices, to be able to compare the algorithms with identical starting points. Both the CPU and GPU algorithms were executed 10 times each.

As already stated in the previous section, Alternating Least Squares algorithms seem to perform poorly on very dense datasets, leading to a non-stable factorization or even no solution at all. The execution times of the ALS algorithms in nmfgpu4R are the highest of all GPU algorithms, but they are still very low compared to the ALS implementation in NMFN, as shown by Figure 4 (top). Furthermore, the optimized C++ algorithms in the NMF package are much slower when computed in sequential mode compared to the R implementations, which are accelerated by the multi-threaded OpenBLAS back-end.

Overall, the multiplicative algorithm is the fastest algorithm on both GPU and CPU. Depending on the dataset it might be useful to compute the factorization in single precision format, because modern GPUs still have more single precision than double precision floating point units. As shown by Figure 4, GPUs of Nvidia's GeForce series are optimized for single precision calculations, which is sufficient for the end-user gaming experience. However, double precision computation is very limited on those cards, whereas the Tesla series also provides enough double precision units for fast calculations. As Table 5 indicates, there is no noticeable difference in terms of factorization quality, but a large difference in execution time. Small variations between error functions can be caused by computational ordering and, on the GPU side, by the dispatching of thread blocks.
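For readers who want to reproduce a comparison of this kind on their own hardware, a rough timing harness might look as follows. The NMF package call uses its documented CPU interface; the commented nmfgpu4R call follows the usage shown earlier and requires a CUDA-capable GPU. The toy matrix size, rank, and iteration counts are assumptions rather than the article's benchmark setup, so treat the resulting numbers as indicative only.

# Rough sketch of a CPU vs. GPU timing comparison on synthetic data.
library(NMF)            # CPU reference implementation from CRAN
# library(nmfgpu4R)     # GPU implementation (requires CUDA-capable GPU)

set.seed(123)
X <- matrix(runif(500 * 200), 500, 200)   # toy stand-in for an image matrix
r <- 16

cpu_time <- system.time(
  fit_cpu <- NMF::nmf(X, rank = r, method = "lee", nrun = 1)
)

# gpu_time <- system.time(
#   fit_gpu <- nmf(X, r, maxiter = 2000)
# )

cpu_time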
Figure 1: NMF model which approximates the data matrix by a linear combination of basis vectors and an encoding matrix.
Figure 2: (a) Prediction of an encoding matrix for unseen data. The data matrix of the existing NMF model is "extended" by new data, but the basis vectors are fixed. (b) Data flow visualization of the prediction process in the context of an SVM (derived from Lopes and Ribeiro (2011)).
Figure 3: Scatter plot of the encoding matrices for the iris dataset (Fischer, 1936), reduced to 2 dimensions by the nmf method (left) and by the prcomp method (right).
Figure 4: Computation time for one iteration on the Yale Face Database with r = 32 (top) and the Cropped Extended Yale Face Database B with r = 128 (bottom), shown on a logarithmic scale.
Table 1: Supported S4 classes as input data matrix to nmfgpu4R.
Table 2: Supported initialization strategies for initializing matrices W and H.
Table 3: Overview of implemented algorithms in nmfgpu4R.
Table 4: Dimensions of the data matrices which were used to benchmark existing CPU implementations as well as the GPU implementations provided by the nmfgpu4R package.
Table 5: Benchmark results for the Yale Face Database with r = 32 features and the Cropped Extended Yale Face Database with r = 128. Each measurement was taken at iteration 2000 with n = 10 computations.
v3-fos-license
2014-10-01T00:00:00.000Z
2006-12-28T00:00:00.000
9292217
{ "extfieldsofstudy": [ "Geography", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ij-healthgeographics.biomedcentral.com/track/pdf/10.1186/1476-072X-5-60", "pdf_hash": "f5b9830db11c9a74f9eea71e2534794028a035e1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42537", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "859e2f27f15a90626fe15c6dd2d0400f50b1298f", "year": 2006 }
pes2o/s2orc
Developing global climate anomalies suggest potential disease risks for 2006 – 2007 Background El Niño/Southern Oscillation (ENSO) related climate anomalies have been shown to have an impact on infectious disease outbreaks. The Climate Prediction Center of the National Oceanic and Atmospheric Administration (NOAA/CPC) has recently issued an unscheduled El Niño advisory, indicating that warmer than normal sea surface temperatures across the equatorial eastern Pacific may have pronounced impacts on global tropical precipitation patterns extending into the northern hemisphere particularly over North America. Building evidence of the links between ENSO driven climate anomalies and infectious diseases, particularly those transmitted by insects, can allow us to provide improved long range forecasts of an epidemic or epizootic. We describe developing climate anomalies that suggest potential disease risks using satellite generated data. Results Sea surface temperatures (SSTs) in the equatorial east Pacific ocean have anomalously increased significantly during July – October 2006 indicating the typical development of El Niño conditions. The persistence of these conditions will lead to extremes in global-scale climate anomalies as has been observed during similar conditions in the past. Positive Outgoing Longwave Radiation (OLR) anomalies, indicative of severe drought conditions, have been observed across all of Indonesia, Malaysia and most of the Philippines, which are usually the first areas to experience ENSO-related impacts. This dryness can be expected to continue, on average, for the remainder of 2006 continuing into the early part of 2007. During the period November 2006 – January 2007 climate forecasts indicate that there is a high probability for above normal rainfall in the central and eastern equatorial Pacific Islands, the Korean Peninsula, the U.S. Gulf Coast and Florida, northern South America and equatorial east Africa. Taking into consideration current observations and climate forecast information, indications are that the following regions are at increased risk for disease outbreaks: Indonesia, Malaysia, Thailand and most of the southeast Asia Islands for increased dengue fever transmission and increased respiratory illness; Coastal Peru, Ecuador, Venezuela, and Colombia for increased risk of malaria; Bangladesh and coastal India for elevated risk of cholera; East Africa for increased risk of a Rift Valley fever outbreak and elevated malaria; southwest USA for increased risk for hantavirus pulmonary syndrome and plague; southern California for increased West Nile virus transmission; and northeast Brazil for increased dengue fever and respiratory illness. Conclusion The current development of El Niño conditions has significant implications for global public health. Extremes in climate events with above normal rainfall and flooding in some regions and extended drought periods in other regions will occur. Forecasting disease is critical for timely and efficient planning of operational control programs. In this paper we describe developing global climate anomalies that suggest potential disease risks that will give decision makers additional tools to make rational judgments concerning implementation of disease prevention and mitigation strategies. other regions will occur. Forecasting disease is critical for timely and efficient planning of operational control programs. 
In this paper we describe developing global climate anomalies that suggest potential disease risks that will give decision makers additional tools to make rational judgments concerning implementation of disease prevention and mitigation strategies. Background The El Niño/Southern Oscillation (ENSO) is the most well-known phenomenon influencing the global climate variability at interannual time scales. The National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Center (CPC) has recently issued an unscheduled El Niño conditions advisory that indicates that El Niño conditions will peak during the Northern Hemisphere winter, followed by weakening during March -May 2007 [1]. The term El Niño refers to the large-scale ocean-atmosphere climate phenomenon linked to a periodic warming in sea surface temperatures across the central and east-central equatorial Pacific (between approximately the International Date line and 120 degrees west longitude), and thus represents the warm phase of the ENSO, and is sometimes referred to as a Pacific warm episode. The opposite of which is La Niña, a cold phase of ENSO. Given the large size of the Pacific Ocean, changes in the sea surface temperature patterns and gradients across the basin influence atmospheric circulation with pronounced impacts on global tropical precipitation and temperature patterns. Climate variability has a demonstrated impact on infectious diseases [2], and increased disease transmission has been linked to ENSO driven climate anomalies [3][4][5][6][7]. Outbreaks of insect transmitted diseases such as Murray Valley encephalitis, bluetongue, Rift Valley fever (RVF), African Horse sickness, Ross River virus disease [8][9][10][11][12] and malaria [13,14] have been associated with El Niño. Hence, forecasting the risk of ENSO related human and animal disease outbreaks is critical for timely and efficient planning of operational control programs. However, for decision makers to respond effectively the forecast must be accurate and timely [5]. Here we describe developing global climate anomalies that suggest potential elevated disease risks in the hope that decision makers will have additional tools to make rational judgments concerning implementation of a wide-range of disease mitigation strategies. Using the 1997/98 period as a reference template ( Figure 4) and the forecast of likely conditions for the next 3-9 months, there is a high likelihood for drought conditions to prevail over south-east Asia, Mexico, north-east Brazil and Southern Africa, and above normal rainfall and flood conditions to occur over coastal Peru, southern California, the U.S. Gulf Coast and Florida and Eastern Africa. Potential elevated disease outbreaks Some of the above climate extremes are already being experienced in equatorial East Africa, Australia and the Indonesian Peninsula. These extremes in climatic conditions will likely affect vector abundance in different ways elevating the risk of outbreaks of various infectious diseases [3]. Drought conditions can suppress predators of Anopheles malaria vectors [14,16]; however, heavy rains will boost food supplies -a synergy that can for example elevate rodent populations [17] and create appropriate conditions for mosquito breeding and propagation [11]. Previous ENSO events have been strongly associated with disease outbreaks over time and with spatial clusters of mosquito-, water and rodent-borne illnesses. 
Given current observations and forecast information the following regions ( Figure 5) are at increased risk for disease outbreaks. 1. Indonesia, Malaysia, Thailand and most of the Southeast Asia Islands: Increased dengue fever transmission caused by drought conditions which (1) increase water storage around houses leading to elevated Aedes aegypti populations and (2) elevate ambient air temperatures which will reduce the extrinsic incubation period for the virus in vector mosquitoes increasing vector capacity [ [18], Linthicum et al., unpublished observations]; increased respiratory illness due to haze from uncontrolled burning of tropical forests when extreme drought occurs. 2. Coastal Peru, Ecuador, Venezuela, and Colombia: Increased risk of malaria due to elevated Anopheles vector populations which will develop when various types of immature habitats are flooded after heavy rainfall follows a period of drought [14,16]. 3. Bangladesh and coastal India: Elevated risk of cholera due to elevated sea surface temperatures and of incursion of plankton-laden water inland rich in Vibrio cholerae, the bacterium that causes cholera [7]. In addition to elevated SSTs, heavy rains wash nutrients into waterways and may trigger plankton blooms. 4. East Africa (Ethiopia, Kenya, Somalia, and Uganda): Increased risk for RVF and malaria resulting from elevated mosquito vector populations, and cholera caused by flooding due to heavy rainfall in dry land areas [7,11,19,20]. 5. South West USA (New Mexico, Arizona): Increased risk for hantavirus pulmonary syndrome and plague due to elevated rodent populations caused by heavy rainfall [17,21] 6. Southern California: Elevated potential for transmission of arboviruses, such as West Nile virus, caused by heavy rainfall and resulting in elevated Culex species mosquito populations. 7. Northeast Brazil: Drought conditions leading to increased dengue fever and respiratory illness. Discussion Currently weak El Niño conditions exist, but there is a potential for this event to strengthen into a moderate event by winter. Conclusion An El Niño conditions advisory has been issued by the NOAA CPC which indicates that anomalously warm SST conditions exist in the equatorial Pacific in October 2006 and are likely to continue into early 2007 [1]. Global products including SST and OLR, which are useful in illustrating the current situation of global climate anomalies, are being monitored for their implications for public health [22]. Impacts of the current El Niño include above normal precipitation over the eastern Pacific and East Africa regions, and drier than average conditions over Southeast Asia, Malaysia and Indonesia. These conditions will likely persist for the remainder of 2006 and early 2007. The development of El Niño conditions has significant implications for global public health. Extremes in climate events with above normal rainfall and flooding in some regions and extended drought periods in other regions will occur. Forecasted elevated rainfall in coastal Peru, Ecuador, Venezuela and Columbia will increase malaria risk due to elevated Anopheles vector populations. Heavy rainfall in East Africa may elevate mosquito vector populations and lead to RVF, and increased malaria and cholera risk. Elevated rainfall in the south west of the U.S. will increase the risk of the rodent-borne diseases hantavirus pulmonary syndrome and plague. Above normal rainfall in southern California will elevate the risk of West Nile virus. 
Elevated sea surface temperatures near Bangladesh and India will increase the risk of cholera. Drought conditions in Southeast Asia, the islands of Indonesia, and northeast Brazil will increase the risk of dengue fever and respiratory diseases. Forecasting epidemics or epizootics is critical for timely and efficient planning of operational control programs, provided the forecast is accurate and delivered in a timely manner. In this paper we describe developing global climate anomalies that suggest potential disease risks, so that decision makers will have supplemental tools to make rational judgments concerning implementation of a wide range of disease prevention and mitigation strategies.

Methods

Anomalous climatic conditions caused by ENSO are now recognized to be linked with outbreaks of various human and animal diseases in various countries [28]. The eco-climatic conditions associated with disease outbreaks can now be effectively monitored using satellite data, as illustrated here.
v3-fos-license
2023-02-01T14:40:39.169Z
2021-12-12T00:00:00.000
256424966
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40737-021-00253-5.pdf", "pdf_hash": "9c79408b64c940fc37aec631beb5709c3e6a78b5", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42539", "s2fieldsofstudy": [ "Education", "Psychology" ], "sha1": "9c79408b64c940fc37aec631beb5709c3e6a78b5", "year": 2021 }
pes2o/s2orc
The Journey to My Student Identity: A Grounded Theory Study on Supported Education for Young Adults with Mental Health Problems

Support for developing a work identity has been shown to be essential for the recovery process of young adults with mental health problems. Since research shows that the development of a student role during the educational years for these young adults may be interrupted, this time period may be relevant to explore in order to support career development and the critical transition to adulthood for this target group. The aim was to explore young adults' experiences of participating in supported education that is integrated with vocational and mental health services, reflecting the process of developing a student identity while struggling with mental health problems. A grounded theory design was used. The material consists of 17 individual interviews with young adults aged 18–29 years who were receiving supported education. Young adults who study while having mental health problems encountered structural barriers and challenges to engagement in education that created a gap between the students and the regular education system. Access to supported education was reported to decrease this gap and formed a bridge that helped to facilitate educational achievements. The achievements were related to several personal benefits that were important for the experience of meaning and identity development in the future. Supported education can contribute to enabling the development of a student identity for young adults with mental health problems. This involves an engagement process and positive identity formation that may reduce stigma and is therefore important for the personal recovery process and career advancement.

Introduction

Persons experiencing mental health problems are less educated than persons of the same age, and this puts them in a weaker position in relation to the labour market [21,47]. Mental health problems increased 71% among individuals aged 18-25 years between 2008 and 2017 [54]. Being young is a crucial period in which the person is to explore and develop their identity, manage relationships, increase emotional and economic independence, and develop vocational engagement [8]. Experiencing mental health problems at this age influences social, occupational, and other important areas of functioning [2,49]. This may prevent the attainment of important milestones and changes associated with adulthood [9,26,48,49]. Low educational level and interrupted career development increase the risk of long-term adverse outcomes [42]. Of those not involved in education, more than half want to return to school to develop skills and enhance career advancement, as well as to find different forms of opportunities for personal growth [25,33]. Being able to continue education is a fundamental first step in the process of developing natural age-related life roles [12,43]. Creation of an evidence-based mental health service that integrates recovery-oriented models and interventions [15,17] that focus on the service users' personal recovery as well as their clinical needs [11,28] is critical to support long-term employment and mitigate the risk of marginalisation for the group of young adults with mental health problems. Personal recovery is defined herein as the process of developing identity and meaningful life roles despite mental health symptoms and problems [3,35].
Evidence is overwhelming for the benefits of integrated mental health and vocational services that support service users to gain employment in the regular labour market, compared to traditional services [16,41]. The evidence is lacking, however, for the effectiveness of supported education (SEd) that is integrated in the same way. However, our current knowledge base substantiates the relevance of integrating educational support and mental health services for achieving longterm vocational goals and sustainable career development [30, 38,39,43]. Not only educational outcomes can be achieved with such integrated support. Earlier research demonstrates that the development of a work identity becomes possible, and forms an essential part of young adults' identity development and personal recovery process from mental health problems [37]. The research also suggests that integrated mental health and vocational services may support young adults' mental health and is linked to both their personal and clinical recovery [36]. Similarly, SEd may facilitate the development of a student identity and similar effects. Therefore, exploring the student role when participating in SEd is relevant to understand critical passages of identity development during transition into adulthood and future career development. This grounded theory study aims to explore young adult experiences of participating in SEd, a person-centred intervention that intends to integrate vocational and mental health services, as well as school and other welfare actors, and reflects on the process of developing a student identity while struggling with mental health problems. Design and Context A grounded theory research design was chosen as an appropriate method of studying the process of developing a student identity [23,24]. The COREQ guidelines for qualitative studies were used for reporting [53]. The study was part of a supported education research project within the national research network Centre for Evidence-based Psychosocial Interventions (CEPI) that is focused on developing knowledge of SEd for young adults experiencing mental health problems. The study was approved by the Ethical Review Board (Reg. No. 2014-277). The material consisted of 17 qualitative interviews conducted with young adults aged 18-29 years who were experiencing mental health problems and participating in SEd services in four medium-sized Swedish cities and one large Swedish city. The SEd services integrated educational support into vocational services that were serving young adults with a variety of mental health problems who had education as their primary support goal. Some of the services had been active for years and others had more recently implemented SEd into their services. Informants The SEd services included in this project were purposively selected for the present study because they had begun to integrate SEd support for the service users. Initially, the principal investigator (DR) asked the SEd services whether they were willing to participate in the primary study that was investigating the integrated services model. Then SEd service education specialists asked appropriate informants if they were interested in participating in the study. Seventeen service users agreed to participate in interviews. The inclusion criteria were ages 18-29 years, having mental health problems, and receiving educational support from SEd services. 
The age range of 18-29 years is in agreement with the Swedish welfare system definition of young adults with a disability [46]. All informants (n = 17) were given verbal and written information about voluntary participation, use of data in relation to the research aim, confidentiality protections, and ability to withdraw from the study at any time [10]. Written informed consent was obtained from each informant, and all data were handled confidentially. The study intended for informants to feel that they were a resource and contributed critical knowledge to the research field. Nine informants were women and eight were men, with a mean age of 24 years. Self-reported diagnoses included neuropsychiatric disorders, affective disorders, personality disorders, and psychosis disorders. Comorbidity occurred among 41% of the informants. More than half of informants (53%) studied at a high school/upper secondary school. The other 41% studied at college/university, and 6% at a grade school. Informants' primary income was from the Swedish Board of Student Finance (CSN). However, 29% had support from the Swedish Social Insurance Agency and 18% received income support (municipality). A description of the informants is presented in Table 1. Supported Education The SEd model in the present study resembles an integrated and person-centred intervention with practical and ongoing support to assist people with mental health problems to achieve their educational goals [30,31]. There are several different models of SEd: those that emphasize skills training or teaching in separate classroom, and others with more individually adapted support. They differ in the extent of integration with mental health services and school systems. Most SEd services integrate needed support and offer learning opportunities for academic skills and social competencies through time and stress management, financial counselling, outreach to education contacts, and career and educational planning [42,55]. A pilot study of the SEd model shows that it was feasible to support young adults with psychosis to return to education within six months [31]. The emerging trend in recovery-oriented services emphasizes that SEd should individually support vocational and educational needs of the services user, and existing supported employment services should explicitly focus on SEd. We proceed from this SEd model in the current study. The education specialists who delivered the SEd followed the following principles: (a) mainstream education is the goal; (b) educational and vocational services are integrated with mental health or other welfare services; (c) eligibility is based on service user's desire to study; (d) personalized benefit counselling is provided; (e) rapid linkage to educational activities; (f) collaboration with outside educational institutions, programs and organizations; (g) continuous support; and (h) service user's interests, preferences and needs are honoured [20,22,27,30]. The education specialists had the key role in coordinating the process and supporting individuals according to the principles. They insured that the service was a person-centred and that education-related aims were integrated with mental health and vocational support. To be two instead of one in the process has previously shown to enhance experiences of self-esteem and engagement in everyday activities [37,56) and community life in general [15,17]. 
The delivery was approximately one hour per week, with a focus on enabling motivation for education, planning educational development activities, mobilized a supportive network, and supporting various lifestyle strategies, as well as developing a career profile and plan. Data Collection In total, 17 individual interviews were completed between March-July 2019. The interviews were conducted by the co-authors (HH, VL, DR, UB) in varied constellations, and an additional three interviews were completed by the first author (UL). The educational specialist first explored the informant's interest in participating, and then an appointment was booked with those who were positive to the interview. Interviews were retrospective and aimed to have informants tell about their experiences and process of participating in SEd as openly and freely as possible. One informant chose to have the education specialist present during the interview. Interviews lasted between 20 and 70 min (median 47 min) and were conducted face-to-face in a private room at the SEd service premises, except for one interview when the informant chose to be interviewed by telephone. All interviews were digitally recorded with the informant's consent. An interview guide was collectively developed by the authors. The guide formed the basis for each interview and started with six broad, openended questions based on experience of education prior to receiving SEd and during the process of receiving SEd. The interview was conducted as a conversation and included three central areas of questioning: (1) Have there been factors that have affected your ability to participate in education? (2) What did you think was the most important support you received from the SEd service? (3) Has the opportunity to study affected your life in any way? To reduce the risk for bias, efforts were made to focus on the informant's experiences and minimize the researcher's opinions during the interviews. Data collection initially started by purposefully sampling informants participating in SEd services. After a first analysis, theoretical sampling governed the data collection [23] and continued until no new information that added new dimensions to the analysis emerged and theoretical saturation was reached. For example, interview guide adjustments were made prior to the second half of interviews with the goal of raising further questions and deepening the preliminary categories around barriers to study. For a time, theoretical sampling was used to guide future selection of informants with experience of education in order to raise further questions about developing a student identity to deepen the data. Memos were used throughout the entire process to document emerging categories and their characteristics [24], including statements from informants that could be explored further in future interviews. Data Analysis According to grounded theory, the analysis stayed close to the empirical evidence through constant comparisons in the entire analysis process [24]. Analysis preceded through the stages of open and axial coding [23]. The process of open coding focused on dividing and joining the data into theoretical codes and common categories. Emerging categories were checked through axial coding where the relationships were investigated, and questions about created conditions were then posed. 
This part of the analysis focused on codes and categories that refer to the process, i.e., ongoing actions and interactions, of developing a student identity among young adults experiencing mental health problems. Notes about links and contact points between the categories were taken throughout the analysis of initial interpretations [24]. From the emerging categories, three core categories corresponding to the research questions developed that reflect informants' views and experiences in their studies. The three core categories are represented by 11 categories with supporting subcategories that further characterize the variation in the process. They are illustrated in Table 2. According to the grounded theory framework, the categories are presented in a storyline to facilitate the theoretical integration of the findings [19]. In a final analytic step, integration of the categories formed a tentative model of the development of a student identity, describing a process of informant experiences in their studies [23]. The analysis was conducted in ongoing consultation between the co-authors (DR and UB) and the first author (UL). All authors agreed to the final analysis of findings.

Results

Informants experienced structural barriers and challenges to engagement in education, which presented a wide gap between the informants and the regular education available in the community. Access to support from SEd was reported to decrease this gap and formed a bridge that helped to facilitate educational achievements. These achievements were related to several personal benefits important for the experience of meaning, identity development, and career advancement in a future life perspective. Figure 1 gives an overview of the informants' experiences of participating in SEd and of studying while having mental health problems. Three developed core categories illustrate the experienced aspects and are further represented in categories (subheadings) that characterize links in the processes.

Structural Barriers and Challenges When Attempting to Study

The first core category reflects challenges with structural barriers that informants experienced when engaging in studies prior to participating in SEd. Realising educational goals whilst struggling with mental health problems was made difficult by structural barriers. The barriers are grouped into five categories, presented below.

A Restrictive Welfare System Limits Possibilities for Studying

The Swedish welfare system with its regulations and bureaucracy, including both the authorities and the educational system, presented challenges for engaging in studies while experiencing mental health problems. Applying for and receiving study allowances and loans from the Swedish Board of Student Finance (CSN), or part-time activity compensation from the Swedish Social Insurance Agency, was described as a challenging, prohibitive bureaucratic process. Informants found it almost impossible to get a study allowance for part-time studies, and a further complicated task to combine this with other welfare benefits. Utilizing the study allowance involved taking financial risks that required courage to manage several parallel life changes. Informants felt trapped in a dead-end situation where they were dependent on the welfare system and had difficulty re-establishing financial independence or stability.
One informant explained, I only receive a study allowance of 50%, which means that I have to start working at the same time because I am not allowed to keep the Social Insurance Agency benefit for 50%. There is no support for re-entering the labour market, which I think should be in the interest of the public welfare system. (Interview 12) Regarding the education system, opportunities for individual adaptions were experienced as few, with a low acceptance of social deviations. At times informants were stigmatized and discriminated against when they carried out their education. They had low self-esteem and were uncertain as to whether there was a way to proceed with their studies while having mental health difficulties. One informant stated, When we asked the school if it was possible to reduce the hours and study part time, they found out that I was not feeling so well. When I came back to school after a week, I set out to start my studies again, but the principal took me in and said that it might be good if you end your studies, The understanding of mental health problems was experienced as poor within the education system. This situation decreased acceptance of deviations and opportunities to assimilate, which in turn made it difficult to accommodate special needs. The education system missed opportunities for success among students with mental health problems. Furthermore, the mental health service system did not integrate treatment and support in relation to educational programs. This presented informants with challenges in achieving their education goals and receiving treatment at the same time, and involved high demands to complete education or risk losing treatment continuity. The overall experience was a misfit between the different systems and a lack of understanding that did not encourage young adults to invest in education while experiencing mental health problems. Having to Fit into a Standardised Structure Within Education The education system was experienced as being primarily designed to fit conventional students, and as lacking responsiveness to individual needs or requests. This affected access to support of mental health needs and increased feelings of being relegated to standard contexts. One informant stated, During the time I was in high school, I was depressed. I did not think the school considered this, they just wanted me to apply to another, special school and did not get me any of the help that I needed. (Interview 16) The informants' opportunities to influence their education or have a dialogue with the school were often limited. Having a diagnosis could be experienced as a circumstance for receiving study support. However, being labelled with a diagnosis could also be perceived as stigmatising. One informant stated, I started in a high school that targets those who have difficulties like me. They said it would be good for people like me. It was not good for me. My goal was to go to college, but my mentor said that college would not suit me, and that vocational high school would suit me better. I started and was barely there for a few weeks. (Interview 15) Having little influence and control over the conditions for studying, including the choice of education, contributed to lower self-esteem and feeling stigmatized. The informants found it as difficult to fit into the standardised pedagogic structure and regulations in the education system, and this decreased the opportunity to manage their studies. 
Feeling Overlooked When Needing Study Support Gaining access to support and helpful adaptions within education was a complicated process. Most informants studied without adaptations and support since this situation was granted on the basis of a diagnosis, and hence not available for undiagnosed students. Furthermore, the administrative system used to apply for support was complicated to manage, as was communicating the decision to responsible teachers. One informant said, I have to submit a decision on special educational support for each course. Even when I In addition, having an approved application was no guarantee that the adaptions worked well. The adaptations could be overlooked or not materialize. The informants felt exposed when they repeatedly had to remind the supporting teachers and to ask that the adaptions be implemented. This sometimes led to students ignoring the support to which they were entitled. An informant said, I have asked to sit separately when there is an exam. But they have not always allowed this. I did not pass the first exam because they put me with many other students. I misunderstood the questions, (and) could not or did not dare to ask the teacher. Because I feel that I disturb the others when the teacher has to stand and explain to me, and they wonder why I do not understand the question. 'Is she stupid?' (Interview 10) The informants suggested that to get their approved support enacted, a working relationship with the teachers was essential. However, having individual contact with teachers was not common and was difficult to accomplish. The responsibility for obtaining support relied on the informants and required that they were high-functioning in terms of planning and managing necessary contacts. Lacking Integration of Mental Health Services and Education Support The informants found it difficult to combine mental health treatment with ongoing education. Struggling with mental health problems necessitated several essential treatment contacts that needed to be coordinated with educational obligations. Rescheduling booked meetings, treatment and group therapy within the mental health care was difficult. Hence there was a risk of falling behind in studies due to absenteeism. Missed educational experiences were always a possibility while struggling to coordinate mental health care contacts and treatment with education. Further, mental health services did not pay attention to the individual's everyday life activities during treatment. An informant told about the experience of being a service user as well as a student, It was school and then psychiatry in a mess, and I tried to balance both. I got help from the mental health services but it is mostly with my thoughts. There was a lack of focus on me in the practical world. It would have been reassuring if it was included. When you are in the middle of your illness, it would be good to have help with efforts to return to normal chores, your daily chores, if you work or study. I know because I was very worried when I fell ill and was at the hospital. I walked around and said: 'No, I want to finish my degree.' (Interview 9) In cases where the mental health service system did not pay attention to their support needs for education, informants found it a challenge to continue their education. Challenging to Maintain Motivation on a Long Journey Many informants felt that it was challenging to maintain motivation to study over a long period. Mental health problems early in life often affected educational achievements. 
Preparatory studies were usually necessary in order to resume education and qualify for new educational goals and one's desired education. Furthermore, resuming studies after a year away implied starting new relationships with fellow students, and this made it difficult to maintain social contacts with peers. One informant explained, Now I was not so keen on getting to know people anymore. I got to know people well the first and second time. But now, the third time, it did not feel worthwhile. (Interview 2)

Along with the structural barriers presented above, maintaining motivation throughout the journey of education was crucial and a personal challenge, both with regard to educational achievements and social contacts.

Supported Education Helps to Overcome the Gap to Engage in Studies

The second core category reflects the SEd support that helped to facilitate educational achievements and form a bridge for overcoming barriers, reducing the gap between the education requirements and the informant's ability to engage in studies despite having mental health problems. The core category included three categories, presented below.

Being Treated From a Personal Perspective Enhanced Motivation

SEd was an important prerequisite for managing education. Informants were treated from a personal perspective, with responsiveness to personal conditions and wishes. Having one's own influence over the choice of education and the design of support spurred motivation to address challenges and make necessary life changes. SEd increased opportunities to obtain the support needed in order to engage in studies.

Receiving support to find and explore different educational alternatives was also important. This included finding education that matched the informant's personal conditions and preferences in relation to available schools, universities, and educational forms. Informants found that it was possible to pursue education when it was selected on the basis of personal preferences. The combination of being treated as a person and having personalised support was viewed as crucial for maintaining motivation to study.

Being Two Instead of One in the Process

The value of 'being two instead of one' throughout the process of applying for and accomplishing studies was reported as an essential ingredient of the support. The continuous and personalised support contributed to a partnership between the education specialist and the informant. Participating in SEd provided an opportunity to share and connect with someone when facing challenges. Ongoing support also made the education specialist familiar with the informant's preconditions and challenges as they established the student role. The specialist was available for personal guidance and confirmation during difficult parts of the studying process. An informant explained, To talk about what I find difficult, to sort things out, and get advice on how I can move forward. The feeling that you are doing it together. If I lose my grip, someone else will notice. You can slow down before it goes downhill, instead of giving up. I would not have made it through education without this support. (Interview 4)

'Being two' when interpreting educational information and conditions, or when exploring perceived challenges, was decisive for being able to study. The support also created security in order to manage difficulties that had previously led to dropping out of education. In addition, shared follow-up and study planning were experienced as valuable support.
An informant said, We talk about how it has been, how much I have read and for how long. What has worked and not. We plan how I build up the week, what I will do and when. How to prioritize, what topics and tasks. So even if she (education specialist) is not familiar with what I study, she can still help me a lot. (Interview 9)

Not feeling alone was attributed to regular shared planning with the education specialist, and this decreased the experience of self-stigma among informants. 'Being two' was important for informants during the SEd process. The establishment of long-term contact with the education specialist functioned as a protective network that gave capacity and strength to cope with what they faced, and thereby facilitated sustainable progress in education.

Having Support to Implement Practical Things

SEd provided valuable support in handling practical things connected to the student role. Being a student required dealing with administrative bureaucracy and managing several contacts, e.g. with welfare authorities when applying for and receiving study allowances, with educational counsellors when trying to find suitable education, when making the application or an individual study plan, and when applying for and gaining access to needed adaptations or materials. Obtaining partial or complete practical support increased the availability of education and the opportunities to manage education despite having a mental health problem. As an informant described, She (education specialist) helped me find schools and to apply. She checks if I need help with any contacts… She has contacted the school, my teacher, the public employment service. When there was trouble with the teacher, she helped me write to the principal. She also went with me to visit the psychiatrist. (Interview 6)

The education specialist helped with all study-related administration and offered follow-along support to sort out practical things connected to the student role. Prior to participating in SEd, many informants had experienced these tasks as unmanageable and had needed to depend on parents and family for support. Receiving this practical support contributed to the development of a greater degree of independence, which in turn supported the transition to adulthood. Overall, access to practical support made it possible to take part in student life, and was decisive for being able to choose, apply for and pursue education while experiencing mental health problems.

Participating in Education Implies Personal Benefits and Opportunities

The third core category reflects the benefits of studying and how it contributed to a personal development process at several levels. This is presented in three categories below.

Finding Meaning and Hope for the Future as a Driving Force

Finding personal meaning proved to be important for the entire process of starting to study and developing a student identity. Once one's own personal meaning for study was identified, this meaning became an important driving force and made education part of a personal plan. Envisioning meaningful goals for a future career and working life, such as working with people in need or obtaining a particular vocational education, were examples of this driving force. Although what was meaningful varied among the informants, a common goal was the desire for a future that had previously been lacking. One informant described, It has been a great help to find an education that is important to me, and to be able to study what I want.
In the past, it was very hopeless, but now I feel there is a future, a road for me to go [on]. I have goals now for the first time in my life. It feels so good. I see a meaning now, I want to be something, it really matters. When I wake up every morning, I know I want to go to school. (Interview 15)

Identifying the personal meaning of education became a driving force to study and to engage in other everyday life activities, and a determinant factor for finding the strength to struggle and achieve educational and future goals.

Enabling Engagement on an Activity, Social Environmental, and Personal Level

Performing educational activities increased engagement in everyday life activities, and that in turn affected the informants' preconditions in life. In addition to achieving academic competence, educational activities provided opportunities to develop practical competence and to explore and develop personal interaction skills. They also helped to create daily routines and structure, and to increase the feeling of being in control of one's life. One informant said, I have been coming back to this with routines. To wake up in the morning and know that you have tasks that need to be done. You have structure for the day. I never had the control I have now, in life. A lot has happened that I did not think I could handle. The whole process has made you feel more alert, stronger, and above all, that you are capable. I really feel better today. (Interview 13)

Engaging in activities connected to the student role influenced the informants' view of themselves as being more competent, and this strengthened their self-esteem. Being engaged in studies with SEd support facilitated a development process that influenced self-confidence and, most importantly, how the informants viewed themselves in everyday life. The educational activities further increased informants' social coherence and provided new opportunities to build relationships with others. As a result, self-worth increased and they were able to re-evaluate themselves and develop social roles as students. The increased engagement in study activities enabled the development of academic, personal and social competence that influenced self-worth, identity, the feeling of control, and having a positive direction.

Developing a Student Identity

Taking into account all the beneficial experiences of studying, an active component was the personal identity development process that occurred. This process was influenced by the dynamics of the informants' own perspectives and the outside world. The student role contributed to the development of competence, achievements that were affirmed by others, and the development of a student identity. Informants experienced real changes regarding career advancement, with more possibilities for future working life. As one informant explained, When I went to the Daily Activity Centre, it felt a bit like a place where you could get stuck for the rest of your life. To get help to study makes you feel that life is moving forward. I think it has been extremely motivating. Before I was terrified to start working, now I am eager to start working when my studies are finished. When I was at a job fair and met employers, they were interested in recruiting me when I finished with my education. (Interview 11)

When informants became aware of opportunities that could move their lives forward, their beliefs about their abilities changed and made it possible to identify themselves as more independent and as adults.
Dependence on family and parents for personal, practical and financial matters decreased with the help of SEd. The opportunity to provide for themselves through study allowances, and later in working life, increased their feelings of independence. Being a part of student life was experienced as reassuring. The fact that they were on their way forward in life influenced their identity and future perspectives in a positive way.

Discussion

The current study provides an understanding of the challenges young adults with mental health problems experience as students in the regular education system and how SEd may contribute to sustainable education and personal growth. In this sense, SEd may be viewed as bridging the gap between experienced barriers posed by the welfare system and the opportunity of engaging in education regardless of the presence of mental health problems. Participating in SEd and entering the social role of a student enabled the development of a student identity. This study identifies important components of SEd and their critical role in the development of a student identity among young adults with mental health problems.

The informants met several barriers to education in the welfare system. In particular, they were challenged by negative attitudes from others while they were struggling with mental health problems. This situation may be due to low mental health literacy among professionals in the education and welfare systems, including little knowledge about mental health and education, negative attitudes, and few strategies to support persons with mental health problems who want to study. Lövgren et al. [38] advocate improved mental health literacy for educational actors as a key factor for accommodating the needs of these students. In addition, previous grounded theory research on supported employment that focused on professionals in the welfare system and employer support corroborates the understanding generated in the present study [45].

Applying for, and engaging in, education is often a stigmatising experience for the student. The social environment in the regular education system was experienced as less available for young adults experiencing mental health problems. This is in line with previous research showing that students with so-called 'hidden disabilities' are the most adversely affected by stigmatising attitudes and bear the risk of discrimination [40]. The informants felt that they needed to fit into the regular education system. Being divided into different groups and categories may thus contribute to worsening self-esteem and decreased hopefulness among students with disabilities. This in turn negatively affects their identity development and transition to adulthood. Moreover, when the support needed in everyday life was lacking from mental health services, informants found it difficult to manage their mental health problems while pursuing educational goals. The current study recognized that an unintegrated mental health and educational welfare system builds barriers for young adults who dare to invest in and participate in education.

Mental health literacy within the education system is critical in increasing the ability of SEd to facilitate education. As recently observed, the relationship between students and professionals in a school is more important to student mental health than good peer relationships [32]. Investing in the creation of good and supportive relationships with students is therefore urgent.
This is most clearly seen in the category of 'Being two instead of one'. Here the partnership and supportive relationship between the informant and the education specialist was decisive for managing studies, and for allowing students to be themselves and develop their goals while having mental health problems. This kind of togetherness and support, which makes it possible to develop optimism, has also been observed in previous supported employment research [6]. This partnership support is a critical factor for all recovery-related efforts.

The present study shows that a personal development process occurs when informants start to influence their decisions and study goals, and engage in study activities. The three categories reflecting the benefits of studying can be anticipated to constitute a process that contributes to the development of student identity and future perspectives in life. From a theoretical perspective, this can be understood in relation to the transaction between the person, the occupation and their environment [34]. At the intersection between the informants' engagement in activity, the study environment, and the development of personal factors is the process of identity formation. This provides an understanding of how essential the engagement and meaning-making processes are for personal identity development.

A tentative model of the process of developing student identity (Fig. 2) illustrates that study increased engagement in activity and gave a sense of meaning and motivation in life, which in turn made further engagement in productive activities and social coherence possible. This ongoing engagement provided the students with opportunities to practice and develop competence as a student, and thus develop new beliefs and self-perspectives. As stated in occupational therapy theory, the lived experience is the basis for a person's occupational identity, involving the feeling of who we are, who we wish to become, and our position in the community [52]. Research has shown that increased engagement in activities, and the provision of routine and structure in everyday life, are important for the recovery process [5,7,14,36,37]. By becoming involved in study activities through SEd, overall activity engagement increased and an engagement process began. This resulted in changing a previously passive lifestyle, often linked to social isolation and little activity, into increased engagement that was conducive to personal development. The effects of integrated vocational and mental health services on engagement in everyday activities have also been shown in a longitudinal study among young adults [36].

The model also shows that evolving independence and self-worth may facilitate the transition to adulthood and career advancement, give a future perspective, and result in a more positive student identity. This model of developing student identity has several similarities with the theory of developing work identity, as reflected in a previous supported employment study [37]. These similarities include the value of a hopeful, future-oriented perspective, the relevance of vocational and educational activities in relation to establishing oneself as an adult, and the benefits of these activities for self-esteem and self-efficacy [37].

Fig. 2 A tentative model of the development of student identity

Both models contribute to our understanding of how change and self-development may be facilitated in a positive direction.
In previous research, this change and shift of experiences has been shown to be important for reducing self-stigma and facilitating personal recovery while experiencing mental health problems [56]. Furthermore, changes taking place during the identity development process may help explain the statistically significant changes in mental health found in previous supported employment longitudinal and trial results [5,36,44]. The process of student identity development includes several components important for personal recovery that are identified in the CHIME (Connectedness, Hope and optimism, Identity, Meaning in life, and Empowerment) recovery framework [35]. The CHIME components form a valid basis for measuring recovery [18]. The informants' experience of finding meaning and hope for the future, such as hope for a future worker role, was an important factor for their study preconditions. According to the CHIME framework, positive thinking, being able to dream and have aspirations about the future, and influencing motivation to change are important components of the recovery process [35]. Bejerholm & Roe [15] explain that recovery refers to a process where a person acts to develop new goals and meaning in life despite the limitations of an illness and its consequences. This is seen in the process of student identity development, which gave a future perspective and a more positive student identity, and included elements of rebuilding a more positive sense of identity and self-worth that are close to the components of rebuilding identity, hopefulness and empowerment. These are seen in the CHIME framework as the experience of hope and optimism, being able to picture a positive future, and belief in one's own possibility of recovery. SEd made informants feel they were not alone, 'being two instead of one in the process', which helped students gain access to community offerings and the ability to choose. These are found in the CHIME components of connectedness, representing support from others, being a part of the community, and empowerment. These are identified as important components of recovery-oriented clinical work [35]. Forming a student identity with the support of SEd facilitated the recovery process. We therefore assume that the process of student identity development may be a part of the personal recovery process for young adults with mental health problems.

Education is cited as the most important factor in marginalised young adults leaving the NEET group (Not in Education, Employment, or Training) [1]. However, SEd is not widely available for young adults with mental health problems in community mental health services. The current study suggests that providing SEd through a model that integrates mental health services contributes to good conditions for the recovery of young adults, as well as positive student identity formation with a possible reduction in stigma [9].

Clinical Implications

Mental health services should include person-centred, career-oriented models for young adults that take into account support for both education and employment [30]. This includes services such as SEd [4], supported employment [13,16,20] and the Södertälje Supported Employment and Education model [36]. Developments in this direction are already in progress in international research [15,17,28]. In Swedish national guidelines and policies, supported employment is recommended to be integrated with mental health services [50,51].
Understanding the specifics of the student experience, and the development of student identity as a form of work identity, can support clinical reasoning among professionals who assist young adults in their transition to working life with interventions that respond to their needs. Paying attention to young adults' resources, needs and preferences when selecting interventions is critical to a person-centred practice [35], and should be applied to the design of resources and support, and to the development of a practice that sees studies as a natural part of a young adult's recovery process.

Methodological Considerations

Grounded theory is an appropriate inductive research method for studying young adults' experiences of SEd and the process of developing a student identity, when the aim is to generate theories from data and apply them to practice [24]. We used several measures from the Consolidated Criteria for Reporting Qualitative Research guide [53] and checklist to ensure the quality and credibility of the study and to strengthen the dependability of the results. Theoretical sampling was used to deepen the data, check interpretations, and test evolving categories [24]. A study protocol and descriptions of the study design and procedures according to grounded theory were used throughout the study to strengthen the trustworthiness of the results [23]. Credibility and trustworthiness of the data are about truth and believability [29]. The first author (UL) has extensive experience of working with mental health and vocational rehabilitation. The other authors have extensive experience in mental health services research. Preconceptions provide a good starting point for qualitative research, with recognition of informants' experience, the context of the study, etc. Preconceptions may also influence the data, and therefore triangulation was used throughout the analysis process. To reduce the effect of preconceptions, triangulation was used first between the first author (UL) and DR and UB, and then among all authors. Because qualitative researchers and study participants always interact in a social process, the pre-understanding was viewed as part of the results in the form of experiences, perspectives, and interaction with the study structure [23, 29]. The informants who agreed to participate might have been those who had positive experiences of the SEd service, which may have influenced the results in a positive way. Finally, our findings are useful as they identify important components of support for young adults working toward their educational goals, and these are anticipated to be part of a personal recovery process from mental health problems. In this way, the findings may contribute to future research and policy-making.

Conclusions

Currently there is a lack of adequate support from the Swedish system for young adults with mental health problems who wish to study. Supported education can (1) assist with bridging the gap between service users and available education in the community by creating greater accessibility to that education, and (2) enable the development of a student identity that involves an engagement process, positive identity formation, reduction of stigma, and a future-oriented perspective.
Young adults with mental health problems who want to study should be supported with SEd that is integrated with mental health services, other welfare services and the educational system in a co-produced manner, since the development of a student identity seems essential for career advancement and can be anticipated to be a crucial part of the individual's personal recovery process. Finally, the authors suggest further research with a longitudinal perspective to give a more comprehensive picture of SEd and its implications for study tenure and health- and career-related outcomes. It would also be helpful to investigate this model of support in other contexts, for example those where SEd might be valuable as a preventative intervention.
Controlling walking in Stockholm during the inter-war period

Abstract

This article offers an analysis of different approaches to control walking in Stockholm in the inter-war period. Various social actors engaged in controlling pedestrians through legislation, police monitoring, educational campaigns and traffic control technologies. But the police, municipal engineers, local politicians and road user organizations differed in their aspirations to privilege motorists over pedestrians. While the inter-war period saw a shifting balance between pedestrians and motorists in Stockholm, the transition in terms of legitimate use of city streets was incomplete. Moreover, taking pedestrians' viewpoints into consideration, what many observers and motorists understood as rebellion against traffic rules or simply bad manners, many pedestrians found to be the safest way to cross the street.

Introduction

The downgrading and disciplining of pedestrians is omnipresent in the small but growing scholarship on the history of urban walking. This scholarship suggests that during the twentieth century streets were reconstructed mainly for cars, while pedestrians were marginalized by means of legislation, police control, planning measures and infrastructure provision, although not without resistance. 1 This article will not question the marginalization thesis, but it seeks to explore it further through a case-study of pedestrian control and walking practices in Stockholm during the inter-war period. 2 The inter-war period, with its sudden growth of car ownership and traffic, stands out in previous research as a turning point, during which the balance between pedestrians and motorists and their representatives shifted. In his work on the coming of car cities in the United States, Peter Norton refocuses attention from urban planning and competition with urban railways to 'prevailing conceptions of the city street'. Like Clive Emsley in the British case, Norton convincingly demonstrates a resolute resistance from pedestrians, concerned parents and the police to cars in US cities in the early twentieth century. 3 Only by reinterpreting streets as a place where cars belonged could motoring interests ('motordom') successfully press for the realization of the automobile city. Language and cultural representations were essential to their struggle, and they were only successful when they complemented traffic control measures with ingenious behavioural campaigns: by 1930, a previously widespread conception that cars at high speeds did not belong in cities had been replaced by the view that pedestrians should not obstruct cars in city streets. 4 Following Norton, Tomás Errázuriz finds a similar but less resolute transformation in dominant representations of pedestrians and motorists in Santiago de Chile 1900-30. Chilean car owners were many fewer than in the United States, but the motoring elite still managed to press for change. Whereas pedestrians had earlier been depicted as victims of motorization, in the late 1920s they were increasingly represented as unpredictable and reckless, and as obstructions to traffic. Errázuriz finds that the city authorities' attempts to control pedestrian conduct through decrees and education largely failed, and proposes a more immediate reason for pedestrian submission: concern about their own life in Santiago's increasingly car-oriented streets. 5
Authors differ in their descriptions of the relationship and tensions between different road user groups and other social groups engaged with urban traffic. Errázuriz, for example, argues that 'the elite and the local authorities campaigned to hold pedestrians responsible for most traffic conflicts'. 6 Joe Moran, on the other hand, in his study of pedestrians' possibilities to cross the road in Britain in the period 1930 to 1970, stresses how national authorities sought to 'reconcile the competing aims of different interest groups, working within the constraints of what was deemed acceptable to public opinion'. 7 The Stockholm case was full of the same negotiations as those described by Moran, but it questions any suggestion that a local motoring elite and 'the authorities' as a homogeneous assembly joined forces to penalize pedestrians. In spite of frequent attacks on pedestrians' competence and manners, authorities were involved in a balancing act, and they were often concerned about pedestrian welfare. Furthermore, elected politicians and different professional groups showed considerable differences in their approaches to walking. 8

Elsewhere, I have identified an inter-war 'reactive' city traffic regime in Stockholm. In contrast to the post-war, forward-looking and 'proactive' regime, where urban planners and traffic engineers seized control over city traffic issues in Stockholm, during the inter-war period the traffic police, municipal engineers and road user organizations were largely in charge of defining the problems and solutions of city traffic. They primarily chose measures that could be implemented short term and with the present rather than the expected future constitution of traffic in mind. Since pedestrians and cyclists made up a major part of road users, they were, if subordinate, at least part of the equation. 9

This article offers a detailed analysis of different approaches to the control of urban walking. Alongside variations in concern about, and urges to control, pedestrians, it highlights that many pedestrians felt less safe when conforming to traffic control innovations. What observers and motorists understood as rebellion or bad manners, many pedestrians simply found to be the safest way to cross the street.

Traffic law: customary walking vs. the demands of modern traffic

The 1920s was the first truly expansive period for automobility in Stockholm. According to traffic counts, during this decade motorized vehicles for the first time outnumbered horse-drawn ones. 10 As motorized traffic increased, motorists' and pedestrians' freedom of movement were increasingly at odds. According to Gustaf Hårleman, Stockholm's chief of police 1918-30, pedestrians had to adapt to big city circumstances, like they already had in model metropolises such as London, Paris and New York. '[A]s a consequence of automobile traffic', Hårleman stressed in a memo from 1921, it was 'of utmost importance that pedestrians…as much as possible avoid to walk across the roadway.' 11 Three years later, in a radio speech, he claimed that pedestrians were much better integrated in traffic in the greater cities of foreign countries, not least because the pre-car 'bad habits' had refused to yield in Stockholm. 12 Working against him was what Hårleman saw as the old-fashioned practice of always walking on the left pavement.
Pedestrians frequently traversed the street for no other reasons than to walk on the left-hand side, where they could more easily go with the flow, but with negative consequences for both car mobility and pedestrian safety. 13 Hårleman sought to regulate pedestrian conduct so that they kept to the left-hand side of each pavement. This way, they would be able to walk without 8 On the different 'technological frames', that is shared interests and views, of the traffic police (order) and traffic engineers (efficiency) in the US case, see P. Norton, Fighting Traffic: The Dawn of the Motor Age in the American City (Cambridge, MA, 2008), 4-5. 9 M. Emanuel, Trafikslag på undantag: cykeltrafiken i Stockholm 1930-1980(Stockholm, 2012; M Emanuel, 'Constructing the cyclist: ideology and representations in urban traffic planning in Stockholm, 1930-70', Journal of Transport History, 33 (2011 A. Dufwa, Stockholms tekniska historia: trafik, broar, tunnelbanor, gator (Stockholm, 1985), 76-7. 11 Stockholms stadsarkiv (SSA), Överståthållarämbetet för polisärenden 1, Polismästaren (ÖÄ1), F 9:1, 'PM ', 3 Dec. 1921. 12 SSA/ÖÄ1, F 9:1, Polismästarens föredrag från radiostationen vid Alströmergatan, 21 Feb. 1924. 13 SSA/ÖÄ1, F 9:1, 'PM', 3 Dec. 1921. friction on either side of the street, and would less frequently need to cross the road. To Hårleman's regret, however, this conduct of pedestrianswhich was the rule not only in London but also in the comparatively smaller cities of Malmö, Copenhagen and Kristiania (Oslo)had never been properly inscribed in the local traffic ordinance in Stockholm, making it subject to fines. When the ordinance was revised in 1910, introducing fines was met, as Hårleman described it, with 'fierce opposition' by Stockholm politicians due to expectations about 'harassments from the police' and 'mass prosecution' of pedestrians. Without any fines, the prescription to keep to the left on pavements had little effect. As the traffic police had in fact little power to correct such misbehaviour, Hårleman argued, pedestrians had failed to change their practice. 14 On two occasions during the first half of the 1920s, Hårleman asked the city council to introduce pedestrian-related regulations. In 1920, in collaboration with the Royal Automobile Club (KAK), he proposed a stricter segregation between driving and walking, by stating clearly that roadways were primarily for vehicle traffic and pavements for pedestrians. Also, pedestrians should cross the roadway in a straight line (not diagonally) and with proper attention to road traffic. A preparatory committee of the city council supported the changes, and furthermore added that pedestrians should not unnecessarily spend time in the roadway or in other ways hinder traffic. 15 The issue was put on hold due to an upcoming revision of the national road charter, but only a year later Hårleman returned to the city council with a new proposal. In addition to some rules relating to pedestrian-motorist interaction, he sought to strengthen the function of pavements as a site for pedestrian circulation; they should yield to the left and never stop in a way that hindered other pedestrians from passing by. 16 The city council agreed to the regulations as such, but several councillors opposed the proposal that pedestrians who failed to respect them would be subject to fines (2-20 kronor). 'Why', asked one councillor, would one use fines to 'make people walk in another way than how it is naturally organized?' 
17 As the new national road charter came into effect in June 1923, Hårleman managed to introduce a new rule which stipulated that pedestrians on pavements would have to yield to the left. Moreover, KAK managed to convince the city to include a paragraph that forced pedestrians to cross the street straight across, not diagonally. 18 While Hårleman was partially successful in regulating walking, to his, and motorists, regret he failed to convince the city to introduce sanctions when pedestrians failed to respect old and new rules. Motorists' organizations managed to 14 SSA/ÖÄ1, F 9:1, 'PM', 3 Dec. 1921; 'Gångtrafikens ordnande', Särtryck Svenska vägföreningens handlingar, Hårleman, 21 Jan. 1922. 15 Stockholm stadsfullmäktiges handlingar (SF) Utlåtande (Report) 1920: 79;Protokoll (Protocol) 1920:80. 16 On the process of defining pavements as sites for pedestrians' circulation rather than other, competing usages in Los Angeles in the decades around 1900, see R. Ehrenfeucht and A. Loukaitou-Sideris, 'Constructing the sidewalks: municipal government and the production of public space in Los Angeles, California, 1880-1920', Journal of Historical Geography, 33 (2007 bring about some changes in the local traffic regulations that clarified pedestrians' need to adjust to vehicular traffic. But Stockholm's chief of police was not running the errands of the car lobby. As we will see, Hårleman was involved in a balancing act between the customary right of pedestrians and the demands of resourceful proponents of automobility. In addition to complaints about pedestrians' conduct, the press featured criticism of the traffic police's failure to safeguard their rights to the streets. In November 1925, a reporter at Dagens Nyheter criticized how the traffic police, rather than helping pedestrians to cross the street safely favoured cars. 19 He apparently struck a nerve with the walking public: during the coming few days, the paper received a stream of phone calls from grateful pedestrians. 20 Axel Norlander, managing director of KAK, disagreed. Pedestrians felt neglected, but Norlander found them partly to blame themselves. As long as they expected to be able to cross a street 'anytime, anywhere and anyhow', he argued, 'then all the world's traffic police officers are not enough to help them'. In bigger cities such as London, the 'compact wall' of vehicles did not allow the individual pedestrian to try and cross the streets. Such a 'natural blockage' still never occurred in Stockholm and, consequently, 'pedestrians gad about here and there in between the vehicles and least of all allow themselves to be impeded by the sign of an officer'. 21 Some police representatives agreed with Norlander. One police superintendent found that pedestrians in Stockholm were 'void of traffic culture'. Other higher rank officers compared the traffic culture and situation in Stockholm with that in bigger cities and argued that pedestrians failed to respect the traffic rules because Stockholm's traffic was not yet metropolitan enough. 22 Chief of police Hårleman, however, was as concerned about motorists' respect of their fellow road users as about pedestrians learning how to handle the new traffic situation. 
In 1923, when he filed a new 'order' to guide the work of traffic policemen, he specifically pointed to the importance of considering pedestrian needs: 'The task of the traffic officer is not solely or primarily to facilitate the mobility of drivers, but he has to guide and arrange traffic for the safety and convenience of all road users, both drivers and walkers. The constables are particularly reminded about their obligation to consider pedestrians who might be in need of assistance to get across the roadway safely.' 23 The new order may have been directed at those traffic police officers who did not appropriately attend to the needs of pedestrians, but it may also have been a way to lend them support in countering motorists' demands vis-à-vis pedestrians. In 1926, Hårleman also proposed a reduction of the speed limits in Stockholm from 35 to 30 km/h. Two years earlier, the limit had been raised from 25 to 35 km/h, according to Hårleman with dire consequences for pedestrian safety: 'it is obvious', he argued, that more traffic at higher speeds had brought 'increased risks and difficulties not the least to pedestrians'. Whereas many proponents of automobility frequently claimed that regulation of the speed of car traffic was 'anachronistic', Hårleman claimed the reverse: according to him, it was 'old-fashioned to make claims to be allowed to drive, on the streets of Stockholm, with a speed of 40 kilometres per hour'. A reduced speed limit would lower anxiety, improve safety and facilitate the mobility of pedestrians. 24 The local traffic ordinance for Stockholm was revised again in January 1927, and Hårleman proposed that drivers should be obliged to yield (slow down and if necessary stop) to pedestrians to allow them to cross the street at mid-block. Referring to a similar regulation recently introduced in New York, he argued the change would better serve the needs of pedestrians than the vague stipulation that motorists should adapt their speed to the place and situationaccording to Hårleman, this was usually freely interpreted by motorists ('no less selfish than other people') to their own benefit. 25 The proposal was heavily attacked by KAK's Axel Norlander. He feared 'deplorable harassments against drivers' who, according to pedestrians, had not slowed down sufficiently or stopped. While the driver in such cases could be charged and sentenced, pedestrians were only advised to stay off the roadway, and could thus walk away without any consequences. Such a regulation would also inhibit traffic in Stockholm, where pedestrians sought their way over the roadway 'at any time and any place'. 26 When Hårleman returned with a new proposal three months later, he had consulted the New York regulation more closely and found that pedestrian right of way did, in fact, not refer to mid-block street crossings, but intersections without signal or police regulation. Pressed by motoring interests, he agreed to change his proposal in accordance with the one in New York. Motorists were still not pleased. The unions of professional drivers (chauffeurs and taxi drivers) argued that Hårleman's proposal would give pedestrians 'absolute right of way', with 'unreasonable consequences'. The traffic department sided with the chief of police, but the politicians in the traffic board supported the professional drivers. Echoing their concerns, the board feared that a general right of way could 'easily be abused and thus lead to unnecessary hardships for road traffic'. 
27 The 1920s saw a number of attempts to regulate pedestrian movement on pavements as well as their use of the roadway. While the motoring interests managed to block the proposal to give pedestrians the right of way to cross at mid-block, the chief of police and city council often showed concern about the customary rights of pedestrians. Like the motoring interests, the chief of police was keen to fine failing pedestrians, but this was repeatedly voted down by a city council majority. In the late 1920s, motorists did not have full support to redefine the appropriate use of city streets in Stockholm. It is also worth noting the common understanding that, as Stockholm followed in the footsteps of major European cities, the intensity of traffic would, in itself, enforce better compliance to traffic regulations. Education: pedestrians as giddy geese The battles over pedestrian practices continued, and they were as much a 'cultural war', about cultural representations, as they were fought on the street. More motor vehicles had brought more accidents. The number of police-reported accidents more than tripled from 600 to 2,100 between 1914 and 1927. Pedestrians were the most exposed road users: they constituted two-thirds of the dead and almost 60 per cent of the injured. If the city's traffic accident reports are any measure, car drivers were increasingly held responsible. In 1914, the time of the first study of its sort, pedestrians were blamed for 85 per cent of the accidents involving a pedestrian and a car driver; 13 years later, in 1927, drivers' behaviour was considered the cause of 70 per cent of such casualties. 28 An exchange on the letters page of Svenska Dagbladet in December 1920 reveals the growing tension between pedestrians and motorists. A pedestrian questioned the way traffic casualties with injured or killed pedestrians were sometimes described in the press as 'pure accidents', implying that they were caused by the pedestrian's 'absentmindedness and nervousness' when it was, in his view, drivers' responsibility to manoeuvre their cars with consideration taken to other road users' unpredictability. While pedestrians felt threatened, motorists objected to what the press sometimes referred to as 'car terror'. To one driver, the frequent use of this label had spurred the public to consider pedestrians as 'lords even of the roadway'. Other drivers blamed pedestrians for lack of discipline. 'They walk as giddy geese, helter-skelter', noted one, but still, when an accident occurred, blamed it all on the driver. 29 Peter Norton, in his work on US cities, argues that pedestrian control was not entirely successful until it was supported by educational campaigns orchestrated by 'motordom'. Only through such campaigns, and by labelling what they saw as pedestrian misconduct as 'jaywalking', did they manage to reconceptualize the right to the street. As we will see below, in 'giddy geese', Stockholm (and Sweden) had its own local variant of framing pedestrians as ignorant. The rise in traffic accidents continued throughout the 1920s, peaking in 1930 with more than 3,500 reported accidents in Stockholm. 30 Fewer accidents during the 1930s probably had more to do with motoring being halted during the depression than with successful traffic education. Nevertheless, traffic education and propaganda aimed at the general public had its breakthrough in Sweden during this decade. 
The Swedish National Association for Traffic Safety (NTF) was founded in 1934, and shortly thereafter the governor of Stockholm formed his local Traffic Committee (Trafikrådet). Both were accused of being overly concerned about automobility. Almost simultaneously with NTF, the Swedish Cyclists' 28 Dufwa, Stockholms tekniska historia, 98-100. Note that far from all traffic accidents involving damages on property only were reported to the police. 29 'Folket röst i trafikfrågan: tre uttalanden om "bilterrorn"', Svenska Dagbladet, 1 Dec. 1920; 'Bileller fotgängarterror?', Svenska Dagbladet, 11 Dec. 1920. 30 Dufwa, Stockholms tekniska historia, 98-100. Federation (Cykelfrämjandet) was founded, one purpose being to educate (working-class) cyclists. 31 Pedestrians lacked an organization devoted to the rights and obligations of pedestrians, but shortly after its founding NTF would take the lead in organizing traffic safety campaigns, so-called 'traffic weeks', in Stockholm, which often had pedestrians as a key target group. The traffic weeks were the brainchild of Otto Wallenberg, an engineer at the city's Traffic Department, a conservative city councillor and the first director of NTF. Inspired by events in German cities, Wallenberg envisioned so-called 'pedestrian weeks', during which pedestrians would be instructed about proper walking through the radio and press, exhibitions, school instructions, information leaflets, cinema screenings and practical demonstrations, during which police officers regulated pedestrian crossings. 32 In Frankfurt am Main, he had seenand apparently liked what he sawfour police officers at every intersection, correcting every pedestrian who crossed inappropriately: 'As soon as anybody…tries to cross diagonally, one of the policemen will bring him back to his starting point and have him walk as he should.' When asked if this caused irritation, Wallenberg stated that 'Yes, naturally the person exposed was furious, but all onlookers were seized by great and genuine joy.' 33 In 1935, Wallenberg pleaded with the city authorities to fund a dedicated pedestrian week in Stockholm. Claiming that the majority of accidents in which pedestrians were hit by a car were 'likely caused by the pedestrian's carelessness or, to an even larger extent, lack of knowledge about [the necessary] precautions', he argued for 'a conscious, systematic and continuous information and propaganda'. 34 The city authorities, however, reduced the ambitious programme outlined by Wallenberg, and also wanted it to attend to all road user groups. The resulting traffic week had its critics. Ragnar Schlyter, representing pedestrians, wanted a 'pedestrian week' to focus on their particular needs rather than trying to foster and educate them. Motorists had to respect pedestrians, not the other way around; according to Schlyter, 'many "gentlemen motorists" seem to regard pedestrians as "fair game"!' 35 The traffic weekand traffic education in generalalso had an ardent opponent in the mayor of Stockholm, social democratic city councillor and member of the Swedish parliament, Carl Lindhagen. In a 36-page petition to the city council in 1935, Lindhagen lamented what he referred to as the 'militarization' of pedestrians sought by 'motordom' (sv. motorism) and the police. 
Educational efforts meant blaming individuals for what was, in his view, a societal responsibility and, he argued, educating pedestrians would simply not work: 'every intervention to seek to prevent pedestrians from "walking incorrectly" will fall short', since they neither could nor wanted to change their practice. Lindhagen was a dedicated defender of pedestriansand critic of automobility, or at least of society's indulgent response to its growth. 37 His 1935 petition to the city council mirrored a similar petition in the first chamber of the Swedish parliament. Through its ties with 'influential higher circles of society', Lindhagen argued, 'motordom' had enabled the rights of pedestrians to be overridden. Lindhagen sought an overview of the traffic regulations so that they did not primarily attend to the needs of motorists and secure drivers' perceived 'privilege to retain high speed'. 38 His opponents, however, cast Lindhagen as a figure of the past. According to the newly constituted Traffic Committee, Lindhagen wanted to 'turn back development'. 39 In the city council debate, motor-minded councillors stressed the important role of motorized traffic in the city: 'We simply have to adjust and adjust the City of Stockholm to the new demands of the modern age.' What is more, Lindhagen's long petition and debating style were ridiculed. One of his main opponents, Martin Erikssonrepresenting professional drivers in the Traffic Committee and NTFsaid plainly that he was tired of listening to Lindhagen. Lindhagen fought back, criticizing Eriksson for calling pedestrians 'giddy geese' in a radio debate some months earlier: 'They require constant consideration from motorists', Eriksson argued then, 'while they themselves lack consideration and run like giddy geese in front of trams and cars.' To Lindhagen, the term was little more than 'an artificial construct to be able to keep the "speed" and privilege to drive as serves the car owner best'. 40 Eriksson's reference to pedestrians as 'giddy geese' was not his own invention. It had been around at least since 1920, but was increasingly in circulation by the mid-1930s. At times it was used to distinguish respectable pedestrians from 'all the hysterical old ladies who run like giddy geese back and forth when they are crossing a street'. Even pedestrians themselves pointed to the 'giddy geese' in order to stand out as more considerate. 41 The traffic weeks in Stockholm would become a yearly event, continuing after World War II, and they continued to devote special attention to the conduct of pedestrians. In NTF's 1936 educational brochure 'Can you walk or do you walk like Laura?', Laura was portrayed as a whimsical hen who made a mess of traffic in Swedish cities. 42 In 1938, the yearly campaign was a dedicated 'pedestrian week', especially geared towards pedestrians. 43 Lindhagen had criticized the Traffic Committee and NTF: lacking any representative of pedestrians whatsoever, no wonder, he argued, the proposals and initiatives from these bodies subordinated pedestrians to motorists. 44 had representation in the mediating bodies where key issues related to Stockholm's mobility were negotiated and resolved. Their representatives had key positions within the city traffic regime. 
But although motorist interests appear to have had an upper hand vis-à-vis other road user groups, the struggles of distributing blame and responsibility remained unresolved throughout the inter-war period, as seen in the debates around the introduction of pedestrian crossings and traffic signals in Stockholm. Police control of pedestrian crossings While the chief of police, city politicians and the motorist organizations negotiated over decrees and traffic rules and thought up behavioural campaigns, police officers on the ground had the difficult job of monitoring traffic and making sure rules were respected. Interviewed in Trafiken in 1928, several police officers complained about pedestrians' conduct. Their criticism ranged from general condemnations ('they exhibit complete lack of traffic culture'; the 'problem children of traffic') to more specific concerns, such as failing to respect traffic policemen's signals and guidance. Other road user groups also had their fair share of critcism. 45 The traffic police force's leadership was no less critical. In 1931, the traffic police were reorganized into a centralized traffic division headed by traffic inspector Carl Stawström. In their proposal to the city for the new division, the police requested funds for an initial 30 constables, one of their main tasks being 'traffic regulation to the benefit of pedestrians'. 46 Interviewed in the press on his first day in the new office, however, Stawström voiced different concerns. Indeed, the bureauwhich would eventually count over 100 officerswould devote much attention to pedestrians, but rather to imprint in them traffic culture than to see to their needs in modern traffic. If pedestrians only realized the benefits of 'adjusting to traffic', he argued, things would become much smoother and more pleasant for everyone. 47 One of many tasks of the traffic police officers was to oversee pedestrian crossings. In the early 1920s, the traffic department had begun to experiment with different kinds of street markings: steel plates fastened in the street surface a few decimetres apart were soon replaced or complemented with different kinds of painting. 48 The traffic department continued to implement pedestrian crossings through the 1930s, often urged to do so by the traffic police. Based on international experience, they tried out different ways to mark the crossing in the pavement (white lines, aluminium plates, glass bricks) and other ways to increase their visibility, such as through dedicated warnings signs. In October 1929, so-called 'white lines' were introduced at three busy junctions to guide pedestrians across the street and as the white lines multiplied Stawström stressed how pedestrians as well as motorists had to respect them better. 50 During spring and summer of 1932, the police arranged practical instructions for a few hours every afternoon in the busy downtown intersection Kungsgatan-Sveavägen, during which extra assigned officers taught pedestrians 'the art of crossing difficult intersections'. During the exercise, pedestrians were only allowed to cross between the white lines. But once the constable was gone, Stawström reported, 'well, yes, then Barabbas is often loose'. 51 Svenska Dagbladet confirmed: as soon as a lesson ended, 'pedestrians completely ignore the "white lines"'. 52 Why did pedestrians not respect the white lines? To many observers, the answer was clear: because their violations never led to any fines. This changed in 1931. 
Although the rules aimed at pedestrians were to be seen as 'advice' only, pedestrians were subject to the general rule of care and consideration in traffic that could lead to fines. 53 In July 1931, Aftonbladet reported the first fines issued to a pedestrian due to 'lack of consideration and caution in traffic'; by September 1932, some 20 pedestrians had been sentenced to pay fines for carelessness in traffic. 54 Stawström was pleased that pedestrians could now be prosecuted and subject to fines for their misconduct. But he was still concerned about the 'deep incomprehension of modern traffic' among Stockholm's pedestrians. They failed to realize the risks they ran in 'thoughtlessly threading the roadway'. He had even seen them 'walk around, hands in their pockets, in the middle of the worst traffic congestion, just as if it was the good, old days'. 55 Pedestrians were, of course, not pleased about the new rules; even a pedestrian who was run over by a car could, if he or she was found guilty of carelessness, be sentenced to pay fines. 56 They also had their own explanation of why they did not use the pedestrian crossing: because motorists failed to respect them. 57 During his visit to Stockholm in 1935, a representative of the British Pedestrian Association reacted to motorists' lack of care at pedestrian crossings. 58 Stawström also stressed 50 '"Fållorna" farliga om ej trafikpolis leder fotgängaren ', Dagens Nyheter, 9 Oct. 1929;Sign. 'Refil', 'Vita streck äro dyra att underhålla', Aftonbladet, 2 Jun. 1932. 51 'Klarabilisterna få snart parkera på Norra latin ', Svenska Dagbladet, 31 Mar. 1932;'Polisförstärkning vid vita linjerna', Dagens Nyheter, 12 May 1932;Sign. 'Svale', '800 karlar, ståtliga män, vilja bli polis', Svenska Dagbladet, 11 Sep. 1932. 52 'När katten är borta… ', Svenska Dagbladet, 3 Nov. 1932. 53 H. Björkman, Trafikförordningar: nya vägtrafikstadgan och motorfordonsförordningen (Göteborg, 1931), 8-11. 54 'Fotgängare bötfälld för ovarsamhet ', Aftonblandet, 24 Jul. 1931;Svale, '800 karlar, ståtliga män, vilja bli polis', Svenska Dagbladet, 11 Sep. 1932. 55 Sign. 'Bg', 'Liten intervju om gångtrafiken ', Dagens Nyheter, 10 Mar. 1934;Chester, 'VEM är den skyldige?', Aftonbladet, 30 Sep. 1932. 56 Sign. 'Blanche', 'En stackars fotgängare ', Dagens Nyheter, 6 Jul. 1933. See also Lindhagen's criticism in SF Petition 1935:3. 57 'Lojal gångare', 'Bilisterna och gatans vita linjer ', Aftonbladet, 20 Sep. 1934. 58 'Engelsk fotgängare ser på den tysta trafiken ', Svenska Dagbladet, 4 Aug. 1935. The British Pedestrian Association's representative pushed for the so-called Belisha 'orange' used in London (named after the British minister of transport who introduced them, Leslie Hore-Belisha) as a way to indicate pedestrian crossings better. For more on the Belisha beacon, see Moran, 'Crossing the road in Britain'. that motorists had to respect the white lines better, hoping for a change in national legislation that made violations subject to fines. 59 This happened two years later. In 1936, a game changer in urban traffic occurred in Sweden. In 1930, the motoring interests had successfully lobbied for abolishing the earlier speed limit (45 km/h) on national roads; now the speed limit on city streets was abandoned as well. 60 That is, motorists could theoretically drive their cars at any speed they cared toalthough they were expected to adjust to the situation. 
The same year, a special pedestrian crossing was introduced in the Swedish road charter, meaning that drivers would be fined if they failed to yield to pedestrians. 61 In writing to the government, the governor of Stockholm had pointed to the need, especially with the introduction of free speed, to 'attend to the pedestrians' possibilities to cross the very busy streets safely' by means of implementing more pedestrian crossings with proper marking and signage. 62 Stawström seemed pleased: while in the past, motorists had seldom stopped for pedestrians, now he expected 'mutual consideration', although he admitted that the police's resources for careful monitoring had to be restricted to a single week. 63 Lack of control and monitoring turned out to be a problem. Only one third of the 230 pedestrian crossings in the city were regulated by a police officer or by traffic signals. The rest, some 160 crossings, were wholly unobserved. Here, both motorists and cyclists were reported routinely to refrain from slowing down to allow pedestrians to pass, and posting police officers in the intersections only had an effect so long as they stayed there. Since the police had limited resources they had to try other alternatives. Thus, in November 1936, the police introduced a special task force of officers, dressed in civilian clothes, to patrol and make spot checks at these otherwise unobserved crossings. According to the traffic police, the result was a success. The control had brought an improved traffic culture among cyclists and motorists, who passed the crossings much more carefully. Not surprisingly, the system had won over the 'great mass of pedestrians', which could now pass an unobserved pedestrian crossing without risk and delay. 64 The daily press reported positively from the first day. The police were pleased, as were the pedestrians, one exclaiming that the 'secret police' had 'worked wonders'. 65 But the control with civilian-dressed police officers met fierce resistance from motorists and their organizations. Interviewed by Aftonbladet, professional drivers questioned the new control, comparing the 'one-sided' control with 'persecution'. 66 Liberal Dagens Nyheter argued that the novelty of 'free speed' was more important for 'traffic peace' than economically unviable measures to control and monitor traffic. Higher speeds, the editor argued, would be beneficial to mutual respect and understanding, since it 'automatically forces greater vigilance and care among reasonable people', both motorists and pedestrians. 67 Similarly, one motorist found that Stockholm's traffic would not improve until traffic reached the intensity of Berlin, Paris and other continental metropolises, where 'it would never cross a pedestrian's mind…to cross a street other than in those places dedicated for them'. 68 The traffic police's attempts to assure motorists that only the most flagrant violations would be fined were unsuccessful in stifling the hostility of motorists. 69 Not least due to the strong voice of the motor organizations, the control, as efficient as it was for the pedestrians, had to be abandoned. 70 In the mid-1930s, motoring interests were successful in lobbying nationally to facilitate fast driving even in cities. Pedestrians were compensated by (in international terms) generous rights of way at pedestrian crossings.
But this was challenged by car-friendly investigators at national level who found the Swedish legislation's 'absolute rights of way' 'outrageous' in allowing a continuous stream of pedestrians to 'hinder all moving traffic'. 71 As we have seen, many motoring advocates even hoped that more traffic at higher speeds would eventually force pedestrians to submit, due to fear, to the 'demands' of modern traffic. The introduction and development of traffic signals offer an opportunity to scrutinize further traffic control and priorities between road user groups. Traffic signal innovation and pedestrian safety The first traffic signals were introduced in Stockholm in 1925. They had two colours (red/green) and were manually operated by traffic police officers. Pedestrians were pointed out as the main problem. They rushed straight into the swarm of cars, the newspapers reported, across the roadway, with complete disregard of the risks they took. 72 Pedestrians continued to be singled out as the main problem in signal-controlled intersections. They stepped out in the roadway and tried to navigate between the cars, causing difficulties for both themselves and motorists. The failure to respect the signals, one city official argued, should be punished equally among pedestrians and motorists. 73 KAK's Axel Norlander also argued that pedestrians had failed to grasp that the optical traffic signals applied to them as much as to drivers. 74 They, and others with them, hoped that innovation in traffic signal technology would improve pedestrians' adherence as well as their safety. 75 In 1927, the city carried out trials with sound signals to indicate to pedestrians an upcoming signal change. The trial was not successful and was cancelled shortly thereafter. An alternative way to prepare road users for a signal change was to introduce a third colour. 76 The idea of a three-coloured (red-yellow-green) system had already been adopted in the US and also in a few European cities. 77 Its introduction in Stockholm was sparked by an inquiry from a bank manager, G. Hedman, who, following a visit to Berlin in 1930, urged the city to introduce it in intersections with intense traffic, such as the one at Kungsgatan-Vasagatan. Key to his arguments in favour of the novelty was the safety of pedestrians: in the present traffic situation, and 'when the signal changes occur instantly from red to green and vice versa', Hedman argued, 'pedestrians are only lucky to save themselves up on the pavement from the roadway'. 78 The police authorities in Stockholm, however, advised against the use of the yellow light. They preferred the approach that had been tried for some time, to set the signals so that the lights were red in all directions for a few seconds, which, they argued, had a similar effect. The traffic department, however, referred to the successful adoptions abroad of systems with a third, yellow, light in between. Among the benefits of the yellow light, it prepared waiting road users for the fact that they would soon have green to go, and here the all-red preferred by the traffic police fell short.
67 'Trafikdebatten', Dagens Nyheter, 4 Nov. 1936. 68 Sign. 'S', 'Reta bilisten - också ett nöje', Aftonbladet, 20 Oct. 1936. 69 'Överdriven bilistskräck för civila trafikkontrollen', Aftonbladet, 2 Nov. 1936. 70 Torsten Nothin, överståthållare 1933-1949.
This 'preparedness' was important, the department argued, 'since both methods implies less traffic capacity, time without any movement, why it is necessary to get moving as quickly as possible'. 79 In the view of the engineers, the yellow light was primarily about speeding up traffic and only secondly about securing pedestrians' ability to cross the street. The traffic police were still not convinced about the benefits of the yellow light, but they were soon won over. 80 As 1933 drew to a close, the traffic department decided that experimentation with traffic signals would cease and be followed by a process of standardization. 81 During the 1930s, all 12 traffic signal-controlled intersections in Stockholm adopted the three-coloured design with a phasing that by then was known as the 'Stockholm system', which became a national standard in 1937. 82 Innovation stopped, and was picked up only after World War II, but this did not mean that traffic signals served pedestrians well. Already when the yellow light was planned, the traffic department knew that pedestrians, without their own phase, would have to 'worry about turning traffic' even when they walked on the green light. 83 In February 1935, in a letter to the editor of Dagens Nyheter, a pedestrian asked rhetorically: 'Are the signal installations at intersections only for motorists?' The writer found the implementation of the yellow light flawed: pedestrians who entered the street during the last second of a red light ran the risk of making it only half-way over the street. When the journal had the signals tested by one of the paper's employees, the yellow light lasted 1-1.5 seconds at most and sometimes 'only a mere blink', in any case not enough to cross even a narrow street. Confronted with the criticism, Stawström explained the short period by the particularities of the place, a complex intersection in the city centre. In most other places, he stated, the timing was set automatically, and always to at least three seconds, sometimes a bit more. The automatic installations, however, could also be operated manually, and the police considered providing a notice at every control board, indicating that, in fact, not all traffic officers granted pedestrians the full three seconds. 84 Criticism prevailed. Many pedestrians found the yellow signal time too short to make it across the street safely. 85 In April 1937, readers of Dagens Nyheter complained about the city's traffic signals in letters to the editor, asking whether the signals really served anybody other than drivers. The new traffic inspector Anders Johansson admitted that, in the beginning, the traffic signals had served motorists in particular, but he said that nowadays (with the yellow light), pedestrians also benefited. Still, turning traffic was unavoidable, and Johansson advised pedestrians to use the 'three seconds of yellow light', but to look out carefully for cars once the light turned to green. 'Some may think that the three seconds are cheaply meted', he wrote, anticipating some of his critics, 'but traffic cannot be brought to a complete stop any time longer'. 86 Otto Wallenberg argued that pedestrians in Stockholm had the wrong impression that the traffic lights were only meant to keep vehicular traffic running. Consequently, they failed to respect the red lights in the way they did in Germany, France and even in Copenhagen. 87 A pedestrian argued it was, in fact, safer not to comply with the rules, since no signal served him well.
At a red light, he [the pedestrian] must not pass, and at the green light, he often cannot pass due to all the vehicles, which then turn around the corner and take him by the flank. The only 'bright spot' for the walker is the yellow light, when traffic in both directions is at a standstill, but it shines for only a couple of seconds. Actually one is safer walking across a street against a red light than at green, when one does not have to watch out for the turning vehicles. My experience is, in fact…that the safest place to cross a street is at mid-block, when you have a clear view of traffic from both directions and no horrible surprises to be anxious about. If there is even a traffic island in the middle of the street, one feels downright safe. 88 Pedestrian readers regularly repeated the claim that walking against the red light was safer, while the green light gave the pedestrian a false sense of being clear to go. 89 The introduction of traffic lights can be regarded as a first step to replace traffic controls involving direct supervision by the traffic police with automatic control measures based on engineering, although in Stockholm the transition was only partial in the inter-war period. 90 Norton has demonstrated a shift in the handling of traffic problems in cities in the United States in the same period. As traffic engineers gradually replaced policemen as the most important professional group, restrictive traffic regulation to maintain the order of the street was traded for measures which sought to improve efficiency and reduce congestion. 91 In Stockholm also, municipal engineers appear to have used the introduction of traffic signals as an opportunity to engage in traffic control. Traffic signals were only one of their tools, but one which most clearly manifested their wish to speed up car traffic at the expense of pedestrians. 92 Even the introduction of the third, yellow light, sometimes highlighted as a safety measure to the benefit of pedestrians, was primarily a measure to promote motoring. The short period of the yellow light, three seconds, and the risks of conflict with turning cars when using the green light, made many pedestrians still opt for walking against red. What many framed as rude rule breaking, some pedestrians found to be the safest way. Conclusion This case-study of walking and pedestrian control in Stockholm largely confirms the findings of previous scholarship that the inter-war period stands out as one of tension between road user groups as well as professional groups engaged in organizing city traffic. But compared to US cities, the reconfiguration of walking and motoring in Stockholm was incomplete. The traffic police, municipal engineers and politicians negotiated between the needs and demands of different road user groups. Although motorists and their representatives became more influential in the 1930s, they were not always successful in promoting their case. Car use had not yet become widespread enough to make it fully dominant. Yet the marginalization of walking, not necessarily in numbers (indeed statistics on urban walking are scarce), but in the allocation of and access to space, had set in. Stockholm shared with US cities an early focus on trying to regulate pedestrian conduct through decrees, worked out in negotiations between the police and road user representatives, but finally approved, or rejected, by the city council.
This approach was, in the 1930s, complemented by educational campaigns run by organizations affiliated to motoring interests, in which pedestrians were portrayed as unruly, absent-minded, ignorant of rules ('giddy geese') and, in the end, themselves the cause of many traffic accidents. Meanwhile, pedestrian crossings and traffic signal technology embodied partly contradictory ambitions: they should ensure pedestrian safety while not endangering the smooth flow of cars. And in Stockholm as in London, over time, the second ambition, to expedite motorists, outgrew the attention to pedestrians' needs and demands. 93 But if the marginalization of pedestrians was relative, the definition of the pedestrian as a road user and traffic participant was already complete. Put differently, streets in Stockholm had not been completely reinterpreted as exclusive spaces for cars, but they had indeed become spaces primarily for traffic, not trade, play or socialization, pavements included. Pavements were regulated to secure circulation; pedestrians were educated in proper traffic behaviour; and they were steered, as traffic subjects, to cross the streets at particular places and particular times by means of pedestrian crossings and traffic signals. Their non-compliance with these innovations was part of a fight for their rights as road users rather than a refusal to be considered traffic. The street as mono-functional space was an urban reality for most of the twentieth century, a reality that many present-day policy-makers struggle with in their ambitions to accomplish liveable urban environments. Analysis of the implementation of pedestrian crossings and traffic signals also allows for alternative interpretations of pedestrians' 'misbehaviour'. Contrary to the claims of some contemporaries that pedestrians lacked manners, the findings suggest that their reasons for resisting formal regulations and traffic control innovations were that they found the prescribed practices less safe. Their 'rebellion' was, according to pedestrians themselves, only sound and safe conduct. What is more, pedestrian 'misbehaviour', criticized as it was among traffic experts, made its way through feedback loops into new traffic legislation, street designs and traffic control innovation. As highlighted by Frank Trentmann in his criticism of the historical scholarship of urban governance, users, in this case pedestrians, have other capabilities than mere anti-disciplinary tactics; they also productively influence the functions and organization of systems such as urban transport. Finally, the Stockholm case points towards promising avenues for further historical research on urban walking and its control. Case-studies of more cities, covering larger time spans, could provide a basis for a more thorough comparative analysis. The many similarities between the city cases are striking. Local regulation of walking was peculiarly similar, and the 'traffic week' in Stockholm was immediately inspired by German pedestrian weeks, but was also preceded by similar events in Chile, the US and England. At the same time, there were differences. References to innovations and traffic situations elsewhere were commonplace. Traffic signal installations and pedestrian crossings were inspired by examples from abroad, but were also adapted to suit local circumstances in Stockholm. The circulation of ideas, knowledge and technologies, and the varied ways they were put into use, appears to be a promising subject for future research.
v3-fos-license
2018-04-03T05:50:07.891Z
2006-06-12T00:00:00.000
22330791
{ "extfieldsofstudy": [ "Physics", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1364/oe.14.005508", "pdf_hash": "bd3160aa732261a8e2b13628100c2e3cf76ac426", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42541", "s2fieldsofstudy": [ "Physics" ], "sha1": "8538652bd628f68d5c1816532b12bda0503f5d23", "year": 2006 }
pes2o/s2orc
Observation of discrete quadratic surface solitons. We report the first observation of discrete quadratic surface solitons in self-focusing and defocusing periodically poled lithium niobate waveguide arrays. By operating on either side of the phase-matching condition and using the cascading nonlinearity, both in-phase and staggered discrete surface solitons were observed. This represents the first experimental demonstration of staggered/gap surface solitons at the interface of a semi-infinite nonlinear lattice. The experimental results were found to be in good agreement with theory. Introduction The interplay between discreteness and nonlinearity has led to a host of new phenomena in physical sciences. This has been most pronounced in the area of optics where high-quality discrete structures can be fabricated and the optical power levels required to induce nonlinear effects can be easily achieved [1]. Discreteness has resulted in the prediction of new classes of spatial solitons and other phenomena that have no counterparts in continuous systems [2][3][4]. And indeed, many of these processes have been observed in a variety of Kerr, quadratic, photorefractive and liquid crystal media [5][6][7][8]. Thus far, the arrays used for discrete optics experiments have been fabricated by a variety of techniques, some of which lend themselves to small and controllable index differences at the array boundary with continuous media. This feature can now facilitate new experimental studies in the area of nonlinear surface guided waves which received a great deal of theoretical attention in the 1980's and early 1990's [9][10][11][12]. The theoretical feasibility of guiding waves along an interface between two media, at least one of which exhibits a selffocusing nonlinearity was discussed extensively. Yet, in spite of these efforts, no successful experiments have been reported along these lines. Part of the problem was to find media combinations whose linear index difference was of the order of the maximum index change allowed by self-focusing nonlinearities, i.e. typically 10 -4 and less. For the weakly guiding arrays currently in use, such small index differences are available at the interface between the array and the host medium. This can in turn facilitate the observation of interface solitons as recently suggested by our group [13]. Theory has already shown that such interface guided waves do exist at the boundary between arrays and continuous media [13], and in fact they have been observed for the first time in self-focusing Kerr lattices [14]. Discrete quadratic solitons have been previously demonstrated inside arrays governed by the "cascading" quadratic nonlinearity [6]. One of the unique features of this nonlinearity is that it can change from effectively self-focusing to defocusing depending on the wavevector mismatch conditions. Thus both signs of the nonlinearity are accessible in the same sample just by, for example, changing the temperature. This property has been used to demonstrate both in-phase and staggered (adjacent fields are π out of phase with each other) spatial solitons in these arrays [6]. In this paper we show theoretically and experimentally that both types of quadratic surface discrete solitons exist for both signs of the cascading nonlinearity. We note that this represents the first observation of gap surface solitons in arrays with defocusing nonlinearity as earlier predicted [13,15]. Theory The system shown in Fig. 
1 was modeled by employing a coupled mode formulation for quadratic nonlinear media [3,6]. In our system, the adjacent waveguides comprising the array are weakly coupled by their evanescent fields. Given the fact that the second harmonic (SH) TM 00 -modes are strongly confined, the coupling process between the SH fields is negligible. Therefore, here we only consider coupling between the modal fields of the fundamental wave (FW). In physical units, the pertinent coupled mode equations describing the wave dynamics in a semi-infinite array are given by Eqs. (1), where u n and v n are the FW and SH modal amplitudes in the nth waveguide respectively, c is the linear coupling constant and γ is the effective quadratic nonlinear coefficient. Furthermore, Δβ=2β(ω)-β(2ω) is the wavevector mismatch between the FW and SH. Stationary solutions of the form u n =f n exp(icμz) for the FW and v n =s n exp(2icμz) for the SH were numerically determined by applying Newtonian relaxation techniques. Here μ is the soliton eigenvalue and is related to a nonlinear change in the propagation constant Δk NL =cμ. In-phase solitons are possible when 2Δk NL +Δβ>0, while staggered solitons exist for 2Δk NL +Δβ<0 [3]. The power versus nonlinear wavevector shift diagrams for both the in-phase and staggered surface soliton families obtained are shown in Fig. 2 and Fig. 3, respectively, along with the corresponding typical intensity profiles. Throughout this study we use the parameters typical of the experiments. More specifically, the coupling length in this array is taken to be 25 mm and the quadratic nonlinear coefficient is 18 pm/V [6]. A number of interesting features are predicted for these quadratic surface solitons. Different from the infinite-array case, these surface self-trapped states exist only when their power exceeds a critical level - a direct consequence of the semi-infinite geometry of the lattice. This is a feature common to surface solitons at the interface between continuous media, also found recently for surface solitons propagating due to self-focusing and self-defocusing nonlinearities in Kerr media [13,15]. As the soliton power increases the fields become progressively more confined in the n=0 channel. The fraction of power carried by the SH is decreased as Δk NL increases. Furthermore, just as found for discrete solitons in infinite 1D media, the solitons consist of coupled FW and SH fields. In addition to the expected staggered solutions, in-phase solitons were also found under negative phase mismatch conditions for 2Δk NL +Δβ > 0, i.e. with self-focusing nonlinearities. See the blue curves in Fig. 2(a) for the existence curves and the field distributions in Fig. 2(d). Note that this family of solitons can only be excited if the SH is considerably stronger than the FW. Similarly, in regions of positive phase-mismatch, both stable in-phase and staggered surface solitons (the latter for 2Δk NL +Δβ < 0, i.e. a self-defocusing nonlinearity; see the blue curves in Fig. 3(a) and the fields in Fig. 3(d)) are predicted to exist. This mirrors the case predicted for infinite quadratically nonlinear 1D arrays [3]. We emphasize that in all cases the branch associated with the SH wave in the existence curves [see Figs. 2(a) and 3(a)] does not depend on the value of the phase-mismatch Δβ. This can be formally proved based on the fact that the waveguides are uncoupled for the SH wave. Finally, we note that stability analysis of Eqs.
(1) indicates that the predicted surface solitons are stable in the regions where the slope of the curve is positive, in accordance with the Vakhitov-Kolokolov criterion [16]. Experiment The arrays used here consist of channels formed by Ti diffused into the surface of LiNbO 3 as shown in Fig. 1. Phase-matching for second harmonic generation is achieved by periodic poling of the lithium niobate (PPLN) ferroelectric domains along the propagation direction. This poling extends beyond the array but in that region the periodicity required for efficient SHG is different from that required for the channels and the generation of the second harmonic is very weak and can be neglected. There is no Ti in-diffused outside of the array, therefore the array boundary corresponds to an interface between the 1D waveguide array and a semi-infinite half-space. The samples contained four waveguide arrays each consisting of 101 coupled channel waveguides with propagation along the X-axis. The spacing between the arrays was sufficiently large (> 100μm) that the region beyond each array boundary can be considered as a half-space. Seven cm long waveguides were formed by titanium in-diffusion into the Z-cut surface. TM 00 -mode waveguide losses were 0.2dB/cm for the FW at λ ≈1550 nm and 0.4dB/cm for its SH. The center-to-center channel separations was d = 16 μ m resulting in a coupling length of L c =25 mm for the FW TM 00 mode. These were determined from the output intensity distribution under single waveguide excitation conditions [17,18]. The sample was periodically poled with a period of 16.75 µm by electric field poling to achieve phasematching between the TM 00 modes for SH generation at temperatures elevated to the range of 200-250°C. The required wave-vector mismatch was adjusted by varying the sample temperature T. In our experiments the relation between the phase-mismatch ΔβL and sample temperature T was measured to be ΔβL=8.1(234-T [ 0 C] ) [17]. A 5-MHz train of bandwidth limited 9-ps-long pulses at a wavelength of 1557 nm was produced by a modified Pritel fiber laser [17,18]. The pulses were stretched, amplified in a large area core fiber amplifier, and then recompressed in a bulk grating compressor to give up to 4 kW of peak power in nearly transform limited pulses 7.5-ps-long. The recompressed pulses were spatially reshaped into elliptical Gaussian beam with 4. The output of the array was observed with separate cameras for the FW and the SH, and quantified by measuring temporally averaged output intensities and total powers. Figure 5 shows the observed FW discrete diffraction pattern obtained at low powers. It is in good agreement with the theoretical pattern generated from Eqs. (1). of Ref. [13]. The evolution of the output intensity distributions versus input peak power of the fundamental for single channel excitation (n=0) is shown in movies in Fig. 6 (positive phase mismatch) and Fig. 7 (negative phase mismatch). Increasing the input peak power leads in both cases to localization into surface solitons, as predicted theoretically. At peak powers of 600W for the focusing case and 500W for the defocusing case the localization is essentially complete. The observed intensity decay into the array from the boundary out to distances typical of the low power discrete diffraction pattern is a direct consequence of the pulsed excitation used which contains a continuum of powers. 
That is, not all of the pulse is trapped as a surface soliton in the boundary channel, and the weaker parts of the pulse appear as part of a modified linear discrete diffraction pattern. We want to mention here that we controlled the input powers with a combination of a polarizer and a half-wave plate, and thus the power scaling is sinusoidal. The weak second harmonic component is localized almost completely in the n=0 channel, in agreement with theory in Figs. 2(b), 2(c), 3(b) and 3(c). An important problem is to verify which field distributions were generated, staggered or in-phase, for each sign of the cascading nonlinearity. Theory has shown that the ratio of the FW to SH powers is very different in the two cases. In order to compare experiment approximately with theory, the assumed hyperbolic secant temporal profile was decomposed into cw temporal slices and the pulse response was simulated by adding the slices together. A fourth order Runge-Kutta method was then used to propagate the fields under the influence of Eq. (1). Comparing the measured and the calculated ratios of the FW to SH powers at the output, clearly the observed surface solitons were the staggered ones for negative and the in-phase ones for positive mismatch, since the experimentally measured power ratio FW/SH was much bigger than unity. It would be necessary to also input the appropriate SH field in order to excite the other surface solitons. Using the same numerical approach, the output intensity distributions across the array were calculated versus input peak power for both positive (+36π) and negative (-15.5π) phase mismatches for in-phase and staggered solitons respectively. A sampling of these results, along with the corresponding experimental data, is shown in Fig. 8 and Fig. 9.
Fig. 8: Measured (left-hand side) and calculated (right-hand side) output field distributions for single channel excitation at two input power levels, corresponding to partial collapse into a surface soliton for the FH (first row) and full collapse into a surface soliton for the FH (second row) and SH (last row). Phase mismatch = +36π (self-focusing nonlinearity). The red curves represent theoretical results (at 435 W and 441 W) and the blue curves experimental data (at 430 W and 600 W FW input powers).
Fig. 9: Measured (left-hand side) and calculated (right-hand side) output field distributions for single channel excitation at two input power levels, corresponding to partial collapse into a surface soliton for the FH (first row) and full collapse into a surface soliton for the FH (second row) and SH (last row). Phase mismatch = -15.5π (self-defocusing nonlinearity). The red curves represent theoretical results (at 310 W and 321 W) and the blue curves experimental data (at 420 W and 580 W FW peak input powers).
In fact there is good qualitative agreement between experiment and theory considering the non-ideally hyperbolic secant temporal profile of the input beam and the coupling efficiency estimated from low power throughput experiments [17,18]. If a coupling efficiency of 50% is assumed into the input channel, the resulting quantitative agreement is also good. Summary In summary, discrete quadratic solitons guided by the interface between a 1D array and a semi-infinite medium have been predicted theoretically and observed experimentally. Two of the four predicted soliton types were generated by exciting the first channel with a beam at the fundamental frequency. The additional solitons not observed will probably require the excitation of both fundamental and second harmonic fields.
Finally, the results reported here represent the first observation of staggered discrete surface solitons in any periodic system. This research was sponsored by NSF.
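For readers who want to reproduce the qualitative behaviour described above, a minimal numerical sketch of the cw-slice/fourth-order Runge-Kutta propagation scheme is given below. This is an illustrative reconstruction, not the authors' code: the explicit form of the coupled-mode equations, the sign conventions, and the parameter values (coupling constant, effective nonlinear coefficient, wavevector mismatch, input slice power) are assumptions inferred from the definitions, stationary ansatz and existence conditions quoted in the theory section, and may differ from Eq. (1) in normalisation.

```python
import numpy as np

# Minimal sketch (assumptions, not the authors' code): RK4 propagation of one
# cw "slice" of the pulse through a semi-infinite quadratic waveguide array.
# Assumed model, reconstructed from the stationary ansatz u_n = f_n exp(ic*mu*z),
# v_n = s_n exp(2ic*mu*z) and the condition on 2*dk_NL + dbeta given in the text:
#   i du_n/dz + c (u_{n+1} + u_{n-1}) + g u_n* v_n = 0   (u_{-1} = 0, boundary guide n = 0)
#   i dv_n/dz - dbeta v_n + g u_n**2 = 0
N = 101                       # number of guides in the array
c = np.pi / (2 * 0.025)       # coupling constant [1/m] from the 25 mm coupling length
g = 50.0                      # effective quadratic coefficient [W^-1/2 m^-1], illustrative value
dbeta = -15.5 * np.pi / 0.07  # mismatch [1/m]: -15.5*pi accumulated over the 7 cm sample (assumed)

def rhs(u, v):
    """Right-hand sides du/dz, dv/dz of the assumed coupled-mode model."""
    nb = np.zeros_like(u)
    nb[:-1] += u[1:]          # u_{n+1}
    nb[1:] += u[:-1]          # u_{n-1}; nothing added at n = 0 (semi-infinite boundary)
    du = 1j * (c * nb + g * np.conj(u) * v)
    dv = 1j * (-dbeta * v + g * u**2)
    return du, dv

def rk4_step(u, v, dz):
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5 * dz * k1u, v + 0.5 * dz * k1v)
    k3u, k3v = rhs(u + 0.5 * dz * k2u, v + 0.5 * dz * k2v)
    k4u, k4v = rhs(u + dz * k3u, v + dz * k3v)
    return (u + dz / 6 * (k1u + 2 * k2u + 2 * k3u + k4u),
            v + dz / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

# Single-channel excitation of the boundary guide with a ~600 W cw slice of the FW
u = np.zeros(N, dtype=complex)
v = np.zeros(N, dtype=complex)
u[0] = np.sqrt(600.0)         # amplitude chosen so that |u|^2 is the guided power in watts
dz, L = 1e-5, 0.07            # step size and 7 cm propagation length
for _ in range(int(L / dz)):
    u, v = rk4_step(u, v, dz)
print("FW output power in the first guides [W]:", np.round(np.abs(u[:6])**2, 1))
```

With a self-consistent parameter choice, a high-power slice launched into the boundary guide should remain localized near n = 0, while low input powers reproduce discrete-diffraction-like spreading into the array; summing such slices over the pulse envelope mimics the pulsed-excitation averaging discussed above.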
v3-fos-license
2019-01-22T22:35:26.732Z
2018-12-18T00:00:00.000
58034174
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1155/2018/7267920", "pdf_hash": "be387d1116bc19c5c935517eefd9f6165098aaf8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42545", "s2fieldsofstudy": [ "Medicine" ], "sha1": "30001fc3138c4a361a9c084e751be43a6b7a0c65", "year": 2018 }
pes2o/s2orc
The Use of Natural Health Products Especially Papaya Leaf Extract and Dandelion Root Extract in Previously Untreated Chronic Myelomonocytic Leukemia Chronic myelomonocytic leukemia (CMML) is a clonal hematopoietic disorder which shares clinical and morphological features of myelodysplastic syndrome and myeloproliferative neoplasms. Conventional therapeutic options include hydroxyurea, hypomethylating agents, and systemic chemotherapy, which are all palliative measures and are associated with potential side effects. Allogeneic hematopoietic cell transplantation is the only curative option. Natural health products such as papaya leaf extract and dandelion root extract have been shown to demonstrate anticancer activity in preclinical and clinical studies, respectively. We present a case study of a 76-year-old male with previously untreated CMML, whose hematological parameters remained stable and whose bone marrow blast counts vastly improved while taking papaya leaf extract and dandelion root extract. Introduction Chronic myelomonocytic leukemia (CMML) is an aggressive and generally resistant form of hematopoietic stem cell neoplasm with the potential of progression to an acute myelogenous leukemia and with a median survival of 12 to 24 months from diagnosis [1]. Because of lack of CMMLspecific clinical trials, the optimal treatment of CMML is unclear. Allogeneic hematopoietic cell transplantation (alloHCT) is the only disease-modifying therapy, but many patients are not candidates for an alloHCT due to multiple comorbidities and/or advanced age. Numerous anticancer therapeutic agents have been derived from natural products [2]. Research into the antimalignancy properties of natural health products (NHPs) dates back to the 1980s. A handful of preclinical studies have demonstrated anticancer activity with the use of papaya leaf extra (PLE) and dandelion root extract (DRE) [1,2]. Case A 76-year-old Caucasian male with a history of stage Ic prostate cancer, gleason 3 + 3, status after radiotherapy and antiandrogen therapy in 1998 (in remission), hypertension, and IgG2/IgG4 subclass deficiency presented with CMML-1 with deletion 7q (q22q32) in 11 of 20 cells (55%) diagnosed in December 2008. JAK 2 mutation analysis and quantitative RT-PCR for the BCR-ABL transcript were negative. He had an isolated thrombocytopenia at presentation, his diagnostic bone marrow biopsy demonstrated trilineage hematopoiesis without dysplasia and 1% blasts, and atypical intermediatesized monocytoid cells increased at 20%-30% of the bone marrow cellularity. No systemic treatment was commenced at the time. His other comorbidities include gastroesophageal reflux disease, lumbar spinal stenosis (L3-L4), and lumbar neuroforaminal narrowing due to degenerative disc disease. In early 2009, he started PLE in the form of papaya leaf tea, 4 grams once daily in the morning and one teaspoon of elixir at night. He also had IgG2 and IgG4 subclass deficiencies. For his IgG subclass deficiencies, he was treated with cimetidine 400 mg 2 times daily starting from October 2014. He also started DRE 520 mg capsules, two capsules once daily in early 2015. His bone marrow blasts peaked at 11% in October 2009 and have been less than 5% since March 2013. Of note, both the papaya leaf extract and DRE supplements were commercial products. 
Prior to his diagnosis of CMML, he had been on numerous over-the-counter supplements (commercial products): pomegranate XT 1 mg daily; vitamin C 500 mg daily; L-carnitine, red yeast rice, niacin, vitamin B6, and omega-3 fatty acids (all started in the 1990s); nettle root daily, pyrroloquinoline 1 tablet daily, melatonin 1.5 mg daily, betaglucan 400 mg daily, and ellagic acid (all started in 2000); and one red rooibos tea bag daily with the PLE tea. Following the diagnosis of CMML, he started numerous additional OTC supplements: vitamin K daily (2011); bio-curcumin daily and vinpocetine 10 mg daily (2011); inositol 500 mg daily (2012); resveratrol 1 tab bid, vitamin B12, folic acid, and boswellia 100 mg daily (2013); S-adenosylmethionine (SAM-E) (2014) and rice bran 500 mg bid (2014); and maitake mushroom elixir about 2-3 cc once daily (2017). A trend of his blood counts and bone marrow results is illustrated in Table 1. The patient had no side effects and no worsening of comorbidities attributable to the supplements. He was hospitalized in June 2017 with acute hypoxemic respiratory failure and reactive airways treated with antibiotics and bronchodilators, which was not attributed to the supplements. Additionally, he was not neutropenic (ANC was always greater than 1200 cells/μL) during the hospitalization. Clinically, he continues to feel well and his hematological parameters remain consistently stable. Discussion The World Health Organization (WHO) has classified CMML as a myelodysplastic syndrome (MDS) overlapping with a myeloproliferative disease (MPD) [3]. Its dismal prognosis indicates a pressing need for more effective alternative therapies. The use of NHPs may well provide a nontoxic and less-expensive therapeutic alternative. Our hypothesis is that our patient's CMML disease course has been modified by NHPs and OTC supplements. In vitro data demonstrate anticancer activity with the use of PLE, DRE, and curcumin [1,[4][5][6]. It is unclear whether any of the other supplements have contributed to the stable disease course. The authors of [5] also demonstrated a reduction in cytokines IL-2 and IL-4 in peripheral blood mononuclear cells. These are most interesting findings; the PLE tea and elixir our patient consumed may have had an antiproliferative effect on his CMML. One study demonstrated a selective efficacy of DRE (Taraxacum officinale) in inducing apoptosis in CMML cell lines [1]. DRE is an NHP proposed to contain antioxidant properties; however, the exact mechanism of action in cancer cells remains elusive [2]. Studies of DRE have demonstrated a selective inhibition of the death receptor-mediated pathway of apoptosis [2]. An additional widely used NHP is turmeric (Curcuma longa), whose active ingredient is curcumin, which has been studied for antimalignancy effects. Arber et al. demonstrated selective efficacy in induction of programmed cell death in aggressive and resistant CMML cell lines using DRE [3]. Hamm et al. described a case of a 70-year-old man with a history of primary refractory acute myelomonocytic leukemia who received DRE tea after failing induction chemotherapy [6]. What emerged was a diagnosis of CMML. The patient achieved a complete hematological remission with the ongoing use of DRE tea. When he drank less than three cups per day of the DRE tea, his peripheral blood monocyte count started to rise [6]. The same group reported an elderly female with CMML who only used DRE tea to treat her disease. She achieved a hematological remission but relapsed 3 months later [6].
This group reported a third case of a 60-year-old female with primary refractory acute myelomonocytic leukemia and a subsequent diagnosis of CMML who achieved a hematological remission after starting DRE tea, three cups per day. She remains in complete hematological remission at least 5 months after starting DRE tea. A fourth patient with acute myeloid leukemia (AML), who was not a candidate for intense systemic chemotherapy, was treated with low-dose cytarabine followed by DRE tea. She developed peripheral blasts when she stopped the tea for one month, but she continues the DRE tea 15 months from the AML relapse [6]. Conclusion NHPs may well provide a potential nontoxic therapeutic alternative to conventional systemic therapy in the treatment of CMML. Our patient remains on numerous NHPs, most notably on PLE tea and DRE. He remains asymptomatic and transfusion independent, with stable counts. DRE and PLE may provide an improved quality of life for patients with CMML if systemic chemotherapy can be avoided. Conflicts of Interest The authors declare that they have no conflicts of interest.
v3-fos-license
2020-12-24T09:11:58.521Z
2020-12-18T00:00:00.000
230583018
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcsurg.biomedcentral.com/track/pdf/10.1186/s12893-021-01230-2", "pdf_hash": "21774b340f77bb1311c8c06b5da41bfa48c80a48", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42546", "s2fieldsofstudy": [ "Medicine" ], "sha1": "0aa462886d9c1106f4a9671fda1a95386bd22de8", "year": 2021 }
pes2o/s2orc
Spontaneous common bile duct perforation in full term pregnancy: a rare case report and review of literature Background Spontaneous biliary system perforation is a rare presentation in clinical practice especially in adults. It is rarely suspected and diagnosed preoperatively due to small number of cases, vague sign and symptoms, and ambiguous presentation. Case presentation We describe an interesting case of spontaneous perforation of the common bile duct in a 16 year-old female who presented a week after her first birth to the emergency department with complaints of diffuse abdominal pain, abdominal distention, fever, vomiting, and constipation. She was having generalized peritonitis but the etiology was unclear despite a thorough workup. She underwent exploratory laparotomy, and a perforation in the supra duodenal region of the common bile duct was found intraoperatively. The common bile duct was repaired over T-tube, and cholecystectomy was performed; the patient was recovered uneventfully. Conclusion Spontaneous biliary perforation is a rare cause of acute abdomen in adults and extremely rare in pregnancy. Its delayed diagnoses and management can lead to a high morbidity and mortality. All physicians, especially surgeons, should be aware of this possibility and consider it a cause of peritonitis on differential diagnosis particularly when there is no apparent etiology available for presentation. Background Most injuries to the bile duct are iatrogenic after some kind of interventions like endoscopic retrograde cholangiopancreatography (ERCP) as well as open and laparoscopic cholecystectomy [1]. Spontaneous bile duct perforation can also occur, but it is rare in adults. Etiologies include increased pressure in the bile duct secondary to obstruction by stones, strictures, tumors, and parasites; intra mural infection; necrosis of the bile duct wall secondary to thrombosis of intramural vessels; direct erosion by stones; cirrhosis; and weakness of the duct wall for multiple reasons. It can also occur without a known cause, but calculi were found in common bile duct (CBD) in 70% of cases. It is comparatively common in infants and etiologies are congenital biliary anomalies most of the time in this group [2,3]. Here, we present a young female who was diagnosed with spontaneous CBD perforation during surgery one week after giving birth. We also performed an extensive literature review and found only five cases of bile duct perforation during pregnancy or in the early postpartum period reported in the literature to date. intermittent fever and chills, right upper quadrant pain, and urinary retention for the last 2 weeks; she underwent normal vaginal delivery one week ago. She was hospitalized in another province for these complaints. The presenting complaints were diffuse abdominal pain, abdominal distention, and continuous fever from the last three days with vomiting and constipation for the last two days. The vital signs were BP 133/97 mmHg, RR 20/m, PR 136/m, temperature 39 C 0 , and SpO 2 90%. On physical examination, the patient looked ill with a distended abdomen, generalized tenderness, rebound tenderness, and guarding. Fluid thrill and shifting dullness was positive. Bowel sounds were not audible on auscultation. 
Blood workup showed TLC 21,000/mm 3 (normal 4000-11,000), neutrophil 89% (normal 40-75), Hb 8.2 g/dl (normal 11.5-16.5), serum creatinine 0.86 mg/dl (normal 0.6-1.2), total bilirubin 0.71 mg/dl (normal 0.1-1.2), SGOT 20 IU/ml (normal 0-40), SGPT 34 U/L (normal 5-40), alkaline phosphatase 109 IU/L (normal 40-240), and blood group AB + ; the HbsAg, anti HCV Ab, and HIV were negative. She was initially received by the emergency doctor and was diagnosed with "postpartum sepsis" because she was having signs and symptoms of sepsis. Ultrasound showed distended bowel loops floating in free fluid in the peritoneal cavity. Erect chest X-ray showed no air under the right hemidiaphragm. CT with contrast was performed, revealing gross ascites suggesting peritonitis, but the cause of the ascites was unclear. An exploratory laparotomy was performed after initial management with a hollow viscus perforation in mind. Biliary peritonitis was found, with about 4 L of bile-stained fluid in the peritoneal cavity; the stomach, small intestine (distended), and colon were normal. A perforation of about 7-8 mm was found on the anterolateral wall of the supraduodenal segment of the CBD below the junction of the cystic duct and common hepatic duct, as shown in Fig. 1. The gallbladder and the rest of the biliary tree were normal without any calculus. An intra-operative cholangiogram was not available, so a 10 French feeding tube was passed to the duodenum through the perforation to check for distal obstruction; it passed without any resistance. Peritoneal lavage with warm normal saline, cholecystectomy, and CBD repair over a T-tube were thus performed. The post-operative stay was uneventful, and the patient fully recovered and was discharged on the 7th post-operative day. The T-tube was removed on the 15th post-operative day after the cholangiogram showed no filling defects and normal contrast flow to the duodenum. The patient did well after one year of follow up. Discussion and conclusion Spontaneous perforation of the common bile duct was first described by John Freeland in 1882 when he found multiple diverticula around the CBD with stones on autopsy [4]. The most common site of biliary system perforation is the gallbladder, which accounts for 91% of cases, followed by the common bile duct (4.4%), cystic duct (3.3%), and common hepatic duct (1.1%) [5]. Apart from the gallbladder, the most common site of extrahepatic biliary system perforation is the junction of the cystic duct with the hepatic duct [6]. The first case in infants was presented by Dijkstra in 1932 [7]. The incidence of spontaneous biliary perforation is 1.5 in 1,000,000 live births over the first year of life. The most common site of perforation in infants below age one was the junction of the common hepatic and cystic duct (43%), common bile duct (23%), gall bladder (12%), hepatic duct (9%), and cystic duct (5%) [8]. The incidence of overall biliary system disease in pregnancy ranges from 0.05-0.3% [9]. The first case of spontaneous bile duct perforation in pregnancy associated with gall stones was reported by Piotrowski et al. in 1990, but bile duct perforation due to congenital anomalies in pregnancy had been reported prior to that in the literature. About 70% of spontaneous biliary perforation cases were associated with gall stones, where the stones were found during surgery [2]. Spontaneous bile duct perforation is rare in pregnancy, with only a few cases reported in the literature.
All cases (to the best of our knowledge) of normal anatomy biliary system perforation (including perforation of gallbladder and bile duct) in pregnancy or early postpartum period reported in literature are listed in Table 1. [2,[10][11][12][13][14][15][16][17]. Most cases are not spontaneous because they are secondary to some other underlying pathology. Out of all these spontaneous biliary system perforations cases given in the Table 1, only five cases of bile duct perforation have been reported: four in the common bile duct and one in the common hepatic duct. We present here a truly spontaneous common bile duct perforation case that we think is the 6th case of the bile duct and 5th case of CBD perforation during pregnancy in the literature. According to our analysis, the most common site of biliary tract perforation during pregnancy is gall bladder (9/15; 60%), CBD (5/15; 33.3%), and hepatic duct (1/15; 6.66%). The causes of spontaneous perforation include gall stones (6/9; 66.6%) and idiopathic (3/9; 33.3%). The causes of the six other cases mentioned in the Table 1 have not been clearly described, and they have been excluded from the analysis for the purpose of accuracy. The theories behind spontaneous bile duct perforation in adults are obstruction distal to perforation leading to high canalicular pressure, weakness of bile duct wall, or a combination of both. The etiologies leading to these phenomena mentioned in the literature are impacted stones or erosion of bile duct wall by stones without impaction, intramural infection, strictures, tumors, parasites, spasm of sphincter of Oddi, necrosis of the wall of the bile duct due to thrombosis of bile duct blood vessels, cirrhosis, birth trauma, biliary tract congenital anomalies like choledochal cyst or biliary diverticulum, connective tissue diseases, and previous biliary tract surgeries. Other comorbidities associated with spontaneous biliary perforation include HIV infection, tuberculosis of CBD, Hodgkin's Lymphoma, severe necrotizing enterocolitis involving duodenum, and viral infection of the bile duct [2,3,6]. Spontaneous bile duct perforation sometimes can be idiopathic like in our case but the possible reason for idiopathic cases might be thrombosis of those small blood vessels leading to ischemia and necrosis of bile duct wall and finally perforation. The presentation of the patient can be different from case to case because it can have both acute and insidious onset. Most patients having insidious onset (80%) may present with abdominal distention without abdominal pain and clay color stool; progressive jaundice may follow [3,6,18]. In acute cases (20%), the signs and symptoms of acute abdomen like generalized abdominal pain, abdominal distention due to bilious ascites, vomiting, fever, jaundice, high levels of bilirubin, or even shock may occur. The patient can present with a perihepatic collection or [3,19]. We believe our case was insidious onset and developed CBD perforation two weeks before she was presented to us but she was misdiagnosed by less experienced local health care workers in the rural area perhaps because she was a full term pregnant and they misinterpreted the symptoms as labor pain. She gave birth to her baby just one week after her symptoms started. 
She finally developed generalized abdominal pain that might be due to the rupture of the walled off biloma leading to fulminant peritonitis, infection with high fever, and high leukocyte counts or symptoms that became obvious when enough bile accumulated and infected within the peritoneal cavity. There is less of a chance of the first scenario because we could not find a walled-up cavity during surgery. The symptoms of bowel obstruction (abdominal pain, distention, vomiting, and constipation) can be explained by paralytic ileus due to infection and peritonitis. Such cases are rare, and suspicion and preoperative diagnosis of the condition is difficult. Diagnosis is mostly made during surgery. Perforation of the biliary system is a known complication of cholelithiasis or choledocholithiasis, and one should suspect biliary perforation if the patient presents with perihepatic abscess or signs and symptoms of peritonitis with a history of biliary stone disease [19]. If suspected, pre-operative diagnosis can be made by hepatobiliary iminodiacetic acid scan (HIDA scan), magnetic resonance cholangiopancreatography (MRCP), endoscopic retrograde cholangiopancreatography (ERCP), magnetic resonance imaging (MRI), and computerized tomography (CT) scan. These facilities are expensive and not widely available especially in low-income countries or in rural areas of many other countries; thus, an easy and cheaper biochemical test from the ascitic tap "ascitic fluid bilirubin concentration, and ascitic fluid to serum bilirubin ratio" can help to diagnose preoperative biliary peritonitis. The normal range of ascitic fluid bilirubin is 0.7-0.8 mg/dl, and concentrations above 6 mg/ dl support the diagnosis of choleperitoneum [18]. A study by Darwin et al. showed that a peritoneal fluid to serum bilirubin ratio (FSBR) greater than 5 is 100% specific and sensitive for prediction of bile leak [20]. Ultrasonography is a first study because it is cheap, readily available, can show free fluid in the abdominal cavity, fluid and pus collection in perihepatic area, and biliary system pathologies. In the case of CBD stones, the dilated biliary tree proximal to the obstruction can be nicely visualized as well; however, ultrasound may not find the exact site of the perforation but it is still a helpful investigation especially in low-resource settings. Chest X-rays in erect or left lateral decubitus positions will not reveal gas under the right hemidiaphragm; this is expected in many gut perforations cases. Thus, the suspicion should be higher if the patient is presented with peritonitis but with no gas under right hemidiaphragm on X ray, and the presence of other positive markers like bilious peritoneal tap with a history of biliary disease. Sharma et al. recommended peritonitis with bilious peritoneal tap, no pneumoperitoneum, and acholic stool pathognomonic for spontaneous biliary perforation [21]. The condition may be confused and challenging to diagnose because of similar presentation with other pregnancy-related diseases like pregnancy induced hypertension (PIH), preeclampsia, and hemolysis, elevated liver enzymes, and low platelet count syndrome (HELLP). McGrath et al. and Goodlin et al. discussed biliary system perforation confused with or misdiagnosed as pregnancy-associated problems [12,14]. This confusion may lead to delayed diagnosis that can lead to high morbidity and mortality. Spontaneous perforation of the biliary tree might be considered in differential diagnosis. 
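As a concrete illustration of the two bedside cut-offs quoted above (ascitic fluid bilirubin above 6 mg/dl supporting choleperitoneum, and a peritoneal fluid-to-serum bilirubin ratio above 5 predicting a bile leak), a minimal sketch is given below. The function and its name are our own illustration and not a validated clinical decision tool; the thresholds are simply those reported in the cited studies.

```python
def suggests_choleperitoneum(ascitic_bilirubin_mg_dl: float,
                             serum_bilirubin_mg_dl: float) -> bool:
    """Illustrative check of the two published cut-offs cited in the text:
    ascitic bilirubin > 6 mg/dl, or a fluid-to-serum bilirubin ratio (FSBR) > 5."""
    fsbr = ascitic_bilirubin_mg_dl / serum_bilirubin_mg_dl
    return ascitic_bilirubin_mg_dl > 6.0 or fsbr > 5.0

# Example: ascitic bilirubin 8.4 mg/dl with serum bilirubin 0.71 mg/dl -> FSBR ~ 11.8
print(suggests_choleperitoneum(8.4, 0.71))  # True
```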
Multidisciplinary approaches need to be adopted in such cases to avoid catastrophic events like these. Management of spontaneous bile duct perforation ranges from minimally invasive intervention to more aggressive surgical intervention. Based on the patient's condition, different treatment modalities can be chosen. The management mainstay in the case of suspected calculi in the biliary tree is threefold: 1) evaluate stone probability in biliary tree, 2) choledocholithotomy or removal of stones if present, and 3) removal of the source of stone (cholecystectomy) [22]. ERCP is of diagnostic and therapeutic advantage, and spontaneous biliary perforation cases can be diagnosed and treated at the same time with removal of the stone and placement of stent in the bile duct [14,21]. During failed ERCP cases or where ERCP facilities is not available, one can use laparoscopic or open exploration. If there is biloma and localized perihepatic abscess secondary to CBD obstruction by stone, then it can be drained percutaneously concomitantly with stones in the CBD removed via endoscopic sphincterotomy [19]. Patients presented with generalized biliary peritonitis need prompt exploration and thorough peritoneal drainage followed by management of the perforation site [21]. An intraoperative cholangiogram must be performed if available to check for stones and pathologies of the biliary tree. A conservative management of abdominal drainage and biliary tree decompression is recommended if there is no post-perforation obstruction; perforation will heal automatically once the biliary tree is decompressed [19]. Primary closure of the perforation site is hard and dangerous to perform because of inflammation around the perforation site; most patients will recover after external decompression and after treating the primary pathology [19,21]. Some authors believe that primary closure of the bile duct perforation can be performed if the facility of intraoperative cholangiogram is available and there is no distal obstruction; however, a biliary enteric anastomosis is required to avoid portal hypertension and biliary cirrhosis if there is distal obstruction like stricture or atresia [3,23]. Spigland et al. also suggested only external drainage for biliary perforation without ductal abnormalities but added that cholecystostomy may help in healing of duct perforation and also guide when to remove peritoneal drain [24]. Gurusamy They believed that routine T-tube drainage for CBD stones should be avoided because there is no justification for its use based on current available evidence. They also suggested that T-tube drainage in laparoscopic exploration of CBD also increases surgery time and postoperative hospital stay without significant difference in morbidity than primary closure without stent [25]. Simple T-tube peritoneal drainage is also acceptable even in the presence of distal obstruction when the exploration of porta hepatis is very risky in inflamed conditions, which may worsen the scenario. This mini surgery is less morbid and has good chances of healing. At the minimum, it will stabilize the patient for second surgery if the condition has not cured [23]. We recommend the use of peritoneal drainage and percutaneous cholecystostomy in cases where porta hepatis exploration is not feasible for any reason even if distal obstruction is present. 
Roux-en-Y bilioenterostomy was performed for biliary perforation in the past but is now limited to untreatable distal obstruction, persistent biliocutaneous fistulae, or biliary leakage and CBD perforation associated with choledochal cysts [21]. If a cholangiogram is not available, the patient may be best managed by closure over a T-tube and cholecystectomy, as in our case [3,18,23]. Spontaneous bile duct perforation is very rare in adults and even rarer during pregnancy. A preoperative diagnosis is hard to make because of its rarity and its similarity in presentation to many other diseases. Physicians (especially surgeons) should be aware of this condition and must consider it as a cause of peritonitis in the differential diagnosis. It should be strongly suspected in patients presenting with peritonitis during pregnancy or in the early post-partum period who have a bilious peritoneal tap, no pneumoperitoneum, and no other apparent cause of peritonitis. The peritoneal tap bilirubin concentration and its ratio to the serum bilirubin are very helpful in diagnosis. The aim of management is to halt the catastrophic event and then to treat and, whenever possible, remove the cause and source of the perforation; ultimately, the biliary tree should be decompressed. Simple peritoneal drainage can be performed in patients unfit for porta hepatis exploration, followed by another operation after stabilization.
v3-fos-license
2024-05-16T05:21:54.025Z
2024-05-14T00:00:00.000
269768555
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "fdc77406a10924908b977313f1d050fae7d8a4df", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42548", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "sha1": "fdc77406a10924908b977313f1d050fae7d8a4df", "year": 2024 }
pes2o/s2orc
Role of Bacillus subtilis exopolymeric genes in modulating rhizosphere microbiome assembly Background Bacillus subtilis is well known for promoting plant growth and reducing abiotic and biotic stresses. Mutant gene-defective models can be created to understand important traits associated with rhizosphere fitness. This study aimed to analyze the role of exopolymeric genes in modulating tomato rhizosphere microbiome assembly under a gradient of soil microbiome diversities using the B. subtilis wild-type strain UD1022 and its corresponding mutant strain UD1022eps−TasA, which is defective in exopolysaccharide (EPS) and TasA protein production. Results qPCR revealed that the B. subtilis UD1022eps−TasA− strain has a diminished capacity to colonize tomato roots in soils with diluted microbial diversity. The analysis of bacterial β-diversity revealed significant differences in bacterial and fungal community structures following inoculation with either the wild-type or mutant B. subtilis strains. The Verrucomicrobiota, Patescibacteria, and Nitrospirota phyla were more enriched with the wild-type strain inoculation than with the mutant inoculation. Co-occurrence analysis revealed that when the mutant was inoculated in tomato, the rhizosphere microbial community exhibited a lower level of modularity, fewer nodes, and fewer communities compared to communities inoculated with wild-type B. subtilis. Conclusion This study advances our understanding of the EPS and TasA genes, which are not only important for root colonization but also play a significant role in shaping rhizosphere microbiome assembly. Future research should concentrate on specific microbiome genetic traits and their implications for rhizosphere colonization, coupled with rhizosphere microbiome modulation. These efforts will be crucial for optimizing PGPR-based approaches in agriculture. Supplementary Information The online version contains supplementary material available at 10.1186/s40793-024-00567-4. Background The rhizosphere microbiome is intricately linked with the host plant [1][2][3] and is primarily modulated according to the host genotype [4][5][6] and, consequently, by the plant exudate profile [7][8][9].In turn, the rhizosphere microbial community provides readily available nutrients for plant absorption, along with other molecules such as phytohormones and secondary metabolites, which enhance host development and health [10,11]. The symbiotic relationships between the rhizosphere microbiome and plants can also lead to intricate connections within microbial communities, ultimately benefiting the host plant [12].For instance, the Bacillus subtilis strain UD1022 can colonize Arabidopsis thaliana roots, establishing mutualistic interactions [13].As the host plant secretes fixed carbon through root exudates to nourish the bacteria, B. subtilis in turn facilitates rhizobacterium colonization, providing the plant with growth-promoting traits [13].B. subtilis is one of the most studied gram-positive plant growth-promoting rhizobacteria (PGPR) [14,15], and it has great agricultural and ecological importance [16][17][18][19].Their ability to induce plant development and protect against pathogens and abiotic stresses has been widely explored [20][21][22][23][24][25].B. subtilis promoted plant growth in tomato [25], cucumber [26], and wheat [27] and conferred resistance against the soil-borne pathogen Rhizoctonia solani in cotton [28] and Pseudomonas syringae pv.tomato in Arabidopsis [29].Martins et al. [20] and Allard-Massicotte et al. 
[13] showed that B. subtilis and B. amyloliquefaciens can also induce plant drought tolerance by forming biofilms in bean and Arabidopsis roots through bacterial exopolysaccharide (EPS) secretion.Most importantly, various B. subtilis strains are considered generalists for multiple crops [25][26][27][28][29].The application of plant growth-promoting rhizobacteria (PGPR) in agricultural settings not only impacts plant performance but also affects the resident soil microbiome.For instance, the use of bacillus-based products in crops such as tobacco [30], lettuce [31], and strawberry [32] can increase bacterial diversity in rhizosphere soils. Studies have reported the effects of mutant B. subtilis on plant growth promotion and protection, including sporulation [44], surfactin [45], and flagellar [13,47] mutants, but the effects of these mutants on rhizosphere microbiome assembly have not yet been properly considered.In this study, EPS and TasA double mutants of B. subtilis were used to investigate the impact of exopolymeric genes on the modulation of rhizosphere microbiome assembly.The use of a mutant strain was combined with the dilution-to-extinction approach to assess the assembly of bacterial and fungal communities in the tomato rhizosphere under a gradient of soil microbial diversity. Soil microbial diversity dilution Soil samples were collected from the "UD Fresh to You" farm (39°40′04.2″N75°45′03.5″W)at the University of Delaware.The specific soil type used was Delanco silt loam, which had previously been cultivated with organic tomatoes and was identified through the Web Soil Survey [48].The dilution-to-extinction method [49] was employed to obtain soils with the microbial diversity gradient used in the bioassays.Initially, 30 kg of sieved (< 2 mm sieve) and dried soil was divided into three bags, each containing 10 kg of soil.The soil in the bags was autoclaved four times at 120 °C and 1 atm pressure for 60 min.The serial dilution process involved suspending 450 g of natural soil (dry weight) in 900 mL of autoclaved deionized water, resulting in a concentration of 0.5 g mL −1 (10 -1 soil dilution).Subsequently, 100 mL of the 10 -1 dilution was transferred to 900 mL of autoclaved deionized water to obtain 10 -2 soil dilutions.This serial dilution process was repeated until a soil dilution of 10 -6 was reached, following the methods described by Wertz et al. [50] and Souza et al. [51].Three dilutions were selected for use in the experiment: 10 -1 , 10 -3 , and 10 -6 , in addition to the natural and autoclaved soils.To obtain the microbial diversity gradient across treatments, pots with 200 g of soil received 40 mL of each soil suspension, 10 -1 , 10 -3 , or 10 -6 , and the natural and autoclaved soils received 40 mL of sterilized ultrapure water.Pots were placed in a climatized chamber set at 25 °C, with a photoperiod of 12 h light and 12 h dark.The pots were incubated under these conditions for six weeks, allowing the establishment of the microbiome before the experiment [52]. Bacillus subtilis strains and inoculum preparation The B. subtilis strain UD1022 eps−TasA− , which is defective for the EPS and TasA genes, was obtained in a previous study [53].Wild-type B. 
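As a side note on the dilution arithmetic just described, the sketch below (illustrative Python, with the volumes and masses taken from the protocol above) shows how steeply the amount of source soil, and hence the inoculated microbial diversity, drops across the selected dilutions.

```python
# Minimal sketch of the dilution-to-extinction arithmetic described above.
# Values are taken from the protocol in the text; this is illustrative only.

STOCK_CONC_G_PER_ML = 450 / 900        # 10^-1 suspension: 0.5 g soil per mL
TRANSFER_ML, DILUENT_ML = 100, 900     # each step: 100 mL into 900 mL (1:10)
INOCULUM_ML_PER_POT = 40               # volume added to each 200 g pot

def suspension_concentration(dilution_exponent: int) -> float:
    """Grams of original soil per mL in the 10^-n suspension (n >= 1)."""
    step = TRANSFER_ML / (TRANSFER_ML + DILUENT_ML)   # 0.1 per transfer
    return STOCK_CONC_G_PER_ML * step ** (dilution_exponent - 1)

for n in (1, 3, 6):   # the dilutions actually used in the experiment
    grams_per_pot = suspension_concentration(n) * INOCULUM_ML_PER_POT
    print(f"10^-{n}: {grams_per_pot:g} g of source soil delivered per pot")
# -> 20 g, 0.2 g and 0.0002 g of source soil per pot, i.e. a steep microbial
#    diversity gradient across otherwise identical pre-sterilized pots.
```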
subtilis was cultivated on Luria broth (LB) agar plates, and its respective mutant was cultivated on LB supplemented with 5 µg mL −1 tetracycline and 1 µg mL −1 erythromycin.The streaked plates were then incubated at 37 °C for 24 h.Subsequently, individual bacterial colonies were transferred to LB liquid media supplemented with antibiotics, as was the case for the mutant strain UD1022 eps−TasA− , and incubated in a shaker at 150 rpm for 6 h at 37 °C.After the incubation period, the B. subtilis cultures were washed and resuspended in autoclaved distilled water.Bacterial cultures were grown until they reached a concentration of 10 8 cells mL −1 .Seeds were disinfected by immersion in a 3% sodium hypochlorite solution and shaken for 30 s, followed by thorough rinsing with ultrapure water.Subsequently, the seeds were briefly soaked in 70% ethanol and shaken for 1 min, followed by another extensive rinse with ultrapure water.After disinfection, 1 g of tomato seeds was mixed with the bacterial suspension (10 8 cells g −1 ) and kept for 1 h in a shaker at 150 rpm before planting.A boost dose of 1 mL (10 8 cells mL −1 ) per plant was used on the 16th day of the experiment.In the control treatment, the seeds or plants were treated with autoclaved distilled water. Tomato bioassay and experimental design The plant bioassay used the tomato cultivar "Amish Paste" with four different treatments: (i) plants inoculated with UD1022, (ii) plants inoculated with UD1022 eps− TasA− , (iii) non-inoculated plants (control), and (iv) pots without plants (bulk soil) (Fig. 1A).Each treatment was performed using five different levels of soil microbial diversity: natural soil, 10 -1 dilution, 10 -3 dilution, 10 -6 dilution, and autoclaved soil.Thus, considering four treatments, soils with five levels of microbial diversity, and five replicates, 100 pots were used in the experiment.Each pot (8 × 6.7 cm) contained 200 g of soil (dry mass), and the plants received at least 10 tomato seeds, which were thinned after five days to leave just one plant per pot.The experiment was conducted using a randomized complete block design.Thirty days after germination, the entire root system was harvested by carefully removing the plants from the pots and gently shaking them to remove excess soil from the root system.The rootadhered soil (i.e., rhizospheric soil) was collected, transferred to 1.5 mL microtubes, and stored at -20 °C before downstream analyses.Various plant growth parameters, including plant height, root fresh and dry masses, and shoot fresh and dry masses, were measured and collected for further data analyses (Fig. 1B). Soil microbiome genomic DNA extraction and metataxonomic analysis Rhizosphere and bulk soil samples were subjected to DNA extraction using the DNeasy PowerSoil ® Kit (QIA-GEN) according to the manufacturer's instructions.The quality and concentration of the extracted DNA samples were evaluated using a NanoDrop spectrophotometer.In addition, to verify the integrity of the DNA, electrophoresis was performed on a 1.5% agarose gel at 80 V/400 mA for 45 min.To ensure sample DNA concentrations, quantification was performed using a QUBIT ® fluorometer. Rhizosphere and bulk soil Bacillus quantification using quantitative polymerase chain reaction (qPCR) Initially, B. subtilis UD1022 genomic DNA was used to prepare a qPCR standard curve.Total DNA was extracted from B. 
subtilis strain UD1022 liquid cultures using the DNeasy ® UltraClean ® Microbial Kit (QIAGEN) according to the manufacturer's instructions.The quality of the total extracted DNA was assessed using a NanoDrop ® ND-2000 Spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, USA).DNA was quantified using a QUBIT ® 2.0 fluorometry system (Thermo Fisher Scientific, Wilmington, DE, USA).DNA was stored at -20 °C for subsequent analyses. To prepare the standard curve, serial dilutions of B. subtilis UD1022 genomic DNA were prepared at a 1:10 ratio.The starting concentration of genomic DNA used for the dilutions was standardized at 10 ng μL −1 .The target DNA was amplified using primers designed for the gyrB gene, which encodes DNA gyrase subunit B. The primers gyrB_5234_F (5′-CGG TCG TAA ACG CAC TAT C-3′) and gyrB-5391_R (5′-AGG GTC CGG GAC AAA ATG TGTCG-3′) were adapted from Xie et al. [57].Each qPCR reaction had a final volume of 10 μL and included the following components: 5 μL of PerfeCTa SYBR ® Green SuperMix (Quantabio, Baverly, MA, USA), 0.2 μL of each primer (10 mM), 1 μL of template DNA, and 3.6 μL of ultrapure water.The reaction conditions were adapted from Xie et al. [54] and involved an initial denaturation step at 95 °C for 10 min, followed by 40 cycles of denaturation at 95 °C for 30 s, annealing at 61 °C for 35 s, and extension at 72 °C for 40 s.Melting curve collection was performed at the end of the cycling program.Distilled water was used as the non-template control.The qPCR assay was performed in triplicate for each dilution to ensure the accuracy and reproducibility of the results.The standard curve served as a reference to quantify the abundance of Bacillus in soil samples by interpolating their Ct values onto the curve and then converting to the number of gyrB per gram of soil.For Bacillus subtilis quantification in the rhizosphere and bulk soil samples, qPCR was performed with standard curves under the same conditions described above. 
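The standard-curve logic described above amounts to a linear fit of log-transformed copy number against Ct, followed by back-calculation for unknown samples and scaling to soil mass. The sketch below illustrates this under assumed values: the standard copy numbers, Ct readings, elution volume, template volume, and soil mass are hypothetical placeholders, not values reported in this study.

```python
# A minimal sketch of how a qPCR standard curve can be used to estimate
# gyrB copies per gram of soil. All numeric inputs below are hypothetical.
import numpy as np

# Standard curve: known log10(copies per reaction) vs. measured Ct (assumed values)
log10_copies_std = np.array([7, 6, 5, 4, 3, 2], dtype=float)
ct_std           = np.array([12.1, 15.5, 18.9, 22.4, 25.8, 29.3])

slope, intercept = np.polyfit(ct_std, log10_copies_std, 1)   # linear fit
efficiency = 10 ** (-slope) - 1    # amplification efficiency (~0.95 here; 1.0 is ideal)

def copies_per_gram(ct_sample: float, dna_eluate_ul: float = 100.0,
                    template_ul: float = 1.0, soil_g: float = 0.25) -> float:
    """Interpolate a sample Ct on the curve and scale to copies per g of soil."""
    copies_per_reaction = 10 ** (slope * ct_sample + intercept)
    return copies_per_reaction * (dna_eluate_ul / template_ul) / soil_g

print(f"efficiency ~ {efficiency:.2f}")
print(f"sample at Ct 20.0 -> {copies_per_gram(20.0):.2e} gyrB copies / g soil")
```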
Data processing and statistical analyses Bioassay data, including plant height, shoot and root dry mass, and qPCR (number of copies of the gyrB gene), were compared using the Scott-Knott test (P < 0.05).To generate amplicon sequence variants (ASVs) from both genes, 16S rRNA and ITS, the raw data were processed using Dada2 version 1.21.0 [58].The primers were removed using Cutadapt version 3.4.[59] Quality control was performed and reads with low quality (Q20 or lower) were discarded, followed by taxonomic assignment using the Silva (v.138.1) [60,61] and UNITE (v.9.0) databases [62][63][64].To assess α-diversity, the Chao1 and Shannon indices were calculated.β-diversity was assessed using the Bray-Curtis distance.Principal coordinate analysis (PCoA) was employed to visualize the similarity matrix among various soil diversity dilutions and treatments.In both analyses, rarefied and normalized data were used.The significance and effect size β-diversity were determined using the vegan package (v.2.6-4) through permutation-based analysis (MANOVA) with the "adonis()" function [65].To identify differentially abundant taxa among the treatment groups, ANOVA-like differential expression analysis (ALDEx2) [66] was performed using the "run_aldex()" function from the microbiomeMarker package (v.1.28.1)[67].In addition, a co-occurrence network analysis was performed using the PhyloSmith package (v.1.0.6)[68] based on Spearman's pairwise correlation.To mitigate the influence of rare ASVs, ASVs occurring fewer than 20 times in each treatment with a relative abundance greater than 30% were excluded.Significant interactions were identified using Spearman pairwise correlations, with p-values less than 0.001 indicating robust associations.The nature of the co-occurrence relationships, whether negative or positive, was determined based on the strength of the correlation.In the network, each ASV represents a node, and the edges represent the correlations among the ASVs.The network layout was generated using the Fruchterman-Reingold algorithm in Gephi software (v.0.10) [69]. Quantification of Bacillus subtilis in the tomato rhizosphere Quantitative real-time PCR (qPCR) was performed to quantify the abundance of B. subtilis gyrB gene copies in the rhizosphere and bulk soils.In natural soil, a greater amount of B. subtilis gyrB was detected in the treatment group inoculated with the UD1022 eps−TasA− strain (Fig. 2A).At 10 -1 , 10 -3 , and autoclaved soil dilutions, B. subtilis UD1022 was more abundant in the wild-type soil than in the other treatments (Fig. 2BCE). Inoculation of Bacillus subtilis strains and plant performance Significant differences in plant growth were observed for root dry mass with Bacillus inoculation (Fig. 3).According to pairwise comparisons, compared with non-inoculated plants (controls), tomato plants inoculated with the mutant strain UD1022 eps−TasA− showed a reduction in root dry mass when grown in natural soil (Fig. 3A) or at a soil dilution of 10 -3 (Fig. 3C).At a soil dilution of 10 -3 , plants inoculated with the wildtype strain UD1022 showed significantly increased root growth compared with plants inoculated with the mutant strain (Fig. 3C).No differences were observed across treatments for plant height (Additional file 1: Figure S1A to E) or shoot dry mass (Additional file 1: Figure S1F to G).Within each treatment, plants generally grew better in soils with diluted microbial diversity (Additional file 1: Figure S2). 
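For readers unfamiliar with the co-occurrence step described in the methods above, the following sketch outlines the general idea: pairwise Spearman correlations between filtered ASVs, retained as signed edges when significant. The p-value cutoff mirrors the text; the prevalence filter default and the minimum |rho| are stated here as assumptions, since the exact filtering used in the study is only partially specified.

```python
# A minimal sketch of a co-occurrence network step, assuming an ASV count
# table `counts` with shape (samples, ASVs). Thresholds other than the
# p-value are illustrative assumptions.
import itertools
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

def cooccurrence_network(counts: np.ndarray, asv_ids: list[str],
                         min_occurrence: int = 20, p_cutoff: float = 0.001,
                         min_abs_rho: float = 0.6) -> nx.Graph:
    # Drop rare ASVs before correlating (prevalence filter).
    keep = (counts > 0).sum(axis=0) >= min_occurrence
    counts = counts[:, keep]
    asv_ids = [a for a, k in zip(asv_ids, keep) if k]

    g = nx.Graph()
    g.add_nodes_from(asv_ids)
    for i, j in itertools.combinations(range(len(asv_ids)), 2):
        rho, p = spearmanr(counts[:, i], counts[:, j])
        if p < p_cutoff and abs(rho) >= min_abs_rho:
            g.add_edge(asv_ids[i], asv_ids[j], weight=rho,
                       sign="positive" if rho > 0 else "negative")
    return g

# A layout comparable to Gephi's Fruchterman-Reingold is nx.spring_layout(g);
# node counts, edge counts and modularity can then be compared across treatments.
```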
Impact of the Bacillus subtilis strain UD1022 on rhizosphere microbiome assembly The bacterial community in natural soils was dominated by Bacillus and Pseudarthrobacter in non-inoculated or inoculated soils, respectively, with strain UD1022 (Fig. 4A).These two bacterial taxa decreased in relative abundance in soils with diluted microbial diversity (Fig. 4A).Bacillus inoculation changed the relative abundance of specific bacterial and fungal groups in the tomato rhizosphere.For example, inoculation with strain UD1022 increased the relative abundance of Pseudarthrobacter in natural soil and Chthoniobacter in autoclaved soil (Fig. 4A).Twelve bacterial taxa were exclusively found in the rhizosphere of plants inoculated with the UD1022 strain (Fig. 4B), including Mucilaginibacter spp.ASV-0058, Curtobacterium spp.ASV-0097, Kaistia spp.ASV-0098, Sumerlaea spp.ASV-0135, Nocardioides spp.ASV-0181, and Methylorosula spp.ASV-0216 (Fig. 4B). The fungal community was dominated by Clonostachys in natural soils inoculated with or without strain UD1022 (Fig. 4A).In soils with diluted microbial diversity, the communities were dominated by Trichoderma and Fusarium (Fig. 4A).Moreover, inoculation with the UD1022 strain significantly increased the relative abundance of Talaromyces at a dilution of 10 -3 (Fig. 4A).Compared with that in the control treatment, the abundance of Talaromyces increased at a dilution of 10 -3 when UD1022 was inoculated (Fig. 4A).Trichoderma ghanense ASV-0001, Trichoderma spp.ASV-0003, and Cryptococcus laurentii ASV-0025 were exclusively detected in the rhizospheres of plants inoculated with strain UD1022 (Fig. 4B). The α-diversity analysis using the Shannon index and HSD test (P < 0.05) revealed significant differences in the bacterial community across all the soil dilutions, except between the 10 -6 soil dilution and autoclaved soil (Additional file 1: Figure S3A).As expected, the natural soil exhibited the highest bacterial diversity, followed by the 10 -1 , 10 -3 , 10 -6 dilutions, and autoclaved soil.The same pattern was observed in the fungal community (Additional file 1: Figure S3B). Soil microbial diversity dilution significantly affected the assembly of the rhizosphere microbiome in plants inoculated with or without strain UD1022.The relative abundance of the Bacillota phylum (Additional file 1: Figure S4), particularly the Bacillus genus (Additional file 1: Figure S4), decreased as the soil diversity decreased.The abundances of the phyla Acidobacteria and Crenarchaeota exhibited the same pattern; they decreased with soil microbial dilution and were not detected in the most diluted or autoclaved soil (Additional file 1: Figure S4).Conversely, the relative abundances of the phyla Bacteroidota, Planctomycetota, and Pseudomonadota increased with decreasing soil microbial diversity (Additional file 1: Figure S4). For the fungal community, the relative abundance of the phyla Mortierellomycota and Basidiomycota decreased as the soil diversity decreased (Additional file 1: Figure S5).In the control treatment and in plants inoculated with the mutant UD1022 eps−TasA− , they were not detected in soil diluted 10 -6 or autoclaved soil (Additional file 1: Figure S5).The phylum Chytridiomycota was detected only in the 10 -1 dilution soil, and Rozellomycota was exclusively found in plants growing in natural soil inoculated or not inoculated with Bacillus strains (Additional file 1: Figure S5). 
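As a reference for the α-diversity metrics used above, the minimal implementations below show how the Shannon and (bias-corrected) Chao1 indices are computed from a single sample's ASV counts; the example vector is hypothetical, and the study computed these on rarefied, normalized data.

```python
# Minimal, illustrative implementations of the two α-diversity metrics
# referenced in the text, applied to one sample's ASV count vector.
import numpy as np

def shannon(counts: np.ndarray) -> float:
    counts = counts[counts > 0]
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1(counts: np.ndarray) -> float:
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())            # singletons
    f2 = int((counts == 2).sum())            # doubletons
    return s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))   # bias-corrected form

sample = np.array([120, 45, 3, 1, 1, 0, 7, 2])        # hypothetical counts
print(shannon(sample), chao1(sample))
```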
To better understand how inoculation influenced the assembly of bacterial and fungal communities in the plant rhizosphere under low microbial diversity, β-diversity measurements were separately conducted using samples from each dilution (Additional file 1: Figures S6 and S7).Significant changes in the composition of the bacterial and fungal communities were observed when Bacillus strains were inoculated into autoclaved soil (Fig. 5A, B).The control and UD1022 treatments in natural soil and at dilutions of 10 -1 , 10 -3 , and 10 -6 did not significantly change the bacterial or fungal β-diversity (Figures S6 and S7). Analysis of bacterial β-diversity revealed significant differences in the structures of bacterial and fungal communities due to Bacillus inoculation.These differences in β-diversity were observed not only in the comparison between non-inoculated and inoculated plants but also between plants inoculated with the wild-type strain UD1022 or the mutant UD1022 eps−TasA− (Fig. 5 Co-occurrence network analysis of the rhizosphere microbiome The construction of the bacterial networks revealed that inoculation with Bacillus strains affected the complexity of the network (Fig. 7 and Additional file 1: Table S1).The number of edges in the network of plants inoculated with the wild-type strain UD1022 and the mutant UD1022 eps−TasA− was reduced compared with that in the network of non-inoculated plants.A decrease in the number of nodes was also observed in inoculated plants, especially when the mutant UD1022 eps−TasA− was inoculated (Fig. 7A).Moreover, a greater number of nodes, modularity, and number of communities were observed when UD1022 was inoculated than when UD1022 eps−TasA was inoculated (Fig. 7 and Additional file 1: Table S1).In contrast, compared with wild-type inoculation, mutant inoculation resulted in a greater total number of edges, including negative and positive edges (Additional file 1: Table S1).Thus, compared with the control, UD1022 eps− TasA inoculation led to a decrease in the number of positive connections and an increase in the number of negative edges, whereas wild-type UD1022 inoculation resulted in a decrease in the number of positive and negative edges (Fig. 7 and Additional file 1: Table S1). In terms of the fungal community, plants in the control treatment exhibited greater modularity than did those in the rhizosphere of plants inoculated with the wild-type strain UD1022 or the mutant strain UD1022 eps−TasA− (Additional file 1: Table S1).Notably, in contrast to the bacterial networks, inoculation with the mutant strain UD1022 eps−TasA− improved fungal connections (Fig. 7B).This was evident by an increase in the number of nodes, edges, and average clustering coefficient parameters, surpassing those observed in the control and UD1022 treatments (Additional file 1: Table S1).In addition, compared with the control treatment, UD1022 eps−TasA inoculation increased the number of positive and negative connections, whereas UD1022 increased the number of positive edges and decreased the number of negative edges (Additional file 1: Table S1). 
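The β-diversity comparisons above rest on Bray-Curtis distances followed by principal coordinate analysis. The sketch below shows a generic version of that ordination step using a placeholder count table; the permutation test itself (vegan's adonis() in R) is not reproduced here.

```python
# Sketch of the ordination behind the PCoA plots: Bray-Curtis distances
# followed by classical PCoA. The count table is a random placeholder.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pcoa(dist_condensed: np.ndarray, n_axes: int = 2) -> np.ndarray:
    d = squareform(dist_condensed)
    n = d.shape[0]
    # Gower centering of the squared distance matrix.
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_axes]
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

counts = np.random.default_rng(0).poisson(5, size=(10, 40))  # placeholder table
coords = pcoa(pdist(counts, metric="braycurtis"))
# coords[:, 0] and coords[:, 1] correspond to the two PCoA axes shown in Fig. 5.
```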
Discussion
Previous studies have shown that the Bacillus subtilis strain UD1022 exerts beneficial effects on plant growth and offers protection against plant pathogens [13,23,45,46,53]. Most experiments involving plant growth-promoting rhizobacteria (PGPR) are conducted using artificial soil or under controlled laboratory conditions [70]. Under such conditions, numerous bacterial isolates exhibit promising traits for plant growth promotion, including siderophore production, phosphate solubilization, and phytohormone synthesis [71]. However, when these microorganisms are applied in more complex systems, such as agricultural soil or under on-farm conditions, many of these traits may be subdued or even remain unexpressed owing to factors such as niche competition, nutrient limitation, antagonistic interactions, and environmental conditions [1,72,73]. Therefore, the inoculation of PGPR can encounter various challenges when interacting with the resident soil microbiome, primarily because of the diverse microbial community and the complexity of the relationships present around the roots [73][74][75]. As plants shape the rhizosphere microbiome according to their needs at each life stage [76,77], soil diversity is considered an important microbial reservoir for plant root recruitment through exudation [3,78].
Inoculation with the wild-type strain UD1022 increased the number of B. subtilis gyrB genes detected in the rhizosphere of plants growing in soils with diluted microbial diversity by ~100- to 200-fold compared with non-inoculated plants or plants inoculated with the mutant strain UD1022 eps−TasA−. This observation suggests that root colonization by Bacillus is enhanced in soils with lower microbial diversity because less diverse soils can generate more open environments with less nutrient and niche competition, favoring the establishment of inoculants [79][80][81][82][83]. The inverse relationship between soil microbial diversity and the survival of an invading bacterial pathogen was previously demonstrated using the dilution-to-extinction approach [79].
As expected, in soils with lower microbial diversity, the wild-type strain UD1022 was more efficient at colonizing the rhizosphere than the mutant strain UD1022 eps−TasA−. This observation confirms the importance of the EPS and TasA genes in root establishment. Surprisingly, in natural soil, a higher number of gyrB genes was detected when plants were inoculated with the mutant strain. This difference may be attributed to the naturally higher abundance of Bacillus sp. in natural soil, suggesting that inoculation with mutated B. subtilis UD1022 eps−TasA− stimulated the enrichment of native Bacillus sp. in the rhizosphere. The effects of B. subtilis mutation on biofilm production in soil are not well understood. Moreover, Zhu et al. [84] recently demonstrated a growth-survival fitness trade-off in B. subtilis mutants lacking the master regulator sporulation gene (Spo0A−) in vitro. Their findings showed that Spo0A-null strains exhibited increased growth capacity, both in terms of rate and yield, compared with wild-type B. subtilis because of resource reallocation [84]. However, considering that the primer pair employed to detect B. subtilis was not strain specific, we were not able to discriminate between soil-resident B. subtilis and the inoculated strain UD1022.
Inoculation with the mutant strain UD1022 eps−TasA− at a soil dilution of 10 -3 resulted in plants with reduced root dry mass compared with that of plants inoculated with the wild-type strain. This result implies that the inability of strain UD1022 eps−TasA− to form biofilms prevents effective colonization of the plant rhizosphere, resulting in less effective plant growth promotion compared with that of the UD1022 wild-type. The genetic traits of the inoculant, such as the presence of chemoreceptors and mobility in soil, are crucial for establishing the inoculant in the plant rhizosphere [13,85,86]. However, in this study, the resident microbial diversity found in natural soil may have played an important role in promoting plant growth [85].
In this context, the dilution-to-extinction method has been applied to better understand the impacts of microbial invasion on the rhizosphere microbiome [49][50][51][52][87]. Such research has shed light on changes in soil microbial communities under biotic disturbances [87]. For example, Ferrarezi et al. [88] demonstrated that the PGPR Azospirillum brasilense had a more pronounced beneficial impact on plants growing in soils with lower microbial diversity, as determined by applying the dilution-to-extinction method. It is important to emphasize that the autoclaved soil did not receive any microbial inoculum, and the microbial communities found in this treatment originated naturally from the soil. After autoclaving, the soil was pre-incubated to reach a microbial biomass similar to that of the other treatments but with reduced species richness [52]. Natural ecosystems show variable resistance to invasion by alien species, and this resistance can be related to species diversity in the system [78]. Mawarda et al. [89] reported that the response of the soil microbial community to an introduced organism is also contingent on the nature and extent of the invasion disturbance. This is closely linked to the ecological strategies and functional traits of each invader.
Fig. 6 Heatmaps showing variations in the relative abundances of bacteria (A) and fungi (B) across treatments. ALDEx2 analysis was performed using Monte Carlo distances from the Dirichlet distribution with a P cutoff of 0.05. The color spectrum on the heatmap, ranging from dark purple to orange, represents statistically significant disparities in relative abundance (p < 0.05 to 0.01). A value of 0.01 indicates a more pronounced level of differential enrichment. NS = natural soil, D1 = soil dilution 10 -1 , D3 = soil dilution 10 -3 , D6 = soil dilution 10 -6 , and AS = autoclaved soil. Control = non-inoculated plants; UD1022 = plants inoculated with wild-type B. subtilis; and UD1022 eps−TasA− = plants inoculated with mutant B. subtilis.
Concurrently, beneficial bacteria and fungi were also enriched when the wild-type strain UD1022 was inoculated.For example, the bacterial genera Chthoniobacter and Pseudarthrobacter are known to contain endophytic/ PGPR strains and species that play a role in the transformation of organic carbon compounds in soil [90][91][92][93].In general, Pseudogymnoascus, Preussia, Humicola, Fusarium, Exophiala, and Cystobasidium were the most enriched fungal genera when UD1022 was inoculated.Interestingly, some of these groups, including Preussia [94,95], Humicola [96,97], and Exophiala [98,99], are known growth promoters in rice.On the other hand, the relative abundance of Actinomycetota, which is a phylum harboring well-known beneficial bacteria [99][100][101][102], decreased in plants inoculated with UD1022 eps−TasA− compared to that in plants inoculated with UD1022 (Additional file 1: Figure S4).This observation may imply the importance of the EPS and TasA genes in synergistic interactions among soil resident microbiome communities.Sun and collaborators [103] demonstrated the importance of Bacillus biofilm formation in syntrophic cooperation in soil.Inoculation with the wild-type B. velezensis strain SQR9 positively induced plant-beneficial indigenous Pseudomonas stutzeri in the cucumber rhizosphere by branched-chain amino acid (BCAA) production compared with a mutant defective in EPS and TasA [103].In the present study, the same pattern was observed in soil dilutions 10 -3 and 10 -6 , where Pseudomonadota, a phylum of the Pseudomonas genus, was significantly enriched when the UD1022 strain was inoculated compared with the UD1022 eps−TasA− strain.The impact of UD1022 inoculation on the bacterial community exhibited a stronger effect on the β-diversity of autoclaved soil than on that of natural soil.This observation suggested that the absence of competition and reduced niche occupancy in autoclaved soil may have allowed the inoculant to significantly alter the composition of the bacterial communities [104].Mallon et al. [81] demonstrated a similar pattern for foreign microbial invaders when the soil microbial composition was compromised, fitting the paradigm of diversity-invasion effects [80-82, 105, 106], where less diverse communities have limited abilities to use available resources, and consequently, their ability to mitigate external microbial invasion decreases. Previous studies have highlighted the important effect of the B. subtilis EPS and TasA genes on social interactions in rhizosphere soil using double-mutated bacteria and soil resident communities [34,45,107].These studies emphasized the effect of the extracellular matrix on the bacterial consortium between two bacterial species and its importance in salt stress tolerance.This study extends this understanding by showing how EPS and TasA gene knockout in B. subtilis affects rhizosphere microbiome assembly. 
Biofilm formation is essential for successful rhizosphere colonization, with the TasA gene playing a crucial role in stabilizing biofilm membrane dynamics and enabling cellular adaptation, mainly in plant interactions [34,107,108]. In this context, compared with UD1022 inoculation, co-occurrence network analysis revealed that inoculation with the mutant strain UD1022 eps−TasA− decreased the number of nodes and increased the number of negative interactions in the bacterial network. On the other hand, plants inoculated with UD1022 exhibited denser connections within subcommunities than within the entire network, which could correspond to one of the stages of biofilm production, such as microcolony formation [108]. The fungal community network was also altered by inoculation with the mutant strain UD1022 eps−TasA−. While the bacterial network lost nodes in the presence of the mutant strain, the fungal network showed more nodes than the network in the presence of the wild-type strain UD1022. Therefore, as bacteria engage with eukaryotes, highlighting the significance of social interactions in the coevolution of fungi and bacteria, this dynamic process fosters specific interactions and the potential generation of metabolites that influence network outcomes [109]. Notably, both core components of the matrix, EPS and TasA, contribute significantly to establishing robust interactions with other microorganisms [110].
Taken together, the results of this study underscore the critical role of the EPS and TasA genes in B. subtilis strain UD1022 for effective plant growth promotion and modulation of soil microbial communities. The presence of these genes significantly influenced microbial β-diversity, especially in less diverse soils, demonstrating their importance in shaping the rhizosphere microbiome. The absence of these genes, as observed in plants inoculated with UD1022 eps−TasA−, altered the bacterial and fungal communities, demonstrating their role in social interactions and community dynamics. In addition, co-occurrence network analysis revealed that the absence of the EPS and TasA genes impacted the structure and dynamics of the bacterial networks in the rhizosphere. This study emphasizes that understanding genetic traits such as EPS and TasA is vital for comprehending how PGPRs interact with the rhizosphere microbiome and, consequently, influence plant health and growth. Further research on specific microbiome genetic traits and their implications for rhizosphere colonization will significantly contribute to the optimization of PGPR-based approaches in agriculture.
Fig. 1 Plant bioassay experimental design and timeline. A Each treatment, including plants inoculated with UD1022, plants inoculated with UD1022 eps−TasA− and non-inoculated plants (control), was cultivated in soils with five different levels of microbial diversity: natural soil, 10 -1 dilution, 10 -3 dilution, 10 -6 dilution, and autoclaved soil. Pots without plants (bulk soil) were also used as a control. B Bioassay timeline showing bacterial inoculation, sampling, and analyses.
Fig. 3 Boxplot of the root dry mass of tomato plants 30 days after germination. A Natural soil; B Soil dilution 10 -1 ; C Soil dilution 10 -3 ; D Soil dilution 10 -6 ; and E Autoclaved soil. Control = non-inoculated plants; UD1022 = plants inoculated with wild-type B. subtilis; and UD1022 eps−TasA− = plants inoculated with mutant B. subtilis. The Scott-Knott test for pairwise comparisons of means was performed considering a 95% familywise confidence level (P < 0.05). Asterisks (*) indicate significant differences between treatments.
Fig. 4 Composition of bacterial and fungal communities in the tomato rhizosphere. A Relative abundance of bacterial and fungal genera across soils with a microbial diversity gradient. NS = natural soil, D1 = soil dilution 10 -1 , D3 = soil dilution 10 -3 , D6 = soil dilution 10 -6 , and AS = autoclaved soil. B Venn diagram of bacterial and fungal taxa. Control = non-inoculated plant; UD1022 = plants inoculated with wild-type B. subtilis.
Fig. 5 PCoA analysis of the rhizosphere microbiome of tomato plants cultivated in autoclaved soil. Statistical pairwise comparisons were performed using the Adonis method (P < 0.05, permutation = 999). A Comparison of bacterial communities between the control group and the UD1022 treatment group (P = 0.008). B Comparison of fungal communities in the control group versus the UD1022 treatment group (P = 0.025). C Comparison of the bacterial communities of the control group and the UD1022 eps−TasA− treatment group (P = 0.011). D Comparison of fungal communities in the control group versus the UD1022 eps−TasA− treatment group (P = 0.012). E Comparison of the bacterial communities of the UD1022 eps−TasA− and UD1022-treated groups (P = 0.009). F Comparison of fungal communities in the UD1022 eps−TasA− versus UD1022 treatment groups (P = 0.011). Control = non-inoculated plants; UD1022 = plants inoculated with the wild-type strain of B. subtilis; and UD1022 eps−TasA− = plants inoculated with the mutant strain of B. subtilis.
Fig. 7 Co-occurrence network of ASVs according to the Fruchterman-Reingold distribution. ASVs were filtered considering occurrence > 20 times and > 30% abundance. A Bacterial community networks. B Fungal community networks. Non-inoculated plant = Control; plant inoculated with wild-type B. subtilis = UD1022; and plant inoculated with mutant B. subtilis = UD1022 eps−TasA−.
v3-fos-license
2017-12-03T13:31:41.708Z
2014-09-29T00:00:00.000
33457294
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=50210", "pdf_hash": "aaa16efd3fe055d53fb4e23d8d0d9754a0eb1c6f", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42550", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "aaa16efd3fe055d53fb4e23d8d0d9754a0eb1c6f", "year": 2014 }
pes2o/s2orc
IoT Testbed Business Model The Internet-of-Things (IoT) is today one of the hypes in the technological world but despite the enormous attention and research investment, the clear business value is still hard to perceive. IoT deployments are costly to be installed, managed and maintained, and need to provide a very clear value to justify the investments. For another viewpoint, IoT technologies need to be proven before deployment, which implies the need to test and assess IoT solutions in real settings and involve the actual target users. And as such, this presents an opportunity to have IoT deployments with a clear business model mainly focused on real-life large-scale research and technological experimentation. This would mean having a sustainable IoT infrastructure in-place based on the provision of experimentation services and a trial environment to industry and research, which then could also present an opportunity to establish added-value (business) services. This is the exact idea of the flagship SmartSantander testbed facility and especially its major deployment in the city of Santander, Spain. The SmartSantander facility business model is built around experimental-ly-driven research and technology development thus attracting many experimenters from industry and European research projects. This model makes it possible to sustain an outstanding large-scale IoT deployment of around 12,000 sensors and on top of it the development of new the development of new services and applications especially targeting the needs of users (citizens, businesses, authorities) in smart-cities. This paper studies the business model of outstanding SmartSantander facility in order to provide a generic Business Model for IoT testbeds that can provide guidance and be adapted by owners (or owners to-be) wishing to exploit their IoT dep-loyments as facilities supporting experimentation and trials of IoT solutions. Introduction The Internet-of-Things (IoT) is today a reality.Everyday flourishes new IoT deployments all over the world and in many areas, such as Smart Cities, Intelligent Transportation Systems, Domotics (Smart Buildings), and many others.The IoT is the idea of a global, dynamic network infrastructure where physical and virtual "things" (devices, sensors, smart objects, etc.) communicate and share information among each other in a highly heterogeneous environment.Several manufacturers (even key ones) are already engaged in developing new devices for many different purposes to target diverse application domains and services [1]. IoT is bringing diverse novel business models to the society due to its decentralized sensing and actuation capabilities, contributing also with high amount of data (big data), etc.Several cases could be mentioned, such as renting devices capabilities (e.g.testbeds) or the device itself (e.g.temperature sensor), selling information from either direct sensors data (e.g.orders tracking position) or already elaborated raw data (e.g.amount of free parking slots).These IoT technologies need however to be duly validated and assured and be hopefully with settings that closely relate to the target environments of such IoT services and applications. In this sense several testbeds have been flourishing worldwide which can both support experimentation of new IoT technologies and solutions, support the research and development of new services and applications [2].In Europe alone, several already exist for instance SmartSantander, Sunrise, Openlab, LOG-a-TEC, etc. 
that focus on IoT in disparate real-life settings and contexts of experimentation: smart-cities, smart-buildings, underwater, etc.Most of these have been co-funded under the scope of the FIRE (Future Internet Research Experimentation) initiative from the European Commission with the intention of establishing a working self-sustainable experimentation real-life living labs. Such IoT testbeds need to be sustainable to be kept operational (even and especially after the funding has ended).As such, these have planned a clear exploitation path to the experimentation facility enclosing a realistic business model.Now, doing business (or at least paying out to the infrastructure operational costs) of providing experimentation and trials services is not a simple work as many variables and issues need to be considered.This work intends to provide some initial guidance and support to make business out of experimental IoT infrastructures and deployments. In summary, this document intends to be a hitchhiker's guide, especially for IoT deployment owners and managers, by presenting them with a generic business model to make business out of an IoT deployment as an experimentation facility-an IoT Testbed.The model considers the four main aspects of a business: Infrastructure, Value proposition, Customers and Financials. The generic business model is worked out based on the business ideas and commercialization principles behind the world-class SmartSantander Testbed and especially its major deployment in the city of Santander (Spain) as a clear and working business case of an IoT Testbed.In this way, readers are also provided with a view of the scheme and operation of real and active large-scale IoT Testbed deployment. The following sections are divided as follows.Chapter 2 gives a general idea of what can be an IoT Testbed and what it has to offer.Chapter 3 presents the IoT Testbed Business Model development, explaining the different phases in the development process of the business model, and introduces the Business Model Canvas and its main areas used present the business model.Chapter 4 characterizes a practical and existing example, the SmartSantander IoT Testbed.Chapter 5 presents the IoT Testbed Generic Business Model which can be used by stakeholders as guidance to build their own IoT Testbeds, and finally Chapter 6 provides final considerations and possible future work trends based on the presented Business Model. Concept of an IoT Testbed IoT Testbeds serve as incubators for the Internet-of-Things (IoT), providing to companies and groups the necessary infrastructures in which ideas, technologies and best practices are tested and validated.In the past few years a necessity has emerged to experiment technologies, services, etc. in a more realistic environment, leading to the creation of new spaces rather than maintaining the research in the current IoT facilities, which are small, old laboratories and with a very controlled environment. A Testbed is considered to be a structure (hardware, software or both) for experiments of wide-scale deployments.Constantly Humans perform their discoveries through experimentation, early and realistic testing is necessary in any kind of research, large-scale or not.IoT Testbeds have emerged focusing on different domains, as smart buildings, smart cities, environmental monitoring, security and surveillance, health-care, etc. relying on the utilization of innovative IoT technologies to improve the quality of the Humans life. 
The Figure 1 displays an example of an IoT Testbed architecture which is divided in the IoT Environment IoT Testbed Business Model Development Business Model is a common word in the Management world, but in the last years it has been raising considerable interest for researchers [3].The business model is seen as a good starting point for business creation or restructuring an existing one.Therefore, it can work as a helper to line up "technology development and economic value creation" [4]. Several definitions have been made along the years of the expression "business model", but a common definition is missing [5], next it's presented two of them which we believe that better exposes what is a business model.It can be defined as "An architecture for the product, service and information flows, including a description of the various business actors and their roles; as a description of the potential benefits for the various business actors; and a description of the sources of revenues" according to [6], another definition is: "A Business model describes the rationale of how an organization creates, delivers, and captures value" [7]. To describe a business model it is needed a common base of work, a type of language that everyone understands, which can be easily managed, facilitates descriptions and discussion.The Figure 2 displays the framework that will be used to present the IoT Testbed business model, this framework is based on the "business model canvas" work produced by [7].This selection is grounded by the proof given in practice of the applicability of the model as for the innumerous references in publications (e.g., [8]). Business Model Development Process The development process of the business model was composed by two stages, an initial stage (during the first project year-FP7-288315 PROBE-IT "Pursuing ROadmaps and BEnchmarks for the Internet of Things") in which the authors identify key aspects regarding the deployments of IoT Testbeds through the analysis of existing roadmaps (e.g.IoT strategic research roadmap, vendor roadmaps market views, Artemis roadmap, IoT-I, etc.), through the attendance of workshops, conferences and even by strategic research agendas from Europe, South America, Africa and China.Having identified the essential trends, themes, a critical thinking phase was initiated in order to point out the main drivers. The second and final stage occurred during the second project year in which all the business model aspects were tuned, narrowing out any existing doubts.This was achieved through several presentations in workshops, namely the IoT World meets in Brazil event (held in São Paulo, Brazil), IoT-Week in Helsinki and at the IoT Interoperability Workshop (held in Beijing, China).The workshops allowed the interaction with stakeholders, experts and the scientific community.Likewise, IoT deployment pioneers were also consulted to retrieve ideas, opinions on Smart environments and their critical perspective on the document addressed topics. 
The business model validation was performed during the development process throughout a constant interaction with different clusters of persons, namely the internal consortium (i.e.Probe-IT partners) through various discussions to analyze results and improve, review, restructure, and refine of the business model.Another validation cluster was the European Research Cluster on the Internet-of-Things (IERC) Activity Chains (AC), in which were carried out concertations activities with relevant European projects (e.g.IoT-A, IoT-I) towards the consolidation of the business model.And also the IoT community cluster that embraces activities such as interops, conferences, workshops, etc. in order to facilitate the interaction with a wider group of different experts, as has been mentioned above. The development process also took advantage of several contributions.Luis Muñoz from SmartSantader which is the European flagship project in IoT for smart cities; Jian MA from Wuxi smart city which is the lighthouse project in Wuxi new district, China, in IoT for smart cities; Louis Coetzee connection to Johannesburg Broadband Network Project (JBNP) which is the urban networking project of reference in South Africa, Gabriel Marão a connection to IBM Smarter Cities in Brazil for the development of smart cities in Rio de Janeiro; and Sébastien Ziegler from HOBNET project/platform for smart buildings. Business Model Representation: Canvas The business model canvas (see Figure 2) is divided in four main areas of a business: Infrastructure, Value proposition, Customers and Financials, where the Infrastructure is subdivided in three themes: Key Partners, Key Activities and Key Resources.The Customer main area is also sub-divided in three themes: Customer Relations, Channels and Segments; on the other hand the main area of Financial is sub-divided in two themes: the Cost Structure and Revenue Stream. In the following paragraphs each one of the identified themes will be explained in more detail, and is also important to emphasize the fact that each one of the themes is influenced by the others, i.e. they are not standalone.The Infrastructure area works as foundation for business, composed by Key Partners, Activities and Resources.Suppliers and/or partners getting together with a common goal, creating alliances to improve their business model, to reduce costs, reduce the risks and uncertainty, to get specific resources and activities or even to create a new business, forms the Key Partners theme. Key Activities define the more important things a partner must do, in order to make the business model work.These activities can be of different types depending on what the business as to offer, it can be a problem solving if the objective is to offer consultancy, or a supply chain management if it is hardware production or in case of software provider the activity is the software development.Others could be identified depending of the value proposition, for example keeping the sustainability of a platform/network.Every business model needs some kind of resources to create value to customers, Key Resources are considered as essential assets to maintain and support the business.They can be of different type such as intellectual, financial, human or physical, as also they can be provided by key partners as owned or leased by the company. 
Value Proposition area specifies a collection of products, services or benefits that the company can offer to meet some customer's needs.In this sense, the more the customer needs are fulfilled higher will be the chances of the company's value proposition being selected.Value Proposition can offer a new, innovative or disruptive product as well a similar product but with additional attributes and features, i.e. a Value Proposition creates value for Customers Segments. The Customers area is formed by Customers Segments, Relations and Channels, is mainly the heart of a business model.No business can prevail without customers, therefore a company must select the clients or the group of clients wisely, i.e. which Customer Segments will serve.A business model can identify more than one Customer Segment meaning that each segment can focus in client groups with common behaviors, common needs or with other aspects. Having identified the Value Proposition and the target market, is necessary to know how the product is delivered, publicize, in what way can clients be informed about the product, services and company existence, or even helping customers in the evaluation of the Value Proposition, summing it up, how the communication between clients and company can be performed.This topic is covered by the Channels theme which can be executed by the company as by partners.Customer Relationship characterizes the way a company interacts with clients, this interaction could be in a more familiarly way (e.g.individual assistance) or automated (e.g.selfservice) depending on the approach taken by the company. Settled the stakeholders, the stakeholder's actions and the benefits it is time to make reference to the Financial area, where all of this stands on.As stated previously Financial area is sub-divided in two themes, Cost Structure and Revenue Stream.The Cost Structure defines the most important costs inherent to the business.These costs can be easily calculated after specifying who are the Key Partners, Key Activities and Key Resources.On the other hand the Revenue Stream represents the capability of generating money.This can be achieved through different types of revenue such as fees, advertising, sales, renting, etc. SmartSantander IoT Testbed Business Model By means of a practical and existing example was chosen the SmartSantander, which is an experimental test facility at a world city scale.SmartSantander provides a unique-in-the-world platform for IoT experimentation and evaluation under realistic operational conditions in a large-scale scenario (see Figure 3). The SmartSantander facility supports experimentation and research in the Internet-of-Things (IoT) scope regarding architectures, enabling technologies, services and applications for smart cities [9].Allowing users to test high-level services based on the data provided by the sensor network, to test sensor network algorithms, multihop protocols, radio interfaces, and key support services such as discovery, identity management and security.Another key aspect is the evaluation of social acceptance of IoT technologies and services. 
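Because the canvas is essentially a structured checklist, it can be convenient to capture its nine themes as a small data structure when comparing several testbed business models side by side. The sketch below is one possible Python representation; the field names simply mirror the canvas blocks described above and are not part of the canvas methodology itself.

```python
# Illustrative sketch: the nine Business Model Canvas themes as a data
# structure. Field names follow the canvas blocks described in the text.
from dataclasses import dataclass, field

@dataclass
class BusinessModelCanvas:
    # Infrastructure area
    key_partners: list[str] = field(default_factory=list)
    key_activities: list[str] = field(default_factory=list)
    key_resources: list[str] = field(default_factory=list)
    # Value proposition area
    value_propositions: list[str] = field(default_factory=list)
    # Customers area
    customer_segments: list[str] = field(default_factory=list)
    channels: list[str] = field(default_factory=list)
    customer_relationships: list[str] = field(default_factory=list)
    # Financials area
    cost_structure: list[str] = field(default_factory=list)
    revenue_streams: list[str] = field(default_factory=list)
```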
The SmartSantander IoT Testbed sensor network deployment (of 12,000 sensors) is done in electrical poles, building walls, bus, taxis, parking lots (on the ground), garbage bins, etc.The deployed sensors are capable of traffic monitoring as free parking lots checking, acoustic-based monitoring and environment surveillance (detection of emergency cars, traffic density monitoring, etc.), as well as participatory sensing.Figure 3 shows a satellite picture of Santander City with the sensors location.Towards the chosen IoT Testbed example, SmartSantander, and with the presented business model canvas, the authors will describe next the four main areas of the business model: Infrastructure, Value proposition, Customers and Financials (see Figure 4).The SmartSantander has the Santander Bank as strategic financial partner also the Santander Ayuntamiento (council) has a role in this aspect as in logistics.The Universidad de Cantabria is the strategic system integrator, concerning technology the TST (as strategic) and Libelium are the selected providers.The Certification aspect is being treated at the moment with ETSI (European Telecommunications Standards Institute) since it's the leading organization in standardization and certification, and therefore one of the desired partners.The Data Store & Data Service is assured by Telefónica through its IDAS-Intelligence Data Advanced Solution platform, and as coordinator the SmartSantander organization. Key Activities consist in the infrastructure operation performed by the coordinator and Telefónica, infrastructure maintenance actions are done by Uni.Cantabria, the development of Testbed software (middleware and the Portal) is the University Cantabria responsibility.The provision of user support and management of the Infrastructure is performed by the coordinator.The Key Resources are identified as the personnel and technicians for the SmartSantander maintenance, development, operations, marketing, management, and for the user support teams.The energy sources (throughelectric poles) are provided by Santander council, the financial support is assured by the Santander Bank.The Testbed hardware is provided by TST and Libelium partners. According to the raised activities and resources the costs are divided in an initial phase, with the SmartSantander planning, assembly of the Testbed and with organization costs.Afterwards the costs are with testing and validation techniques, maintenance and user support, expansion/renovation of the Testbed and in the on-going organizational costs. For Customer Relations, SmartSantander presents its brand, discussion forums and online support in order to interact with its customers that are identified as IoT application developers, IoT IoT Testbed Generic Business Model The existence of an IoT Testbed encourages stakeholders to perform tests, validation and certification of their IoT concepts, technologies and applications.Based on the introduced canvas (see Figure 2) the authors will present their interpretation of what should be the generic IoT Testbed business model.Figure 5 illustrates the generic IoT Testbed business model using the presented canvas which stakeholders can instantiate for a specific IoT Testbed case. 
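Reusing the BusinessModelCanvas sketch introduced after the canvas description, the SmartSantander case just summarized could be captured roughly as follows. The entries paraphrase the text above, are not exhaustive, and the customer list is deliberately left incomplete because it is truncated in the source.

```python
# How the SmartSantander case described above might be recorded with the
# BusinessModelCanvas sketch from the previous section (illustrative only).
smartsantander = BusinessModelCanvas(
    key_partners=["Santander Bank (strategic financial partner)",
                  "Santander Ayuntamiento (financial and logistics)",
                  "Universidad de Cantabria (strategic system integrator)",
                  "TST and Libelium (technology providers)",
                  "Telefonica IDAS (data store & data service)",
                  "ETSI (certification, under discussion)"],
    key_activities=["Infrastructure operation and maintenance",
                    "Testbed middleware and portal development",
                    "User support and infrastructure management"],
    key_resources=["Maintenance, development, operations, marketing and support teams",
                   "Energy via electric poles (Santander council)",
                   "Testbed hardware (TST, Libelium)", "Financial support (Santander Bank)"],
    value_propositions=["Real-life, city-scale IoT experimentation facility (~12,000 sensors)"],
    customer_segments=["IoT application developers"],   # list truncated in the source text
    customer_relationships=["Brand", "Discussion forums", "Online support"],
    cost_structure=["Planning and assembly of the testbed", "Testing and validation techniques",
                    "Maintenance and user support", "Expansion/renovation", "Organizational costs"],
)
```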
The existence of a generic IoT Testbed business model provides thoughtful guidelines for potential IoT Testbed owners, helping them with strategic decisions and plans on how they should perform their investment. It also provides consolidated views and perspectives on possible target markets and on all the costs involved.

A business model is composed of partners and their roles, which produce value for customers in order to achieve financial gain, but of course it also has its costs. As key partners the authors identified several possible actors, namely strategic and non-strategic financial partners for monetary support, since every business needs investment at an early stage (and not only then), and in this case the bigger the Testbed, the bigger the investment needed. Strategic and non-strategic technology providers supply the necessary equipment for the Testbed; logistics partners manage material, equipment, staff, etc.; and an experimentation data storage and data service provider holds the generated data and gives users the appropriate access to it. A strategic system integrator makes all the different parts of the system work as a whole; the coordination partner role is normally performed by the individual or company holding the overall vision; and a certification authority partner and a testing and validation technology partner act as certification and validation suppliers. Specific activities and resources are expected from these actors.

The activities are infrastructure operation, maintenance and management; development of the Testbed software; user support; and validation and certification. The resources are the personnel and technician teams for maintenance, development, operations, marketing, management and user support; financial support; energy sources; and the necessary Testbed software and hardware.

On the customer side, the probable clients identified are IoT application developers and IoT hardware and communication providers, which would gladly embrace the opportunity to test their own applications, hardware and communication protocols in a real-life environment. The IoT research communities and IoT-related certification authorities are also possible clients. To reach them, channels were identified, namely the IoT-related communities, international and national project clusters, the Testbed aggregator and general dissemination (videos, internet, etc.). Interaction (customer relations) is left to discussion forums, online support, brand creation and the innovation office.
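Purely as a sketch, the non-financial blocks of this generic model could be written down as a plain data structure and later specialized for a concrete facility; the entries below are taken verbatim from the description above, while the structure itself is our own shorthand rather than anything prescribed by the canvas. The Cost Structure and Revenue Stream entries follow from the financial discussion next.

```python
# Illustrative instantiation of the generic IoT Testbed business model
generic_iot_testbed_model = {
    "key_partners": [
        "financial partners (strategic and non-strategic)",
        "technology providers (strategic and non-strategic)",
        "logistics partner",
        "experimentation data storage and data service provider",
        "strategic system integrator",
        "coordination partner",
        "certification authority",
        "testing and validation technology partner",
    ],
    "key_activities": [
        "infrastructure operation, maintenance and management",
        "testbed software development",
        "user support",
        "validation and certification",
    ],
    "key_resources": [
        "personnel and technician teams",
        "financial support",
        "energy sources",
        "testbed software and hardware",
    ],
    "customer_segments": [
        "IoT application developers",
        "IoT hardware and communication providers",
        "IoT research communities",
        "IoT-related certification authorities",
    ],
    "channels": [
        "IoT-related communities",
        "international and national project clusters",
        "testbed aggregator",
        "general dissemination (videos, internet, etc.)",
    ],
    "customer_relationships": [
        "discussion forums",
        "online support",
        "brand creation",
        "innovation office",
    ],
}
```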
Regarding the costs inherent to the Testbed business model, two stages are recognized: infrastructure setup and maintenance (execution). In the setup stage, costs relate to the initial planning and assembly of the prime Testbed (which can be funded by public money if it is novel and needed for research), Testbed expansion and organization costs (general expenses). In the execution stage, costs are identified for testing and validation techniques, infrastructure maintenance, user support for experiments, the energy needed to operate the Testbed, expansion/renovation of the Testbed and, again, organization costs. Through its value proposition (an Internet-of-Things infrastructure for testing, evaluation and certification of IoT concepts, technologies and applications), the Testbed is expected to generate revenue by renting the infrastructure for experimentation, renting it for solution certification, providing validation and certification of IoT solutions, and through funds from international and national research projects.

Conclusions

The Internet-of-Things is here to stay: IoT deployments abound, more IoT-related technologies appear and new IoT-centric apps are launched every day. However, the IoT movement still looks somewhat unstructured, which to some extent hinders mass adoption: there is a lack of standards at many levels of the IoT (communication, data, services, middleware, semantics, etc.) and, especially, there is a lack of novel business models built around the strengths of the IoT. And, of course, killer business model(s) for the IoT would be just perfect.

Looking at things from another angle, Internet-of-Things solutions need to be experimented with all the way through the development process. To make sure that solutions work and are well accepted by users, it is also crucial to experiment with and trial IoT technologies in real-life, large-scale environments that closely match those of the target application. As it happens, the IoT is a market in which many small-to-medium enterprises (even sole individuals) are developing their solutions without the economic capacity to set up testing infrastructures; they will need to rent such infrastructure.

IoT testbeds provide exactly this: an infrastructure where experimenters can deploy and execute their experiments and trials, collect data and further improve their IoT-based solutions (products, services, algorithms, protocols, etc.). In Europe, outstanding IoT infrastructures have been set up to provide real-life, large-scale, experimentally driven research and development services. Of major importance, the SmartSantander testbed facility is the lighthouse smart-city IoT testbed in Europe, with around 12,000 sensors deployed.
The SmartSantander IoT testbed core business model is based on renting experimentation services and, additionally, on services that exploit the testbed deployments. A full-fledged experimentation/service platform is in place to manage the whole SmartSantander infrastructure and assure non-stop operation, to schedule experiments and to move data to a place where experimenters can retrieve, analyze and study it. Now, making a sustainable business out of an IoT testbed is not a trivial task. There is an operational structure to support, and there are the energy costs, the maintenance and replacement of the IoT nodes, the software platform to evolve and to correct in case of bugs, the provision of support to experimenters including the experimentation interfaces, etc. There are also many key partners in the endeavor, as well as suppliers of hardware, networking, services, etc. Add to that the costs of promoting the facility, attracting experimenters and engaging users in the tests. Finally, there are the revenue schemes to be applied in order to make it a sustainable business.

Understanding this, the authors have provided here a generic business model inspired by the very complete SmartSantander testbed facility business model. The generic business model considers the four main aspects of a business (Infrastructure, Value Proposition, Customers and Financials) and thoroughly details all of their elements in view of an IoT testbed business.

The generic business model for IoT testbeds especially targets IoT deployment owners (or owners-to-be) and managers, presenting them with a starting point to inspect, analyze and adapt if they are interested in making business out of an IoT deployment in the form of an experimentation facility. The business model case study of the SmartSantander facility enables stakeholders to fully understand the elements of the generic business model and to see what considerations should be met while moving forward on creating a commercially sustainable IoT testbed.

Figure 1. Concept of an IoT testbed: a lower layer contains the network of IoT devices, repeaters and gateways, where the Authentication, Authorization and Accounting (AAA) systems, experimentation support, management support (resource, configuration, etc.) and application support are located; a Testing & Evaluation & Certification layer is where the services are provided to the customers. A Testbed facility should be able to provide true field experiments for architectures, key enabling technologies, services and applications, as well as IoT evaluation and certification in the context for which they were built.
v3-fos-license
2016-03-22T00:56:01.885Z
2011-04-19T00:00:00.000
283853
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2073-4360/3/2/779/pdf?version=1303206855", "pdf_hash": "58ee7d0474841400024e353ab70b03c0056fbc9e", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42552", "s2fieldsofstudy": [ "Materials Science", "Medicine" ], "sha1": "58ee7d0474841400024e353ab70b03c0056fbc9e", "year": 2011 }
pes2o/s2orc
Thermosensitive Self-assembling Block Copolymers as Drug Delivery Systems

Self-assembling block copolymers (poloxamers, PEG/PLA and PEG/PLGA diblock and triblock copolymers, PEG/polycaprolactone, polyether-modified poly(acrylic acid)) with a large solubility difference between hydrophilic and hydrophobic moieties have the property of forming temperature-dependent micellar aggregates and, after a further temperature increase, of gelling due to micelle aggregation or packing. This property enables drugs to be mixed in the sol state at room temperature; the solution can then be injected into a target tissue, forming a gel depot in situ at body temperature with the goal of providing drug release control. The presence of micellar structures that give rise to thermoreversible gels, characterized by low toxicity and mucomimetic properties, makes this delivery system capable of solubilizing water-insoluble or poorly soluble drugs and of protecting labile molecules such as proteins and peptide drugs.

Introduction

Amphiphilic block copolymers (AB- or ABA-type) with large solubility differences between hydrophilic and hydrophobic moieties are able, in aqueous medium, to self-assemble into polymeric micelles in the mesoscopic size range. These structures consist of water-insoluble cores and water-soluble shells. Depending on block length, the core can assemble into various supramolecular structures characterized by different morphologies [1-3].

In fact, unlike homopolymers made of identical monomeric units, copolymers include two kinds of monomeric units of different solubility. Thus, in solution and at low concentration, these amphiphilic molecules exist as unimers, while at increasing concentrations aggregation takes place. The final aggregates (called micelles), characterized by a spherical shape when the hydrophilic segment is longer than the core block [4], usually include several dozen of these units. It is therefore possible to define the Critical Micelle Concentration (CMC) as the concentration of a monomeric amphiphile at which micelles are formed, and the aggregation number as the number of individual molecules forming one micelle. Since, for block copolymers, amphiphile solvation is strongly dependent on temperature, another important parameter to take into account is the Critical Micellization Temperature (CMT), below which the amphiphile exists as unimers and above which unimers and aggregates coexist.

Recent studies have focused on thermogelling polymeric micelles made of self-assembling block copolymers such as poloxamers, multiblock copolymers prepared from poly(lactide), polycaprolactone, poly(glycolic acid), and polyether-modified poly(acrylic acid) [5-7]. These polymers have the ability to form temperature-dependent micellar aggregates and, after a further temperature increase, gels due to micelle aggregation or packing. Therefore, with these polymers it is possible to mix drugs into the sol state at room temperature and inject the solution into a target tissue, where it forms an in-situ gel depot at body temperature and provides controlled drug release. A formulation that is an injectable liquid at room temperature but changes into a gel at body temperature, with a pH close to neutrality and a degree of biocompatibility and biodegradability, surely represents an ideal system.
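The CMC introduced above is usually located experimentally as the breakpoint between two roughly linear regimes of some concentration-dependent property (surface tension, pyrene I1/I3 ratio, scattering intensity) plotted against the logarithm of concentration. The snippet below is a minimal, non-authoritative sketch of that standard breakpoint analysis, assuming the measurements are already tabulated; it is not taken from the works cited in this review.

```python
import numpy as np

def estimate_cmc(concentrations, measured_property):
    """Estimate the CMC as the breakpoint between two linear regimes of a
    measured property (e.g. surface tension) versus log10(concentration)."""
    x = np.log10(np.asarray(concentrations, dtype=float))
    y = np.asarray(measured_property, dtype=float)
    best = None
    # try every admissible split point and keep the one with the lowest total SSE
    for k in range(2, len(x) - 2):
        p1 = np.polyfit(x[:k], y[:k], 1)
        p2 = np.polyfit(x[k:], y[k:], 1)
        sse = (np.sum((np.polyval(p1, x[:k]) - y[:k]) ** 2)
               + np.sum((np.polyval(p2, x[k:]) - y[k:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, p1, p2)
    _, p1, p2 = best
    x_break = (p2[1] - p1[1]) / (p1[0] - p2[0])  # intersection of the two fits
    return 10 ** x_break                          # back to concentration units
```

The split giving the lowest combined residual defines the two regimes, and the intersection of the two fitted lines is reported as the CMC.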
In general, different kinds of synthetic copolymers are made of poly(ethylene oxide) as hydrophilic block, and of a wider range of hydrophobic blocks.The combination of different PEO-hydrophobic block have given rise to several micelle systems with differing physicochemical properties, such as loading capacity, morphology, biodistribution, size, release kinetics and specificity against different kind of drugs. Sometimes, these colloidal vectors can act as solubilizing agents by means of their hydrophobic core, which behaves as reservoir where drug molecules can be incorporated by chemical, physical or electrostatic interactions depending on physicochemical properties [8]. Obviously, it is of utmost importance to predict gelation behavior after drug loading into a copolymer formulation.In fact, the micellization and gelation transitions could be influenced by the presence of hydrophobic drug solutes: the hydrophobicities of various molecules have been demonstrated to affect Poloxamer 407 phase behavior [9] and liquid to gel transition [10]. The growing interest in polymeric micelles is strictly related to their similarity with natural carriers as viruses and serum lipoproteins.In fact, their hydrophilic shell makes them not identifiable during blood circulation prolonging the in vivo circulation time [11], while their viral-like size (<100 nm) prevents their uptake by the reticuloendothelial system. The aim of this review is to illustrate the relevance of different self assembling block copolymers used in the pharmaceutical field as drug targeting systems, by means of their chemical synthesis, description of their applications, together with some of the analitycal techniques used for physicochemical characterization. Poloxamers Poloxamers (synonyms Pluronics or Lutrol) are block copolymers of ethylene oxide (EO) and propylene oxide (PO) synthesized by sequential addition of propylene oxide first and then ethylene oxide to a low molecular weight water-soluble propylene glycol.The PEO-PPO-PEO block copolymers are available in a range of molecular weights and PPO/PEO ratios and thus, their physical and surface properties vary over a wide range, and different types (more than 30) are commercially available [12].Their temperature-dependent self-assembling and thermogelling behavior is well known and, in water, Poloxamers have the ability to change from individual block copolymer molecules (unimers) to self-assembling micelles.For example in water below the CMC, unimers form dispersions while, above this concentration, aggregation phenomena occur giving rise to a process named -micellization‖.At the same time, these block copolymers at a given concentration are characterized by specific temperature at which micelles form (CMT). Besides, at elevated temperatures, these systems show phase separation, which is due to the complete dehydration of both PO and EO blocks [13]. Different hypotheses on the mechanism of micelle formation may be proferred.The most reliable is that there is an initial stage of the phase separation with regions that are very rich in Poloxamer and regions that abound more in water.Then a further desolvation process of the PPO part leads to unimer aggregation and to a progressive formation of micelles close to each others, to form clusters.With regard to the sol/gel transition during gelation, partial collapse of PEO's chains in the micellar mantle leads to the formation of a tighter structure [12]. 
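A common textbook way to rationalize the unimer-to-micelle transition discussed above is the closed-association (mass-action) model, in which N unimers are in equilibrium with one micelle. The following sketch assumes the equilibrium constant K and the aggregation number N are known inputs (they are not given in the studies cited here) and simply solves the mass balance numerically; for large N the free-unimer concentration it returns plateaus near the CMC while the excess material is micellized.

```python
def unimer_concentration(c_total, K, N, tol=1e-12):
    """Solve c_total = [U] + N*K*[U]**N for the free-unimer concentration [U]
    in the closed-association model (N unimers <-> one micelle).
    K and N are treated here as known, hypothetical inputs."""
    lo, hi = 0.0, c_total          # [U] can never exceed the total concentration
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + N * K * mid ** N > c_total:
            hi = mid               # too much material accounted for: [U] is lower
        else:
            lo = mid
    return 0.5 * (lo + hi)
```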
Micelles may have different shapes, namely spherical, cylindrical or lamellar, depending on the length of PO and EO blocks, and the core is always made of hydrophobic blocks, while hydrophilic blocks form external corona.An important property of these aggregates is their ability to incorporate hydrophobic substances being insoluble or poorly soluble in aqueous or hydrophilic environment, thus enhancing solubility.Nevertheless, it is necessary to consider that micellization is sensitive to the presence of water insoluble solutes which can promote a change from spherical to rod or lamellar shape affecting micelle size, aggregation number and different factors involved in drug delivery [14].Another important characteristic of Poloxamers is their thermogelling behavior: in fact, water dispersions of these polymers are generally liquid at low temperature and form a strong gel at higher temperature, as reported by several authors [15][16][17].Different hypothesis have been proposed in order to explain this phenomenon.The sol/gel transition have been correlated to intrinsic changes in micelles properties or to entropic variation in the ordered water molecules close to the PPO segments or to possible formation of a cross-linked and three-dimensional structure able to entrap water in its network [17][18][19][20].Overall, both micellization and gelation depend on different factors, namely temperature, polymer concentration and PEO block length [21].For example the Poloxamer 407 (branded Lutrol F127) with nominal molecular weight of 12,600 and a PEO/PPO ratio of 2:1 by weight is able to reach the maximum peak in viscosity in a temperature range of between 30° and 50 °C depending on polymer concentration [21]. Synthesis and Characterization PEO-PPO-PEO triblock copolymers are synthesized by sequential addition of first propylene oxide (PO) and then ethylene oxide (EO) to a low molecular weight water-soluble propylene glycol.The oxyalkylation steps are carried out in the presence of an alkaline catalyst, generally sodium or potassium hydroxide.The catalyst is then neutralized and removed from the final product (Figure 1) [12].In a previous study, aggregation and phase behavior in water of several Poloxamers were investigated by means of techniques such as static and dynamic light scattering (DLS), small angle neutron scattering (SANS), polarization microscopy, atomic force microscopy (AFM), differential scanning calorimetry (DSC), and 1 H-nuclear magnetic resonance (NMR).The sequence of phase behavior observed at increasing concentrations appeared analogous to that shown by normal surfactants, for example, isotropic solution, cubic phase, hexagonal phase and lamellar phase.At the same time, a significant difference compared to normal surfactant was detected, since the phase transition also outlined clearly thermotropic behavior.In fact, a sequence of different mesophases and phase transitions were detected at constant concentration but at increasing temperatures [22,23]. Other techniques were used in order to determine the hydrodynamic radius of micelles, such as pulsed gradient spin echo (PGSE) NMR and fluorescent spectroscopy [24,25]. 
As mentioned above, micellization phenomenon is related to dehydration of PPO chains, which drives unimer aggregation to form micelles.PPO is soluble in the temperature range of 2-15 °C, while for higher temperatures a precipitation cloud point exists [26].Thus it is assumed that the hydrophobic PPO is located in the core of the micelle, while the PEO lyophilic block forms the outer corona, as assessed by electron paramagnetic resonance [27,28] (EPR) and the above mentioned techniques.With increasing temperature, micellization becomes more important, and at a definite temperature point, micelles come into contact and no longer move.In addition, the formation of highly ordered structures, such as cubic crystalline phase, has been proposed as the driving force for gel formation [29,30], but this hypothesis has been questioned recently.Thus, packing of micelles and micelle entanglements may be possible mechanisms for Poloxamer solutions gelation at increasing temperature [31]. A complete microstructural characterization of an aqueous solution of Poloxamer 407 has been obtained by combining different techniques such as cryogenic temperature transmission electron microscopy (Cryo-TEM) and small angle neutron scattering (SANS).These different methods allowed measurements of micelles diameters and visualization of their hydrophobic cores and demonstrated the existence of different lyotropic phases for this kind of system [32].Another technique recently used to characterize Poloxamer 407's aggregation behavior has been acoustic spectroscopy [33].Particle size and microrheological extensional moduli (G' and G'') of the systems were determined from acoustic parameters such as sound attenuation and sound speed.By monitoring particles size and rheological extensional moduli variations at increasing temperatures, it was possible to define and outline Poloxamer 407 transitions. Pharmaceutical Application Poloxamers are a class of surfactants used in different industrial areas such as detergency, foaming, lubrication, dispersion, stabilization, cosmetics, and inks [12].Preliminary toxicity data indicated that this copolymer is well tolerated [34].These results have prompted the use of Poloxamer 407 in the design of medical, pharmaceutical and cosmetic systems and therefore they are extensively used in the pharmaceutical field as gel, microemulsions, nanoparticles and solid polymer blends [35].In particular, the thermogelling behavior and non-toxicity of the Poloxamers lead to different dosage forms being investigated.Early studies evaluated Poloxamer 407 thermosensitive solutions for the treatment of burns [34], topical administration of anticancer agents [36], rectal [37,38], vaginal [39], transdermal [40], and ophthalmic [41,42] administration and sustained delivery of drugs after extravascular parenteral injection [43]. 
However, the goal of this kind of copolymer is to administer its dispersion when in the sol state and use its ability to transform quickly into strong gels making the formulation of a parenteral delivery system possible.Concerning this route of administration, Poloxamer 407 is usually regarded as non-toxic.In fact after intramuscular injection in rabbits, Poloxamers 238 and 407 displayed musculo irritancy/toxicity comparable to that of traditional intramuscular vehicles, such as saline and peanut oil [44].Thus, the lack of miotoxicity of these reversible thermal-gelling systems after intramuscular injections to rabbits recently promoted studies on the use of Poloxamer 407 water dispersions as vehicle for sustained release formulations containing proteins by extravascular administration [45,46].In fact, after parenteral injection, Poloxamer gels can prolong drug release compared to solutions, even though the delivery period rarely exceeds a few days [47,48].Obviously, drug release control is related to the final stiffness of the thermogel, which depends on Poloxamer 407's concentration.However it is necessary to consider that increasing copolymer concentration could cause reduction of thermogelation temperature, which would make the system semisolid at room temperature and thus quite difficult to handle. On the other hand, other studies [52] demonstrated that Poloxamer 407 injected intraperitoneally into rats (1.5 g/kg) resulted in sustained hypercholesterolemia and hypertriglyceridemia (96 h).The predominant mechanism for this effect could be attributed to the inhibition of heparin-releasable lipoprotein lipase [53]. Blonder et al. [54] studied how different doses of Poloxamer 407 gels could induce hyperlipidemia in rabbits.They found that the highest dose (137.5 mg/kg) significantly increased serum triglycerides and cholesterol in both male and female rabbits, while lower doses (5.5-27.5 mg/kg) did not alter serum lipids.Thus, the amount of administered polymer should be kept to a minimum, especially when repeated dosing is required. Another important aspect is the possibility to solubilize hydrophobic drugs thanks to their amphiphilic nature.In a recent paper, Sezgin et al. [55] investigated the solubilization behavior of three poorly water soluble anticancer agents in some Poloxamers (F127, F68, P85) and asserted that these systems are effective new drug carriers, especially for anti-cancer molecules. On the other hand Poloxamer gels can have potential disadvantages including their weak mechanical strength, rapid erosion (i.e., dissolution from the surface), and non-biodegradability, which prevents the use of high molecular weight polymers that cannot be eliminated by renal excretion [56]. PEG-PLA, PEG-PLGA Block Copolymers A novel concept, which combines thermogelation, biodegradability, and no toxicity, has been proposed for injectable gel systems with improved safety and longer gel duration [57]. 
With this aim, new injectable biodegradable polymers possessing reverse gelation properties have been synthesized [58,59].These polymers are triblock copolymers composed of A-blocks and B-blocks arranged as ABA or BAB, where A is polyethylene glycol (PEG) and B can be the poly(dl-lactide, PLA) or the poly(dl-lactide-co-glycolide, PLGA).PLA and PLGA are characterized by good degradability and biocompatibility.PEG presents unique physicochemical and biological properties including biocompatibility, low immunogenicity, water solubility, and can be eliminated from the animal's body when the molar mass is below 30,000 [60].The final copolymers are soluble in water at or below room temperature but become hydrogels at the injection site, forming depots that degrade over a period of 4-6 weeks [61].As the polymers are biodegradable, they overcome the problem of carrier removal after the drug depot is exhausted. Concerning the PLA/PEG/PLA block copolymers, their composition, morphology and crystallinity strongly influence mechanical properties and rate of biodegradation.By adjusting the block copolymer segment sizes through the polymerization parameters, it is possible to modulate the materials characteristics in order to suit a particular application. Lee and coworkers [63] used a low molecular weight PEG (600 daltons) and the total molecular weight of PEG/PLA multiblock copolymer was varied to study the effect of this parameter on the sol-gel transition of the aqueous polymer solution.Then the molecular weight of poly(L-lactic acid) (PLA) was varied to see the effect of hydrophobicity of the PEG/PLA multiblock copolymer on the sol-gel transition. Changing the polymer composition further, particularly the middle block composition, the block length, and the block ratio, produced the next generation of poly(ethylene glycol-b-L-lactic acid-coglycolide-b-ethylene glycol) (PEG-PLGA-PEG) triblock copolymers.The aqueous polymer solution is a free-flowing sol at room temperature and becomes a gel at body temperature [61]. The sol-to-gel transition of an aqueous solution of a PEG-PLGA-PEG triblock copolymer is probably due to micellar expansion which is accompanied by an increase in aggregation number driven by hydrophobic forces [64,65].In fact, aqueous Poloxamer dispersions are known to undergo the sol-to-gel transition by a change in equilibrium from unimer to micelle reducing the number of unassociated polymers but with a constant aggregation number, whereas the PEG-PLGA-PEG triblock copolymer in water seems to undergo the sol-to-gel transition by micellar growth.Thus, Poloxamer 407 presents a large dependence of its CMC on temperature; while taking as an example the PEG-PLGA-PEG triblock copolymer aqueous solutions, two important issues were noted.First, the decrease in the second virial coefficient indicated that the polymer-polymer attraction increases relative to polymer-solvent interaction.Secondly, micelles grew by an increase in aggregation number, as well as an increase in the diameter of a micelle.Therefore, micellar growth and increase in polymer-polymer attraction could drive the sol to gel transition at a certain concentration, as the temperature increases. The structure-property relationship in the sol-gel showed that the sol-gel transition temperature and critical gel concentration decreased with increasing hydrophobicity of the triblock copolymers [66]. 
For the PEG-PLGA-PEG triblock copolymers the sol-gel transition temperature can be controlled by changing molecular parameters such as the PLGA length, the PEG length or LA to GA ratio of the middle block.When the hydrophobic block (i.e., PLGA) length of a PEG-PLGA-PEG triblock copolymer was increased from 2,320 to 2,840 at a fixed PEG length of 550 the sol-to-gel transition temperature (T~ −10 °C) and the critical gel concentration (CGC) decreased, indicating that this transition was driven by hydrophobic forces.An increasing hydrophobic block length brought to an increase of gel-to-sol transition temperature.Thus, the gel region was controlled by varying the PLGA length. On increasing PEG length of a PEG-PLGA-PEG triblock copolymer, from 550 to 780, at a fixed PLGA length of 2,300, the different regions of the phase diagrams were shifted to higher temperatures (T~ 18 °C).The gel region remained almost constant.Therefore the gel strength was mainly determined by the hydrophobic block [67].The DL-lactic is more hydrophobic than glycolic acid in PLGA.Thus the hydrophobicity was increased by increasing DL-lactic acid to glycolic ratio, from 78:22 to 72:28, of PEG-PLGA-PEG (550-2,900-550) triblock then, the sol-to-gel transition temperature (T ~ −5°C) and CGC (CGC~ −5 wt%) decreased. Furthermore, the gelation temperature of PEG-PLGA-PEG copolymers can be influenced by the presence of additives.The addition of a salting-out salt (1 wt%), such as NaCl, decreased the sol-gel transition temperature by 5 °C, whereas a salting-in salt, i.e., NaSCN, increased the sol-to-gel transition temperature by 5 °C. Gelation may occur by a different mechanism from that of ABA block copolymers because of the PLGA end blocks.The polymer forms micelles where PLGA block are in the core and PEG forms the curved shells, but may also give rise to bridging micelles with increasing concentration and temperature. Synthesis and Characterization Several methods are reported in the literature regarding synthesis of PLA-PEG or PLGA/PEG block copolymers. In order to synthesize (AB)n type multiblock copolymers of PLLA and PEG, dicarboxylated PLLA was prepared by a direct condensation polymerization.A 50 g of L-lactic acid (85% solution in water) and a predetermined amount of succinic acid, used to introduce a dicarboxylated terminal group, were mixed in a round-bottom flask and then dehydrated at 30 °C under reduced pressure for 12 h.Polymerization was carried out at 150 °C for 24 h under reduced pressure.The polymer was recovered by dissolution of the product in chloroform and by precipitation in excess n-hexane and dried under vacuum for three days at 60 °C. Dicarboxylated PLLA and PEG were poured into a round-bottom flask containing methylene chloride as solvent.PEG's of 2,000, 4,000, 6,000 and 10,000 number-average molecular weight were purified by precipitation in n-hexane.Dicyclohexylcarbodiimide (DCC) and N-dimethylaminopyridine (DMAP) were then added as polyesterification catalyst at the monomer to catalyst ratio of 0.25.Reaction was carried out at 25 °C for 24 h under nitrogen.Multiblock copolymers synthesized were recovered by precipitation of the reaction product, which contained dicyclohexylurea as a by-product, in excess n-hexane after filtration.Low molecular weight unreacted species were removed by fractionation method [70]. 
According to Lee's method [63] (Figure 2), the PLA was prepared by typical ring-opening polymerization of L-lactide using stannous octoate as catalyst, with 1,6-hexanediol as initiator. The product was isolated by precipitation into diethyl ether/n-hexane; the polymer was then redissolved in 30 mL of methylene chloride, precipitated by slowly adding diethyl ether/n-hexane, and the residual solvent was removed under vacuum. Succinic anhydride was reacted with the PLA to prepare α,ω-dicarboxylic acid terminated PLLA (PLLA-DA). The PEG/PLLA multiblock copolymer was then prepared by a coupling reaction between PEG and the α,ω-dicarboxylic acid terminated PLA. To control the molecular weight of the PEG/PLLA multiblock copolymer, a slight excess of PEG was used.

Mothé et al. introduced, in a glove box under a dry nitrogen atmosphere, a pre-weighed amount of poly(ethylene glycol) (Mn = 4,000 or 600 g mol−1) and stannous 2-ethyl-hexanoate into a dry tube. The tube was sealed and immersed in a silicone oil bath at 120 °C for 10 min, and then cooled to room temperature. A pre-weighed amount of L,L-lactide (LA) was introduced into the tube under a nitrogen atmosphere, and the tube was then sealed and immersed in a bath at 120 °C for 22 h. The copolymer thus obtained was purified by dissolution in chloroform and precipitation in methanol (three times). The prepared block copolymers presented a wide range of molecular weights (800-25,000 g mol−1) and compositions (16-80 wt.% of PEG) [62].

In different articles, Li and co-workers reported the synthesis, characterization, and stereocomplex-induced gelation of PLLA/PEG and PDLA/PEG block copolymers. The copolymers were synthesized by ring-opening polymerization of L- or D-lactide in the presence of mono- or dihydroxylated PEG, using zinc metal as catalyst [71,72]. The same authors carried out a ring-opening polymerization of D,L-lactide in the presence of monohydroxylated poly(ethylene glycol) (PEG) with Mn of 2,000 and 5,000, using zinc powder as catalyst. The resulting PEG-b-polylactide (PEG-PLA) diblocks with various ethylene oxide/lactyl (EO/LA) ratios were coupled with adipoyl chloride to yield PEG-PLA-PEG triblock copolymers; N-dimethylaminopyridine (DMAP) was used as catalyst. This reaction led to the formation of an ester bond between the PLA and PEG blocks (Figure 3) [71].

Zhang et al. synthesized PLA-PEG-PLA copolymers by ring-opening polymerization of L- or D-lactide in the presence of dihydroxyl PEG, using zinc lactate as catalyst; zinc lactate was chosen instead of stannous octoate or other catalysts, which are rather cytotoxic. Predetermined amounts of PEG and lactide were introduced into a polymerization tube, with an initial molar ratio of ethylene oxide to lactate repeat units (EO/LA) of 3/1 or 5/1. Zinc lactate (0.1 wt%) was then added. After degassing, the tube was sealed under vacuum, and polymerization was allowed to proceed at 140 °C. After 24 h, the product was recovered by dissolution in dichloromethane and precipitation in diethyl ether. Finally, the product was washed and dried under vacuum to constant weight [73].
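In recipes of this kind, the targeted block length follows from the usual feed-ratio argument: at a given conversion, the mass of cyclic monomer consumed per mole of PEG macroinitiator sets the molar mass grown onto each chain. The sketch below states this back-of-the-envelope estimate in code; it is a generic illustration under that assumption, not a calculation reported in the cited papers.

```python
def expected_copolymer_mn(monomer_mass_g, monomer_mw, conversion, peg_mol, peg_mn):
    """Rough Mn estimate for a ring-opening polymerization initiated by PEG:
    grams of monomer incorporated per mole of PEG, added on top of the PEG block."""
    grown_mass_per_chain = monomer_mass_g * conversion / peg_mol  # g/mol of polyester grown
    degree_of_polymerization = grown_mass_per_chain / monomer_mw  # monomer units per chain
    return peg_mn + grown_mass_per_chain, degree_of_polymerization
```

For a dihydroxylated PEG the grown mass is split between the two chain ends, so each terminal polyester block carries roughly half of the estimated amount.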
A poly(ethylene glycol)-poly(D,L-lactide) diblock copolymer (PEG-PLA) having a site specifically protected-sugar group at the PEG chain end was synthesized through a successive ring-opening polymerization of ethylene oxide and D,L-lactide using a methylated protected sugar as an initiator at room temperature under Argon atmosphere.Removal of protective groups from the sugar residue in the block copolymer was quantitatively carried out using 80% trifluoroacetic acid at room temperature, yielding a block copolymer having a glucose or galactose residue at the chain end in a regioselective manner [74]. Various techniques can be utilized to characterize the different diblock copolymers and their water dispersion. The subsequent step consisted in analysis of self assembling and thermogelation properties of these block copolymers: namely CMC determination and then characterization of micelles and of the sol-gel transition.UV-visible [65] together with fluorescence technique [79] and surface tension [80] were the techniques mostly used in order to determine CMC values, while dynamic light scattering measurements, as a function of temperature [63,67], allowed determination of micelles size. Regarding the sol-gel transition temperature and behavior the method of election was rheology.In fact both dynamic mechanical and flow curve analyses were used to identify sol-to-gel transitions at increasing temperature [65,71,78]. Another method often used to determine the sol-gel transition, despite the fact that it can be considered subjective and not accurate, was the test tube inverting method (temperature increment selected has been 1 °C per step).Polymer aqueous solutions (i.e., 0.5 g) were prepared in 4 mL vials with inner diameters of 11 mm.The vials were immersed in a water bath at each step for 15 minutes.The sol-gel transition temperature was monitored by inverting the vials, and if there was no flow in 30 s, it was regarded as a gel.The transition temperature was determined with 1 °C accuracy [63,65,78]. Then 1 H NMR and 13 C NMR analysis were used to study composition and microenvironment change during sol-to-gel transition and structure and composition of the synthesized triblock copolymers [63,65,67,71,78].As an example, Zhang et al. used 1H NMR and differential scanning calorimetry (DSC) analysis to characterize the synthesized PLLA-PEG-PLLA and PDLA-PEG-PDLA block copolymers.1H NMR allowed to determine structural characteristics such as EO/LA ratio, number average degree of polymerization (DP), and number average molar masses (Mn) while DSC was used to evaluate the thermal properties, including melting temperature (Tm), melting enthalpy (ΔHm), glass transition temperature (Tg), and cold crystallization temperature (Tc) [73]. Furthermore DSC, IR and X-ray diffraction techniques were also utilized to characterize PEG-PLA-PEG triblock copolymers showing that the synthesized copolymers were semicrystalline with PEG-type crystalline structure, and of decreasing crystallinity at increasing PLA block length [71]. Pharmaceutical Application As already mentioned, biodegradable in-situ forming hydrogels represent promising delivery systems. 
The different mechanisms of micellization and gelation compared to Pluronics make PEG-PLA and PEG-PLGA copolymers particularly effective in controlling drug release. The Pluronic polymers form micelles that are in equilibrium with monomeric polymers, and the equilibrium shifts toward micelle formation with increasing temperature. Above a CGC, the micelles pack together and occupy the entire volume, resulting in gel formation. According to this mechanism, the formed gel is subject to dissolution from its surface upon dilution, because when diluted, the interaction forces between packed micelles are not strong enough to keep an integrated mass, resulting in dissociation. This allows surface erosion of the gel, which has been utilized as a means of near-zero-order release of incorporated drugs for a short period of time [81]. However, the observed integrity of the PEG-PLGA-PEG gel in rats even after one month may imply that the gelation mechanism or gel structure is different from that of Poloxamers, or, if it follows the same mechanism, that the interactions between packed micelles are of a different order than those found in Poloxamers.

As an example, Jeong and co-workers [66] demonstrated that PEG-PLGA-PEG copolymers formed micelles as temperature increased because PEG segments interact with the PLGA core, forming a new intermixed phase between core and shell. As a possible gelation mechanism, it is proposed that, with increased miscibility and increased interphase volume, the micelles start to come into contact, and the PEG chains in the corona interpenetrate between micelles, or the intermixed phases interact hydrophobically with each other, leading to solid micelle packing and preventing the gel from dissolving upon dilution. The miscibility between the two blocks still increases with temperature even in the gel phase, leading to unusual turbidity changes with temperature depending on the degree of phase mixing.

Due to the mechanism described above, PEG-PLGA-PEG systems give rise to gels that maintain their properties (the gel maintained its three-dimensional shape thanks to the rapid sol-to-gel transition) for more than one month in rats, as confirmed by subcutaneous injection of these triblock copolymer aqueous solutions.

Ketoprofen and spironolactone release kinetics from a PEG-PLGA-PEG triblock copolymer hydrogel were studied [82], and the release profile was strongly affected by drug hydrophobicity. A 10 mM phosphate buffer, containing 0.02 wt.% NaN3 and 0.2 wt.% Tween 20, was used as the drug release medium to improve drug solubility; the release medium pH was maintained between 7 and 7.4 to keep drug solubility constant. The more hydrophilic ketoprofen was released continuously over two weeks with a first-order release profile. The more hydrophobic spironolactone was released over two months with an S-shaped release profile: initial diffusion was followed by a combination of degradation and diffusion at a later stage. The more hydrophobic drug partitioned more into the hydrophobic core, and its release from the hydrophobic PLGA core domain was governed by system degradation. Thus, the release of spironolactone could be controlled by the initial polymer concentration, the drug loading, and the structure of the polymer.
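The first-order profile reported for ketoprofen corresponds to the standard model Mt/M∞ = 1 − exp(−kt). As a hedged illustration (the rate constant is fitted from whatever cumulative-release data the reader supplies, not from the cited study), such a profile can be fitted as follows; the S-shaped spironolactone profile would instead require a combined diffusion/degradation model.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_release(t, k):
    """Cumulative fraction released: Mt/Minf = 1 - exp(-k*t)."""
    return 1.0 - np.exp(-k * t)

def fit_release_rate(times_h, fraction_released):
    """Fit the first-order rate constant k (1/h) to cumulative-release data."""
    (k,), _ = curve_fit(first_order_release,
                        np.asarray(times_h, dtype=float),
                        np.asarray(fraction_released, dtype=float),
                        p0=[0.01], bounds=(0.0, np.inf))
    return k
```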
Release of several drugs from PLGA-PEG-PLGA triblock copolymers hydrogel (ReGel ® ) including protein drugs and paclitaxel was reported [64,68].ReGel is the trademark of the PLGA-PEG-PLGA copolymers, with a weight average molecular weight (Mw) of approximately 4,200 and an M w /M n ratio of 1.3.The unique characteristics of ReGel ® hinge on two different main issues.First of all ReGel ® is a water soluble, biodegradable polymer at temperatures below gel transition temperature; then ReGel ® forms a water-insoluble gel once injected with an increase in viscosity of four orders of magnitude.Thus the gel formed a controlled release drug depot with delivery times ranging from one to six weeks.Another important feature was that this copolymer showed the ability to solubilize (400 to 2,000-fold) and stabilize poorly soluble and sensitive drugs, including proteins.As an example the gel provided excellent control of paclitaxel release for approximately 50 days and showed a flexible approach to protein and small molecule delivery that was simple to process and administer (pGH, G-CSF, insulin, rHbsAg). Other temperature-responsive PLGA-PEG-PLGA triblock copolymers which have a different dl-lactide/glycolide molar ratio from ReGel ® (ranging from 6/1 to 15/1) were also synthesized.Compared with ReGel system, the synthesized copolymers showed higher gelation temperature and longer period of drug release [83]. Another study demonstrated the feasibility to deliver protein (i.e., lysozyme) in biologically active form for longer duration by varying block lengths and concentrations of PLGA-PEG-PLGA triblock copolymers [84]. Yu and co-workers obtained a thermoreversible physical hydrogel by simply mixing an aqueous sol of a block copolymer with a precipitate of a similar copolymer but with different block ratio.The mixture of these two samples with a certain mix ratio underwent a sol-to-gel-to-precipitate transition upon an increase of temperature.This study highlighted how the balance of hydrophobicity and hydrophilicity within this sort of amphiphilic copolymers appeared critical.Then, lysozyme, chosen as model protein, was used to examine whether or not this thermogelling mixture systems could encapsulate and deliver biological substances such as proteins in a biologically active form, for long duration demonstrating that the release rate could also be adjusted by the mix ratios of copolymer mixtures, and an almost zero-order sustained release of lysozymes was achieved up to 50 days.Thus the obtained results outlined that the -mix‖ method provides a very convenient approach to design injectable thermogelling biomaterials with a broad adjustable window, and the novel copolymer mixture platform can be potentially used in drug delivery and other biomedical applications [85]. Another example was represented by solutions of PLGA-PEG-PLGA containing Calcitonin as model peptide.Zero order release kinetics was achieved for up to 100 hours.No significant burst release effect was observed.Gelation time did not affect the drug release profile of the system and the diffusion was the main mechanism for Calcitonin release from these systems [86].Besides, calcitonin release kinetics, from a PLGA-PEG-PLGA polymeric solutions (25% w/w), could be controlled by using different excipients such as, for example, sodium lauryl sulfate that showed to reduce drug release rate from the systems [87]. 
Furthermore, PLGA-PEG-PLGA triblock copolymers were evaluated for sustained release of bee venom peptide.Bee venom peptide was released from copolymer-based hydrogels in the phosphate buffer (pH 7.4) as dissolution medium over 40 days.The hydrogels underwent slower degradation and then faster degradation rate during release process.Accordingly, the mechanism of bee venom peptide was Fickian diffusion during initial stage and then may be a combination of diffusion and degradation, indicating that PLGA-PEG-PLGA copolymer-based hydrogel represented a promising platform for sustained delivery of bee venom peptide [88]. In another study, a temperature-sensitive triblock copolymer PEG-PLGA-PEG was synthesized and blended with an oily phase (Lipiodol(R)) to obtain thermogelling emulsions.The release kinetics of paclitaxel in hydrogel and emulsion formulations were investigated demonstrating the feasibility of the thermogelling emulsions applied for both vascular embolization and sustained release of an antiangiogenic drug [89]. PLGA-PEG-PLGA hydrogel have also been studied for the ocular delivery of dexamethasone acetate.In particular the 20% (w/w) had a low critical solution temperature of 32 °C, which is close to the surface temperature of the eye and demonstrated potential thermosensitive gel-forming properties.This formulation may improve the bioavailability of some eye drugs [90]. Even PLA-PEG block copolymers have been studied for the peptide and protein delivery.A water soluble pentapeptide-TP5, taken as a model drug, was successfully incorporated into PLA-PEG-PLA hydrogels to evaluate the potential of this system as carrier of hydrophilic drugs.Various parameters such as copolymer concentration, drug load and copolymer composition were considered.Higher copolymer concentration led to slower release rate and less burst effect due to more compact structure which disfavored drug diffusion.Similarly, higher molar mass of the copolymers disfavored the release of TP5.In contrast, drug load exhibits little influence on the release profiles due to the high water solubility of TP5.Studies proved the potential of TP5 containing hydrogels, and the morphology of thymus indicated the immunization efficacy of the TP5 release systems based on PLA-PEG-PLA hydrogels [73]. PEG-Polycaprolactone Copolymers Poly(ε-Caprolactone) (PCL) is a highly hydrophobic and crystalline polymer having good biocompatibility and widely used as biomedical material.In fact PCL is one of the most promising synthetic polymers which degrades in aqueous media or when in contact with microorganisms and thus can be used to make compostable polymeric devices [91].The addition of hydrophilic polyether blocks to PCL chains has been used to enhance hydrophilicity compared to the parent homopolymer.Poly(ethylene glycol) (PEG) has been used to form various block copolymers with PCL [92,93]. PEG-PCL diblocks having PEG segment blocks of 2000 and PCL segment length of 950-1,500 underwent a sol-to-gel phase transition as the temperature was varied [94]. The main advantage of this system compared with PLGA-based thermogelling copolymers was that the incorporation of PCL led to the formation of polymers in powdery form, instead of a sticky paste making easier the handling of the drug/ polymer formulation for practical applications.In fact PCL triblock copolymers are not only simple to transfer or weigh but also easily dissolved in water. 
Additionally, these multiblock copolymer solutions were stable as a transparent solution at room temperature, providing practical convenience during drug formulation [95]. The phase transition behavior was mainly determined by PEG/PCL blocks ratio, PCL block length and molecular weight.In fact, a typical phase diagram of these multiblock copolymers in aqueous solution displayed a critical gel concentration (CGC) and a phase-transition temperature, which were mainly determined by these parameters.Phase separation-induced gelation mechanism was advanced to explain the thermoreversible phase transition.The hydrophobic PCL blocks aggregated and formed domains when hydrophilic PEG blocks were hydrated in water.A sol-to-gel transition resulted from the formation of three-dimensional physical network because multi-PCL blocks could diffuse into different domains.A gel-to-sol transition resulted from the melting of these domains and the collapse of physical crosslinkages.These multiblock copolymers have a varied sol-gel transition range near to body temperature with potential applications in injectable drug delivery systems [96]. When a wt of 20% PCL-PEG-PCL triblock copolymer aqueous solution was left overnight at room temperature (20 °C) opaque gel formation was observed.When the same gel was heated up to 50 °C for 30 s and a quickly quenching in an ice bath, it reversibly became a transparent free-flowing sol and when this transparent sol was injected into 37 °C water, it instantaneously became a gel.Thus, different mechanisms occur when opaque gels are formed at 20 °C (low-temperature gel) or for thermogels formed at 37 °C [95]. Synthesis and Characterization Perret and Skoulios were the first who synthesized series of PCL/PEG block copolymers [97].They obtained copolymers by anionic polymerization using naphthalene-sodium complex as catalyst.Later, catalyst-free polymerization was introduced to synthesize the PCL/PEG copolymer by Cerrai et al. [98]. 
Bae and co-workers prepared PCL-PEG-PCL triblock copolymers by ring-opening polymerization of caprolactone in the presence of PEG (Figure 4), using stannous octoate as catalyst. For example, to synthesize the PCL-PEG-PCL (980-1,000-980) triblock copolymer, PEG (15.0 g, 15.0 mmol, Mn = 1,000) was dissolved in anhydrous toluene (80 mL) and the solvent was distilled off to a final volume of 30 mL to remove the residual water adsorbed by the polymer. ε-Caprolactone (23.7 g, 207.6 mmol) and stannous octoate (49 μL, 0.12 mmol) were then added to the reaction mixture; after polymerization, the product was precipitated by slowly adding diethyl ether. Terephthaloyl chloride (0.694 g, 3.3 mmol) and triethylamine (1.41 mL, 10.12 mmol) were added to the reaction mixture and stirred at 60 °C for 24 h. The product was isolated by precipitation into diethyl ether; the polymer was then redissolved in 30 mL of methylene chloride, filtered, and precipitated by slowly adding diethyl ether. The residual solvent was removed under vacuum [95,99-101]. Multiblock copolymers composed of poly(ethylene glycol)s (PEGs) and biodegradable poly(ε-caprolactone)s (PCLs) were also synthesized through one-step condensation copolymerization with hexamethylene diisocyanate (HDI) as coupling agent. The reacted solution was precipitated in diethyl ether and the resulting copolymer was further purified through dissolution in chloroform and precipitation in diethyl ether. The final product was dried under vacuum at room temperature for over 48 h to yield a translucent solid [96].

The synthesized polymers and their self-assembling behavior have been characterized by means of different analytical techniques. 1H NMR measurements were performed to determine molecular structure and composition, such as the PEG/PCL block ratio [95,96,100-102], while 13C NMR has been used to observe spectral changes of the PEG/PCL multiblock copolymer (20 wt% in D2O) as a function of temperature [95,101]. Furthermore, dynamic light scattering analyses facilitated studying the size of the PEG/PCL multiblock copolymer as a function of temperature [95,101], and IR spectra of these multiblock copolymers were recorded in order to characterize their molecular structures [96,100].

Then, in order to understand the phase behavior of the PCL-PEG-PCL triblock copolymer solution and, in particular, the formation of an opaque gel at room temperature (a phenomenon not present in the multiblock copolymer aqueous solution), Bae et al. used X-ray diffraction analysis and Raman spectroscopy as a function of time at room temperature. Both techniques highlighted that crystallization of the PCL-PEG-PCL triblock copolymer in water is responsible for this phase behavior [95].

Thermal properties, such as the melting temperature (Tm) of PEG and PCL homopolymers and multiblock copolymers, were measured by differential scanning calorimetry (DSC) [101]. The same technique has been applied to study polymer dispersions and hydrogels [95,96,99]. Sample phase transitions were also investigated using optical microscopy (OM) equipped with a hot stage to monitor sample behavior at increasing temperature [96]. More specifically, the sol-gel transition was studied by the test tube inverting method [63,78,95,101] and by rheological analyses [95,101].

Pharmaceutical Application

The PCL-PEG copolymers might have great application in the biomedical field. In particular, thanks to their great thermosensitivity and biodegradability, their hydrogels are promising materials for use as gel-forming controlled drug delivery systems.
A new kind of biodegradable and injectable poly(ethylene glycol)-poly(ε-caprolactone)poly(ethylene glycol) (PEG-PCL-PEG) hydrogels were prepared in order to develop controlled drug delivery systems.Aqueous solutions of these diblock and three-block copolymers resulted to be a free-flowing sol at either room or below the corresponding critical gel temperatures (CGT), and a gel at body temperature.Thus a mice model was applied to evaluate the gel formation and its subsequent degradation followed by subcutaneous injection of PEG-PCL hydrogels (Figure 5) [103].In the same work, degradation behavior and drug release behavior of these di-and tri-block copolymers have been investigated.In-vitro release was observed in particular for three different classes of molecules: hydrophilic small-molecular-weight drugs (i.e., Vitamin B 12 ), hydrophobic small-molecular-weight drugs (i.e., honokiol, a multi-functional drug, with optimal potential application for human cancer therapy) and a hydrophilic macromolecular protein drug (i.e., BSA).Thus slow drug release was gained while release profiles were affected to some extent by initial drug loading and hydrogel concentration.Hydrophilic drug were almost completely released from the hydrogel in a week with high release rate (>85% in 24 h) and high initial burst rate (about 30% in 1 h), whereas hydrophobic drug and protein drug could be released slowly over a longer period with lower cumulative release rate (38.8% for honokiol and 27.2% for BSA in 14 days, respectively).Drug release from hydrogels was driven by two forces: diffusion effect and degradation or erosion [103].Due to good solubility, hydrophilic drugs could diffuse in short time through pores of hydrogel releasing almost all the drug from the hydrogel.But for hydrophobic drugs and proteins in water, low diffusion rates could be seen and strong intermolecular interactions with hydrogel dominated drug release profiles, resulting in low release rate and high residual drug in hydrogels. Hydrophobic honokiol (HK), as model drug, has also been used in another study [104].Self-assembled PEG-PCL-PEG micelles encapsulated this drug in order to overcome its poor water solubility and to meet the requirements of intravenous administration without any organic solvent or surfactant.Encapsulation of HK led to sustained release from HK-micelles for up to two weeks while cytotoxicity remained comparable to that of free HK.Thus, HK-micelles were safe, stable, effective, and easy to make and scale up. PEG-PCL-PEG triblock copolymer was also investigated as in-situ sustained opthalmic drug delivery.Biodegradability within the eye, its effect on cultured human lens epithelia, intraocular pressure, and ocular tissues have been studied.Data indicated that the prepared hydrogel was biocompatible and biodegradable despite the temporary elevated intraocular pressure and slight corneal endothelial damage at specific concentrations.Therefore, this hydrogel showed great biocompatibility, biodegradability, and sustained release property in eyes, making this system a safe candidate for sustained ophthalmic drug delivery [105].Furthermore, Gong et al. 
[106] studied a PEG-PCL-PEG hydrogel system for basic fibroblastic growth factor (bFGF) antigen delivery.bFGF encapsulated in this hydrogel resulted in an injectable free-flowing sol at room temperature forming a non-flowing gel at physiological temperature and acting as antigen depot.Furthermore, data about cytotoxicity showed that the hydrogel could be regarded as a safe carrier, and bFGF could be released from the hydrogel system in an extended period.The immunogenicity of bFGF was improved significantly after encapsulating into the hydrogel and the strong humoral immunity created by bFGF-hydrogel was maintained for more than 14 weeks.Thus, these bFGF loaded hydrogels might have great potential as novel vaccine adjuvants for protein antigen. Compared to PEG-PCL-PEG triblock copolymer, PCL-PEG-PCL triblock copolymer has several advantages: First, the PCL-PEG-PCL triblock copolymer can be synthesized in one step without using any coupling agent; second, PCL-PEG-PCL hydrogels have a wider gel window; third, PCL-PEG-PCL hydrogels can persist for a longer period, about six weeks, compared to PEG-PCL-PEG hydrogels (about two weeks).Both PCL-PEG-PCL and PEG-PCL-PEG are biodegradable, and can sustain drug release in an extended period.One could choose to use PCL-PEG-PCL or PEG-PCL-PEG hydrogels according to the practical needs of different applications [107]. PCL-PEG-PCL block copolymers have been also utilized for protein delivery.In fact, the release behaviors of two model proteins, including bovine serum albumin (BSA) and horseradish peroxidase (HRP), from a gel-forming controlled drug delivery system based on thermosensitive PCL-PEG-PCL polymers were studied in detail.Released HRP was confirmed to preserve its biological activity by specific enzymatic activity assay, in fact gel formation and degradation studies indicated that PCL-PEG-PCL copolymers hydrogels could give a sustained release for at least 45 days by subcutaneous injection.So the great thermosensitivity and biodegradability of these copolymers make them promising gel forming controlled drug delivery system for therapeutic proteins [107]. Biocompatible PEG-PCL micelles has been studied to deliver silicon phthalocyanine Pc 4; a second-generation photosensitizer currently approved by the FDA.This molecule showed promising properties for photodynamic therapy (PDT) in several cancer cells and model tumor systems.Because of its high hydrophobicity, its formulation for delivery and favorable biodistribution is quite difficult.Recently Master et al. demonstrated the efficient encapsulation of Pc 4 in these micelles, their intracellular uptake, and significant cytotoxic effect of the formulation upon photoirradiation.Quantitative estimation of the extent of Pc 4 loading on micelles, and the photocytotoxicity of the micelle-incorporated Pc 4 demonstrated that this system could be used to develop a biocompatible nanomedicine platform for tumor-targeted delivery of Pc 4 for site-selective PDT [108]. 
Polyether-Modified Poly(acrylic acid)

Poly(acrylic acid) (PAA) copolymers modified with block copolymers of poly(ethylene oxide)-poly(propylene oxide)-poly(ethylene oxide), PEO-PPO-PEO (Poloxamers) (Figure 6), find various medicinal applications, including topical and systemic administration, as their components are considered pharmaceutically safe. These copolymers have a unique graft-comb structure (Figure 6) whereby polyether chains (primarily PPO segments with tertiary carbons) are bonded to PAA via C-C bonds [109]. The conformation of PAA in aqueous solution depends on pH, ionic strength, and the presence of multivalent ions, while PEO/PPO chains can undergo a transition from a homogeneous solution of random coils to a separated microphase structure depending on temperature and salt concentration [110]. The properties of aqueous solutions of graft- or block-copolymers of PAA and polyethers are therefore sensitive to both ionic strength and temperature, giving rise to a variety of phase-separated (self-assembled) structures, including micellar aggregates, in the polyether-PAA solutions [110,111]. In fact, attachment of PPO groups onto a polyelectrolyte adds temperature sensitivity to an already pH-sensitive polymer, thus creating a dually responsive material.

Poloxamer-PAA copolymers [112] are characterized by high molecular weights and high sensitivity to temperature. In the semidilute regime (i.e., the 0.01-1 w/v% concentration range [113]), Poloxamer-PAA aqueous solutions form reversible gels with significant elastic moduli [114] due to the formation of micelle-like aggregates above a well-defined CMT [114]. In the 0.5-3 wt% concentration range, PAA-g-Poloxamer and Poloxamer-g-PAA form clear gels when the temperature is increased from 4 to 37 °C at pH 7.4 [115,116]. Gelation of these copolymers is due to an entropically driven self-association of PPO groups into micelles that provide physical cross-linking points [117]. In addition, the critical gel concentration is considerably lower than that of the parent Poloxamer. An increase in polymer concentration results in a lower transition temperature and a higher gel strength; thus, polymer concentration, pH, and salts all influence the gelation properties [118].

At increasing pH values, the onset of gelation is shifted to lower temperatures and the gel strength increases. These results reflect the fact that gelation is driven by the formation of micelles that act as thermoreversible cross-links (Figure 7). The onset of gelation in Poloxamer-g-PAA solutions occurs approximately at the critical micellization temperature of the corresponding Poloxamer [116]. A negative change in heat capacity upon gelation suggests a decrease in the exposure of PPO segments to water [114].

Synthesis and Characterization

Synthesis of the graft copolymer PAA-g-Poloxamer was reported by Hoffman and co-workers [115]. Poloxamer was activated by derivatization with 4-nitrophenyl formate in the presence of triethylamine. After purification, the intermediate was reacted with diaminoethylene to yield an amino-terminated Poloxamer, which was conjugated to poly(acrylic acid) via an amide bond using dicyclohexylcarbodiimide (DCC) as a coupling agent. The proposed synthetic route involved a three-step procedure with several intermediate purification steps, making the process potentially problematic on an industrial scale. In fact, applying a Poloxamer derivative aminated on both ends would lead to permanent crosslinking of the poly(acrylic acid) chains with Poloxamer segments by urethane bonding.
In this procedure, the acrylic acid in a 125 mL flask was partially neutralized by the addition of 5 M NaOH solution as needed; the typical degree of neutralization of the acrylic acid was 6 mol%. Upon redissolution of the precipitate, Poloxamer was charged into the flask and allowed to dissolve completely in the acrylic acid under constant agitation. A 500 mL multinecked, thermostated, flanged glass reactor equipped with a mechanical stirrer, syringe sampler, thermometer, programmable heater bath, and a gas inlet/outlet [119] was charged with 400 mL of Ganex solution in dodecane and deoxygenated overnight by nitrogen flow while stirring.

The initiator system, a solution of a peroxide and/or azo compound in a small amount of acrylic acid, was added to the Poloxamer solution in acrylic acid under stirring. The resulting solution was deoxygenated by nitrogen flow for 1 h and introduced into the reactor under a nitrogen blanket while stirring. The reactor was equilibrated for 1 h while stirring at 20 °C under a nitrogen purge introduced from the bottom of the reactor. Then, at t = 0, heating began and timing commenced. The reactor was heated to 70 °C at the desired rate (typically 1-2 °C/min) under constant nitrogen flow. At a certain temperature the exothermic reaction began, resulting in a rapid temperature rise inside the reactor. Once the heat release subsided, the reactor cooled back to 70 °C and was kept at this temperature for 8-10 h under stirring. The reactor was then allowed to equilibrate at 20 °C, the nitrogen flow was discontinued, and the slurry of the resulting polymer was filtered off using Whatman filter paper (retention 10 μm). The polymer was repeatedly washed with excess heptane and then with excess hexane in separation funnels. The resultant white powder was dried under vacuum (10⁻³ Torr) at 40 °C for 24 h.

To monitor the kinetics of polymerization, 0.5 mL samples were withdrawn intermittently from the centre of the reactor. An effective degree of conversion of acrylic acid (AA) into the polymer [F, % = 100 × (concentration of reacted AA)/(initial concentration of AA in the reaction mixture)] was determined by measuring the amount of acrylic acid monomer left in the reactor during the course of the polymerization.

In order to characterize the graft copolymerization of Pluronic and poly(acrylic acid), the effective degree of bonding between Pluronic and poly(acrylic acid) was measured [112]. A sample withdrawn from the reactor was filtered using Whatman filter paper (retention 10 μm), dried under vacuum, placed into a Soxhlet extractor charged with 100 mL of dichloromethane, and kept under reflux for 24 h. Fresh dichloromethane was then added to the extractor and the operation was repeated for another 24 h. The extracts were collected and evaporated under vacuum. The residual solids were weighed, redissolved in THF, and analyzed for Pluronic content using HPLC and FT-IR; the HPLC analysis used THF at a 1.0 mL/min flow rate with poly(ethylene glycol) and poly(acrylic acid) as molecular weight standards. The effective degree of bonding was expressed as B% = 100 × (concentration of Pluronic bonded)/(initial concentration of Pluronic in the reaction mixture).
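As a minimal illustration of how these two quality metrics can be tracked during a run, the short Python helper below computes F% and B% from the measured residual monomer and extractable Pluronic. The numeric inputs are hypothetical, and the calculation assumes that the Soxhlet-extractable fraction represents unbound Pluronic.

```python
def degree_of_conversion(initial_aa, residual_aa):
    """F% = 100 * (reacted AA) / (initial AA), from the monomer left in the reactor."""
    return 100.0 * (initial_aa - residual_aa) / initial_aa

def degree_of_bonding(initial_pluronic, extracted_pluronic):
    """B% = 100 * (bonded Pluronic) / (initial Pluronic); assumes the
    dichloromethane-extractable fraction is the unbound Pluronic."""
    return 100.0 * (initial_pluronic - extracted_pluronic) / initial_pluronic

# Hypothetical numbers purely for illustration (mol/L for AA, grams for Pluronic).
print(degree_of_conversion(initial_aa=2.0, residual_aa=0.12))            # -> 94.0
print(degree_of_bonding(initial_pluronic=10.0, extracted_pluronic=1.8))  # -> 82.0
```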
The weight fraction of macroscopic gel particles was measured [116] at 15 °C by filtering a 1 w/v% Pluronic-PAA suspension through weighed Acrodisc nylon filters with pore diameters of 0.8 μm. The parameter G, % = 100 × (weight of filtered fraction)/(weight of initial suspension) was measured for each suspension. The weight-average molecular mass of Pluronic-PAA samples typically exceeded 5 × 10⁵ Da, and the polydispersity of the fractions subjected to SEC varied from 2.3 to 6.7.

Another important characterization step is the study of micelle formation. This phenomenon has been analyzed using different techniques such as light scattering [116], size-exclusion chromatography [116], spin probe techniques [119], and DSC [116]. Temperature-sensitive chemical shifts of the methyl groups in ¹³C and ¹H NMR spectra provided direct evidence that aggregation of the PPO blocks is a cause of micelle appearance [120]. Furthermore, SANS studies [121] demonstrated that the micelle-like aggregates formed in Pluronic-PAA solutions above the CMT are unusually uniformly distributed. Fluorescence analysis using pyrene I1/I3 emission intensity ratios and DPH fluorescence intensity and depolarization patterns showed a distinct difference in the environment to which the probes were exposed, depending on whether the polymer solutions were in the liquid or gelled state. These domains were assumed to be created by the process of grafting the poly(acrylic acid) onto the triblock copolymers [117,122]. Together with the fluorescence studies, ESR spectroscopy has been utilized by Bromberg and Barr [122] to monitor aggregation phenomena in aqueous solutions of these copolymers. The 12-doxylstearic acid spin probe was used to monitor changes in the anisotropic ESR spectra with temperature, providing spectroscopic evidence for the presence of a constrained shell of hydrophobic (PPO) segments near the ionizable (PAA) segments. Surface tension (Wilhelmy plate method) and rheological measurements (viscosity and dynamic moduli changes) under temperature control were also used to monitor the onset of aggregation and gelation in Pluronic-PAA [112,113].

Pharmaceutical Applications

As both PAA and Poloxamers are considered safe and have been approved by the U.S. Food and Drug Administration as food additives and pharmaceutical ingredients, copolymers based on PAA bonded with Poloxamers could be used advantageously in biomedical applications [123]. Furthermore, animal toxicological studies showed the non-toxic nature of this copolymer and that these molecules were excreted when administered orally and were not absorbed into the systemic circulation [123]. This poly(oxyethylene-b-oxypropylene-b-oxyethylene)-g-poly(acrylic acid) (trade name Smart Hydrogel) [123] formed clear gels at a critical gel concentration considerably lower than that of the parent Poloxamer. Moreover, the bioadhesive properties of this system [109] make it interesting for a wide variety of applications: when injected or sprayed as a liquid onto mucosal surfaces it quickly gels, and the gelation gives rise to controlled release formulations characterized by lower rates of diffusion and erosion of both the polymer and the associated drug, thus enhancing drug retention and bioavailability.
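Several of the probe-based techniques listed above locate the CMT as the temperature at which the probe starts sensing the newly formed hydrophobic micellar cores. The sketch below is a hedged Python illustration of that idea for a pyrene I1/I3 curve: the ratio values are invented for demonstration, and the CMT is simply taken as the temperature of the steepest drop.

```python
import numpy as np

# Hypothetical pyrene I1/I3 ratios versus temperature (°C); real curves would come
# from fluorescence measurements such as those reported in [117,122].
temperature = np.arange(10, 41, 2.0)
i1_i3 = np.array([1.78, 1.77, 1.76, 1.74, 1.70, 1.62, 1.50, 1.38,
                  1.30, 1.26, 1.24, 1.23, 1.22, 1.22, 1.21, 1.21])

# Estimate the CMT as the temperature of the steepest decrease in I1/I3,
# i.e., where micellar (hydrophobic) domains start to solubilize the probe.
slope = np.gradient(i1_i3, temperature)
cmt = temperature[np.argmin(slope)]
print(f"Estimated CMT ≈ {cmt:.1f} °C")
```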
Hydrogels of Poloxamer-g-PAA have been proposed for vaginal [124] and topical [125] drug delivery. The pharmacokinetic profile of estradiol after vaginal delivery of a Poloxamer-g-PAA formulation was equivalent to that of Estrace® vaginal cream, which contains five times more drug. This hydrogel was also studied for the treatment of gastro-oesophageal reflux by measuring its adherence to the esophageal mucosa [126], revealing that 15% of the administered dose displayed prolonged retention in the esophagus. Furthermore, it was shown that Smart Hydrogel™ could sustain the release of luteinizing hormone-releasing hormone (LHRH) and human insulin [126]. At a polymer concentration of 5%, the release kinetics of LHRH followed a zero-order rate.

The swollen microgel particles obtained form viscoelastic, crowded suspensions in water, similar to Carbopol systems, and their swelling behavior in water depends on pH and temperature. This system offers several advantages. First of all, it undergoes large volume transitions in response to changing pH, which is useful in gastrointestinal applications. Since the pH in the stomach is in the 1-2 range, the microgels remain collapsed there, preventing undesirable drug release in the stomach; the pH in the intestine is 6.2-7.4, leading to a highly swollen state of the microgels and allowing the drug to be released easily from the particles. In addition, micelles incorporated within the gel particles are capable of solubilizing hydrophobic molecules, including steroid hormones such as megestrol acetate and estradiol, as well as paclitaxel and camptothecin. On the other hand, the charged PAA chains of the Pluronic-PAA network bind significant quantities (up to and above stoichiometric [drug]/[COO⁻] ratios) of basic molecules such as doxorubicin and can be loaded with anticancer agents [127,128]. Moreover, the incorporation of different Poloxamer copolymers into the microgels strongly influenced the structure, swelling, and aggregation behavior of these gel microparticles [127].

As already mentioned, these copolymers are characterized by a certain degree of mucoadhesiveness. The mucoadhesive properties were tested by Bromberg and co-workers [109] by measuring the fracture strength and work of adhesion between gels and rat intestine, correlated with viscoelastic characteristics of the gels such as the pseudoequilibrium modulus obtained in creep recovery tests and the loss angle measured at fixed oscillatory stress and frequency. Owing to this strong mucoadhesive tendency, formulations based on Pluronic-PAA copolymers exhibit enhanced retention in the esophagus and intestines. In a study on human volunteers, a Pluronic (F127)-PAA copolymer solution gelled and formed a protective layer over the esophageal mucosa when administered orally, adhering efficiently to the mucosa of the human esophagus and increasing the retention of the micellar gel on the mucosa [129].

Furthermore, the fact that these micelles efficiently solubilize hydrophobic drugs, such as paclitaxel and steroids, and protect molecules such as camptothecins from hydrolytic reactions makes them good candidates for oral chemotherapy. Bromberg and co-workers carried out several studies regarding the oral administration of doxorubicin, paclitaxel, and megestrol acetate using both cross-linked and uncross-linked Poloxamer-PAA copolymers [130,131].
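The pH-switched swelling described above ultimately traces back to the degree of ionization of the PAA carboxyls. The sketch below is a minimal Python illustration assuming a nominal apparent pKa of about 4.5 for PAA; it is only meant to show why the microgels stay collapsed at gastric pH and swell at intestinal pH.

```python
def ionized_fraction(ph, pka=4.5):
    """Henderson-Hasselbalch estimate of the fraction of deprotonated -COOH groups."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for label, ph in [("stomach", 1.5), ("intestine", 6.8)]:
    alpha = ionized_fraction(ph)
    print(f"{label}: pH {ph} -> ~{alpha:.1%} of carboxyls ionized")
# stomach:   ~0.1% ionized -> negligible charge, microgel stays collapsed
# intestine: ~99.5% ionized -> electrostatic repulsion swells the network
```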
The high surface activity of Poloxamer-PAA copolymers in water results in interactions with cell membranes and suppression of membrane pumps such as P-glycoprotein. Experiments demonstrated that lightly cross-linked copolymers of PAA and Poloxamer L61, L92, and F127 loaded with the antineoplastic drug doxorubicin inhibited P-glycoprotein (P-gp)-mediated doxorubicin efflux from the cells and enhanced passive influx, which increased the overall net cellular absorption of doxorubicin. In the same way, micellar formulations of Poloxamer P85 and PAA copolymers appeared to be efficient excipients for oral delivery of paclitaxel in a rodent model. Compared with i.v. delivery, where no considerable change in peak plasma levels (Cmax) was observed when paclitaxel was delivered with cyclosporin A, oral co-administration of paclitaxel and cyclosporin resulted in an 8-fold increase in Cmax, a 15-fold increase in AUC, and a 10-fold enhancement of the oral availability of paclitaxel. This marked improvement of the pharmacokinetic parameters upon oral administration was due to the decreased elimination of paclitaxel through inhibition of intestinal P-glycoprotein by cyclosporin A, and possibly to cyclosporin/paclitaxel metabolic competition as substrates for the cytochrome P450 3A4 isozymes, which may reduce the metabolic elimination of paclitaxel in the gut and the liver [132].

The ionizable carboxyls in the micellar corona facilitate mucoadhesion, which enhances the residence time of the micelles and of the solubilized drugs in the gastrointestinal tract. Pluronic-PAA micelles were also loaded with weakly basic, water-soluble drugs such as doxorubicin and its analogs, mitomycin C, mitoxantrone, fluorouracil, and cyclophosphamide through electrostatic interactions with the micellar corona [133].

In addition, Poloxamer-PAA copolymers have been studied to enhance the aqueous solubility and stability of the lactone form of camptothecin (CPT). The equilibrium solubility, partitioning, and hydrolysis of the lactone form of CPT in the presence of Poloxamer-PAA in water and in human serum were studied. CPT solubility in the polymer micellar solutions was ca. 3- to 4-fold higher than that in water at pH 5. The amount of CPT solubilized per PPO unit was considerably greater in Poloxamer-PAA solutions than in the parent Poloxamer solutions, which suggests that the drug is solubilized not only by the hydrophobic cores but also by the hydrophilic POE-PAA shells of the micelles. Furthermore, drug loading in the Poloxamer-PAA micelles notably hinders the hydrolytic opening of the lactone ring in both alkaline water and human serum [134].

Thus, the pH- and temperature-sensitivity and the ability to solubilize and stabilize hydrophobic steroids, anti-cancer drugs, and proteins, together with the mucoadhesive properties, have made Poloxamer-PAA gels a feasible vehicle for oral, topical [127], and parenteral drug delivery [135].
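The fold-changes quoted above (Cmax, AUC, oral availability) are standard non-compartmental metrics. The snippet below is a small, self-contained illustration of how they are obtained from a plasma concentration-time profile; the profiles themselves are hypothetical and are not taken from [132].

```python
import numpy as np

def cmax_auc(times_h, conc_ng_ml):
    """Non-compartmental Cmax and AUC(0-t) by the linear trapezoidal rule."""
    return float(np.max(conc_ng_ml)), float(np.trapz(conc_ng_ml, times_h))

# Hypothetical plasma profiles (ng/mL) purely to illustrate a fold-change comparison.
t = np.array([0, 1, 2, 4, 8, 12, 24], dtype=float)
oral_alone = np.array([0, 4, 6, 5, 3, 2, 1], dtype=float)
oral_with_csa = np.array([0, 30, 48, 42, 28, 18, 8], dtype=float)

for label, profile in [("oral paclitaxel", oral_alone),
                       ("oral paclitaxel + cyclosporin A", oral_with_csa)]:
    cmax, auc = cmax_auc(t, profile)
    print(f"{label}: Cmax = {cmax:.0f} ng/mL, AUC(0-24h) = {auc:.0f} ng*h/mL")
```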
Conclusion

In the recent past, an impressive number of novel, self-assembling, thermosensitive, gel-forming copolymers have been studied. Aqueous solutions of these block copolymers, characterized by a reversible sol-gel transition, allow drugs to be delivered in the liquid state and to form an in-situ gel depot at body temperature, providing control over drug release. Another important feature is the amphiphilic character of the micelles, which makes these delivery systems capable of solubilizing water-insoluble or poorly soluble drugs and of protecting labile molecules such as proteins and peptides. The most studied self-assembling copolymers are the Poloxamers, even though they have not met initial expectations as pharmaceutical and biomedical implants, mainly because of their non-biodegradability and their inability to provide very long sustained drug delivery. On the other hand, PEO/PLGA and PEG-polycaprolactone hydrogels represent very attractive systems for pharmaceutical applications. Their biodegradability and good safety profile make these new polymers very promising as delivery systems able to control drug release over weeks or months by means of parenteral extravascular administration.

Figure 4. Scheme of the synthesis pathway of the PEG-PCL multiblock copolymer according to reference [96].
A Compact Transformer-Based Fractional-N ADPLL in 10-nm FinFET CMOS

In this article, we introduce a fractional-N all-digital phase-locked loop (ADPLL) architecture based on a single LC-tank, featuring an ultra-wide tuning range (TR) and optimized for ultra-low area in 10-nm FinFET CMOS. Underpinned by excellent switches in the FinFET technology, a high turn-on/off capacitance ratio of LC-tank switched capacitors, in addition to an adjustable magnetic coupling technique, yields almost an octave TR from 10.8 to 19.3 GHz. A new method to compensate for the tracking-bank resolution can maintain its quantization noise level over this wide TR. A new scheme is adopted to overcome the metastability resolution problem in fractional-N ADPLL operation. A low-complexity TDC gain estimator reduces the digital core area by progressive averaging and time-division multiplexing. Among the published fractional-N PLLs with an area smaller than 0.1 mm², this work achieves an rms jitter of 725 fs in an internal fractional-N mode of the ADPLL's phase detector (2.7-4.825 GHz), yielding the best overall jitter figure-of-merit (FOM) of −232 dB. This topology features small area (0.034 mm²), wide TR (56.5%), and good supply noise rejection (1.8%/V), resulting in FOMs with normalized TR (FOMT) of −247 dB, and normalized TR and area (FOMTA) of −262 dB.

High performance computing (HPC) is growing rapidly due to the recent emergence of artificial intelligence (AI) [3]. The wireline and clock generation applications simultaneously require low jitter, small area, wide tuning range, and supply noise rejection. There are two general types of oscillators: ring-based and conventional LC tanks, as indicated in Table I. An inverter-based ring oscillator (RO) is the most common solution for wireline communications due to its wide tuning range and small area. However, it exhibits poor jitter and supply noise rejection, especially in advanced CMOS [4], [5]. A recent trend of injection locking an RO to a reference clock requires the clocking source and its distribution to be of very high purity [6], which is not readily available in a highly integrated SoC environment. An LC-tank oscillator could solve these two drawbacks, but it suffers from a narrow tuning range and a large area due to the LC-tank inductor [7], [8]. In state-of-the-art ROs, a high current is needed to achieve reasonable phase noise. In [4], the phase noise is enhanced by increasing the current drawn by the RO: three identical ROs are connected in parallel to achieve the best phase noise, and for less demanding applications two of the three cores could be shut down to reduce the power consumption 3× while degrading the phase noise by 5 dB. An RO-based phase-locked loop (PLL) typically needs a low drop-out (LDO) regulator to address its poor power supply rejection (PSR) of noise and ripple [9], [10]; the supply sensitivity is usually several MHz per mV [14]. However, an LDO with a high PSR over a wide bandwidth is extremely challenging in advanced CMOS due to the decreasing dynamic resistance r_out and the increasing parasitic capacitance of (long channel) devices. On the other hand, the major issue of LC-PLLs is a narrow tuning range (e.g., on the order of 15%). To increase the tuning range, multiple oscillator cores are usually employed [7], [11], [12]. However, the resulting large area is usually cost prohibitive.
All-digital PLLs (ADPLLs) are widely used in advanced CMOS, where they exploit the naturally fine conversion resolution of time-to-digital converters (TDC) and digitally controlled oscillators (DCO), thus further reducing the area and power dissipation compared to analog PLLs [4]-[6], [8], [12]-[18]. FinFET technology provides an especially good solution due to its faster-switching transistors with lower propagation delay and lower power consumption, all in a much smaller area than in planar technology [19]. In this article, we present an ADPLL fabricated in 10-nm FinFET technology to achieve good supply noise rejection, wide tuning range, and reasonable jitter performance within a very limited area for clock generation and wireline communication SoCs [20]. A block diagram of the proposed ADPLL is shown in Fig. 1. The key innovation is a compact transformer-based DCO described in Section III. Four further innovations are introduced: 1) an adjustable magnetic coupling technique to extend the DCO tuning range to nearly an octave (Section III-B); 2) a compensation of the tracking bank gain for a reduced quantization noise floor (Section III-D); 3) a new metastability clocking scheme in the TDC (Section IV); and 4) a low-complexity K_TDC estimator to minimize the digital area (Section V).

II. ADPLL ARCHITECTURE

At the heart of this ADPLL lies the differential LC-tank DCO, as shown in Fig. 1. To save the power consumed by the clock distribution network, a D2S block transforms the differential oscillator output waveform into a single-ended clock, which goes to the main output as FOUT at f_out ≈ 10-20 GHz. FOUT is further divided by two stages of ÷2 dividers going to the secondary output, FOUT/4, and is also fed back to the TDC-based phase detector through another ÷2 divider. In this wide-tuning-range RF synthesizer, the divider selection is quite important due to the trade-off between consumed power and the capability of wide operational frequency. An injection-locked divider, widely adopted in mm-wave/RF wireless applications, is not considered here due to its narrow tuning range and large area. Most designers use a D-flip-flop (DFF) based divider, a TSPC divider, or a CML divider in wireline applications for wide-tuning-range operation. A current-mode logic (CML) divider can handle a very high input frequency due to its fast response time; however, since a static current is required, its power consumption is quite large. A true single-phase clock (TSPC) divider is an alternative solution for medium frequencies due to its requirement of full swing. In the first two ÷2 divider stages after the ∼10-20 GHz DCO, the operational frequency band is >5 GHz, so TSPC appears suitable. For the third-stage divider feeding the DCO modulator, the TSPC and static dividers could both meet the specification, so the static DFF divider is chosen to save power. The 2nd-order ΔΣ modulator provides dithering to the DCO to minimize the quantization noise, with a sampling rate of f_out/8. A digital-to-time converter (DTC) is inserted before the TDC to dither its FREF input in order to reduce the reference spurs and improve the in-band noise [16]. To achieve a compact implementation of a short dither sequence generator, a linear feedback shift register (LFSR) is used for DTC control.
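The dither sequence mentioned above only needs to be short and cheap to generate, which is exactly what an LFSR provides. The following Python sketch is a generic Fibonacci LFSR for illustration; the register length and tap positions are assumptions, since the article does not specify the actual polynomial used for DTC control.

```python
def lfsr_sequence(seed, taps, length):
    """Generate a pseudo-random bit sequence with a Fibonacci LFSR.
    `taps` are 1-based register positions XORed to form the feedback bit."""
    state = seed
    nbits = max(taps)
    out = []
    for _ in range(length):
        out.append(state & 1)                        # LSB is the output bit
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1             # XOR the tapped bits
        state = (state >> 1) | (fb << (nbits - 1))   # shift and insert feedback
    return out

# Example: a 7-bit maximal-length LFSR (taps 7 and 6) repeats every 127 cycles.
print(lfsr_sequence(seed=0b1010011, taps=[7, 6], length=20))
```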
A. Transformer-Based Oscillator

As stated above, the RO would burn excessive power just to produce a relatively mediocre level of jitter performance. Furthermore, an RO-based PLL usually requires a wide-bandwidth, high-PSR LDO to decrease its high sensitivity to noise and perturbations on the power supply lines coupling from the rest of the SoC, especially from switching digital circuitry. As an alternative, an LC-tank oscillator can easily improve the Q-factor by 5-20×, resulting in much better jitter performance. Regarding immunity to power supply noise, the LC-tank oscillator usually features 100× better frequency pushing than the RO-based oscillator because its resonant frequency is established by the product of the inductor (L) and capacitor (C) values only. If the single-coil inductor is replaced with a multi-coil inductor (i.e., a transformer), a larger loop gain can be achieved, bringing a more reliable start-up condition at low power consumption [21].

As mentioned in Section I, the LC-based oscillator usually occupies an excessive area due to the required high-Q inductor, which needs to be physically large. Since the inductor needs to be minimized here, its Q-factor will inevitably be degraded [8]. Hence the need for a transformer with a passive voltage gain, which also helps with oscillation start-up in low-Q conditions, as shown in Fig. 2. In Fig. 3, if we put the primary coil on the drain side and the secondary coil on the gate side, the voltage loop gain is enhanced by the product of the coupling coefficient k_m and the turns ratio N, as in (1), where G_m and r_out are the large-signal transconductance and output impedance of the cross-coupled MOS pair, respectively, and Z_in, given in (2), is the input impedance of the transformer seen from the primary side; the components L_P (L_S) and C_P (C_S) represent the inductor and capacitor on the primary (secondary) side. In this topology, we set k_m and N to 0.707 and 2, respectively. Consequently, the voltage-gain enhancement factor is 141%. The extra 41% in the enhanced voltage gain helps the oscillation start up in the low-Q condition.

We minimize the inductor area by using a compact multi-turn stacked transformer [8]. The 3D view of the layout is shown in Fig. 4. The transformer is designed as a stacked topology in two thick metal layers utilizing interconnecting vias; no ultra-thick metal is needed. The primary inductor L_p has two turns, drawn as the red winding at the top and the pink winding at the bottom; it is connected to the drain side and the supply voltage (VDD). The secondary inductor L_s has four turns, marked as the blue winding at the top and the light-blue winding at the bottom; it is connected to the gate side and the bias voltage (V_B). There are floating dummy metals between the bottom winding and the substrate. The spacing between the primary and secondary windings is optimized for a coupling coefficient k_m of 0.707, suited to the class-F operation [21]. The dotted lines represent vias connecting the upper and lower metal layers. The transformer has a turns ratio of 1:2, and its size is only 42 × 42 μm². The two metal layers shown in Fig. 4 are the uppermost thick layers. To simplify the transformer arrangement and for easy design reuse, the AP layer, which is widely used for high-Q inductor/transformer designs, is not adopted here. Compared with a conventional LC-tank oscillator, the Q-factor is ∼4× smaller, but the area is also ∼4× smaller because the area is roughly proportional to the Q-factor.
B. Magnetic Coupling Technique for Wide Tuning Range

Having addressed the area compactness of the LC tank, the next challenge is to extend the narrow tuning range inherent to LC-tank oscillators. The tuning range (f_max : f_min) can be expressed as

f_max / f_min = sqrt(C_max / C_min).   (3)

An octave 2:1 tuning range therefore requires the ratio of C_max to C_min to be greater than 4 [25]. The switched-capacitor (sw-cap) network is typically built with an array of metal-oxide-metal (MOM) capacitors, each connected in series with a MOS switch [17], [26]. C_max results when all switches are turned on. Likewise, C_min is reached when all switches are turned off and is established by the source/drain capacitances of the switch transistors and the parasitics of the interconnect wires [27]. The switch transistors must be large enough to ensure a high Q-factor of the sw-cap at low frequencies, but not so large as to prevent C_min from going low enough to reach high frequencies. Due to this conflicting requirement on the switch size, it is hard to enlarge the C_max/C_min ratio of the sw-cap. Practically, C_max/C_min can at best be about 3 in advanced technology, which results in a tuning range of 1.7. Consequently, we must resort to an alternative, such as magnetic tuning.

Figure 5 shows the concept. The secondary coil of inductance L_2, coupled to the primary coil L_1 (here representing the transformer's equivalent winding [21]) with a coupling coefficient k_m, is loaded by a variable resistor R. The impedance seen from the primary side is

Z_eq(s) = s L_1 [1 − k_m² s L_2 / (s L_2 + R)].   (4)

By inspection, if k_m is 0, the equivalent inductance (L_eq = Z_eq/s) naturally falls back to L_1. The same happens if R becomes very large. If R is close to 0, L_eq approaches L_1(1 − k_m²). For all other cases, the equivalent impedance shows both real (resistive) and imaginary (inductive) components. This has been studied in [31], [32] for mm-wave oscillators. Our goal is to increase k_m to minimize the equivalent inductance when R → 0, such that the ratio between the two inductance extremes is maximized, as demonstrated later in Fig. 9(a). This allows the maximum resonant frequency to be extended to a much higher value.

Figure 6 shows the layout of the transformers employing the magnetic coupling technique. The red and blue traces represent the original two-winding transformer, as shown earlier in the 3D layout view of Figure 4. The green trace represents the magnetic coupling coil, with the switch lying on the South side of the green turn. Once the switch turns on, the opposing magnetic field tries to cancel the main one according to Lenz's law, thus decreasing the equivalent inductance seen by the transformer and increasing its resonant frequency. With N = 1, the resonant frequency is enhanced by 15%, as shown in Figure 7, but this is still not high enough: an extra 20% enhancement is needed to reach the octave tuning range. Hence, we increase the number of magnetic coupling turns to enhance the coupling coefficient. However, the resonant frequency enhancement saturates after N = 2 because of parasitic capacitance; more turns simply result in a larger parasitic capacitance, which counteracts the opposing magnetic field and degrades the frequency enhancement. We have therefore chosen N = 2 to obtain the largest tuning range extension of 20.6%. Although the main task of the magnetic coupling winding is to produce an opposing magnetic field for the cancellation, the resistance in this coil is also seen at the transformer's primary via the magnetic coupling.
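To make the effect of the loading resistance concrete, the short Python sketch below evaluates the effective primary inductance using the Z_eq expression reconstructed above and the resulting tank resonant frequency. All component values are hypothetical placeholders, not the actual design values, and the resistive (loss) part of Z_eq is ignored.

```python
import numpy as np

def l_eq(l1, l2, km, r, freq):
    """Effective primary inductance when the secondary L2 is loaded by R,
    evaluated at a given frequency; only the inductive part of Z_eq is kept."""
    s = 1j * 2 * np.pi * freq
    z_eq = s * l1 * (1 - km**2 * s * l2 / (s * l2 + r))
    return z_eq.imag / (2 * np.pi * freq)

L1, L2, KM = 300e-12, 300e-12, 0.7      # hypothetical inductances (H) and coupling
C_TANK = 350e-15                         # hypothetical tank capacitance (F)

for r in [1e6, 100, 10, 0.1]:            # from switch "off" down to a hard "on"
    leq = l_eq(L1, L2, KM, r, 15e9)
    f0 = 1 / (2 * np.pi * np.sqrt(leq * C_TANK))
    print(f"R = {r:>9.1f} ohm: Leq = {leq*1e12:6.1f} pH, f0 = {f0/1e9:5.2f} GHz")
```

The printout illustrates the trend discussed in the text: as R drops, L_eq saturates toward L_1(1 − k_m²) and the resonant frequency is pushed upward.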
The width of the magnetic coupling coil needs to be traded off between the resonant frequency boost and the Q-factor degradation. In this design, the width is set to half that of the original transformer winding to help with the inter-winding spacing. The strong magnetic coupling technique inevitably degrades the transformer's Q-factor. As expressed in (5), which assumes that series resistance losses are dominant,

Q = ωL / r_s,   (5)

so if the inductance L is reduced by Lenz's law while the effective series resistance r_s is increased by the turn-on resistance of the MOS switches, the Q-factor can degrade heavily. As shown in Figure 8, as the resonant frequency goes up, the Q-factor increases in the original turn-off state of the extra magnetic coupling. Then, at the 16 GHz switch-over point, the Q-factor goes down when the magnetic coupling turns on. Fully turning on the switch would induce a worse quality factor than softly turning it on: the Q-factor drops by 33%, from 5.3 to 3.5, when fully turned on, which could even break the oscillation. Hence, we add a softer mid turn-on state (engaging M_M0 in Fig. 9(b)) to provide a medium inductance and a lighter quality-factor degradation of only 20%. The Q-factor then increases again as the frequency rises from 16 to 18 GHz. This way, the quality factor always stays above the lower bound (dotted segment), so the oscillator start-up condition is safe across the entire tuning range. The relationship between R and L_eq is shown in Fig. 9(a). The effective inductance saturates at its lower bound when the resistance is smaller than 10 Ω. Thus, we can set different switch sizes for the magnetic-coupling tuning: a middle turn-on state is set by the M_M0 switch, which has a 25× smaller W/L ratio than M_M1, as shown in Fig. 9(b). Since the 10 Ω value requires a large MOS switch, the metal routing is done in such a way that it prevents horizontal and vertical coupling of the source and drain sides of the MOS switch.

C. Oscillator Design

Figure 10 shows a complete schematic of the DCO and its buffers. The primary tank is drawn in blue; it is connected to the coarse and fine tuning capacitors. The secondary tank is in red. The turns ratio is set to 1:2 for class-F operation. The PVT and BAND banks contain the coarse tuning capacitors, and the TRACK bank contains the fine tuning capacitors. The magnetic coupling is in green, with two switchable MOS transistors. M1-M2 comprise the cross-coupled G_m device providing the negative resistance to start up and sustain the oscillation. M3-M6 comprise the NMOS-only buffer with dc-coupling [8], shifting the dc level from V_DD to half of V_DD. In the conventional AC-coupling technique, the dc-blocking capacitor would occupy a large area and the resistor would inject its noise back into the tank; a dc-coupled buffer prevents these issues. However, the DCO output sits at a dc level of V_B, at which it is difficult to design an effective dc buffer, and the NMOS-only buffer solves this problem. M5-M6 have a 4× larger W/L ratio than M3-M4 in order to maintain the duty cycle and DC level of the output signal. M7-M14 feature the same W/L ratio for the PMOS and NMOS transistors and form a high-speed differential-to-single-ended buffer (D2S) to provide a single-ended, full-swing clock to the true single-phase clock (TSPC) divider in the following stage. Using a single-ended buffer helps to reduce the power consumption in the divider chain.
An oscillation waveform with a dc level of V_DD might cause reliability issues. Since the oscillation amplitude is proportional to the quality factor, the amplitude in this low-Q design is not as excessively large as in conventional LC-tank oscillators; we only need to ensure that the peak of the oscillation amplitude does not exceed the V_MAX of the process. Figure 11 illustrates the corresponding time-domain waveforms at each stage. V_D lies at the dc level of V_DD (0.8 V) and exhibits a square-like wave shape due to the third-harmonic tone present in this class-F oscillator [21]. V_G is a 2× larger waveform than V_D due to the transformer's 1:2 turns ratio. After the NMOS-only buffer, the BUF dc level is down to half of VDD (0.4 V). Since the oscillator buffer BUF produces a differential signal that might not be rail-to-rail, the D2S circuit transforms it into a single-ended clock with a rail-to-rail swing for the following TSPC and CMOS clock buffers.

The coarse PVT bank is a binary-weighted switched-capacitor array split between the transformer's primary and secondary to achieve the maximum Q-factor enhancement [21]. To improve the fine-tuning resolution without degrading the total tank Q-factor, the TRACK bank is connected to the primary coil to benefit from the capacitance transformation of 1/N². The PVT bank provides large steps of 140 MHz/LSB and dominates the DCO tuning range [29]. The COAR and TRACK banks have resolutions of 15 MHz/LSB and 1.2 MHz/LSB, respectively. The COAR bank is 4 bits in binary code, and the TRACK bank is 5 bits in thermometer code. A time-averaged resolution of 37.5 kHz is achieved by 5 fractional tuning bits undergoing 2nd-order dithering [29], feeding a 3-bit unit-weighted capacitor bank at the transformer's primary.

Figure 12 shows the DCO layout. There are two pseudo-differential transformers. The G_m transistors M1 and M2 lie at the center of the layout plan. The power and ground ring with the decoupling capacitors provides the AC ground for each transformer. The transformers are laid out point-symmetrically about the center. Blue and light-blue lines represent the primary winding in the differential mode; red and light-red lines represent the secondary coil. They connect the gates/drains of the MOS transistors and the power ring. This floorplan allows for magnetic field cancellation: the transformer in the top right produces a magnetic field in one direction, while the opposite direction is produced by the transformer in the bottom left. Once the magnetic field is substantially canceled, the field's interference within the SoC is also reduced. The switched-capacitor bank can occupy the remaining 50% of the area without any area being wasted. In conventional LC-tank oscillators, the differential inductors usually dominate the occupied area; here, the total DCO size is only 125 × 125 μm², so the core area is only 0.016 mm².

D. Nonlinearity of DCO Gain

Having achieved the small area and wide tuning range, we still face another drawback naturally present in wide-tuning-range LC DCOs: the step-size non-linearity. As expressed in (6), the gain K_track produced by a fixed capacitive step ΔC follows a cubic rule of the resonant frequency f = 1/(2π√(LC)) [28],

K_track = |Δf| ≈ 2π² L f³ ΔC.   (6)

In this design, if the frequency tuning ratio is 2×, the gain variation will be 8×. Because the quantization noise is proportional to the fine-tuning DCO gain K_track, we need to compensate the tracking steps at high frequencies to prevent ADPLL phase noise degradation there.
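A quick numerical check of this cubic rule is sketched below; the inductance and unit capacitance step are hypothetical values chosen only to produce MHz-range steps, not the actual design parameters.

```python
import numpy as np

def k_track(freq_hz, delta_c, l_henry):
    """Frequency step from a fixed capacitance step: |df/dC|*dC = 2*pi^2*L*f^3*dC."""
    return 2 * np.pi**2 * l_henry * freq_hz**3 * delta_c

L = 250e-12     # hypothetical tank inductance (H)
DC = 0.5e-15    # hypothetical unit tracking capacitor step (F)

for f in [10.8e9, 14e9, 19.3e9]:
    print(f"f = {f/1e9:4.1f} GHz -> K_track = {k_track(f, DC, L)/1e6:5.2f} MHz/LSB")

# The cubic dependence means a 2x frequency ratio yields an 8x gain variation,
# which is why the tracking step is switched at a threshold frequency f_th.
```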
As a solution, we use two MOM tracking capacitors stacked together and selected for different bands. In the lower band, track0 and track1 are both used; the capacitances of track0 and track1 are roughly the same. Thus, K_track can select between two non-zero values of 4.1 MHz and 8.2 MHz, a 2× ratio. With the original K_track, shown in red in Fig. 13(a), there can be a 470% variation of the DCO gain from f_min to f_max [28]. To minimize the gain variation, we set a threshold frequency f_th at which the compensation is triggered. Since the oscillation frequency is closely related to the coarse tuning bank (PVT) [29], and the PVT control code is fixed after locking, we use a PVT code threshold to judge whether the oscillation frequency is higher or lower than f_th. The circuit implementation is shown in Fig. 13(b). The comparator is triggered at every frequency reference cycle and lets the tracking bank use either track1 only or both track0 and track1. Once the PLL is locked, the PVT code is fixed and the tracking-bank compensation is completed. With this technique, the variation of K_track is reduced from 470% to 230%.

IV. METASTABILITY IN FRACTIONAL PLLS

Having solved the key issues of the wide-tuning-range DCO, the next two techniques relate to the system level. Figure 14(a) shows a block diagram of the conventional ADPLL [13], [15]. It has two independent clock domains, namely FREF and CKV, so it could experience metastability, for example, in the resampling flip-flop for the CKV counter. This is prevented by employing the red-colored D flip-flop (DFF), which aligns the clock edges of FREF to CKV, and the resulting retimed clock (CKR) is adopted by all the lower-speed digital blocks. The ADPLL then works correctly in the integer-N case, since the phase error is usually a small constant after locking, as shown at the top of Figure 14(b). In a fractional-N ADPLL, however, there might be a metastability issue in the red DFF itself, as shown in Figure 14(b). The DCO phase (i.e., the edge positions) versus FREF will vary in accordance with the FCW, so θ_n constantly changes. In the fractional-N mode, the red DFF is therefore likely to encounter a metastable timing alignment between CKV and FREF. This metastability issue can have a detrimental effect by increasing the fractional spurs.

Figure 15 shows a new metastability resolution scheme. The main idea is to use an edge selector driven by the TDC to select the safe edge of CKV (rising or falling) for the FREF sampling and thus prevent the metastability risk. In "case A" of Figure 15(a), if the FREF rising edge is close to the CKV falling edge, we use the rising edge of CKV for reclocking: R_V is selected as path A (blue dotted line) in Figure 15(b). In "case B" of Figure 15(a), the CKV falling edge is chosen for the reclocking: the edge selector sets R_V to select path B in the MUX, where CKV uses an extra DFF clocked by the inverted CKV. The edge selector judges the phase relationship from the TDC data output Qout. We monitor the first transition, 0 → 1 or 1 → 0, to judge the CKV-FREF relationship. For example, if the first transition is 0 → 1, i.e., within the first quarter of the CKV cycle, the edge selector determines that the FREF edge is too close to the CKV rising edge and uses case B to get the correct result.
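The decision rule just described can be prototyped in a few lines. The sketch below is a simplified behavioral model, not the actual RTL: it scans a snapshot of the TDC delay-line outputs for the first transition and returns which CKV edge should be used for retiming.

```python
def choose_ckv_edge(tdc_bits):
    """Pick the CKV sampling edge (rising or falling) from a raw TDC snapshot.

    tdc_bits: outputs of the TDC delay-line flops, ordered from the earliest stage.
    The polarity of the first transition indicates in which part of the CKV period
    the FREF edge landed (a simplified model of the edge selector in the text)."""
    for i in range(1, len(tdc_bits)):
        if tdc_bits[i] != tdc_bits[i - 1]:
            # A 0 -> 1 first transition means FREF is close to the CKV rising edge,
            # so the falling edge (path B) is the safe one for retiming, and vice versa.
            return "falling" if tdc_bits[i - 1] == 0 else "rising"
    return "rising"   # degenerate case: keep the default path A

print(choose_ckv_edge([0, 0, 0, 1, 1, 1, 1, 0]))   # -> "falling"
print(choose_ckv_edge([1, 1, 0, 0, 0, 0, 1, 1]))   # -> "rising"
```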
V. TDC GAIN NORMALIZATION METHOD

To have a compact ADPLL, a simplified digital design is also important. In this section, we try to minimize the digital core area of a circuit that would otherwise be of high complexity if not properly optimized. The TDC output, with a gain K_TDC = T_V / t_inv, needs to be normalized by its inverse, 1/K_TDC, where T_V and t_inv are the DCO clock period and the inverter delay (TDC resolution), respectively [13]. (In a type-II PLL, CKV is substantially aligned with FREF, but a small offset can be added to avoid metastability in the red DFF itself.) We present a low-complexity adaptive estimator of 1/K_TDC with progressive averaging and time-division multiplexing, as shown in Fig. 16.

A progressive-average (PA) calculator smooths the TDC output roughness due to the quantization noise and is preferred over a straightforward moving-average implementation for cost reasons. It can be represented as the recursive update

PA_i = PA_{i−1} + (K_TDC,i − PA_{i−1}) / n,   (7)

where K_TDC,i represents the i-th sampled datum accumulated over n samples, and the sampling clock is FREF. As shown in Fig. 16, the circuit implementation of the progressive average is quite simple: it only requires two adders and one shift register. An alternative moving-average method would need to store n data values within a certain period, which costs significant hardware, whereas the progressive-average method stores only one data value per cycle. The area cost benefit is therefore a factor of n − 1, and n is usually larger than 10.

In [22], a least-mean-squares (LMS) calibration based on the phase error is applied to estimate the reciprocal of K_TDC, but it might suffer from a non-convergence problem in the fractional-N mode, especially close to integer-N [17]. A Newton-Raphson method is proposed here to provide the reciprocal of K_TDC with a recursive equation and guaranteed convergence, taking at most 3-4 iterations even in the face of a large step input. The Newton-Raphson recursion for the reciprocal [30] is

X_{i+1} = X_i (2 − D X_i),   (8)

which recursively calculates the inverse of D = K_TDC over internal steps i. After 3-4 iterations, X_{i+1} approaches 1/D, where 0.5 ≤ D ≤ 1. To minimize the peak approximation error, the initial value X_{i=0} is chosen as [30]

X_{i=0} = 48/17 − (32/17) D.   (9)

The coefficients of this linear approximation are determined by the Chebyshev equioscillation theorem; using this approximation, the absolute error of the initial value is less than 1/17. Three multipliers would be required to obtain the reciprocal 1/K_TDC by employing (8) and (9). Multipliers incur a huge area and power penalty due to their digital complexity: in this design, one multiplier needs a 19-bit output (6b + 13b), and the high input word length makes it area/power expensive. A time-division multiplexing technique is therefore proposed to reduce the number of multipliers from 3 to 1 while obtaining the same result: MUX1 and MUX2 step through their 0, 1, and 2 input paths serially within each FREF cycle. Consequently, the digital core size is substantially minimized.

Fig. 17 shows the measurement results of the new estimator. The calculation starts as early as the PVT acquisition. The trk[3:0] bus signal controls the acquisition kick-off time in each bank; for example, the PVT bank starts the acquisition at 5 μs, as shown by the rising edge of the red line. The vertical red dotted line indicates some early settling of the 1/K_TDC signal, starting to reveal the progressive-average behavior. After 3 ms, 1/K_TDC is well settled and its curve is flat without any further changes. The value of 1/K_TDC is 82 in this case under f_R = 150 MHz and f_V = 1.5 GHz, where f_R and f_V are the reference and variable frequencies seen by the TDC.
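The reciprocal computation in (8)-(9) is easy to verify numerically. The Python sketch below follows those equations directly; scaling K_TDC = 82 into the [0.5, 1) interval by a power of two is an assumption made here for illustration, mirroring how a hardware implementation would normalize the operand.

```python
def reciprocal_newton_raphson(d, iterations=4):
    """Approximate 1/d for 0.5 <= d < 1 using the Chebyshev-optimal seed (Eq. 9)
    and the Newton-Raphson recursion X_{i+1} = X_i*(2 - d*X_i) (Eq. 8)."""
    x = 48.0 / 17.0 - (32.0 / 17.0) * d   # initial guess, max error < 1/17
    history = [x]
    for _ in range(iterations):
        x = x * (2.0 - d * x)             # quadratic convergence: error squares each step
        history.append(x)
    return x, history

# Example: K_TDC = 82 scaled into [0.5, 1) as 82/128 = 0.640625 (assumed scaling).
d = 82 / 128
approx, hist = reciprocal_newton_raphson(d)
print([abs(h - 1.0 / d) for h in hist])   # errors shrink to ~0 in 3-4 iterations
print(f"1/K_TDC ≈ {approx / 128:.6f}")    # undo the scaling: ≈ 1/82
```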
From this, the inverter delay can be calculated as t_inv = T_V / K_TDC = 8.2 ps, in which T_V = 1/f_V. There are different digitally controlled settings for the fast and slow settling modes of the 1/K_TDC estimation. In Fig. 17, the slow settling mode with accurate adaptation results is demonstrated; in the fast settling mode, the settling time can be less than 6.4 μs with ≤1.1% error. In practice, the trade-off between the accuracy and the speed of the adaptation loop is addressed by a dynamic switch-over of its loop bandwidth from wide to narrow, as typically done in ADPLLs [15].

VI. MEASUREMENTS

Figure 18 shows the chip micrograph of the ADPLL, fabricated in TSMC 10-nm FinFET CMOS. The DCO core occupies merely 0.016 mm². The clock output is on the North side of the DCO core, so the divider chain lies nearby. The buffer line on the West side passes the divided output clock to the TDC at the South-West. The TDC, with a 128-stage delay line and its metal-routing output bus, occupies 0.004 mm². The digital core occupies 0.01 mm² on the South-East side. The active ADPLL region is only 0.034 mm². The total area is smaller than that of published RO-based frequency synthesizers once their necessary LDOs are included. The DCO, divider, and buffers consume 9 mW; the TDC, modulator, and variable accumulator consume 1.6 mW; and the digital core consumes 1.3 mW. The total power consumption P_DC is 11.9 mW, and the frequency range is from 10.8 GHz to 19.3 GHz, which is almost an octave.

Fig. 19 shows the measurements of integrated jitter and spurious tones in an internal fractional-N mode of the ADPLL's phase detector (i.e., f_V/f_R in Fig. 1). In Fig. 19(a), a sub-ps phase jitter of 725 fs is achieved with f_R = 150 MHz and f_out = 12.3 GHz. The overall fractional division ratio in this case is 20.5 × 4 = 82, but because of the DCO's ÷8 divider, the TDC sees FCW = 20.5. The fractional part is 2⁻¹ in this case, which shows the lowest phase jitter among all the fractional frequency offsets in Fig. 19(b). In Fig. 19(c), the fractional-N spurs are −66 dBc outside of the loop bandwidth, and they increase going into the band with a 6 dB slope, as shown in Fig. 19(d). As the fractional-N spurs move inside the loop band, the integrated jitter consequently increases to ∼1 ps, except for the very small fractional FCW of ≤2⁻⁹, where it reaches 1.5-2 ps. Techniques to reduce such fractional spurs were presented, among others, in [16], [17]. Figure 20(a) shows the phase jitter in integer-N mode at a 12 GHz output with the 150 MHz reference; this mode reaches a smaller integrated jitter of 669 fs without the fractional spurs. Figure 20(b) shows the spectrum plot: the reference spurs reach −74 dBc at 12.15 GHz (150 MHz × 81), which is a fairly low level. Figure 21 shows the measured ADPLL loop settling behavior of the three DCO tuning banks, obtained by capturing the DCO tuning input signals into the SRAM memory during actual operation. This case corresponds to a locking frequency of 14.4 GHz with a 150 MHz FREF. The default PVT code is close to the target frequency, so no acquisition is required. The start-up time is within 5 μs and the lock time is 22 μs.

Table II shows the performance summary and a comparison with the state of the art in small-area PLLs in advanced technology. Our operational frequency is the highest among all the LC-tank oscillators. The core area is comparable with that of RO ADPLLs and as little as half of that of the analog LC PLL [7]. The phase jitter reaches sub-ps levels thanks to the transformer-based DCO. The frequency pushing of 1.8%/V is hundreds of times smaller than in RO-based frequency synthesizers without any LDO [4]. For an overall performance assessment of a PLL, the jitter (σ_t) figure-of-merit (FoM) was defined in [23] as

FoM = 20 log10(σ_t / 1 s) + 10 log10(P_DC / 1 mW).   (10)

An extension, FoM_T, normalizes it to the tuning range, TR:

FoM_T = FoM − 20 log10(TR[%] / 10).   (11)

The area cost is essential in advanced technology, and LC-tank oscillators usually require a large area due to the inductor. Consequently, FoM_TA is defined to further normalize to the occupied area:

FoM_TA = FoM_T + 10 log10(Area / 1 mm²).   (12)

The FoM, FoM_T, and FoM_TA of the proposed ADPLL are −232, −247, and −262 dB, respectively. Our best reported FoM_TA signifies achieving adequate state-of-the-art performance for the intended application, but at a near-octave tuning range and the lowest possible occupied area.
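As a quick sanity check, these figures of merit can be reproduced from the reported jitter, power, tuning range, and area. The snippet below evaluates (10) and (11) directly; the FoM_TA line assumes the area term enters as 10·log10(Area/1 mm²), which is consistent with the reported −262 dB.

```python
import math

def fom(jitter_s, power_mw):
    """Jitter FoM (Eq. 10): 20*log10(sigma_t / 1 s) + 10*log10(P_DC / 1 mW)."""
    return 20 * math.log10(jitter_s) + 10 * math.log10(power_mw)

def fom_t(jitter_s, power_mw, tr_percent):
    """FoM_T (Eq. 11): FoM normalized to the tuning range."""
    return fom(jitter_s, power_mw) - 20 * math.log10(tr_percent / 10)

print(f"FoM    = {fom(725e-15, 11.9):.1f} dB")              # ~ -232 dB
print(f"FoM_T  = {fom_t(725e-15, 11.9, 56.5):.1f} dB")       # ~ -247 dB
# Assumed area normalization for FoM_TA:
print(f"FoM_TA = {fom_t(725e-15, 11.9, 56.5) + 10*math.log10(0.034):.1f} dB")  # ~ -262 dB
```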
VII. CONCLUSION

In this article, we have proposed a new fractional-N ADPLL architecture with the following features. It achieves nearly one octave of tuning range with a single LC-tank oscillator that does not require ultra-thick metal layers and can thus be used universally in all CMOS flavors. The DCO is assisted by an adjustable magnetic coupling technique that increases the tuning range by 17.2%. The compensation of the tracking-bank resolution keeps the DCO gain K_track roughly constant over this wide tuning range, thus maintaining the quantization noise level. A new metastability resolution scheme is adopted to overcome the fractional-N problem. The low-complexity TDC gain estimator, 1/K_TDC, reduces the digital core area through progressive averaging and time-division multiplexing. Among all the fractional-N PLLs with an area smaller than 0.1 mm², this work achieves an rms jitter of 725 fs in an internal fractional-N mode of the ADPLL's phase detector (i.e., f_V/f_R). This topology, featuring small area, wide tuning range, and good supply noise rejection, shows the potential to replace ROs, which necessarily require wide-bandwidth LDOs and are currently the most common solution for wireline communications.
Modeling the area of co-seismic landslides via data-driven models: The Kaikōura example

substantial developments in data-driven models for landslide prediction. However, this improvement has been mostly devoted to models that estimate locations where landslides may occur.

Introduction

The estimation of where landslides may occur in the future has dominated the geomorphological literature pertaining to data-driven applications since its first conceptualization in the early 1970s (Reichenbach et al., 2018). Almost no other data-driven modeling framework with a spatially-explicit (i.e., an explicit representation of a geographic space) connotation has been developed over the subsequent five decades other than susceptibility (Fell et al., 2008). This concept boils down to estimating the probability of a landslide occurring in a given mapping unit under the influence of topography and other thematic landscape characteristics (Van Westen et al., 2008). An extension to this framework is present in the literature, although less prominent than susceptibility, and it features the spatiotemporal characteristics of the landslide trigger, leading to the estimation of the hazard concept (Guzzetti et al., 1999). This extension has led to the development of important forecasting tools such as near-real-time models (Nowicki Jessee et al., 2018) and early warning systems (Intrieri et al., 2012; Kirschbaum and Stanley, 2018). In both cases, though, the model behind the respective results still targets landslide occurrence data in the form of presences or absences across a given landscape (Frattini et al., 2010).

As technology advanced, the information on unstable slopes also changed, being acquired and processed in multiple ways. For instance, at the origin of the susceptibility concept, geomorphologists observed the landscape and labeled slopes as likely stable or unstable on the basis of personal experience (Brabb et al., 1972). The birth of GIS facilitated the development of numerical tools, which started from simple analytical approaches such as bivariate statistics (e.g., Van Westen et al., 1997) and evolved into more complex modeling schemes where multiple variables simultaneously contribute to estimating the susceptibility (e.g., Steger et al., 2016). The latter frameworks belong to different and complementary approaches that have taken root in the landslide community. One corresponds to statistical models, where model interpretability is favored at the expense of flexibility and performance (i.e., analytical tasks). The other corresponds to machine learning, where performance maximization is sought at the expense of interpretability (i.e., prediction tasks). Two common examples are statistical models such as Generalized Linear Models (e.g., Castro Camilo et al., 2017) and machine learning models such as decision trees (e.g., Yeon et al., 2010) or neural networks (e.g., Wang et al., 2021). Between these two lies the Generalized Additive Model, also referred to as an interpretable machine learning technique (Steger et al., 2021). These models ensure a high degree of interpretability typical of statistical architectures, but their structure allows for incorporating nonlinear effects, leading to flexible models with high performance (Lin et al., 2021). Nevertheless, because their target variable is a binary realization of landslide occurrences, these models lack the ability to return information on how large a landslide may be (Lombardo et al., 2021) or on how many coalescing
landslides may initiate in a particular region (Lombardo et al., 2019).

There are three previous articles where a data-driven model is used to estimate landslide areas at the regional scale. The first is Lombardo et al. (2021), which estimates the maximum and the sum of planimetric landslide areas within slope units. That model has a global connotation, which implies that knowing its validity in a worldwide context is not sufficient, because such a scale is not applicable to local territorial management. Alternatively, physically-based models are applied to predict landslide size (Alvioli et al., 2014), although the limited availability of geotechnical and hydrological data restricts their application to small regions. The applicability of the model proposed in Lombardo et al. (2021) still needs to be validated for site-specific conditions. Moreover, it must undergo tests in the case of seismic- and rainfall-induced landslides. To this purpose, other contributions try to replicate a similar experimental setting at regional scales, focusing on co-seismic landslides (Aguilera et al., 2022) and rainfall-triggered landslides (Bryce et al., 2022).

In this work, we seek to produce a new analysis that follows the main workflow of the articles mentioned above but introduces untested innovations, such as the topographic correction of the landslide-area target variable and the assessment of model performance using a spatially-explicit validation. We selected the Kaikōura earthquake (Mw 7.8, 13-11-2016), for which a suitable co-seismic landslide inventory is available (Tanyas et al., 2022). We partitioned the area affected by landslides into slope units (Alvioli et al., 2016), extracted the sum of all landslide extents falling within each mapping unit, and calculated the topographically-corrected surface areas. As for the model, we adopted a Generalized Additive Model structure under the assumption that the aggregated landslide area per slope unit follows a Log-Gaussian likelihood (cf. Lombardo et al., 2021).

Study area and co-seismic landslides

The Kaikōura earthquake struck the South Island of New Zealand at 11:02 UTC on 13 November 2016. This was the largest-magnitude earthquake in New Zealand since 1855 and had a very complex rupturing mechanism (Hamling et al., 2017). The earthquake cascaded across a series of faults with dextral, sinistral, oblique, and reverse rupture mechanisms (Diederichs et al., 2019). Significant co-seismic surface deformations occurred across a large landscape extending up to 100 km from the epicenter (Cesca et al., 2017). The reported uplift reached up to 8 meters in some locations (Hamling et al., 2017). As a result, the earthquake severely damaged infrastructure and altered the environment (Kaiser et al., 2017).

Considering the steep terrain affected by considerable ground shaking, the earthquake resulted in a large number of landslides (Tanyas et al., 2022). Massey et al. (2020) reported more than 29,000 landslides triggered by the earthquake, whereas Tanyas et al. (2022), using slightly different criteria and lower-resolution imagery, mapped 14,233 landslides over an area of about 14,000 km². Considering the documented earthquake-induced landslide events (Tanyaş et al., 2017), the Kaikōura event is one of the largest recorded in the literature. This study examines an area affected by the 2016 Kaikōura earthquake (see Fig. 1) using the landslide inventory of Tanyas et al.
(2022), which delineates landslide sources and deposit areas as single polygons. The inventory consists of various landslide types, including disrupted rock, debris, and soil falls and slides. However, landslide types are not indicated in the original data; thus, our analyses are not sensitive to any specific type of landslide.

Slope Units

The use of a Slope Unit (SU) delineation in the framework of landslide predictive models dates back to Carrara (1988). The spatial extent of this mapping unit is usually coarser than the more common grid cells. The latter are regular polygonal objects that offer a simple spatial partition of any landscape, mainly by matching the gridded resolution of the Digital Elevation Model (DEM) available for the given study area. They are suited to express the spatial variability of continuous phenomena, such as temperature fields. However, landslides are discrete processes. The geoscientific community has long debated whether grid cells are suitable for modeling slope failures. Conversely, SUs are more suitable from a geomorphological perspective, although they require additional processing steps, such as the aggregation of fine-scaled landscape characteristics. Several automated tools have been proposed and shared within the geoscientific community (Alvioli et al., 2020). In this work, we selected SUs to partition the area affected by the Kaikōura earthquake to predict the cumulated area of landslides per mapping unit. Below we report the parameterization of r.slopeunits, the software we used. As for their interpretation, we refer to Alvioli et al. (2016).

• Circular variance = 0.4
• Flow accumulation threshold = 1,000,000
• Minimum Slope Unit area = 80,000

The circular variance is a parameter that controls how homogeneous the aspect criterion should be. For instance, a circular variance of 0 would result in a strict selection where only a few pixels could be merged into a SU. Conversely, a circular variance of 1 would result in a flexible search where pixels with large differences in slope exposition would be merged into a single SU. As for the flow accumulation, this parameter controls the starting planimetric area for the SU partition. Any subsequent r.slopeunits iteration would reduce the SU extent. The minimum SU area and the cleansize control the final outcome: the first defines the average target planimetric area for a SU to be considered, and the second represents the size below which any artifact SU should be merged with the adjacent one.

The resulting SUs offered a medium-resolution representation of the exposed Kaikōura landscape, with 26,839 total SUs whose planimetric area distribution has a mean of 500,000 m² and a standard deviation of 430,000 m².

Covariates: landscape characteristics and ground motion data

This section illustrates the covariates we used to explain the variability of the co-seismic landslide area distribution in the affected Kaikōura region. Although there is extensive literature examining factors governing the probability of landslide occurrence, the study of factors controlling the area of landslides in a spatial context is a relatively new concept (e.g., Lombardo et al., 2021). In this regard, we tested several variables representing morphometric, anthropogenic, and seismic factors as well as material properties (see Table 1 and Fig.
2). We used a 25-m-resolution DEM provided by the Land Resource Information Systems Portal of New Zealand (LRIS; https://lris.scinfo.org.nz) and tested some basic DEM derivatives, namely slope steepness (Slope), northness (NN), eastness (EN), local relief (Relief), profile curvature (PRC) and planar curvature (PLC), to assess the role of morphometric variables on landslide area. Capturing the role of anthropogenic factors is challenging (e.g., Tanyaş et al., 2022), but the study area is a remote territory, and road cuts are the main features representing human influence on landsliding. Therefore, we calculated the Euclidean distance to the road network (e.g., Lepore et al., 2012) to capture a possible influence of anthropogenic factors. Specifically, we accessed the road network map of the study area via the Land Information Portal (https://data.linz.govt.nz) of New Zealand. As for the co-seismic ground shaking, we used the Peak Ground Acceleration (PGA) map of the Kaikōura earthquake provided by the U.S. Geological Survey (USGS) ShakeMap system (Worden and Wald, 2016). PGA is a seismic proxy, and specifically, the deterministic estimate of PGA provided by the USGS ShakeMap system is widely used in susceptibility analyses of co-seismic landslides (e.g., Nowicki et al., 2014). Also, we used a pedological soil thickness map of the study area (Lilburne et al., 2012), accessed via the LRIS portal, as a proxy for the shear strength of hillslope materials. Different from all the other covariates, we examined the soil thickness map as a categorical covariate because it includes four categories where soil depth is described as deep (D, >90 cm), moderately deep (MD, 40-90 cm), shallow (S, 20-40 cm) and very shallow (VS, <20 cm), as well as a category indicating no soil cover (NS) (note that a more desirable regolith thickness map does not exist).

Table 1. Covariate summary table. Each covariate listed here was later used in a dual form during the analyses. Specifically, we represented each covariate in this table through the mean and standard deviation values computed per SU. We do not list both terms in the table, but they will be denoted in the remainder of the manuscript via the suffixes _mean and _stdev added to the acronyms reported in the table.

Data aggregation at the Slope Unit level

We used slope units to aggregate both the target variable, this being the topographically-corrected landslide area, and the covariates described in the previous section.

The landslide extent calculation was based on summing up all landslide areas within each SU. Before this aggregation step, though, we applied a correction to reduce the underestimation of landslide area on steeper terrain due to the underlying planar projection. For this purpose, a trigonometric function based on a slope-steepness map was used to derive the "true" surface area of each landslide polygon, in analogy to Steger et al. (2021).

Fig. 3a shows the distribution of the topographically-adjusted landslide area after the aggregation step mentioned above (sum for each SU). Because the distribution is strongly heavy-tailed, we opted to take the logarithm of the cumulative landslide area per SU (Fig. 3b). In such a way, a Log-Gaussian model could be used to suitably explain the variability of these estimates (more details in Section 3.1).
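The aggregation step described above can be sketched in a few lines of Python. The snippet below is only a minimal illustration (not the authors' code), assuming the landslide polygons have already been intersected with the SU layer and that a planimetric area and a mean slope angle are available per polygon.

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per landslide polygon already assigned to a slope unit (SU).
# Columns: su_id, planar_area_m2 (planimetric area), slope_deg (mean slope of the polygon).
polygons = pd.DataFrame({
    "su_id":          [1, 1, 2, 3, 3, 3],
    "planar_area_m2": [1200.0, 450.0, 3000.0, 800.0, 150.0, 2200.0],
    "slope_deg":      [35.0, 28.0, 42.0, 30.0, 25.0, 38.0],
})

# Topographic correction: divide the planimetric area by the cosine of the slope angle
# to approximate the "true" surface area on inclined terrain.
polygons["true_area_m2"] = polygons["planar_area_m2"] / np.cos(np.deg2rad(polygons["slope_deg"]))

# Aggregate per SU (sum of corrected areas) and log-transform the heavy-tailed target.
su_area = polygons.groupby("su_id")["true_area_m2"].sum()
log_area = np.log(su_area)  # target variable for the Log-Gaussian GAM
print(log_area)
```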
Fig. 2. Example of the covariate set used for the analyses. The soil depth map includes five classes: NS for No Soil, VS for Very Shallow, S for Shallow, MD for Moderately Deep, and D for Deep. Notably, the Dist2R map is shown as is only for graphical purposes. We constrained the information conveyed by Dist2R (in the model we will describe in Section 3) only up to a 500 m buffer around the road network. Beyond this distance, we impose the covariate to cease to be informative.

As for the landscape characteristics, we computed the mean and standard deviation of each continuous covariate per SU (see Titti et al., 2022). Whenever the landscape characteristics corresponded to categorical properties, such as underlying lithology, land use, or soil thickness classes, we only extracted the dominant type per SU.

Modeling strategy

Below we provide a brief description of the model we adopted, the cross-validation scheme we implemented, and the metrics we used to assess how the estimated landslide areas matched the observed cases.

Generalized Additive Model

Generalized Linear Models (GLMs) are statistical techniques designed to model linear relationships between a target variable and a set of predictors. A Generalized Additive Model (GAM) is a more flexible extension of a GLM. In analogy to GLMs, GAMs can handle a variety of error distributions but additionally account for nonlinear associations between the target variable and continuous predictors. This flexibility, combined with high interpretability, makes GAMs useful in data-driven studies. The presence of nonlinear relationships between landslide occurrence and environmental factors can be expected (e.g., landslides may be less likely to occur in flat and very steep terrain), while high interpretability of the modeling results is paramount for geomorphological interpretation and plausibility checking (Steger et al., 2017). GAMs with a binomial error distribution have been applied to model landslide susceptibility (Petschko et al., 2014), while Poissonian GAMs were used to model spatial landslide counts (i.e., intensities; Lombardo et al., 2020). A Log-Gaussian distribution within a Bayesian GAM forms the foundation of the first data-driven model predicting landslide size per SU, i.e., the maximum landslide area and the sum of landslide areas (Lombardo et al., 2021). The Log-Gaussian GAM used within this study is based on the R package "mgcv" (Wood and Augustin, 2002). This framework allowed us to model the topographically corrected log-area of co-seismic landslides at the SU level (hereafter L_A) as a function of a covariate set that describes landscape characteristics and spatial ground motion properties. The nonlinear relationships were fitted while we restricted the maximum allowed flexibility of the underlying smoothing functions to a k-value of 4 (i.e., the maximum allowed degrees of freedom) to enhance model generalization and interpretability. The covariates are described in detail within Section 2.3, while their selection was based on a systematic procedure that included an iterative fitting and evaluation of different model realizations. In detail, we started with a full model and iteratively excluded covariates that did not meet the following two criteria: a covariate was only considered appropriate in case the underlying smoothing term was estimated to be significant at the five percent level (p-value < 0.05), and in case it enhanced the model's predictive performance.
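The original model was fitted with R's mgcv; the Python sketch below only illustrates the general setup under clearly stated assumptions: the pygam package is used as a rough stand-in for mgcv, the covariates and data are synthetic placeholders, and the small spline basis (n_splines=4) mimics the k = 4 flexibility limit described above.

```python
import numpy as np
from pygam import LinearGAM, s, f  # pygam used here as an illustrative stand-in for R's mgcv

rng = np.random.default_rng(0)
n = 500  # synthetic stand-in for the SUs that host mapped landslides

# Hypothetical covariates per SU: Slope_mean (deg), Relief (m), PGA_mean (g), soil-depth class (0-4).
X = np.column_stack([
    rng.uniform(5, 45, n),
    rng.uniform(50, 1500, n),
    rng.uniform(0.1, 0.8, n),
    rng.integers(0, 5, n),
])
# Synthetic log-area target, standing in for the log of the corrected landslide area per SU.
y = 6 + 0.05 * X[:, 0] + 0.001 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 0.5, n)

# Smooth terms with a small basis approximate the k = 4 limit; f() treats the
# soil-depth class as a categorical (factor) term.
gam = LinearGAM(s(0, n_splines=4) + s(1, n_splines=4) + s(2, n_splines=4) + f(3)).fit(X, y)

pred_log_area = gam.predict(X)                        # fitted log-areas
XX = gam.generate_X_grid(term=0)
slope_effect = gam.partial_dependence(term=0, X=XX)   # cf. the CSF (partial effect) plots
```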
Besides handling nonlinear relationships, GAMs also allow visualizing the respective modeled associations between the target variable and the predictors. This model transparency is useful to enable interpretation and uncover implausible results (Zuur et al., 2009). In this sense, component smooth function (CSF; i.e., partial effect) plots were used to visualize the estimated covariate-response relationships. These plots enabled the interpretation of nonlinear effects on the aggregated landslide area per SU at the single-covariate level while simultaneously accounting for the influence of the other covariates in the model (Molnar, 2019).

Model performance

Below we summarise the cross-validation schemes adopted and the metrics used to assess how the model performed in explaining the spatial distribution of landslide areas. The last section explains how we then provided estimates of landslide areas for SUs that did not experience slope failures during the Kaikōura earthquake.

Cross-validation routines

To test the performance of our model, we selected two cross-validation approaches. The first corresponded to a random cross-validation scheme (RCV), where we repeatedly extracted a random subset of 90% of SUs within the study area for training our model (i.e., a training set), while the remaining data (i.e., a test set) of each repetition was used to calculate the performance metrics. The random selection was constrained to select each SU only once; thus, the union of the ten replicates returns all the SUs constituting the whole study area.

However, a spatial process may embed some degree of internal spatial dependence, which may not be fully explained by a chosen covariate set. Conventional non-spatial random partitioning of training and test sets (e.g., RCV) may provide test statistics that do not capture the variability of model performance across sub-regions of a study site. Using RCV, over-optimistic performance scores are likely to be measured if spatial model predictions poorly match data within single sub-regions of an area. Spatially-explicit validation schemes, such as spatial cross-validation (SCV), can be used to estimate the spatial transferability of model performance scores within a study site and uncover spatially incoherent model predictions (Steger et al., 2017). SCV results can inform potential users of a given model about worst-case prediction skills in space and about the spatial robustness of the general model setup. SCV is usually based on a repeated random splitting of training sets and test sets according to sub-areas of a study site. For this study, the underlying spatial partitioning approach was based on k-means clustering (see Brenning, 2012 for a more detailed explanation).

In this work, we opted to report the model performance estimated via an RCV, where the prediction skill is aided by residual clustering effects, as well as via an SCV, where the estimated performance scores are usually lower, thus providing insight into the minimum prediction skill expected for sub-regions of the study site. Fig. 4 shows a few examples of the routines mentioned above. Specifically, the 10-fold RCV and the 10-fold SCV have been iterated ten times to randomize the spatial clusters of slope units to be extracted.
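A minimal way to construct the two partitioning schemes is sketched below using scikit-learn. This is an illustration of the idea, not the authors' implementation, and the SU centroid coordinates are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n_su = 2000
xy = rng.uniform(0, 100_000, size=(n_su, 2))  # hypothetical SU centroid coordinates (m)

# Random cross-validation (RCV): 10 folds, each SU used as test exactly once.
rcv_folds = list(KFold(n_splits=10, shuffle=True, random_state=1).split(xy))

# Spatial cross-validation (SCV): group SUs into 10 spatially compact clusters of centroids
# and leave one cluster out at a time, so test SUs are geographically separated from training SUs.
labels = KMeans(n_clusters=10, n_init=10, random_state=1).fit_predict(xy)
scv_folds = [
    (np.where(labels != k)[0], np.where(labels == k)[0])  # (train indices, test indices)
    for k in range(10)
]
```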
Performance metrics

To assess how suitable our modeling framework is to reflect the observed landslide area per SU, we selected a dual approach featuring visual and numerical performance summaries for both CV schemes described above. The visual summary corresponds to a simple graph plotting the observed landslide areas against the estimated ones. As for the numerical summaries, the metrics we used were the Pearson Correlation Coefficient (R-Pearson; Schober et al., 2018) and the Mean Absolute Error (MAE; Mayer and Butler, 1993). To these, we also added the Root Mean Square Error (RMSE; Kenney and Keeping, 1962) for completeness, although several contributions mention that the MAE is a better measure of deviance (Willmott and Matsuura, 2005).

Map-based landslide area prediction

In this section, we specify something of particular conceptual relevance. In traditional susceptibility models, one can and should use the presence-absence information across the whole study area (Petschko et al., 2014). However, information on landslide area is only associated with a subset of the SUs partitioning the Kaikōura landscape. Therefore, to produce maps of the predicted landslide area for the whole study area, we adopted the following procedure. Initially, we extracted the positive landslide areas to train and test our Log-Gaussian GAM. Subsequently, we implemented a simulation step using the estimated regression coefficients to solve the predictive equation in areas where landslide area information was unavailable.

Results

Below we present an interpretation of the model components, performance, and mapping results.

Model relationships

This section summarizes the estimated covariate effects responsible for explaining the spatial distribution of landslide areas per SU. Fig. 5 offers an overview of all the nonlinear effects we included in the model. Although we allowed the regression coefficient to vary nonlinearly across each covariate domain, the implemented internal smoothness selection procedure selected certain covariates to be best represented via linear functions. This was the case for Slope_stdev, NN_mean, PRC_stdev, and PGA_mean. This implies that a unit increase in the covariate value would generate a proportional change (depending on the sign of the regression coefficient) in the resulting landslide area, and that the change would be the same irrespective of where that unit increase happened across the whole covariate spectrum. Moreover, eight covariates deviated from linear behavior, of which two were only mildly nonlinear (NN_stdev, Dist2R_stdev), whereas the remaining six showed much more nonlinearity (Slope_mean, EN_mean, Relief, PLC_mean, Dist2R_mean, and PGA_stdev).
Below we provide a brief overview of these covariate effects (from the most interesting linear ones to the nonlinear ones) by interpreting their marginal contribution (i.e., assuming all the other covariate contributions to be fixed). For example, we justify the positive increase of the estimated landslide area due to Slope_stdev because a rougher terrain may have larger quantities of hanging material susceptible to being mobilized due to the contextual water impoundment (Jiao et al., 2014). Similarly, PGA_mean positively contributes to the estimated landslide area, and its linear behavior may be seen as a destabilizing effect of ground motion on the landscape (Tanyas and Lombardo, 2019). Furthermore, two covariates share similar nonlinear contributions. These are Relief and PLC_mean, both with pronounced sigmoidal behavior. The former can be interpreted through the positive contribution of gravitational potential energy: at increasing values, the failing mass will experience increasing kinetic energy as it moves downhill, thus producing larger landslides overall (Yamada et al., 2018). As for PLC_mean, the planar curvature is known to control the convergence of granular materials and overland flows over a landscape (Ohlmacher, 2007).

Aside from covariates we allowed to behave nonlinearly while still carrying their ordinal structure, we also considered the nonlinear and categorical signal of soil thickness classes. As it stands out in Fig. 6, the signal carried by the prevalent soil depth class per SU does not produce a clear "monotonic" pattern in the estimated regression coefficients per class (i.e., the landslide area does not increase or decrease systematically with soil depth). This is likely due to two reasons. First, the raw soil depth map we accessed is directly expressed in classes, which implies a loss of the continuous information in which soil depth should be expressed. Clearly, soil depth cannot be continuously measured over space because it would require excessive resources. Therefore, even the classes we used are the result of an interpolation routine, which may have smoothed the soil depth signal over space. A second, and valid, reason for the not straightforward interpretation emerging in Fig. 6 is that we aggregated the soil depth signal over the SU by choosing the majority rule. In this sense, a given SU is assigned the soil depth label of the class with the largest areal extent. However, the majority class may not be the one responsible for the failure.

Model performance

The visual agreement between the observed and estimated landslide area among the three model routines we tested is summarized in Fig. 7. There, one can see that the model fit produces the highest degree of agreement between the observed and estimated landslide areas. In the second panel, the RCV-predicted landslide areas closely follow the trend shown for the fit. As for the SCV results, the deviations from a perfect match between observed and estimated landslide areas appear slightly more pronounced compared with the other two cases. However, this is to be expected because an SCV essentially removes any residual dependence from a spatially distributed dataset, thus producing lower performance scores in a real-world data setting. In this sense, the match shown for the SCV can still be considered a suitable source of information for hazard assessment.
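As a simple illustration of how the numerical agreement between observed and estimated log-areas can be summarised (cf. Fig. 8), the snippet below computes the three metrics used here (R-Pearson, MAE, RMSE) for one hypothetical test fold with synthetic values.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
observed = rng.normal(9.0, 1.2, size=300)               # hypothetical observed log-areas in a test fold
estimated = observed + rng.normal(0.0, 0.8, size=300)   # hypothetical model estimates

r_pearson, _ = pearsonr(observed, estimated)
mae = np.mean(np.abs(observed - estimated))
rmse = np.sqrt(np.mean((observed - estimated) ** 2))
print(f"R-Pearson = {r_pearson:.2f}, MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```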
Fig. 8 complements Fig. 7 by informing on the correlation between observed and estimated landslide areas, together with the error between the two. Several authors have proposed classifications of the R-Pearson, and most of the literature on the topic would indicate values of around 0.6 to reflect a moderate (Mukaka, 2012) to strong (Corder and Foreman, 2011) correlation between observed and estimated landslide extents. Analogous considerations arise by examining the MAE and RMSE, with acceptable errors in both cross-validation schemes. Notably, the performance metrics reported in Fig. 8 confirm that the SCV returned a slightly poorer agreement compared to a purely random cross-validation scheme.

Landslide area predictive maps

Fitting a statistical model allows one to retrieve the set of regression coefficients through which one can estimate the expected values of the given target variable. At the same time, one can use the same set of regression coefficients to solve the predictive function for locations where the target variable is unknown. The latter concept boils down to what one could refer to as statistical simulation (e.g., Luo et al., 2021) or model transferability (e.g., Petschko et al., 2014). Fig. 9 summarizes the estimates of the two cross-validation schemes at SUs for which we have L_A observations, as well as SUs where we have not. The first row highlights the agreement in spatial patterns between the observed and predicted L_A values, with a coherent pattern shown among the three images, albeit the prediction routines show some degree of smoothing as they transition from RCV to SCV. The strength of our modeling framework is shown in the second row of Fig. 9, where we transfer the predictive equations to the remainder of the Kaikōura landscape.

Discussion

The capacity of data-driven models to go beyond traditional susceptibility models is still in its infancy. This analysis suggests that a Log-Gaussian GAM can reproduce the pattern and value range of landslide areas aggregated at the slope unit level. Out of the whole procedure, certain elements already support the replication of similar analyses, while others call for further improvements. Two of these elements are discussed separately in the sections below.

Supporting arguments

Landslide area correction with respect to slope steepness is not usually considered in most geoscientific contributions, with the exception of very few cases (e.g., Steger et al., 2021). In the context of a model that aims at predicting landslide area, we consider this an additional element to be added to the protocol proposed in Lombardo et al. (2021). A further addition is the use of spatial cross-validation. Lombardo et al. (2021) constrain the spatial cross-validation to be generated once. Herein we focus on a specific site, which makes it easier for us to replicate the spatial sampling, thus fully randomizing the spatial cross-validation results, in line with what Brenning (2012) prescribes, albeit in a binary context.

The performances we retrieved suggest that it is worth extending the landslide area prediction further. Figs. 7 and 8 show the extent to which our model estimates the observed landslide areas. Also, this is translated over the geographic space in Fig.
9, where the spatial patterns match, albeit the predictive routines show deviations from the original L_A values as the tested cross-validation routines move from the random context to the spatially constrained one. Further improvements may consolidate the concept and role of landslide area prediction within protocols of disaster risk reduction. For instance, we could combine the area model with a traditional susceptibility one (a minimal sketch of this combination is given below). As things stand, the traditional susceptibility framework does not formally account for the expected area of landslides once triggered on a given slope. However, even our landslide area framework is blind to whether a slope may be prone to fail or not. In turn, this means that these two tools are currently separated, and further efforts could be directed toward merging them into a single product that integrates two hazard features, namely spatial landslide probability and landslide area. For instance, one could model them separately and then combine them. In such a way, slopes that may morphologically be associated with large failures but are seen to be stable (low probability of occurrence) by the susceptibility component will result in a small hazard-proxy value. The same may happen in the case of slopes that are expected to be unstable (high probability of occurrence) but associated with very small landslides. In such a scenario, the estimated hazard proxy will also be low. On the contrary, only in situations where high susceptibility is associated with large expected landslides would one obtain a level of such a hazard proxy that would inevitably require attention. Such a scheme may give rise to a new landslide hazard framework, providing a full spectrum of probabilistic estimates aimed at aiding decision-making processes for landslide risk reduction. An alternative we envision could include physics-based simulations typical of slope-scale engineering solutions. Assuming we could simulate landslides for single slopes, the physics-based framework would ensure retrieving intensity measures such as landslide velocities, kinetic energy, or impact force. These could then be passed to a model similar to the one presented here to predict momentum-related measures rather than landslide areas.
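The combination envisioned above could, in its simplest form, be a per-SU product of the two model outputs. The toy example below is our own sketch of that proposal (with made-up numbers), not an implemented product.

```python
import numpy as np

rng = np.random.default_rng(3)
n_su = 5

susceptibility = rng.uniform(0, 1, n_su)            # probability of occurrence from a susceptibility model
expected_area_m2 = np.exp(rng.normal(9, 1, n_su))   # expected landslide area per SU from the area model

# A simple hazard proxy: large only when a slope is both likely to fail
# and expected to host large landslides.
hazard_proxy = susceptibility * expected_area_m2
for p, a, h in zip(susceptibility, expected_area_m2, hazard_proxy):
    print(f"P(failure) = {p:.2f}, E[area] = {a:8.0f} m^2, hazard proxy = {h:8.0f}")
```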
Opposing arguments

To critically review our landslide area model, one should take a step back and look at the model's fundamentals. The fact that it relies on a logarithmic transformation of the landslide area distribution per SU requires some consideration. From a mathematical perspective, this framework is sufficient to produce valuable predictive maps, as the logarithm is a monotonic transformation. Thus, landslide areas that are smaller compared to other SUs in the observed data will still be relatively smaller in the prediction, irrespective of whether we directly model the landslide extent on a linear or log scale. However, two negative elements affect this framework. An obvious one is that, from an interpretation standpoint, one lacks the intuition of what a predicted value indicates at the log scale. This argument could still be acceptable because of the monotonic transformation mentioned above. But reflecting on what this entails in terms of errors calls for potential improvements. A Gaussian likelihood implies that the model focuses on the bulk of the landslide area distribution. In other words, the mean landslide area will be suitably estimated, leaving the tails potentially misrepresented. The left tail, the side of the distribution with very small landslides, may be of lesser interest. However, a misrepresentation of the right tail, the side of the distribution that hosts very large landslides, can lead to poor decisions for the more dangerous ones. Notably, the performance we produced does not raise concerns to the point of considering our landslide area model inappropriate. However, we envision the next phase of the model development to explore more suitable likelihoods. The log-Gaussian context is appealing because of its easy implementation, and as long as the performance stays along the lines of what we presented here, the choice of such a likelihood can be justified. However, in the hope of further extending the landslide area prediction to different geographic contexts across different landslide types and triggers, we cannot exclude that the likelihood we chose so far may prove to be insufficient or lead to undesired errors away from the bulk of the distribution. In such cases, extreme-value theory may provide a better modeling framework. We already envision this direction to be the following research and development phase. Some of this development can already be seen in Yadav et al. (2022), where a number of distributions are tested in a Marked Point Process framework.
Conclusions

Data-driven landslide models have relied essentially on the same toolbox for over five decades. We believe it is time to review whether new tools could improve the static susceptibility framework and complement the information it provides. One of these elements consists of how landslide areas may enlarge after the landslide initiates, evolves, and potentially coalesces into larger areas of material moving downhill. This information has traditionally been associated with physically-based models, together with other kinematic parameters such as velocity. On the one hand, landslide kinematics cannot be modeled in detail via data-driven approaches because of the lack of observations. On the other hand, though, the landslide area information is contained in any standard landslide polygonal inventory. As a result, data-driven models can be trained to learn which environmental characteristics promote small to large landslides and translate this information into maps of the expected landslide areas. This idea is uncharted territory within the geoscientific community, with few articles currently addressing the issue.

We consider our landslide area model a new avenue of potential scientific interest, and we prompt the geoscientific community to explore this framework further. To ensure dissemination, we shared data and codes in a GitHub repository, accessible at https://github.com/mmorenoz/GAM_LandslideSize. Further improvements may be directed toward fitting different statistical distributions tailored toward extremely large landslides or toward extending the current spatial model to its space-time counterpart. Potential implications may be translated into better hazard information on which administrations can base their risk reduction plans.

Fig. 1. Geographic summary of the co-seismic landslides triggered in response to the Kaikōura earthquake (panels a and b). Panel c shows an example of the slope unit delineation superimposed on an aspect map.
Fig. 3. Distribution of the topographically-corrected landslide areas per SU. Panel (a) shows the sum of derived landslide areas per SU on a linear scale, whereas panel (b) highlights the same information on a logarithmic scale.
Fig. 4. Geographical sketches of CV routines via five examples of the ten folds we implemented in this work. The first row shows an RCV, whereas the second row highlights the effect of a spatial constraint in the SU selection.
Fig. 5. Summary of ordinal nonlinear effects on the aggregated landslide area per SU.
Fig. 6. Summary of the categorical nonlinear effect of soil depth classes on the aggregated landslide area per SU.
Fig. 7. Summary of the agreement between the observed landslide area per SU and the corresponding values estimated through a fit and two cross-validation (RCV and SCV) routines.
Fig. 8. Pearson correlation coefficient, mean absolute error and root mean square error estimated for the purely random cross-validation and the spatial cross-validation, respectively.
Fig. 9. The first row of this figure highlights the details of the main area affected by landslides for which we have observations. The second row shows the whole study area without focusing on the SUs for which we measured the landslide area. The first column plots the actual measurements and represents the target variable of our model. The second and third columns report the estimated landslide areas via the RCV and SCV routines.
v3-fos-license
2021-05-08T00:04:07.146Z
2021-02-15T00:00:00.000
233926594
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.researchsquare.com/article/rs-164626/v1.pdf?c=1631867982000", "pdf_hash": "7362d5dfa47717139fd3aa8c2c3b7f42df34fbe3", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42557", "s2fieldsofstudy": [ "Materials Science", "Physics", "Engineering" ], "sha1": "4e0bb31791a1ef6f6c0affee19c578250dc37355", "year": 2021 }
pes2o/s2orc
Significant improvement in the piezoelectric properties and electromechanical coupling factors of wurtzite AlN compound under high pressures

This work describes a theoretical study of the pressure effect on the structural, elastic, piezoelectric and dielectric properties as well as the electromechanical coupling factors of wurtzite AlN, obtained by ab-initio calculations using pseudo-potential plane waves (PP-PW) that combine density functional theory (DFT) and density functional perturbation theory (DFPT). The results of the calculation indicate that the cell parameters and the unit-cell volume of AlN decrease notably with increasing pressure from 0 to 40 GPa. Due to an increase in the value of the direct piezoelectric constant (e33) and a decrease in the value of the elastic constant (C33), there is a significant improvement in the value of the converse piezoelectric constant (d33). The improvement in the piezoelectric value leads to a higher value of the electromechanical coupling coefficient. Our results agree well with previous theoretical and experimental research. We hope that our results will provide guidelines for realistic applications as well as further research on high-performance compounds appropriate for applications in a multitude of fields, such as biomedical engineering.

Introduction

Aluminum nitride (AlN) is one of the most important semiconductors used in various fields of application such as optical electronic devices, cell phones, and optical detectors [1][2][3], given some important properties that this compound possesses, such as its dielectric and mechanical behavior, thermal conductivity and corrosion resistance. Owing to its large bandgap of 6.0 eV (207 nm wavelength) at ambient temperature, AlN can be used as a piezoelectric and ceramic material with a wide variety of technical applications, such as ultrasound transducers [4,5]. The piezoelectric constants of AlN, however, are smaller than those of other widely used piezoelectric materials. For example, the piezoelectric strain modulus d33 for AlN is reported to be 5.5 pC/N, while d33 for ZnO can be at least twice as large [6], and for PZT it can be larger than 100 pC/N [7]. Under normal temperature and pressure conditions, the AlN compound crystallizes in the hexagonal B4 (wurtzite) structure, whose space group symmetry is P63mc, or in the cubic (zinc blende) structure, whose space group symmetry is F-43m (216) [8]. A change in structure under the influence of pressure causes an increase in internal energy, which induces displacements of anions and cations and changes in chemical bonds, and thus a change in the structure of the energy bands. Therefore, studying the microscopic mechanism of the phase transition is an essential ingredient.
Despite this, some experimental work has tried to explain some phase transition mechanisms [9][10][11]. The increase in pressure leads to an increase in the piezoelectric properties, which in turn leads to an improvement in the electromechanical coupling factors. There is a crucial need to study materials and to know their physical properties, including structural, elastic, piezoelectric and dielectric properties, in order to describe devices with extreme precision. A better understanding of structural properties provides a basic background for the production of engineering materials for advanced structures. The elastic and mechanical properties of solids reflect their reactions to the intervention of certain external factors. In the simplest case, such factors are mechanical actions: compression, traction, bending, shock, torsion. In addition to mechanical interventions, they can be thermal, magnetic, etc. Piezoelectric materials have many power applications. These materials have been used for many years in applications such as ultrasonic generation or actuation. Recently, other applications have appeared, in particular the piezoelectric transformer and micro-energy sources. The transformer has found a commercial outlet in cold cathode lamp supplies for backlighting. Piezoelectric micro-sources, whose energy comes from ambient vibration, seem to have a bright future in applications such as the self-powering of abandoned sensor networks. In the present paper, we focus on the theoretical investigation of the piezoelectric and dielectric properties and electromechanical coupling factors of the wurtzite AlN compound at high pressure. The findings on structural, elastic, piezoelectric and dielectric properties and electromechanical coupling factors under high pressure are described. The calculation has been carried out after geometry optimization, as implemented in the open-source software package ABINIT [12][13][14] within the framework of DFT [15] and DFPT [16,17]. There are some previous computational studies on the influence of pressure on elastic properties [18][19][20][21][22]. Nevertheless, as far as we know, there are no reports regarding the effect of pressure on piezoelectric, dielectric, and electromechanical coupling parameters. The rest of the paper is arranged as follows. A brief overview of the methodology used and a description of the calculations are given in Sect. 2. Section 3 presents and discusses the findings for structural, elastic, piezoelectric, dielectric and electromechanical coupling coefficients at high pressures. In Sect. 4, the findings are summarized.

Theoretical methodology

First-principles calculations within the framework of density functional perturbation theory (DFPT) have been shown to be the most reliable for obtaining elastic, piezoelectric and dielectric properties. This is the reason why we decided to use this method. The exchange and correlation potential is assessed using the local density approximation (LDA) based on the data of Ceperley and Alder [23] and the generalized gradient approximation (GGA) using the scheme of Perdew-Burke-Ernzerhof [24]. The calculations were conducted with the ABINIT code. The choice of this code was motivated by its open and recent development in the field of piezoelectric and dielectric properties. This code uses the pseudopotential approximation and plane waves as the basis for the expansion of the Kohn-Sham orbitals.
The pseudopotentials used are nonlocal, norm-conserving pseudopotentials (PseudoDojo) generated by Van Setten et al. [25]. The potentials and the wave functions are represented in a plane-wave basis with a cut-off energy of 60 Ha. Integration over the Brillouin zone is performed on an 8 × 8 × 8 k-point mesh [26]. The relaxed-atom elastic tensor results from the second derivatives of the total energy with respect to strain, C_ij = (1/Ω0) ∂²E/(∂η_i ∂η_j), where Ω0 is the cell volume and η_i and η_j are the components of the second-rank pure strain tensor [27]. The proper piezoelectric constant can be obtained, following [28], as the sum of two terms, a clamped-ion term and an internal-strain term. The former depicts the electronic contribution while ignoring the effect of atomic relaxation, while the latter depicts the ionic contribution, i.e., the response of the atomic displacements to the strain. In this context, the Born effective charge Z_mα, the force-constant matrix K_mn, and the internal-strain tensor Λ_nj refer to the second energy derivatives with respect to displacement and electric field, displacement and displacement, and displacement and strain pairs, respectively. The dielectric tensor can be split into the ionic (ε0) and electronic (ε∞) contributions [29]. The density functional perturbation theory (DFPT) [16,17] implemented in ABINIT is used to estimate the Born effective charges and the piezoelectric and dielectric constants. In calculating the dielectric constants, the k-point density is doubled because DFPT is sensitive to this computational parameter.

Structural properties

The determination of structural properties plays an important role in the physics of materials, as it allows gathering information on the microscopic structure of materials and therefore has a relatively significant effect on the prediction of other properties. The ground-state properties of our materials are obtained using a scalar-relativistic calculation based on the plane-wave pseudo-potential (PW-PP) method, treating the exchange and correlation energy with the LDA and GGA approximations; this choice is motivated by a concern for enriching the existing database, which will subsequently allow a comparative study of the performance of each of the approximations. In the range 0-40 GPa, the optimum geometries of the AlN unit cell have been calculated at constant values of the applied pressure, where a full optimization of the lattice parameters is carried out at each pressure. The optimized lattice constants a and c, the ratio c/a, and the internal parameter u at zero pressure of wurtzite AlN are indicated in Table 1 for both LDA and GGA. Both the atomic positions and the lattice constants have been optimized in the presence of hydrostatic pressure, and the pressure was obtained as a function of volume in order to calculate the bulk modulus B and the pressure derivative of the bulk modulus B′. In addition, the empirical and previous theoretical results presented in the literature are also cited for comparison. In view of Table 1, we can notice that the values of a, c and u obtained with the LDA are underestimated with respect to the experiment [30], whereas with the GGA they appear to be overestimated.
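The extraction of B and B′ from the pressure-volume relation mentioned above can be illustrated with a short fit of a third-order Birch-Murnaghan equation of state. The numbers below are synthetic placeholders, not the computed AlN data.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan_p(V, V0, B0, B0p):
    """Third-order Birch-Murnaghan P(V); pressure comes out in the same units as B0."""
    eta = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * B0 * (eta**7 - eta**5) * (1.0 + 0.75 * (B0p - 4.0) * (eta**2 - 1.0))

# Synthetic (V, P) pairs standing in for the optimized volumes over 0-40 GPa.
V0_true, B0_true, B0p_true = 41.0, 200.0, 3.9            # A^3 per cell, GPa, dimensionless
volumes = np.linspace(0.82, 1.00, 12) * V0_true
pressures = birch_murnaghan_p(volumes, V0_true, B0_true, B0p_true)

(V0_fit, B0_fit, B0p_fit), _ = curve_fit(birch_murnaghan_p, volumes, pressures, p0=(40.0, 180.0, 4.0))
print(f"V0 = {V0_fit:.2f} A^3, B = {B0_fit:.1f} GPa, B' = {B0p_fit:.2f}")
```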
Our study findings are consistent with the available results of the DFT-LDA and DFT-GGA methods [31,32]. In terms of theoretical calculations relative to those reported in [31] using all-electron full-potential linearized augmented plane wave approaches, our LDA results are in good consistency with the experiment [33]. For pressures varying between 0 and 40 GPa, our LDA values for a and c are compared to those of [34] and are shown to be lower. All parameters a, c, c/a and u are seen in Fig. 1. It should, however, be noted that the reported findings of our GGA agree well with those of [35]. For the bulk modulus at P = 0, our LDA results are closer to the experiment, and the GGA value was found to be smaller than the LDA one. Our values for c/a and u are compared with those of the ideal structure (c/a = √(8/3) and u = 3/8) [36], as described in Table 1 for both LDA and GGA approaches. In the current work, the computed u is in strong alignment with the experimental one described in [30]. Figure 1 indicates the pressure dependence of the ratio c/a and of u for AlN, respectively. Notice that the c/a ratio of the compound of interest decreases as pressure increases, whereas u increases.

Elastic and mechanical properties

In the study of stability, elasticity is a very important factor, since it must be considered in the study of mechanical forces; the task is to determine the elastic constants (C_ij), which connect the stress to the strain and from which the mechanical and physical properties of these materials can be extracted. The elastic constant results derived with the ABINIT code are shown in Table 2. It is acknowledged that wurtzite AlN has five distinct elastic constants, C11, C12, C13, C33 and C44. The elastic constants C12, C13, and C44 are linked to the elasticity of shape. The elastic constants of the compound in question were determined in the current work for pressures varying from 0 to 40 GPa within both the LDA and GGA. Our results are summarized in Table 2 and plotted in Fig. 2. For comparison, the experimental findings and previous theoretical results reported in the literature are also shown. Our GGA elastic constants determined for w-AlN are in good accordance with the measured and calculated data available from Refs. [35,37], and our LDA values were found to be lower than those of [34] for pressures ranging from 0 to 40 GPa. The influence of hydrostatic pressure enables us to consider the mechanical stability and phase transition mechanisms of interatomic interactions. For w-AlN, the elastic constants (C_ij) satisfy the mechanical stability conditions [38], namely C44 > 0, C11^2 > C12^2 and (C11 + C12)C33 > 2C13^2, which indicates that w-AlN is stable over the 0-40 GPa pressure range for both the LDA and GGA approximations. When a rigid material undergoes any kind of transformation, it deforms elastically, which means that it deforms in a predictable manner and springs back to its original shape when the load is released; the modulus of elasticity is the physical property that determines the amount of stiffness. Certain physical quantities are strongly linked with the derived elastic constants, such as the bulk modulus (B) and the shear modulus (G).
If all the different elastic constants C11, C12, C13, C33, C44 and C66 = (C11 − C12)/2 are extracted, it is possible to obtain the polycrystalline moduli using the Voigt and Reuss methods [39][40][41]. Both the bulk (B_V, B_R) and shear (G_V, G_R) moduli of AlN can be determined from the C_ij using the standard Voigt and Reuss expressions for hexagonal crystals (a worked example is sketched below). The bulk modulus (B) is then obtained as B = (B_R + B_V)/2, and the shear modulus (G), obtained analogously as G = (G_R + G_V)/2, describes the resistance to plastic deformation associated with the bending of the bonds; it depends on the nature of these bonds and decreases as a function of the ionicity. For the wurtzite structure, the shear anisotropy (A) is described as [42]: A_shear = 2C44/(C11 − C12). In addition, Young's modulus (E), known as the stress-strain ratio, is generally used to quantify the rigidity of the solid; it is defined as E = 9BG/(3B + G). All these parameters B, G, and E at different pressures for w-AlN are presented in Table 3.
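The polycrystalline averages referred to above can be computed directly from the five independent C_ij. The snippet below is our own reconstruction of the standard Voigt-Reuss-Hill expressions for hexagonal crystals, run on illustrative zero-pressure values of roughly the right magnitude for AlN rather than the exact Table 2 entries.

```python
def hexagonal_moduli(C11, C12, C13, C33, C44):
    """Voigt-Reuss-Hill bulk/shear moduli, Young's modulus, Poisson ratio and shear
    anisotropy for a hexagonal crystal, in the units of the input Cij (here GPa)."""
    C66 = 0.5 * (C11 - C12)
    M = C11 + C12 + 2.0 * C33 - 4.0 * C13
    C2 = (C11 + C12) * C33 - 2.0 * C13**2

    B_V = (2.0 * (C11 + C12) + C33 + 4.0 * C13) / 9.0
    G_V = (M + 12.0 * C44 + 12.0 * C66) / 30.0
    B_R = C2 / M
    G_R = 2.5 * (C2 * C44 * C66) / (3.0 * B_V * C44 * C66 + C2 * (C44 + C66))

    B = 0.5 * (B_V + B_R)                      # Hill average
    G = 0.5 * (G_V + G_R)
    E = 9.0 * B * G / (3.0 * B + G)            # Young's modulus
    nu = (3.0 * B - 2.0 * G) / (2.0 * (3.0 * B + G))  # Poisson's ratio
    A_shear = 2.0 * C44 / (C11 - C12)          # shear anisotropy
    return B, G, E, nu, A_shear

# Illustrative wurtzite-AlN-like constants at zero pressure (GPa); not the values of Table 2.
print(hexagonal_moduli(C11=396.0, C12=137.0, C13=108.0, C33=373.0, C44=116.0))
```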
Moreover, the agreement between the determined e 15 of our LDA and that stated in [43] is fair. An even significant difference between our calculated e 33 LDA and that of [43] may, however, be noted. In addition, it can be inferred that our GGA results are in good accordance with the experiment [44], considering a well-known experimental uncertainty in the calculation of piezoelectric constants. In addition, our results for the constant w 33 agree with the results computed using ab initio using the Berry-phase approach [45]. As is [9]. Therefore, we decided to study the variation in the piezoelectric, dielectric properties and, electromechanical coupling coefficient versus pressure at 20 GPa to ensure to ensure stability at this range. For e and d constants with LDA and GGA approximations, the range of the piezoelectric constants versus pressure is seen in Table 4 and plotted in Fig. 3 Figures 1 and 4 represent a clear pattern in which the value of e 33 increased as the value of c/a decreases, emphasizing that the structural parameter might be a crucial indicator for characterizing the piezoelectricity of wurtzite materials. The value of u increases as c/a decreases, indicating a strong relationship between the structural parameters. In addition, many previous theoretical studies showed that the high value of the piezoelectric constant is related to the small value of the lattice constant [47,48]. Our results agree well with that, as they indicate that by increasing the applied pressure, the value of the lattice constant decreases, which in turn increases the value of the direct piezoelectric constant. The microscopic explanation of this mechanism is illustrated by the following mathematical relationship: where q is the electron charge, P 3 and 3 are the electric polarization and external strain along the c-axis, respectively [45]. Moreover, the improvement in the piezoelectric constants along z-direction is being related with internal parameter u, so it depends on the z-direction positions of Al and N. Dielectric properties Dielectric constant describes the response of a given medium to an electric field. It is involved in many fields, in particular in optics, via the refractive index. The laws governing the refraction and reflection of light make use of it. At the microscopic level, the dielectric constant is linked to the electrical polarizability of the molecules or atoms constituting the medium. The dielectric constant is a tensor quantity (the response of the material can depend on the orientation of the crystallographic axes of the material), which reduces to a scalar in isotropic media. It is very generally complex; the imaginary part being linked to the phenomenon of absorption or emission of the electromagnetic field by the material. From (DFPT), we have calculated the Born effective charges ( Z B(g) ) and dielectric constants ε (∞). The dielectric tensor is symmetric, and it follows all of the point group's symmetry operations. According on the crystal symmetry, this restricts the number of independent components in the tensor to a minimum of 1 and a maximum of 6. Two separate components consist of the wurtzite system, one corresponding to the orientation parallel to the c-axis, m (∞) = xx (∞) = yy (∞) , a n d t h e o t h e r b e i n g p e r p e n d i c u l a r t o t h e c-axis, n (∞) = zz (∞) . In the hypothetical scenario of a material that does not respond to the external field at all, might equal the identity tensor and be zero. 
In actuality, there are materials that have no ionic contribution. Compounds must contain at least two atoms per primitive cell, each with a separate atomic charge, in order to be nonzero. Our findings with respect to m (∞) and n (∞) for wurtzite AlN at P = 0 are found to be 4.35 and 4.57 for our LDA, 4.42 and 4.64 for GGA approximation, in which our GGA is in good accordance with theoretical [41]. The average value (∞) = (1∕3)Tr (∞) at P = 0 has been calculated and found to be 4.43 and 4.52 for LDA and GGA, respectively, which LDA good agree with available theoretical [49]. The variation of pressure versus m (∞) , n (∞) and (∞) is shown in Fig. 4. Note that all the explored quantities decrease monotonically as pressure increases. Subsequently, the second derivatives of E are calculated with respect to the displacement of an atom and with respect to one of the components of ε; this forms the dynamic effective charge tensor (Born effective charge)Z B(g) . In an insulator, the born effective charge is a basic quantity that describes the main coupling between lattice displacement and the electrostatic field. In the wurtzite structure, the pressure dependence of born effective charge Z B is illustrated in Fig. 5 [51]. In addition, our GGA computed Z B m and Z B n are in perfect accordance with [50] using DFT approach. In Fig. 5, the pressure dependency of Z B m , Z B n k, and Z B for w-AlN is shown. From the figure shown, we find the effect of pressure on the effective charge, which gradually decreases with the increase in the applied pressure. Electromechanical coupling factor An interesting coefficient for the characterization of piezoelectric materials is the coupling coefficient (or factor) k. It represents a material's ability to transform mechanical energy into electrical energy, and vice-versa. The coupling coefficient is defined for each mode of vibration, depending on the direction of polarization and the shape of the sample. These factors are essential parameters in the nature of the energetic harvesting and sensing transducer applications [52]. For applications including pressure sensors, ultrasound transducers and gyroscopes, the piezoelectric medium is usually needed to work in longitudinal mode [53], where the related coupling coefficient is: where 33 is the dielectric tensor. The high K 33 coupling factor contributes to higher axial resolution, broader bandwidth and transducers with greater sensitivity [54]. Cantilever-based actuators function in the mode of transverse length (bending), where the related coefficient of coupling is: Our values concerning K 33 and K 31 for AlN at P = zero are found to be 0.29, and 0.12 for our LDA, 0.26 and 0.12 with GGA approximation. Our GGA is well in accordance with theoretical [55] for K 33 and our LDA is well in consistent with the experimental value of 0.3 [56]. The variation of the electromechanical coupling coefficients versus pressure is plotted in Fig. 6. Note that both the K 33 and K 31 coupling coefficients increase as pressure increases. In addition, the factor K 33 reached a value of 0.38 and 0.40 with GGA and LDA, respectively, at P = 20 GPa which is comparable with ZnO [57], BaTiO 3 [58], and, LiNbO 3 [59] as candidate compound for practical considerations. 
Conclusion

In summary, we have investigated the comprehensive effects of pressure on the structural, elastic, piezoelectric and dielectric properties and the electromechanical coupling coefficient of wurtzite AlN using calculations within density functional theory combined with density functional perturbation theory, with both the LDA and GGA approximations. Pressure increases the piezoelectric response of the wurtzite phase considerably, and the maximal piezoelectric coefficients that occur at the phase transition are much greater than those of the most typical ferroelectric perovskites. The improvement in the piezoelectric value leads to a higher value of the electromechanical coupling coefficient. A promising direction for further theoretical studies is piezoelectric thin films used in ultrasonic transducers.
PRIORITIZING CONSTRUCTING PROJECTS OF MUNICIPALITIES BASED ON AHP AND COPRAS-G: A CASE STUDY ABOUT FOOTBRIDGES IN IRAN

Each municipality has a certain yearly budget for construction, maintenance, and repair. Prioritizing projects is one of the difficult issues of decision making and takes time to evaluate and programme. The main aim of this study is to build a framework that municipalities can use to prioritize their projects; a footbridge project of Sari City in Iran was selected as a case study, in which boulevards that have no footbridge are prioritized for footbridge construction. The cost of constructing a footbridge can be high, and the siting of the footbridge is important for pedestrians. Moreover, selecting an area for constructing new footbridges involves quantitative and qualitative factors, such as total cost, environmental factors, and socio-economic factors. For these reasons, selecting an area for constructing footbridges can be viewed as a Multi-Attribute Decision Making (MADM) problem. The aim of this study is the use of the Analytic Hierarchy Process (AHP) and COPRAS-G methods for evaluating and selecting among candidate areas for constructing new footbridges. More precisely, AHP was used to calculate the relative importance of the criteria, and the COPRAS-G method was used to rank the location alternatives.

Introduction

Every municipality deals with many projects every year, including construction, maintenance, and repair. Among these, construction projects are one of the critical problems for a municipality, because many fields, such as manufacturing engineering, transportation engineering, and civil engineering, have to work together, and the results are very important for the government. In addition, the number of construction projects is large while the municipality budget is limited. A proper construction project selection is therefore a very important activity for every municipality, because an improper selection can negatively affect the overall performance and productivity of a project. Moreover, determining an appropriate area for a construction project is sometimes as important as the project itself. Most construction projects fall within civil engineering, and some civil engineering projects deal with designing and constructing new bridges. Bridges are important structures in our lives and make transportation easier; bridge projects are sometimes very expensive and vital for the country. Most studies in this field concern the design or construction of bridges, and almost no studies could be found on selecting an area for bridge construction. The common feature of these studies is their focus on the design, construction, and mathematical calculation of the bridge structure. Footbridges are bridges that pedestrians use for their movements, and generally all groups of people use them. These bridges help pedestrians cross the street without disturbing vehicle traffic. Selecting an area for constructing a new footbridge is a sophisticated, time-consuming, and difficult process requiring advanced knowledge and expertise, so the process can be very hard for engineers and managers. For a proper and effective evaluation, the decision maker may need a large amount of data and many evaluation factors. For these reasons, the selection of an area for constructing a new footbridge can be viewed as a multi-attribute decision making (MADM) problem.
The aim of this study is to use MADM methods for evaluating and selecting the best area, among the alternatives, for constructing a new footbridge. There are many MADM methods in the literature, including priority-based, outranking, distance-based, and mixed methods (Pomerol, Barba-Romero 2000). Some well-known MADM methods are: the analytic hierarchy process (AHP) (Saaty 1980), the analytic network process (ANP) (Saaty, Vargas 2001), axiomatic design (AD) (Kulak, Kahraman 2005), TOPSIS (Hwang, Yoon 1981), ELECTRE (Wang, Triantaphyllou 2008), VIKOR (Opricovic, Tzeng 2007), COPRAS-G (Zavadskas et al. 2008), and PROMETHEE (Behzadian et al. 2010; Dağdeviren 2008). Among these methods, the Analytic Hierarchy Process (AHP), introduced by Saaty (1980; 2001), is one of the best established. The idea behind this method is to obtain the relative weights among the factors and to calculate the total value of each alternative based on these weights. This study uses the AHP to calculate each criterion weight from the subjective judgments of the decision-maker group. The rating of each alternative and the weight of each criterion, which are determined using the AHP, are then passed to the complex proportional assessment method with grey interval numbers (COPRAS-G), which is a MADM method. This paper is organized in five sections. In section "Introduction" the studied problem is introduced. Section "Principles of AHP and COPRAS-G methods" briefly describes the two proposed methodologies. In section "Proposed AHP-COPRAS-G integrated approach", the proposed integrated approach for footbridge site selection is presented, and the stages and steps of the approach are described in detail. How the proposed approach is applied to a real-world case study is explained in section "Case study". In section "Conclusions and future research", conclusions and future research areas are discussed.

The AHP method

This technique was developed by Saaty (1980), and the main point behind it is how to determine the relative importance of a set of activities in a multi-criteria decision problem. Based on this approach, the decision maker can incorporate and translate judgments on intangible qualitative criteria alongside tangible quantitative criteria (Badri 2001). The AHP method is based on three principles: first, structuring the model; second, comparative judgment of the alternatives and the criteria; third, synthesis of the priorities (Dağdeviren 2008). Decision-making models based on AHP have been developed extensively in recent years. In the first step, a sophisticated decision problem is structured as a hierarchy. The method breaks a sophisticated decision making problem down into a hierarchy of objectives, criteria, and alternatives. These decision elements form a hierarchical structure with the goal of the problem at the top, the criteria in the middle, and the alternatives at the bottom. In the second step, the comparisons of the alternatives and criteria are made. In AHP, comparisons are made based on a standard nine-point scale (Table 1), in which the numbers 2, 4, 6, and 8 can be used as intermediate values. Let C = {C_j | j = 1, 2, ..., n} be the set of criteria. The result of the pairwise comparison on n criteria can be summarized in an evaluation matrix A, in which every element a_ij (i, j = 1, 2, ..., n) is the quotient of the weights of the criteria, as shown in Eq (1):

A = (a_ij)_{n×n}, with a_ij = w_i / w_j, a_ji = 1 / a_ij, and a_ii = 1, for i, j = 1, 2, ..., n.   (1)
At the third step, the mathematical process commences to normalize and find the relative weights for each matrix. The relative weights are given by the right eigenvector w corresponding to the largest eigenvalue λ_max, as

A w = λ_max w.   (2)

If the pairwise comparisons are completely consistent, the matrix A has rank 1 and λ_max = n. In this case, the weights can be obtained by normalizing any of the rows or columns of A (Wang, Yang 2007). The quality of the output of the AHP is strictly related to the consistency of the pairwise comparison judgments (Dağdeviren 2008). Consistency is defined by the relation between the entries of A: a_ij × a_jk = a_ik. The consistency index (CI) is

CI = (λ_max − n) / (n − 1).   (3)

The final consistency ratio (CR), from which one can conclude whether the evaluations are sufficiently consistent, is calculated as the ratio of the CI and the random index (RI), as indicated in Eq (4):

CR = CI / RI.   (4)

The CR index should be lower than 0.10 for the AHP results to be accepted as consistent (Işıklar, Büyüközkan 2007). If the final consistency ratio exceeds this value, the evaluation procedure has to be repeated to improve consistency (Dağdeviren 2008). The CR index can be used to check the consistency of the decision makers as well as the consistency of the whole hierarchy (Wang, Yang 2007).

The COPRAS-G method

In order to evaluate the overall efficiency of an alternative, it is necessary to identify the selection criteria, to assess the information relating to these criteria, and to develop methods for evaluating the criteria to meet the participants' needs. Decision analysis is concerned with the situation in which a decision-maker (DM) has to choose among several alternatives by considering a particular set of usually conflicting criteria. For this purpose, the complex proportional assessment (COPRAS) method, developed by Zavadskas and Kaklauskas (1996), can be applied. This method has been applied to the solution of various problems in construction (Tupenaite et al. 2010; Ginevičius et al. 2008; Kaklauskas et al. 2010; Zavadskas et al. 2010; Medineckiene, Björk 2011). Most alternatives under development deal with a vague future, and the values of the criteria cannot be expressed exactly. Such MADM problems should therefore be formulated not with exact criteria values, but with fuzzy values or with values given in intervals. Zavadskas et al. (2008) presented the main ideas of the complex proportional assessment method with grey interval numbers (COPRAS-G). The idea of the COPRAS-G method, with criterion values expressed in intervals, is based on the real conditions of decision making and on applications of Grey systems theory (Deng 1982; 1988). The COPRAS-G method uses a stepwise procedure for ranking and evaluating the alternatives in terms of significance and utility degree. The procedure consists of the following steps:

1. Selecting the set of the most important criteria describing the alternatives.

2. Constructing the decision-making matrix, whose entries are the intervals x_ij = [x_ij^L; x_ij^U] (5), where x_ij^L is the smallest value (the lower limit) and x_ij^U is the biggest value (the upper limit) of criterion i for alternative j.

3. Determining the significances q_i of the criteria.

4. Normalizing the decision-making matrix; the normalized values are calculated by formulas (6) and (7):

x*_ij^L = 2 x_ij^L / Σ_{j=1..n} (x_ij^L + x_ij^U),   (6)

x*_ij^U = 2 x_ij^U / Σ_{j=1..n} (x_ij^L + x_ij^U),   (7)

where x_ij^L is the lower value of criterion i for alternative j, x_ij^U is the upper value of criterion i for alternative j, m is the number of criteria, and n is the number of alternatives compared.
5. Calculating the weighted normalized decision matrix. The weighted normalized values are calculated as follows:

y_ij^L = q_i · x*_ij^L,   y_ij^U = q_i · x*_ij^U,   (8)

where q_i is the significance of the i-th criterion. The weighted normalized decision-making matrix (9) is then formed from these values.

6. Calculating the sums P_j of the criterion values whose larger values are more preferable:

P_j = (1/2) Σ_{i=1..k} (y_ij^L + y_ij^U).   (10)

7. Calculating the sums R_j of the criterion values whose smaller values are more preferable:

R_j = (1/2) Σ_{i=k+1..m} (y_ij^L + y_ij^U).   (11)

In formula (11), (m − k) is the number of criteria which must be minimized.

8. Determining the minimal value of R_j as follows:

R_min = min_j R_j.   (12)

9. Calculating the relative significance Q_j of each alternative by the expression:

Q_j = P_j + (R_min Σ_{j=1..n} R_j) / (R_j Σ_{j=1..n} (R_min / R_j)).   (13)

10. Determining the optimality criterion K by the formula:

K = max_j Q_j.   (14)

11. Determining the priority order of the alternatives.

12. Calculating the utility degree N_j of each alternative by the formula:

N_j = (Q_j / Q_max) · 100%,   (15)

where Q_j and Q_max are the significances of the alternatives obtained from Eq (13).

Proposed AHP-COPRAS-G integrated approach

In the first stage, the alternatives and the criteria to be used in their evaluation are determined, and the decision hierarchy is formed. In the last step of the first stage, the decision hierarchy is approved by the decision-making team. In the second stage, after approval of the decision hierarchy, the criteria used to evaluate the alternatives are assigned their weights via AHP. In this stage, the criteria weights are calculated by pairwise comparisons; the decision-making team used Table 1 as the standard for the pairwise comparisons, and the project team used the Delphi technique as a group decision-making tool to reach general agreement. Area priorities are found using the COPRAS-G computations in the third stage: the project team first evaluates the alternatives, and COPRAS-G is then used to rank them. Finally, in the last stage, the decision-making team decides on the best place for the footbridge.

Case study

Iran is one of the most dangerous countries for both drivers and pedestrians. This case study is based on one of the important municipality projects in Sari, Iran, to which the proposed approach is applied. Sari City is the capital of Mazandaran province in the north of Iran, near the Caspian Sea. Unfortunately, the number of pedestrians involved in accidents in Sari is high. In recent years, the principles of management and the structure of the municipality have changed; since a new building of the Sari Municipality was established less than two years ago, many projects have started in the city, such as developing roads, boulevards, and parks, as this 3000-year-old city, the first city in the whole north of Iran, deserves. Compared with the development of roads in the city, footbridges have not developed at the same pace, and this can be dangerous for local people and tourists. The municipality project team wants to evaluate and select areas for constructing new footbridges. The budget of the municipality is limited, and the best area has to be selected; however, it is hard to choose the most suitable one among municipality projects that dominate each other in different characteristics. This research tries to provide a framework, as a scientific way of prioritizing roads and boulevards for the construction of new footbridges, that can help the municipality follow its projects according to the budget and identify priority projects. Three boulevards were selected by the project team as alternatives because of their importance and situations.
These alternatives are Khazar Boulevard (KB), Artesh Boulevard (AB), and Taleghani Boulevard (TB) (Fig. 2).

Data gathering

At first, the top managers of Sari Municipality and a group of experts in civil engineering, economics, and social sciences participated in a conference meeting on the footbridge project (Table 2), and through preliminary work the decision-making team determined three possible areas suitable for the needs of the municipality. Three criteria with eight sub-criteria are used for the evaluation of the alternatives. The structured decision hierarchy and the criteria are provided in Fig. 3; there are four levels in the decision hierarchy structured for the selection.

AHP calculations

After constructing the decision hierarchy and the alternatives, the project team makes pairwise comparisons via AHP to evaluate all the criteria and weight each criterion. In this step, the experts in the decision-making team are given the task of forming individual pairwise comparison matrices using the scale given in Table 1. As mentioned before, the project team used the Delphi technique as a group decision-making tool to reach general agreement on their evaluations. All the pairwise comparisons and the weights of the criteria are shown in Tables 3-8; Eqs (1) to (4) were used for the AHP calculations, and the last column of every table shows the weight of each criterion. The Socio-economic factors, Environmental factors, and Total cost are determined by AHP as the three most important criteria in the area selection process.

Table 5. Pairwise comparison matrix for Traffic related factors and their weights
            C 1-1-1   C 1-1-2   Weights
C 1-1-1     1         1/2       0.333
C 1-1-2     2         1         0.667

Table 6. Pairwise comparison matrix for Accident related factors and their weights
            C 1-2-1   C 1-2-2   Weights
C 1-2-1     1         1/5       0.167
C 1-2-2     5         1         0.833

Table 3. Criteria, sub-criteria, sub-sub-criteria and their descriptions
C 1. Environmental factors
  C 1-1 Traffic related factors: C 1-1-1 Vehicles; C 1-1-2 Pedestrians
  C 1-2 Accident related factors: C 1-2-1 Number of injuries; C 1-2-2 Dead
  C 1-3 Average speed limit

Pedestrians and vehicles are determined by AHP as the two most important sub-criteria of the Traffic related factors in the area selection process. Within the Accident related factors, the number of dead and the number of injuries are determined as the two most important criteria. Among the sub-criteria of the Environmental factors, the Accident related factors, Traffic related factors, Influence of physical factors, and Average speed are determined as the four most important criteria. Within the Socio-economic factors, the situation of the area, the special importance of each road or boulevard to the city, and the rate of transportation of families, children, and businesses are determined as the three most important criteria in the area selection process. The consistency ratio calculated for all of the pairwise comparison matrices was lower than 0.1, so the weights are shown to be consistent and are used in the selection process.

COPRAS-G calculations

First of all, in this step the alternatives are evaluated based on the evaluation criteria and the evaluation matrix is constructed. The evaluations of the three alternatives according to the previously stated criteria, i.e., the evaluation matrix, are displayed in Table 9. In Table 9, the weight of each criterion and sub-criterion was calculated based on the AHP results for the criteria and sub-criteria.
Final results calculated by formulas (10)-(15) are presented in Table 11.

Table 9. Initial decision-making matrix with the criteria values described in intervals [lower limit; upper limit]
KB:  [60; 70]  [70; 80]  [50; 60]  [20; 30]  [50; 60]  [70; 80]
AB:  [70; 80]  [60; 70]  [60; 70]  [30; 40]  [60; 70]  [60; 70]
TB:  [60; 70]  [60; 70]  [40; 50]  [20; 30]  [50; 60]  [60; 70]

According to the results of Table 11, Khazar Boulevard has the first priority for the construction of a footbridge, followed by Taleghani Boulevard, and finally Artesh Boulevard is the last in the prioritization.

Decision-making

Each municipality has a limited budget and needs to make the best decisions on its projects. The defined project was area selection, and the problem was to select one of the areas based on quantitative and qualitative criteria. The aim of this study was the use of MADM tools for solving this problem for the municipality projects in Sari. According to the AHP and COPRAS-G computations, it was decided to select KB. To reach a more accurate analysis, the project team used conference meetings and the consistency ratio in the AHP calculations. The use of grey analysis helped the project team deal with uncertain and insufficient information, to build a relational analysis, and to construct a model to characterize the system.

Conclusions and future research

In this paper, a decision approach is provided for prioritizing projects, particularly the construction of new footbridges. Municipality projects are important for every city, and the best decisions must be made about them. The budget of each municipality is confined while the needs are wide, so appropriate prioritizing is very important and influences the time needed to finish a project and the quality with which it is carried out. This selection problem is based on comparisons of area criteria and evaluations of the alternatives according to the identified criteria. Integrated AHP and COPRAS-G methods have been used in the proposed approach: AHP is used to assign weights to the criteria used in area selection, while COPRAS-G is employed to determine the ranking of the alternatives. The weights obtained from AHP are included in the decision-making process by using them in the COPRAS-G computations, and the alternative priorities are determined based on these weights. The proposed model has only been implemented for area selection for constructing new footbridges in the municipality project in Sari; however, the project team has found the proposed model satisfactory and implementable in other bridge selection decisions. This approach could also be used for prioritizing any other kind of municipal construction project, such as roads, bridges, and highways.
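To make the two building blocks of the approach easier to reproduce, the following sketch implements the principal-eigenvector AHP weighting with its consistency ratio (Eqs (2)-(4)) and a COPRAS-G ranking following the standard Zavadskas et al. (2008) formulation assumed in the reconstructed formulas above. The comparison matrix, weights, and interval values are illustrative placeholders, not the Sari project data.

```python
import numpy as np

# --- AHP: weights from a pairwise comparison matrix, plus consistency ratio ---
def ahp_weights(A, RI=0.58):
    """Principal-eigenvector weights and consistency ratio of matrix A.
    RI is Saaty's random index for the matrix size (0.58 for n = 3)."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    lam_max = vals[k].real
    n = A.shape[0]
    CI = (lam_max - n) / (n - 1)
    return w, CI / RI

# Hypothetical 3x3 comparison of three criteria (illustrative only)
A = np.array([[1.0, 1/2, 3.0],
              [2.0, 1.0, 4.0],
              [1/3, 1/4, 1.0]])
q, CR = ahp_weights(A)
print("criterion weights:", q.round(3), "CR:", round(CR, 3))

# --- COPRAS-G: rank alternatives described by grey intervals [lower; upper] ---
def copras_g(lower, upper, q, benefit):
    """lower/upper: (n_alt x m_crit) interval bounds; q: criterion weights;
    benefit: boolean mask, True where larger values are preferable."""
    norm = 0.5 * (lower + upper).sum(axis=0)       # per-criterion normaliser
    lo, up = q * lower / norm, q * upper / norm    # weighted normalised values
    P = 0.5 * (lo[:, benefit] + up[:, benefit]).sum(axis=1)
    R = 0.5 * (lo[:, ~benefit] + up[:, ~benefit]).sum(axis=1)
    Q = P + R.min() * R.sum() / (R * (R.min() / R).sum())
    return 100 * Q / Q.max()                       # utility degree N_j in %

# Three hypothetical alternatives, two criteria (one benefit, one cost)
lower = np.array([[60, 20], [70, 30], [60, 20]], float)
upper = np.array([[70, 30], [80, 40], [70, 30]], float)
print("utility degrees:",
      copras_g(lower, upper, np.array([0.6, 0.4]),
               np.array([True, False])).round(1))
```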
Seismic inference of 57 stars using full-length Kepler data sets

We present stellar properties (mass, age, radius, distances) of 57 stars from a seismic inference using full-length data sets from Kepler. These stars comprise active stars, planet hosts, solar analogs, and binary systems. We validate the distances derived from the astrometric Gaia-Tycho solution. Ensemble analysis of the stellar properties reveals a trend of the mixing-length parameter with surface gravity and effective temperature. We derive a linear relationship with the seismic quantity $\langle r_{02} \rangle$ to estimate the stellar age. Finally, we define the stellar regimes where the Kjeldsen et al. (2008) empirical surface correction for 1D model frequencies is valid.

Introduction

Time-series analysis of the full-length Kepler data sets of solar-like main-sequence and subgiant stars is presented in [1], where modes of oscillation are identified for 66 stars. Using the Asteroseismic Modeling Portal (AMP, https://amp.phys.au.dk/), we analyse the individual frequency data from [1] for 57 of these stars, and we supplement them with spectroscopic data from [2][3][4][5][6][7]. In this version of AMP, AMP 1.3, we fit the frequency separation ratios r01 and r02 [8] to determine the optimal models for each star in our sample. The optimization method in AMP is a genetic algorithm (GA), which efficiently samples the full parameter ranges without imposing constraints on unknown parameters, such as the mixing-length parameter or the initial chemical composition. The result of the GA is a dense clustering of models (thousands) around the optimal parameters. We analyse the distribution of these models to determine the stellar properties, such as mass, radius, and age, along with their uncertainties. Details of the methods can be found in [9][10][11], with the most recent reference containing the tables of stellar parameters for the sample of 57 stars presented here. The results are validated using solar data and independent determinations of luminosity, radii, and ages. In these proceedings we analyse the derived stellar properties of our sample. We compare the predicted distances with the recent solution provided by the Gaia-Tycho analysis (TGAS, [12], Sect. 2). In Section 3 we derive expressions for estimating the mixing-length parameter and the stellar age based on observed properties. Finally, in Section 4 we explore the range of parameters where the correction proposed by [13], which mimics the so-called surface effect in seismic data, is useful.

Asteroseismic distances

We use the stellar luminosity, L, constrained from the asteroseismic analysis, to compute the stellar distance and hence a parallax. The model surface gravity and the observed T_eff and [Fe/H] are used to derive the amount of interstellar absorption between the top of the Earth's atmosphere and the star, A_Ks, by applying the isochrone method described in [14]. Here, the subscript Ks refers to the 2MASS Ks filter [15]. We compute the bolometric correction BC_Ks using BC_Ks = 4.514650 − 0.000524 T_eff [16], where the solar bolometric magnitude is 4.72 mag. With L, Ks, A_Ks, and BC_Ks we derive the distance to each of the stars in our sample, and then its parallax. The TGAS catalogue of stellar parallaxes was recently made available through the first Gaia Data Release [17]. A comparison of the parallaxes we derive with these new values is shown in Figure 2.
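Before turning to that comparison, the sketch below illustrates the distance and parallax computation just described, assuming the standard distance-modulus relation together with the quoted BC_Ks calibration and solar bolometric magnitude; the exact intermediate steps used in the paper may differ, and the input values are placeholders rather than entries from the paper's tables.

```python
import math

def seismic_parallax(L, Ks, Teff, A_Ks):
    """Parallax (mas) from the asteroseismic luminosity L (in L_sun),
    the 2MASS Ks magnitude, Teff (K), and the Ks-band extinction A_Ks.
    Uses the bolometric correction quoted in the text,
    BC_Ks = 4.514650 - 0.000524 * Teff, and M_bol,sun = 4.72 mag."""
    M_bol = 4.72 - 2.5 * math.log10(L)          # absolute bolometric magnitude
    BC_Ks = 4.514650 - 0.000524 * Teff          # Ks-band bolometric correction
    m_bol = Ks - A_Ks + BC_Ks                   # de-reddened apparent bol. magnitude
    d_pc = 10 ** (0.2 * (m_bol - M_bol) + 1)    # distance modulus -> parsec
    return 1000.0 / d_pc                        # parallax in milliarcseconds

# Placeholder inputs for a solar-like star (not taken from the paper)
print(round(seismic_parallax(L=1.2, Ks=7.5, Teff=5900.0, A_Ks=0.01), 2), "mas")
```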
Apart from a few outliers, there is overall good agreement between the independent methods. However, the parallaxes that we derive are systematically larger, with a mean difference of +0.7 mas, just over 2σ given the current uncertainties on the TGAS parallaxes. The cause of this systematic difference is currently under investigation. Nevertheless, it is worth noting that such comparisons of independent methods may allow us to reveal errors in the extinction measurements, the bolometric corrections, the T_eff, or the underlying assumptions in the models used to compute the stellar luminosities.

Trends in stellar properties

Performing a homogeneous analysis on a large sample allows us to check for trends in some stellar parameters and to compare them with trends derived or established by other methods. We performed this check for two parameters: the mixing-length parameter and the stellar age.

The mixing-length parameter versus T_eff and log g

The mixing-length parameter, α, is usually calibrated for a solar model and then applied to all models for a set range of masses and metallicities. However, several authors have shown that this approach is not correct [18,19]. The values of α resulting from a GA offer an optimal approach to effectively test and subsequently constrain this parameter, since the only assumption is that α lies between 1 and 3. The distribution of α with log g and T_eff is shown in the upper panel of Fig. 3 (colour coding: red for T_eff < 5600 K, yellow for 5600 K < T_eff < 6000 K, green for 6000 K < T_eff < 6300 K, and blue for T_eff > 6300 K; the lower panel of the figure shows the distribution of age and ⟨r02⟩). We see that for a given value of log g, the value of α has an upper limit. We fit α as a linear function of log g, T_eff, and [M/H], with a mean and rms of the residuals to the fit of −0.01 ± 0.15. This relation yields a value of α = 2.03 for the known solar properties. These results agree in part with those derived by [20] using full 3D radiative hydrodynamic calculations for convective envelopes; these authors also found that α increases with log g and decreases with T_eff. Our fit indicates a very small dependence on metallicity, while their results find an opposite and more significant trend with this parameter. Our sample, however, does not span a very large range in [M/H], and the low coefficient is consistent with zero within the error bars.

Age and ⟨r02⟩

The r02 frequency separation ratios are effective at probing the gradients near the core of the star [8]. As the core is most sensitive to nuclear processing, the r02 are a diagnostic of the evolutionary state of the star. Using theoretical models, [21] showed a relationship between the mean value of r02 and the stellar age. That relationship was recently used by [22] to estimate the age of KIC 7510397 (HIP 93511). Figure 3 shows the distribution of ⟨r02⟩ versus the derived age for the sample of stars studied here. A linear fit to these data leads to an estimate of the stellar age, τ, based on ⟨r02⟩ (Eq. 2), which is, of course, only valid for the range covered by our sample. Note that inserting the solar value of ⟨r02⟩ = 0.068 into Eq. 2 yields an age of 4.7 Gyr, in agreement with the Sun's age as determined by other means.

Surface Effects

The comparison of the observed oscillation frequencies from the Sun and other stars with model frequencies calculated from 1D stellar models reveals a systematic error in the models which increases with frequency.
These are known as surface effects, which arise from incomplete modelling of the near-surface layers and the use of an adiabatic treatment of the stellar oscillation modes. Efforts to improve the stellar modelling are underway, but the application of improved models on a large scale is still out of reach. To alleviate this problem, several authors have advocated the use of combination frequencies which are insensitive to this systematic offset [8], hence the exclusive use of r01 and r02 in the AMP 1.3 methodology. However, since individual frequencies contain more information than r01 and r02, some authors have derived simple prescriptions to mitigate the surface effect. One such parametrization is that of [13], who suggest a simple correction to the individual frequencies $\delta\nu_{n,l}$ in the form of a power law, namely

$\delta\nu_{n,l} = a_0 \left( \nu^{\mathrm{obs}}_{n,l} / \nu_{\max} \right)^{b}$,   (3)

where b = 4.82 is a fixed value calibrated with a solar model, $a_0$ is computed from the differences between the observed and model frequencies [10,23], $\nu^{\mathrm{obs}}_{n,l}$ is the observed frequency of the (n, l) mode, and $\nu_{\max}$ is the frequency corresponding to the highest-amplitude mode, see [1]. The AMP 1.3 methodology uses exclusively r01 and r02 as the seismic constraints; hence our results are insensitive to the surface effects. Using the model frequencies of the best-matched models, we can test how useful Eq. 3 is for different stellar regimes along the main sequence and early subgiant phase. The interest in testing this becomes apparent when we consider that for many stars we do not have a high enough precision on the frequencies, or a sufficiently large range of radial orders, to use r01 and r02 to effectively constrain the stellar modelling. This is the case for some ground-based observations and for some stars that will be observed by TESS [24] and PLATO [25], where a limited time series of only one to two months may be available. In order to test where Eq. 3 is useful, we calculated the residual between the observed oscillation frequencies and the model frequencies corrected for the surface term (Eq. 3): $q_{n,l} = \nu^{\mathrm{obs}}_{n,l} - \nu^{\mathrm{mod}}_{n,l} + \delta\nu_{n,l}$. Then we defined the metric Q as the median of the square root of the squared residuals, $Q = \mathrm{median}\big( \sqrt{q_{n,l}^{2}} \big)$, for all n and all l in the region 0.7 ≤ $\nu^{\mathrm{obs}}_{n,l}/\nu_{\max}$ ≤ 1.3. We find values of Q that vary between 0.3 and 10, and this variation is anti-correlated with $a_0$: as $a_0$ becomes more negative, the match with the model becomes worse. We define a subsample of 44 stars with the best fits to r01 and r02. This subsample is shown in Fig. 4 as open circles for the observed values of ∆ν and T_eff, and the derived values of mass and radius. In these figures, the stars represented by dark blue filled circles have Q ≤ 1.0 and those represented by light blue filled circles have 1.0 < Q < 1.2. It is evident from the figures that there exist regions of the parameter space where the surface correction proposed by [13] adequately corrects for the surface term. These regions are highlighted by the dashed lines, defined more conservatively here as log g > 4.2, T_eff < 6250 K, ∆ν > 80 µHz, and ν_max > 1700 µHz. In physical properties this corresponds to a star with R < 1.6 R_⊙, M < 1.3 M_⊙, and L < 2.5 L_⊙, with no apparent evidence of the age or the metallicity playing any role.
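The following sketch evaluates the surface correction and the Q metric exactly as written above (power law with b = 4.82, residual q, median over 0.7-1.3 ν_max). The coefficient a_0 is simply taken as an input here, since the procedure used in the paper to compute it from the observed-minus-model differences is not reproduced, and the frequencies are fabricated purely for illustration.

```python
import numpy as np

def surface_corrected_residuals(nu_obs, nu_mod, nu_max, a0, b=4.82):
    """Power-law surface correction (Eq. 3) and the Q metric described above.

    nu_obs, nu_mod : arrays of observed and model mode frequencies (same order)
    nu_max         : frequency of maximum oscillation power
    a0             : amplitude of the correction (supplied, not fitted here)
    """
    delta = a0 * (nu_obs / nu_max) ** b              # power-law surface term
    q = nu_obs - nu_mod + delta                      # residual, sign as in the text
    sel = (nu_obs / nu_max >= 0.7) & (nu_obs / nu_max <= 1.3)
    Q = np.median(np.sqrt(q[sel] ** 2))              # median absolute residual
    return delta, Q

# Hypothetical frequencies (muHz), with a fake offset the power law removes exactly
nu_obs = np.array([1800.0, 1900.0, 2000.0, 2100.0, 2200.0])
a0_true = -2.0
nu_mod = nu_obs + a0_true * (nu_obs / 2000.0) ** 4.82
_, Q = surface_corrected_residuals(nu_obs, nu_mod, nu_max=2000.0, a0=a0_true)
print("Q =", round(Q, 3))   # ~0 when the correction matches the offset
```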
Conclusions In these proceedings we used the derived properties of 57 Kepler stars presented in [11] to predict the star's parallaxes, to derive an expression relating the mixing-length parameter and the age to observed properties, and to explore the regions of parameters where the proposed surface correction to individual frequencies by [13] (Eq. 3) is useful. For this latter point, the aim of defining valid regions is to use the correction for data sets where the frequency precision or the range of radial orders is not sufficient for r 01 and r 02 to constrain the stellar models. This will be the case for some stars that will be observed in low ecliptic latitudes with TESS and for the step-and-stare pointing phases of the PLATO mission. The parallaxes that we derived in this work were compared to the parallaxes derived from the Tycho-Gaia solution. We found in general a good agreement between our results. The mean differences between our results (ourstheirs) is +0.7 mas and the cause of this error is currently under study. Nevertheless we validate the new parallaxes from [12]. This comparison of parallaxes also highlights the ability of these external measurements to allow us to investigate the sources of systematic errors. We believe that in providing a prior on the luminosity, we may even be able to constrain the initial helium abundance in the star. We look forward to the wealth of forthcoming asteroseismic data on bright nearby stars with TESS and PLATO, and the forthcoming Gaia parallaxes with µas precision.
The Citizen as a Key Point of the Policies: A First Approach to Auralization for the Acoustic Perception of Noise in an Urban Environment

Abstract: The improvement of the quality of life in the framework of the smart city paradigm cannot be limited to a set of objective measures carried out over several critical parameters (e.g., noise, air pollution). The citizen's perception of the problem to be solved, as well as the perception of the improvement achieved with the policies defined for this purpose, is more important than the objectivity and the measurement of the change achieved. A first auralization approach for the evaluation of the acoustic perception of street noise is presented in this work. The wireless acoustic sensor network can pick up street noise and can even record specific sounds that reach a higher equivalent level for study, but the most important thing for the administration is whether the neighbour has noticed an improvement in the quality of life. This work is a first approximation to an estimation of the real perception, by citizens, of the urban street noises collected by a low-cost wireless acoustic sensor network.

Introduction

Because of population growth and the consequent expansion of transportation systems, including highways, railways, and airways, environmental noise pollution is increasing year after year. Noise pollution continues to constitute a major environmental health problem in Europe [1]. Among the health effects, annoyance is one of the best known effects of environmental noise [2]; however, noise is not merely an annoyance, since several works point out health-related problems such as sleep disorders [3], learning impairment [4], and heart disease [5]. Most of the conducted studies address the effects of long-term exposure to environmental noise and are mainly focused on concentration, sleep disturbance, and stress [6], emphasizing the especially negative effects on children [7]. The European Union reacted to this alarming increase of environmental noise pollution, especially in large cities, by approving the Environmental Noise Directive 2002/49/EC (END) [8]. In accordance with the END, the CNOSSOS-EU methodological framework aims to improve the consistency and comparability of noise assessment results across the EU Member States [9] for its application. The main pillars of the END are the following: (i) determining the noise exposure; (ii) making updated information related to noise available to citizens; and (iii) preventing and reducing environmental noise where necessary.
Recent studies show that the effects of noise on people depend not only on the level of noise, but also on the type of sound. In fact, in 2018 the WHO incorporated into its study noises such as leisure noise and wind turbine noise (http://www.euro.who.int/en/publications/abstracts/environmental-noise-guidelines-for-the-european-region-2018). This principle is the basis for the work carried out by this team within the framework of the European project LIFE-DYNAMAP [10]. The ANED [11], the anomalous noise event detector, has been designed to rule out non-traffic noise events; this algorithm separates abnormal noises from the road traffic noise. The ANED is an algorithm based on the spectral distribution of the different types of noise to identify them properly, and throughout the study period it has been shown that, depending on the propagation of the sound, the identification conditions may change [12]. Furthermore, by changing the temporal and spectral distribution of the signal, human perception may also change [13]. This work intends to be a first step within the concept of auralization in an urban environment [14] with non-traffic-related noise (anomalous noise events, ANE), to make a first observation of the effect of the channel [15] on the spectro-temporal representation of the real-operation signal collected in the Milan pilot of the DYNAMAP project [10]. The final goal of this preliminary study is two-fold: on the one hand, to take a first look at the possible influence of the channel propagation on the accuracy of the ANED; on the other hand, to consider the possibility that the annoyance perceived by the neighbourhood changes depending on the impulse response of the noise propagation channel. This paper is structured as follows. In Section 2, brief details of the LIFE-DYNAMAP project are given. In Section 3 the mathematical models used for propagation are detailed, and the corresponding results are presented in Section 4. Finally, several conclusions are described in Section 5, and future work is proposed.

The DYNAMAP Project and Real-Operation Recordings

In this framework, the DYNAMAP project [10] aims to deploy a low-cost hybrid WASN to generate tailored noise maps representing the acoustic impact of road infrastructures in real time, using a Geographic Information System (GIS) platform. The project includes the deployment of two pilot areas in Italy: the A90 motorway in Rome (for the suburban scenario) and District 9 in Milano (urban area). The system has to operate 24 h a day, 7 days a week. In order to monitor the impact of the road infrastructures solely, the events that are unrelated to road traffic noise, denoted as ANE, should be removed from the noise map generation [16] to avoid their impact.
In District 9 in Milan there are currently 24 low-cost, high-capacity sensors deployed in a WASN. We have collected ANE data from two sensors (hb137 and hb145), whose conditions are the closest to open air, since they are located near parks and not in narrow streets. For more details about the location of the sensors, the reader is referred to [17]. The data were recorded during two complete days - one weekday, a Thursday, and one weekend day, a Sunday - gathering 20 min of audio data each hour, in order to maximize the diversity of the recorded ANEs. For the acoustic data gathering, Bluewave, the partner of the DYNAMAP project that handles the sensors' hardware design and maintenance, provided us with access to the recorded data files in the cloud, which were subsequently downloaded. The next step was labeling, by subjective listening, half of the available audio (all odd hours of the 20 min recorded: 1 h, 3 h, 5 h, ..., 23 h), which was performed by five trained listeners. From those labeled events, we collected several significant noises (airplane, bell, and horn) to conduct this first stage of the study.

Outdoor Propagation Models

In this work, we consider the sound signal radiating isotropically as a spherical wave-front [18]. In such a case, the free-field intensity of the radiation reduces with the inverse square of the distance. In terms of the sound pressure P, this translates into the relationship

P(R) ∝ 1 / r,   with r = ||R − E||,   (1)

where R is the location of the receiver, E is the location of the emitter, and r is the Euclidean distance between them. In this work, we do not consider high-frequency attenuation due to atmospheric scattering. Regarding sound reflection models, we assume pure specular reflectors with obstacles much bigger than the emitted sound wavelength, since it is an urban scenario. We also take into account two channel models in an urban scenario. The first one is a two-tap channel (Channel A), in which the emitter and receiver are separated by 5 m; in Channel A, we consider a direct path and a ground-reflected path (7 m long). We have also designed a more challenging channel (Channel B), with a direct path (8 m long), a ground-reflected path (10 m long), and two more paths reflected or refracted by nearby walls and/or vegetation (14 m and 16 m long). Each tap introduces an attenuation which is, as stated above, inversely proportional to the length of the path. The phase θ of each path is uniformly distributed between 0 and 2π. Then, the impulse response of the channel can be expressed as

h(t) = Σ_{n=1..N} r_n e^{jθ_n} δ(t − τ_n),   (2)

where r_n is the attenuation of path n, θ_n its phase, τ_n its propagation delay, and N is the number of paths of the channel (N = 2 for Channel A, and N = 4 for Channel B).
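A minimal sketch of the two channels described above is given below, with one tap per path, 1/r amplitude, and a uniformly random phase. The sampling rate, the speed of sound, and the rounding of delays to whole samples are assumptions made for the sketch, not parameters stated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
C_SOUND = 343.0      # speed of sound in air, m/s (assumed)
FS = 48_000          # sampling rate in Hz (assumed)

def channel_impulse_response(path_lengths_m):
    """Discrete-time impulse response with one tap per propagation path:
    amplitude 1/r (spherical spreading), random phase in [0, 2*pi),
    delay rounded to the nearest sample."""
    delays = np.round(np.array(path_lengths_m) / C_SOUND * FS).astype(int)
    h = np.zeros(delays.max() + 1, dtype=complex)
    for r, d in zip(path_lengths_m, delays):
        h[d] += (1.0 / r) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))
    return h

channel_A = channel_impulse_response([5.0, 7.0])                 # direct + ground
channel_B = channel_impulse_response([8.0, 10.0, 14.0, 16.0])    # 4-path channel

# Propagate a short test tone through each channel (real part of the output)
t = np.arange(0, 0.1, 1.0 / FS)
tone = np.sin(2.0 * np.pi * 1000.0 * t)
received_A = np.convolve(tone, channel_A).real
received_B = np.convolve(tone, channel_B).real
print(len(received_A), len(received_B))
```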
Results

In this section, we evaluate the changes observed in the frequency domain when the recorded ANEs are propagated through two different multipath channels (Channel A and Channel B, explained in Section 3). We show in Figures 1-3 the outcomes for different ANEs, i.e., the noise of an airplane, a bell, and a horn. For the sake of brevity, we only show these three examples, which are representative of the phenomena we want to outline. For each of them, we show the spectrogram of the emitted signal in the upper plot, the spectrogram of the received signal through Channel A in the mid-left plot, and the spectrogram of the received signal through Channel B in the mid-right plot. Finally, in the lower plots of each figure, we show the accumulated energy at the receiver when propagating through Channels A and B, on the left- and right-hand side, respectively. The spectrogram chops the signal into 40-ms segments, which are windowed with a Hanning window to reduce leakage and transformed into the frequency domain by means of a 2048-point FFT, displayed on a natural scale. Consecutive segments are overlapped by a factor of 87.5% to maximize the probability of detection. In Figure 2, we can observe two phenomena: first, the frequency distribution of the energy changes depending on the type of channel, and second, the intensity of the received sound may depend on the number of paths of the channel and on the phase of each of them when impacting the receiver. In Figure 3, we can observe that Channel A and Channel B influence the intensity of the high-frequency components (e.g., the component at 10 kHz), as well as their time length. In Figure 1, we can observe that the intensity and the number of high-frequency components are both reduced in Channel B compared with Channel A. In all of the figures, we can observe that the variation of the accumulated energy at the emitter point is more similar to that at the receiver point when propagating through a channel with a small number of taps (i.e., Channel A) than through a channel with a higher number of taps (i.e., Channel B). Having a higher number of replicas with random phases added together increases the probability of the maxima of energy occurring at different time instants.

Conclusions

The work presented in this paper is a preliminary study to determine the spectro-temporal variations of acoustic signals in the presence of different types of propagation channels in an urban environment. On the one hand, the qualitative evaluations developed in this work show substantial variations, both in the spectral distribution of energy and in the temporal variations due to delay. These variations can have severe effects on the detection of anomalous events using the ANED algorithm. On the other hand, it should also be considered whether these spectro-temporal variations have any effect on people living in the environment: do these variations make the noises more annoying? Does perception change when the coefficients of the spectral and temporal energy distribution are modified?
Future work will focus on the quantification of the spectro-temporal variations depending on the type of channel under consideration. At the same time, the study will be generalized to all the ANEs available in the project, and the degree of detection will be determined as the ANED accuracy for different types of channels. Finally, we intend to study how well the detection of acoustic events generalizes across varying propagation environments, from the example of a narrow street with tall buildings to a point surrounded by a park, much closer to what could be considered open air.

Figure 1. Anomalous noise event labeled as an airplane. The spectrogram of the emitted signal in the upper plot; in the mid-left plot, the spectrogram of the received signal through Channel A; in the mid-right plot, the spectrogram of the received signal through Channel B; in the lower-left plot, the accumulated energy through Channel A; and in the lower-right plot, the accumulated energy through Channel B.

Figure 3. Anomalous noise event labeled as a horn. The spectrogram of the emitted signal in the upper plot; in the mid-left plot, the spectrogram of the received signal through Channel A; in the mid-right plot, the spectrogram of the received signal through Channel B; in the lower-left plot, the accumulated energy through Channel A; and in the lower-right plot, the accumulated energy through Channel B.

Author Contributions: R.M.A.-P. conceived of the experiments and wrote a part of the paper. P.B. coded the tests and wrote the rest of the paper.
Taylor Bird Swarm Algorithm Based on Deep Belief Network for Heart Disease Diagnosis: Contemporary medicine depends on a huge amount of information contained in medical databases. Thus, the extraction of valuable knowledge, and making scientific decisions for the treatment of disease, has progressively become necessary to attain effective diagnosis. The availability of a large amount of medical data leads to the requirement of effective data analysis tools for extracting constructive knowledge. This paper proposes a novel method for heart disease diagnosis. Here, the pre-processing of medical data is done using a log transformation that converts the data to a uniform value range. Then, the feature selection process is performed using sparse fuzzy c-means (FCM) for selecting significant features to classify the medical data. Incorporating sparse FCM in the feature selection process provides additional benefits for interpreting the models, as this sparse technique provides important features for detection and can be utilized for handling high-dimensional data. Then, the selected features are given to the deep belief network (DBN), which is trained using the proposed Taylor-based bird swarm algorithm (Taylor-BSA) for detection. Here, the proposed Taylor-BSA is designed by combining the Taylor series and the bird swarm algorithm (BSA). The proposed Taylor-BSA-DBN outperformed other methods, with a maximal accuracy of 93.4%, a maximal sensitivity of 95%, and a maximal specificity of 90.3%.

Introduction

Contemporary medicine depends on a large amount of information accumulated in medical datasets. The extraction of such constructive knowledge can help when making scientific decisions to diagnose disease. Medical data can enhance the management of hospital information and support the growth of telemedicine. Medical data primarily serve patient care first and research second. The main rationale for collecting medical data is to promote patients' health [1]. The accessibility of large volumes of medical data causes redundancy, which requires effective techniques for processing the data to extract beneficial knowledge. However, the diagnosis of various diseases raises significant issues in data analysis [2]. Quantifiable diagnosis is performed under a doctor's guidance rather than from patterns in the medical dataset; thus, there is the possibility of incorrect diagnosis [3]. Cloud-based services can assist with managing medical data, including compliance management, policy integration, access controls, and identity management [4]. Nowadays, heart disease is a leading cause of death. We are moving towards a new industrial revolution; thus, lifestyle changes should take place to prevent risk factors of heart disease, such as obesity, diabetes, hypertension, and smoking [5]. The treatment of disease is a complex mission in the medical field. The discovery of heart disease, with its different risk factors, is considered a multi-layered issue [6]. Thus, patient medical data are collected to simplify the diagnosis process. Offering a valuable service at lower cost is a major challenge in the healthcare industry. In [7], valuable quality service refers to precise diagnosis and effective treatment of patients. Poor clinical decisions cause disasters, which may affect the health of patients. Automated approaches, such as the machine-learning approach [8,9] and the data mining approach [10], assist with attaining clinical tests, or diagnoses, at reduced risk [11,12].
Classification and pattern recognition by machine learning algorithms are widely used in prognostic and diagnostic monitoring. The machine learning approach supports decision-making, which increases the safety of patients and avoids medical errors, so that it can be used in clinical decision support systems (CDSS) [13,14]. Several methods have been devised for automatic heart disease detection, for example to evaluate the efficiency of decision trees and Naive Bayes [15]. Moreover, optimization with a genetic algorithm has been employed for minimizing the number of attributes without sacrificing accuracy and efficiency in diagnosing heart disease [16]. Data mining methods for heart disease diagnosis include the bagging algorithm, neural networks, support vector machines, and automatically defined groups [17]. In [18], the study acquired 493 samples from a cerebrovascular disease prevention program and utilized three classification techniques (the Bayesian classifier, the decision tree, and the backpropagation neural network) for constructing classification models. In [19], a method is devised for diagnosing coronary artery disease; the method utilized 303 samples and adopted a feature creation technique. In [20], a methodology is devised for automatically assessing the efficiency of features for characterizing heart rate signals. In [21], a hybrid algorithm is devised with K-Nearest Neighbour (KNN) and the genetic algorithm for effective classification; the method utilized a genetic search as a goodness measure for ranking attributes, and the classification algorithm was then applied to the evaluated attributes for heart disease diagnosis. The extraction of valuable information from huge data is a time-consuming task [22]. The size of medical datasets is increasing rapidly, and advanced data mining techniques help physicians make effective decisions. However, the issues with heart disease data involve feature selection, in which the imbalance of samples and the limited informativeness of features are just some of the difficulties [23]. Although there are methods for heart disease detection with real-world medical data, these methods still need to improve accuracy and computation time in disease detection [24]. In [25], a hybrid model with cuckoo search (CS) and a rough set is adopted for diagnosing heart disease; the drawback is that a rough set produces an unnecessary number of rules. To solve these challenges in heart disease diagnosis, a novel method, named the Taylor-based bird swarm algorithm-deep belief network (Taylor-BSA-DBN), is proposed for medical data classification. The purpose of the research is to present a heart disease diagnosis strategy in which the proposed Taylor-BSA-DBN is employed. The major contribution of the research is the detection of heart disease using selected features. Here, the feature selection is performed using sparse FCM for selecting imperative features. In addition, the DBN is employed for detecting heart disease using those features, and the DBN is trained by the proposed Taylor-BSA in such a way that the model parameters are learned optimally. The proposed Taylor-BSA is developed by embedding the high global convergence property of BSA within the Taylor series. Hence, the proposed Taylor-BSA-DBN delivers effective accuracy, sensitivity, and specificity while facilitating heart disease diagnosis.
The major portion of the paper focuses on:

• Proposed Taylor-BSA-DBN for heart disease diagnosis: Taylor-BSA-DBN (a classifier) is proposed by modifying the training algorithm of the DBN with the Taylor-BSA algorithm, which is newly derived by combining the Taylor series and the BSA algorithm, for the optimal tuning of weights and biases. The proposed Taylor-BSA-DBN is adapted for heart disease diagnosis.

The other sections of the paper are arranged as follows: Section 2 describes conventional heart disease detection strategies from the literature, as well as the challenges they face, which are considered the inspiration for developing the proposed technique. The proposed method for heart disease diagnosis using the modified DBN is presented in Section 3. The outcomes of the proposed strategy, compared with other methods, are depicted in Section 4; Section 5 presents the conclusion.

Motivations

This section illustrates eight strategies employed for heart disease diagnosis, along with their challenges.

Literature Survey

Reddy, G.T. et al. [22] devised an adaptive genetic algorithm with fuzzy logic (AGAFL) model for predicting heart disease, which assists clinicians in treating heart disease at earlier stages. The model comprises rough sets with a fuzzy rule-based classification module and a heart disease feature selection module. The rules obtained from the fuzzy classifier are optimized by an adaptive genetic algorithm. Initially, the significant features that affect heart disease are chosen using rough set theory; the second step then predicts heart disease with the AGAFL classifier. The method is effective in handling noisy data and works effectively with a large number of attributes. Nourmohammadi-Khiarak et al. [23] devised a method for selecting features and reducing their number. Here, the imperialist competitive algorithm was used to choose important features for heart disease; this algorithm offers an optimal response in selecting features. Moreover, the k-nearest neighbour algorithm was utilized for classification. The method showed that the accuracy of feature selection was enhanced; however, it failed to handle incomplete or missing data. Magesh, G. and Swarnalatha, P. [26] devised a model using the Cleveland heart samples for heart disease diagnosis. The method employed cluster-based Decision Tree learning (CDTL) for diagnosing heart disease. Here, the original set was partitioned using the target label distribution, and the possible class was derived from the samples with the highest distribution. For each class set, the features were detected using entropy for diagnosing heart disease. Thiyagaraj, M. and Suseendran, G. [27] developed Particle Swarm Optimization and Rough Sets with Transductive Support Vector Machines (PSO and RS with TSVM) for heart disease diagnosis. This method improved data integrity to minimize data redundancy. The normalization of the data was carried out using the Z-score. Then, PSO was employed to select the optimal subset of attributes, reduce computational overhead, and enhance prediction performance. The Radial Basis Function-Transductive Support Vector Machine (RBF-TSVM) classifier was employed for heart disease prediction. Abdel-Basset, M. et al. [28] devised a model using the Internet of Things (IoT) for determining and monitoring heart patients. The goal of the healthcare model was to obtain improved precision of diagnosis.
The neutrosophic multi-criteria decision-making (NMCDM) technique was employed to aid patients, i.e., to monitor patients suffering from heart failure. Moreover, the model provided an accurate solution that decreases the rate of death and the cost of treatment. Nilashi, M. et al. [24] devised a predictive technique for heart disease diagnosis with machine learning models. Here, the method combined unsupervised and supervised learning for diagnosing heart disease. In addition, the method employed the Self-Organizing Map, the Fuzzy Support Vector Machine (FSVM), and Principal Component Analysis (PCA) for missing value imputation. Moreover, incremental PCA and FSVM were devised for incremental learning of the data to minimize the computation time in disease prediction. Shah, S.M.S. et al. [29] devised an automatic diagnostic technique for diagnosing heart disease. The method evaluated the pertinent feature subset by exploiting the benefits of feature selection and extraction models. To accomplish the feature selection, two algorithms were used for heart disease diagnosis: the accuracy-based feature selection algorithm (AFSA) and the Mean Fisher-based feature selection algorithm (MFFSA). However, the method failed to employ PCA for dimension reduction. Acharjya, D.P. [25] devised a hybrid method for diagnosing heart disease. The method combined cuckoo search (CS) and a rough set to infer decision rules. Moreover, CS was employed for discovering essential features, and three major features were evaluated with rough set rules. The method improved feasibility, but it did not combine an intuitionistic fuzzy rough set with CS for diagnosing heart disease.

Proposed Taylor-BSA-DBN for Medical Data Classification

The accessibility of a large amount of medical data leads to the requirement of strong data analysis tools for extracting valuable knowledge. Researchers are adopting data mining and statistical tools to improve the analysis of huge datasets. The diagnosis of disease is the foremost application in which data mining tools are offering successful results. Medical data tend to be rich in information but poor in knowledge; thus, there is a deficiency of effective analysis tools for discovering hidden relations and trends in the medical data generated from clinical records. The processing of medical data is only fruitful when powerful methods are available. Thus, the proposed Taylor-BSA-DBN is devised to process medical data and attain effective heart disease diagnosis. Figure 1 portrays the schematic view of the proposed Taylor-BSA-DBN for heart disease diagnosis. The complete process of the proposed model consists of pre-processing, feature selection, and detection. At first, the medical data are fed as input to the pre-processing phase, wherein a log transformation is applied; the log transformation is applied to minimize skew and to normalize the data. Once the pre-processed data are obtained, they are subjected to the feature selection phase, in which the imperative features are selected with sparse FCM. After obtaining the imperative features, detection is performed with the DBN, wherein the training of the DBN is carried out using Taylor-BSA. The proposed Taylor-BSA is devised by combining the Taylor series and BSA. The output produced by the classifier is the classified medical data.
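Before formalizing the notation, the sketch below outlines this processing chain end to end. The sparse-FCM feature selection and the Taylor-BSA-trained DBN are not available as off-the-shelf library calls, so generic scikit-learn components (SelectKBest and logistic regression) stand in for them here purely to show where each stage sits in the pipeline; the data and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

rng = np.random.default_rng(1)
X = rng.uniform(1.0, 100.0, size=(200, 13))     # synthetic "medical" attributes
y = (X[:, 0] + X[:, 3] > 100).astype(int)       # synthetic binary label

pipeline = make_pipeline(
    FunctionTransformer(np.log),                # log-transformation pre-processing
    SelectKBest(f_classif, k=5),                # placeholder for sparse-FCM selection
    LogisticRegression(max_iter=1000),          # placeholder for Taylor-BSA-DBN
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pipeline.fit(X_tr, y_tr)
print("hold-out accuracy:", round(pipeline.score(X_te, y_te), 3))
```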
Consider the input medical data, given as A with various attributes and expressed as A = {A_{G,H}}, where G = 1, 2, ..., B and H = 1, 2, ..., C. Here, A_{G,H} denotes the H-th attribute of the G-th data record, B specifies the total number of records, and C specifies the total number of attributes in each record. The dimension of the database is therefore [B × C]. Pre-Processing The purpose of pre-processing is to facilitate smoother processing of the input data. Additionally, pre-processing is carried out to eliminate the noise and artefacts contained in the data. In this method, pre-processing is performed using a log transformation, in which each value is replaced with its logarithm, where the base of the log is set by the analyst (for example, 2 or 10). The process compresses large values. In addition, the log transformation is a widely adopted method to reduce skew and assist data normalization. The log transformation is formulated as Ã_{G,H} = log(A_{G,H}), where the transformed value replaces the original attribute value. The dimension of the pre-processed dataset Ã remains [B × C]. Selection of Features with Sparse FCM Clustering The pre-processed data are fed to the feature selection module, which uses the Sparse FCM algorithm [30], a modification of standard FCM. The benefit of using Sparse FCM is that it supports clustering of high-dimensional data. The pre-processed data contain different types of attributes, each carrying an individual value. In the medical data classification strategy, Sparse FCM is applied to determine the informative features from the data. Generally, in Sparse FCM, dimensionality reduction is effective, the method can support disease diagnosis without delay, and it combines easily with optimization techniques. Classification of Medical Data with Proposed Taylor-BSA-Based DBN In this section, medical data classification using the proposed Taylor-BSA method is presented; the classification proceeds from the selected feature vector. Proposed Taylor-BSA Algorithm The proposed Taylor-BSA is the combination of the Taylor series and BSA. The Taylor series [31] expands a function into an infinite sum of terms and extends to functions of complex variables. It not only serves as a powerful analytical tool, but also helps in evaluating integrals and infinite sums. Moreover, the Taylor series is a one-step process and can incorporate higher-order terms. The Taylor series is advantageous for derivations and can be used to obtain theoretical error bounds; above all, it supports the accuracy of the classifier's weight updates and offers a simple way to approximate complex functions. BSA [32] is based on the social behaviors of birds, which follow a set of idealized rules. BSA is more accurate than many standard optimization algorithms, with efficient, accurate, and robust performance, and it maintains a good balance between exploration and exploitation. The DBN has recently become a popular approach in machine learning owing to advantages such as fast inference and the ability to encode richer, higher-order network structures. A DBN is used to extract better feature representations, and several related tasks can be solved simultaneously by using shared representations. Moreover, it benefits from a multi-layer structure and from pre-training followed by fine-tuning. The algorithmic steps of the proposed Taylor-BSA are described below: Step 1. 
Initialization: the first step is the initialization of population and other algorithmic parameters, including: where, the population size is denoted as j, h max represent maximal iteration, prob indicate the probability of foraging food, and the frequency of flight behavior of birds is expressed as Ft. Step 2. Determination of objective function: the selection of the best position of the bird is termed as a minimization issue. The minimal value of error defines the optimal solution. Step 3. Position update of the birds: for updating the positions, birds have three phases, which are decided using probability. Whenever the random number Rand(0, 1) < prob, then the update is based on foraging behavior, or else the vigilance behavior commences. On the other hand, the swarm splits as scroungers and producers, which is modeled as flight behaviors. Finally, the feasibility of the solutions is verified and the best solution is retrieved. Step 4. Foraging behavior of birds: the individual bird searches for the food based on its own experience, and the behavior of the swarm, which is given below. The standard equation of the foraging behavior of birds [32] is given by, where, F h+1 i,j and F h i,j denotes the location of i th bird in j th dimension at (h + 1) and h, P i,j refers to the previous best position of the i th bird, Rand(0, 1) is independent uniformly distributed numbers, Y j indicates the best previous location shared by the birds swarm, Z denotes the cognitive accelerated coefficients, and T denotes the social accelerated coefficients. Here, Z and T are positive numbers. According to the Taylor series [31], the update equation is expressed as, Substituting Equation (5) in Equation (3), Step 5. Vigilance Behavior of Birds: the birds move towards the center, during which, the birds compete with each other; the vigilance behavior of birds is modeled as, where, V represents the number of birds, w 1 and w 2 are the positive constants lying in the range of [0, 2], RQ(U) i denotes the optimal fitness value of i th bird, and RQ corresponds to the addition of the best fitness values of the swarm. ψ be the constant that keeps optimization away from zero-division error. T signifies the positive integer. Step 6. Flight Behavior: this behavior is of the birds' progress, when the birds fly to another site in case of any threatening events and foraging mechanisms. When the birds reach a new site, they search for food. Some birds in the group act as producers and others as scroungers. The behavior is modeled as, where, Random (0, 1) refer to the Gaussian distributed random number with zero-mean and standard deviation. Step 7. Determination of best solution:the best solution is evaluated based on error function. If the newly computed solution is better than the previous one, then it is updated by the new solution. Step 8. Terminate: the optimal solutions are derived in an iterative manner until the maximum number of iterations is reached. The pseudo-code of the proposed Taylor-BSA algorithm is illustrated in Algorithm 1. Vigilance behavior using Equation (12) Training of Deep Belief Network This section elaborates on the training process of the proposed Taylor-BSA-DBN classifier. A RBM has unsupervised learning based on the gradient descent method, whereas MLP performs a supervised learning method using the standard backpropagation algorithm. Therefore, the training of DBN is based on a gradient descent-backpropagation algorithm. Here, the most appropriate weights are chosen optimally for the update. 
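Before turning to the DBN training procedure, it may help to illustrate the foraging update from Step 4 above. The sketch below implements the standard BSA foraging rule (prior to the Taylor-series modification, whose exact form in Equations (5)–(6) is not reproduced in the text and is therefore not attempted here); Z and T are the cognitive and social acceleration coefficients, following the notation above.

```python
import numpy as np

def foraging_update(F, P, Y, Z=1.5, T=1.5):
    """Standard BSA foraging step: move each bird toward its personal best P
    and the swarm best Y, scaled by independent uniform random numbers in [0, 1]."""
    r1 = np.random.rand(*F.shape)
    r2 = np.random.rand(*F.shape)
    return F + (P - F) * Z * r1 + (Y - F) * T * r2

# F: (n_birds, n_dims) current positions (candidate DBN weight vectors)
# P: (n_birds, n_dims) personal best positions; Y: (n_dims,) swarm best position
# The fitness of each position would be the classification error of the DBN
# evaluated with those weights, as described in Step 2.
```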
The training procedure of the proposed DBN classifier is described below, I. Training of RBM Layers A training sample N is given as the input to the first layer of RBM. It computes the probability distribution of the data and encodes it into the weight parameters. The steps involved in the training process of RBM are illustrated below. 1. The input training sample is read and the weight vector is produced randomly. 2. The probability function of each hidden neuron in the first RBM is calculated. 3. The positive gradient is computed using a visible vector and the probability of the hidden layer. 4. The probability of each visible neuron is obtained by reconstructing the visible layer from the hidden layer. 5. The probability of reconstruction of hidden neurons is obtained by resampling the hidden states. 6. The negative gradient is computed. 7. Weights are updated by subtracting the negative gradient from the positive gradient. 8. Weights are updated for the next iteration, using the steepest or gradient descent algorithm. 9. Energy is calculated for a joint configuration of the neurons in the visible and the hidden layers. II. Training of MLP The training procedure in MLP is based on a backpropagation approach by feeding the training data, which are the hidden output of the second RBM layer through the network. Analyzing the data, the network is adjusted iteratively until the optimal weights are chosen. Moreover, Taylor-BSA is employed to compute the optimal weights, which are determined using the error function. The training procedure is summarized below. 2. Read the input sample from the result of the preceding layer. 3. Obtain the average error, based on the difference between the obtained output and the desired output. 4. Calculate the weight updates in the hidden and the visible layers. 5. Obtain the new weights from the hidden and the visible layers by applying gradient descent. 6. Identify the new weights using the updated equation of Taylor-BSA. 7. Estimate the error function using gradient descent and Taylor-BSA. 8. Choose the minimum error and repeat the steps. Results and Discussion This section elaborates on the assessment of the proposed strategy with classical strategies for medical data classification using accuracy, sensitivity, and specificity. The analysis is done by varying training data. In addition, the effectiveness of the proposed Taylor-BSA-DBN is analyzed. Experimental Setup The implementation of the proposed strategy is carried out using Java libraries via Java Archive (JAR) files, utilizing a PC, Windows 10 OS, 2GB RAM, and an Intel i3 core processor. The simulation setup of the proposed system is depicted in Table 1. Dataset Description The experimentation is done using Cleveland, Hungarian, and Switzerland datasets taken from healthcare data based on University of California Irvine (UCI) machine learning repository [34], which is commonly used for both detection and classification. The Cleveland database is taken from the Cleveland Clinical Foundation contributed by David W. Aha. The Hungarian dataset is obtained from the Hungarian Institute of Cardiology. The Switzerland dataset is obtained from the University Hospital, Basel, Switzerland. The dataset comprises of 303 number of instances and 75 attributes, ofwhich, 13 attributes are employed for experimentation. Furthermore, the dataset is characterized as multivariate with integer and real attributes. 
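As a practical note on the data setup, the processed Cleveland file from the UCI repository can be loaded and split as sketched below (the 13 attributes used are listed next). The file name, column names, and binarisation of the target follow the standard UCI distribution and are assumptions, not details taken from this paper; missing values in the ca and thal columns are marked with "?" in that distribution.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Assumed column layout of the standard "processed.cleveland.data" file (13 features + target)
cols = ["age", "sex", "cp", "trestbps", "chol", "fbs", "restecg", "thalach",
        "exang", "oldpeak", "slope", "ca", "thal", "num"]

data = pd.read_csv("processed.cleveland.data", names=cols, na_values="?").dropna()
X = data[cols[:-1]]
y = (data["num"] > 0).astype(int)  # 0 = no heart disease, 1 = heart disease present

# Vary the training fraction (e.g., 70-90%) as in the reported analysis
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.9, stratify=y, random_state=0)
```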
The 13 attributes (features) employed are resting blood pressure (trestbps), maximum heart rate achieved (thalach), the slope of the peak exercise ST segment (slope), age (age), sex (sex), fasting blood sugar (fbs), ST depression induced by exercise relative to rest (oldpeak), chest pain type (cp), serum cholesterol (chol), exercise-induced angina (exang), resting electrocardiographic results (restecg), number of major vessels (0-3) colored by fluoroscopy (ca), and thal (3 = normal; 6 = fixed defect; 7 = reversible defect). Evaluation Metrics The performance of the proposed Taylor-BSA-DBN and the comparison methods is evaluated using accuracy, sensitivity, and specificity. Accuracy The accuracy is the degree of closeness of an estimated value to its true value in medical data classification, and it is computed as Accuracy = (T_p + T_n)/(T_p + T_n + F_p + F_n), where T_p represents true positives, F_p false positives, T_n true negatives, and F_n false negatives. Sensitivity This measure is the proportion of positives that are correctly identified by the classifier: Sensitivity = T_p/(T_p + F_n). Specificity This measure is the proportion of negatives that are correctly identified by the classifier: Specificity = T_n/(T_n + F_p). Comparative Analysis The proposed Taylor-BSA-DBN is compared with the conventional methods in terms of accuracy, sensitivity, and specificity. The analysis is performed by varying the training data using the Cleveland, Hungarian, and Switzerland databases. Analysis with Cluster Size = 5 The analysis of the methods with cluster size = 5, using the Cleveland, Hungarian, and Switzerland databases, is given below: Analysis Considering Cleveland Database Table 2 elaborates the analysis of the methods using the Cleveland database with varying training data, in terms of accuracy, sensitivity, and specificity. The maximum accuracy, sensitivity, and specificity are considered the best performance. Here, the proposed system offers better performance than the existing methods (SVM, NB, and DBN). Table 3 elaborates the analysis of the methods using the Hungarian database with varying training data, in terms of accuracy, sensitivity, and specificity. The proposed system offers its best performance when 90% of the data are used for training. Table 4 elaborates the analysis of the methods using the Switzerland database with varying training data, in terms of accuracy, sensitivity, and specificity. The best performance of the proposed system is 0.8462, 0.8571, and 0.8333 for accuracy, sensitivity, and specificity, respectively. Analysis with Cluster Size = 9 The analysis of the methods with cluster size = 9, using the Cleveland, Hungarian, and Switzerland databases, is given below: Analysis Considering Cleveland Database Table 5 depicts the analysis of the methods using the Cleveland database with varying training data, in terms of accuracy, sensitivity, and specificity. The maximum accuracy, sensitivity, and specificity are considered the best performance. Here, the proposed system offers better performance than the existing methods (SVM, NB, and DBN). Table 6 shows the analysis of the methods using the Hungarian database with varying training data, in terms of accuracy, sensitivity, and specificity. The proposed system offers its best performance when 90% of the data are used for training. 
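The evaluation metrics defined earlier in this section can be computed directly from a confusion matrix; the short sketch below is illustrative (scikit-learn is used here only for convenience and is not part of the original Java implementation).

```python
from sklearn.metrics import confusion_matrix

def evaluate(y_true, y_pred):
    """Return accuracy, sensitivity, and specificity from binary predictions."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity
```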
Table 7 depicts the analysis of methods using the Switzerland database considering training data with accuracy, sensitivity, and specificity parameters. The better performance of the proposed system with values is 0.7778, 0.7857, and 0.7692, for the performance metrics, such as accuracy, sensitivity, and specificity. Table 8 depicts the comparative analysis based on ROC curve, using Cleveland, Hungarian, and Switzerland databases. In the Cleveland dataset, when the false positive rate (FPR) is 5, the corresponding true positive rate (TPR) of the methods, such as SVM, NB, DBN, and the proposed Taylor-BSA-DBN is 0.8857, 0.9119, 0.9535, and 0.9684, respectively. By considering the Hungarian dataset, when the FPR is 4, the corresponding TPR of the proposed method is a maximum of 0.9348. For the same FPR, the TPR of the methods, such as SVM, NB, and DBN is 0.9030, 0.9130, and 0.9233, respectively. By considering the Switzerland dataset, when the FPR is 6, the TPR of the methods, such as SVM, NB, DBN, and the proposed Taylor-BSA-DBN is 0.9105, 0.9443, 0.9569, and 0.9794, respectively. Table 9 depicts the comparative analysis based on k-fold using the Cleveland, Hungarian, and Switzerland databases, for cluster size = 5. The Hungarian datasets offer the maximum accuracy of 0.9021, when k-fold = 8. By considering k-fold = 7, the specificity offered by the Cleveland datasets for the methods, such as SVM, NB, DBN, and the proposed Taylor-BSA-DBN, is 0.8032, 0.8189, 0.8256, and 0.8321, respectively. The proposed Taylor-BSA-DBN offers maximum accuracy, sensitivity, and specificity, when considering k-fold = 8. Table 10 portrays the analysis of methods using accuracy, sensitivity, and specificity parameter with varying training data. The analysis is done with Cleveland, Switzerland, and Hungarian databases. Using cluster size = 5, and considering the Cleveland database, the proposed Taylor-BSA-DBN showed maximal accuracy of 0.871, which is 13.43%, 12.17%, and 11.14%, better than the existing methods, such as SVM, NB, and DBN, respectively. In the existing methods, the DBN offers maximum sensitivity of 0.771, but the proposed method is 12.29% better than the existing DBN. The proposed method has a maximum specificity of 0.862. The percentage of improvement of the proposed method with the existing methods, such as SVM, NB, and DBN, is 12.99%, 12.06%, and 9.40%, respectively. Considering the Hungarian database, the proposed Taylor-BSA-DBN showed maximal accuracy of 0.913, maximal sensitivity of 0.933, and maximal specificity of 0.875. Considering the Switzerland database, the proposed Taylor-BSA-DBN showed maximal accuracy of 0.846, which is 19.98%, 16.78%, and 15.60% better than the existing methods, such as SVM, NB, and DBN, respectively. Similarly, the proposed system has a maximum sensitivity of 0.857. The percentage of improvement of the proposed system sensitivity, with the existing methods, such as SVM, NB, and DBN is 19.72%, 19.25%, and 16.69%, respectively. Likewise, the proposed Taylor-BSA-DBN showed maximal specificity of 0.833. Using cluster size = 9, and considering the Cleveland database, the proposed Taylor-BSA-DBN showed maximal accuracy of 0.934, which is 16.92%, 11.13%, and 3.96%, better than the existing methods, such as SVM, NB, and DBN, respectively. In the existing methods, the DBN offers maximum sensitivity of 0.913, but the proposed method is 3.89% better than the existing DBN. The proposed method has a maximum specificity of 0.903. 
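The improvement percentages reported in this analysis are not accompanied by an explicit formula in the text; one common convention, shown below purely as an assumption, expresses the gain relative to the proposed method's own score.

```python
def relative_improvement(proposed, baseline):
    """Assumed convention: percentage gain of the proposed score over a baseline score."""
    return 100.0 * (proposed - baseline) / proposed

# Example: a proposed accuracy of 0.871 against a baseline of 0.754 gives roughly 13.4%
```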
The percentage improvement of the proposed method over the existing methods (SVM, NB, and DBN) is 23.15%, 15.28%, and 3.10%, respectively. Considering the Hungarian database, the proposed Taylor-BSA-DBN showed a maximal accuracy of 0.902, maximal sensitivity of 0.909, and maximal specificity of 0.893. Considering the Switzerland database, the proposed Taylor-BSA-DBN showed a maximal accuracy of 0.840, which is 19.17%, 10.12%, and 2.38% better than the existing methods SVM, NB, and DBN, respectively. Similarly, the proposed system has a maximum sensitivity of 0.846; the percentage improvement of the proposed system's sensitivity over SVM, NB, and DBN is 19.74%, 11.35%, and 1.89%, respectively. Likewise, the proposed Taylor-BSA-DBN showed a maximal specificity of 0.833. Table 11 shows the computational time of the proposed system and the existing methods (SVM, NB, and DBN); the proposed Taylor-BSA-DBN has the minimum computation time of 6.31 s. Table 12 shows the statistical analysis of the proposed work and the existing methods based on mean and variance. Conclusions Contemporary medicine depends on a huge amount of information contained in medical databases. The availability of large volumes of medical data creates the need for effective data analysis tools to extract constructive knowledge. This paper proposes a novel, fully automated DBN for heart disease diagnosis using medical data. The proposed Taylor-BSA, designed by combining the Taylor series and the BSA algorithm, is employed to train the DBN and to find the optimal weights for effective medical data classification. Here, Sparse FCM is employed for selecting significant features. Incorporating Sparse FCM in the feature selection process also aids model interpretation, as this sparse technique identifies the features most important for detection and can handle high-dimensional data. The selected features are fed to the DBN, which is trained by the proposed Taylor-BSA. The proposed Taylor-BSA-DBN outperformed the other methods with a maximal accuracy of 93.4%, maximal sensitivity of 95%, and maximal specificity of 90.3%. The proposed method does not classify the type of heart disease. In future work, additional medical classification datasets will be used to evaluate the efficiency of the proposed method. In addition, the proposed system will be further improved to classify specific heart diseases, such as congenital heart disease, coronary artery disease, and arrhythmia. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2020-02-20T09:18:31.469Z
2020-02-12T00:00:00.000
213148745
{ "extfieldsofstudy": [ "Biology", "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2020.01027/pdf", "pdf_hash": "b71038bd78c79e58f66e1ccc6ca5f4dbea66beb2", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42568", "s2fieldsofstudy": [ "Medicine", "Biology", "Chemistry" ], "sha1": "66b129c5f64bdbaf276298497ce154382edbf7d8", "year": 2020 }
pes2o/s2orc
Combination of Antifungal Drugs and Protease Inhibitors Prevent Candida albicans Biofilm Formation and Disrupt Mature Biofilms Biofilms formed by the fungal pathogen Candida albicans are resistant to many of the antifungal agents commonly used in the clinic. Previous reports suggest that protease inhibitors, specifically inhibitors of aspartyl proteases, could be effective antibiofilm agents. We screened three protease inhibitor libraries, containing a total of 80 compounds for the abilities to prevent C. albicans biofilm formation and to disrupt mature biofilms. The compounds were screened individually and in the presence of subinhibitory concentrations of the most commonly prescribed antifungal agents for Candida infections: fluconazole, amphotericin B, or caspofungin. Although few of the compounds affected biofilms on their own, seven aspartyl protease inhibitors inhibited biofilm formation when combined with amphotericin B or caspofungin. Furthermore, nine aspartyl protease inhibitors disrupted mature biofilms when combined with caspofungin. These results suggest that the combination of standard antifungal agents together with specific protease inhibitors may be useful in the prevention and treatment of C. albicans biofilm infections. Currently, the three major classes of antifungal drugs used to treat C. albicans infections are the polyenes, azoles, and echinocandins (Fox et al., 2015b;Prasad et al., 2016). The polyenes (e.g., amphotericin B) target ergosterol in the fungal cell membrane and are fungicidal against C. albicans. The azoles (e.g., fluconazole) inhibit the demethylase enzyme Erg11 from the ergosterol biosynthesis pathway and are fungistatic against C. albicans. Echinocandins (e.g., caspofungin), the most recently developed class of antifungal drugs, inhibit synthesis of the cell wall crosslinking component β-1,3-glucan and are fungicidal against C. albicans. Although novel derivatives within these classes have been introduced over the years, new classes of drugs have not been introduced. The limited size of the existing antifungals, both in terms of the distinct classes and in the number of drugs within several of these classes, creates several problems. As noted above, these classes of drugs typically have reduced effectiveness against biofilms relative to planktonic cells (Donlan, 2001;Kojic and Darouiche, 2004;Ramage et al., 2006;Tumbarello et al., 2007Tumbarello et al., , 2012Lebeaux et al., 2014). Furthermore, long term exposure to these drugs, especially to members of the azole class, can give rise to antifungal resistance. Although the development of new antifungal agents is clearly called for, several recent in vitro studies have shown that combinations of antifungals with other extant drugs can be effective against C. albicans biofilms (Delattin et al., 2014;De Cremer et al., 2015). Recently, we demonstrated the importance of several secreted proteases (Saps) for C. albicans biofilm formation Winter et al., 2016). Deletion of Sap5 and Sap6, both of whose expression is upregulated in biofilms , reduced biofilm formation in vitro and in vivo (Winter et al., 2016). Previous reports showed that treatment with aspartyl protease inhibitors, a class of drug commonly used to treat HIV patients, reduced the occurrence of oral candidiasis in immunocompromised patients independent of effects of the drug on the immune system through HIV remediation Diz Dios et al., 1999;Cassone et al., 2002). 
Further work showed that several of the commonly used antiretroviral HIV aspartyl protease inhibitors could inhibit the Saps (Cassone et al., 1999;Gruber et al., 1999b,a;Korting et al., 1999;Pichová et al., 2001;Skrbec and Romeo, 2002;Cenci et al., 2008;Braga-Silva et al., 2010). Exposure to these protease inhibitors also reduced C. albicans adherence to materials commonly used in medical devices and to layers of host cells (Borg-von Zepelin et al., 1999;Bektic et al., 2001;Tsang and Hong, 2009), although the magnitude of the latter effect differs greatly between distinct cell types (Falkensammer et al., 2007). Aspartyl protease inhibitors have also been observed to reduce C. albicans-induced tissue damage, proliferation, and virulence in vivo in a rat vaginal model (Cassone et al., 1999;de Bernardis et al., 1999). Finally, one study suggested that aspartyl protease inhibitors and the antifungal agents fluconazole or amphotericin B act synergistically against C. albicans in the planktonic form (Casolari et al., 2004). To date, the studies of aspartyl protease inhibitors with regards to C. albicans emphasized their effects on planktonic cells. The one exception found that exposure to amprenavir, a common HIV antiretroviral protease inhibitor, could reduce C. albicans biofilm formation in vitro (Braga-Silva et al., 2010). Given the number of protease inhibitors already approved for use in humans, including inhibitors of aspartyl proteases or other classes of proteases, we sought to evaluate the ability of a wide range of protease inhibitors to prevent (either alone or in combination with other antifungals) the formation of C. albicans biofilms or to act against mature biofilms. To evaluate the efficacy of these compounds in this regard, we screened three libraries containing 80 protease inhibitors in both biofilm inhibition and disruption assays. Each protease inhibitor was screened for biofilm efficacy individually and in combination with fluconazole, amphotericin B, or caspofungin. Although few of the protease inhibitors were effective against biofilms on their own, several, especially members of the aspartyl protease inhibitor class, were effective against biofilms when combined with either caspofungin or amphotericin B. Compound Libraries The 53 member SCREEN-WELL R Protease Inhibitor Library 1 was purchased from Enzo Life Sciences. The two aspartyl protease inhibitor libraries (from which we focused on nine FDA-approved HIV-1 protease inhibitors, the ten macrocycles, and eight linear peptidomimetics) have been previously reported (Clarke et al., 2016). Due to limited quantities of several aspartyl protease inhibitors, a minority of compounds were only screened in one biofilm assay. In these cases, we prioritized the Disruption Biofilm Assay over the Sustained Inhibition Biofilm Assay. Four other compounds from these libraries [one FDA-approved HIV-1 protease inhibitor and three linear peptidomimetics (API7-9)] were not used in any assay. A list of compounds tested can be found in Supplementary Table S1. Biofilm Assays The Sustained Inhibition and Disruption Standard Optical Density Biofilm Assays followed previously reported protocols for the 384-well format of biofilm screening assays Fox et al., 2015a;Lohse et al., 2017;Gulati et al., 2018). Compounds and antifungal agents were added during the 90-min adherence and 24-h growth steps of the Sustained Inhibition Biofilm Assay or for the second 24-h growth step of the Disruption Biofilm Assay. 
In brief, 1 µl of overnight culture was added to 90 µl media (or media with drug) in a well (final OD 600 = 0.15, roughly 2 × 10 6 cells/ml). Plates were then sealed with Breathe-Easy R sealing membranes (Diversified Biotech BEM-1) and shaken at 37 • C for 90 min at 350 rpm in an ELMI (DTS-4) incubator. Media was removed, wells were washed with PBS, and fresh media (or media with drug) was added back to wells. Plates were then resealed and shaken for a further 24 h. For the Sustained Inhibition Biofilm Assay, media was removed at this point and the absorbance (OD 600 ) was determined on a Tecan Infinite M1000 Pro or a Tecan M200. For the Disruption Biofilm Assays, media was instead removed in groups of 6-12 wells and fresh media containing the compound of interest was carefully added back to the wells. Plates were then resealed and shaken for an additional 24 h before removing media and recording absorbance as described above. Standalone Assays Compounds were tested at 40 µM in both the Sustained Inhibition and Disruption Standard Optical Density Biofilm Assays (Lohse et al., 2017;Gulati et al., 2018). Individual repeats of candidate compounds and DMSO solvent controls were performed. Each plate had groups of control wells spread throughout the plate to minimize position effects. For the SCREEN-WELL R Protease Inhibitor Library, the 53 compounds were screened once in both the Sustained Inhibition Biofilm Assay and the Disruption Biofilm Assay. Promising compounds from these initial screens were then tested a second time in the relevant assay(s). For the two aspartyl protease inhibitor libraries, we initially screened 21 compounds in the Sustained Inhibition Biofilm Assay and 25 compounds in the Disruption Biofilm Assay. Promising compounds from these initial screens were then tested two more times in the relevant assay(s). An additional three repeats were performed for four compounds (atazanavir, indinavir, nelfinavir, tipranavir) in the Disruption Biofilm Assay. For each experimental set of eight wells, significance was evaluated versus all of the control wells from the same plate by performing Welch's t-test (two-tailed, assuming unequal variance). In order to correct for the multiple comparisons performed, we then applied the Bonferroni Correction with α = 0.05. All of the comparisons for a given type of assay were pooled for this multiple comparisons correction step, giving a number of hypotheses, m, of 104 for the Sustained Inhibition Biofilm Assay and of 125 for the Disruption Biofilm Assay (final thresholds 4.81 × 10 −4 and 4.00 × 10 −4 , respectively). We then determined whether each experimental repeat had an average absorbance of less than the average of the control wells and was significant after the multiple comparisons correction. To be considered a validated hit, a compound had to satisfy both these criteria. Data and statistics for the Standalone Sustained Inhibition and Disruption Optical Density Biofilm Assays are compiled in Supplementary Table S1 "Standalone Inhibition" and "Standalone Disruption". A summary of hits from these assays are included in Supplementary Table S1 "Hit Listing By Type." BIC Assays We determined the biofilm inhibitory concentration (BIC) of nelfinavir, tipranavir, and TPCK using the 384-well format Sustained Inhibition Standard Optical Density Biofilm Assay (Lohse et al., 2017;Gulati et al., 2018). Both nelfinavir and tipranavir were serially diluted two-fold from a maximum concentration of 200 µM to a minimum concentration of 0.1 µM. 
TPCK was serially diluted two-fold from a maximum concentration of 512 µM to a minimum concentration of 0.06 µM. Equivalent volumes of DMSO were used as loading controls for the compounds. Statistical testing was performed as described above with the following changes. Significance was evaluated for a given concentration of compound (e.g., 50 µM nelfinavir) compared to the equivalent DMSO loading control (e.g., the 50 µM loading control). All BIC comparisons were then pooled for multiple comparisons correction, giving a number of hypotheses, m, of 38 (α = 0.05, final threshold 1.32 × 10 −3 ). We then determined whether each concentration of a drug had an average absorbance of less than the average of the relevant control wells and was significant after the multiple comparisons correction. The BIC of a compound was defined as the lowest concentration that met both of these requirements for which all higher concentrations of the same compound also met these requirements. If no concentration met these requirements, the BIC is indicated as greater than the highest concentration tested for that compound. Data and statistics for the BIC Sustained Inhibition Optical Density Biofilm Assay are compiled in Supplementary Table S1 "Inhibition BIC." Combination Assays The combination (candidate compound plus known antifungal agent) Sustained Inhibition and Disruption Biofilm Assays followed the protocols described above with the following modifications. The candidate compounds were included at 12.5 µM in both assays except for TPCK, Dec-RVKR-CMK, AEBSF·HCl, N-Ethylmaleimide, and acivicin, which were included at 4 µM, and gliotoxin, which was included at 1 µM. The Sustained Inhibition Biofilm Assays used 1 µg/mL amphotericin B, 0.125 µg/mL caspofungin, or 256 µg/mL fluconazole. The Disruption Biofilm Assays used 2 µg/mL amphotericin B, 0.5 µg/mL caspofungin, or 256 µg/mL fluconazole. The sensitivity of the strain used in this study to amphotericin B, caspofungin, and fluconazole are reported in Supplementary Table S1 "SN425 Sensitivity." Compounds and two sets of controls were included for all candidate compounds and antifungal agents tested on a given plate. The first set of controls contained the candidate compound, but no antifungal agent, while the second set of controls contained the antifungal agent, but no candidate compound. The concentration of candidate compound or antifungal agent in these control wells was the same as the experimental wells. In general, one set of wells was included for each experimental or control condition on a given plate. Statistical analysis was performed using Welch's t-test and the Bonferroni Correction as described above with the following modifications. Each experimental condition was compared to both the relevant antifungal agent and candidate controls (e.g., a nelfinavir plus caspofungin experiment was compared to the nelfinavir-only control and the caspofungin-only control from the same plate). All of the same comparisons for a given assay were pooled for the multiple comparisons correction, giving a number of hypotheses, m, of 213 for both the antifungal agent and candidate comparisons in the Sustained Inhibition Biofilm Assay (α = 0.05, final threshold 2.35 × 10 −4 ). The number of hypotheses, m, was 240 for both the antifungal agent and candidate comparisons in the Disruption Biofilm Assay (α = 0.05, final threshold 2.08 × 10 −4 ). 
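A minimal sketch of the hit-calling statistics described above (a two-tailed Welch's t-test with a Bonferroni-corrected threshold, combined with the requirement that the experimental wells show a lower mean OD600 than the controls) is shown below; the function and variable names are illustrative only.

```python
import numpy as np
from scipy import stats

def is_hit(experimental_od, control_od, n_comparisons, alpha=0.05):
    """Call a hit if the experimental wells have a lower mean OD600 than the controls and
    the two-tailed Welch's t-test p-value survives the Bonferroni correction."""
    t_stat, p_value = stats.ttest_ind(experimental_od, control_od, equal_var=False)
    threshold = alpha / n_comparisons  # e.g., 0.05 / 213 ~= 2.35e-4
    return (np.mean(experimental_od) < np.mean(control_od)) and (p_value < threshold)

# Combination assays require a hit against BOTH controls on the same plate, e.g.:
# hit = (is_hit(combo_wells, compound_only_wells, 213)
#        and is_hit(combo_wells, antifungal_only_wells, 213))
```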
To be considered a hit, any given experimental condition must have an average absorbance of less than the averages of both sets of relevant control wells and remain significant for both sets of comparisons after the multiple comparisons correction. Data, statistics, and concentrations used for the combination Sustained Inhibition and Disruption Optical Density Biofilm Assays are compiled in Supplementary Table S1 "Combination Inhibition" and "Combination Disruption." A summary of hits from these assays are included in Supplementary Table S1 "Hit Listing By Type." Protease Inhibitor Libraries We selected three libraries of protease inhibitors to screen for compounds with the abilities to inhibit and/or disrupt C. albicans biofilm formation in vitro. The first library, the SCREEN-WELL R Protease Inhibitor Library (Enzo Life Sciences), contains 53 protease inhibitors effective against several classes of proteases (Supplementary Table S1). The remaining two libraries contain 31 compounds known or predicted to specifically inhibit aspartyl proteases (Clarke et al., 2016), of which we tested 27 in at least one assay. We focused on nine FDA-approved aspartyl protease inhibitors, developed to inhibit HIV-1 protease, ten macrocycles (API12-21), and eight linear peptidomimetics (API1-6, 10, and 11) that were originally synthesized with the goal of identifying new aspartyl protease inhibitors (Clarke et al., 2016). Standalone Screens We screened the three libraries for their abilities to inhibit biofilm formation or to disrupt mature biofilms using the Sustained Inhibition Biofilm Assay and Disruption Biofilm Assay (Lohse et al., 2017;Gulati et al., 2018), respectively. In the Sustained Inhibition Biofilm Assay, compounds were included in media during the 90-m adherence and 24-h growth steps of the biofilm assay; the compounds were evaluated for their ability to reduce or prevent biofilm formation ( Figure 1A). In the Disruption Biofilm Assay, a biofilm was grown for 24 h before the compound of interest was added. The biofilm was then incubated for an additional 24 h before determining whether the compound affected the mature biofilm ( Figure 1A). In both assays, compounds were tested at a concentration of 40 µM. Three of the 53 compounds in the SCREEN-WELL R Protease Inhibitor library, acivicin, gliotoxin, and TPCK, inhibited biofilm formation on their own (Figure 1B, Supplementary Table S1 "Standalone Inhibition"). One of these compounds, gliotoxin, also disrupted mature biofilms on its own (Figure 1C, Supplementary Table S1 "Standalone Disruption"). TPCK irreversibly inhibits chymotrypsin (a serine peptidase) and can also inhibit some cysteine peptidases while gliotoxin inhibits the chymotrypsin-like activity of the 20S proteasome. Acivicin, on the other hand, is an inhibitor of gamma-glutamyl transpeptidase, an enzyme that transfers gamma-glutamyl groups from peptide donors to peptide acceptors as well as acting as a hydrolase to remove gamma-glutamyl groups from peptides. None of the 25 aspartyl protease inhibitors tested were able to disrupt mature C. albicans biofilms on their own, and only one of the 22 aspartyl protease inhibitors tested, the HIV-1 protease FIGURE 1 | Four protease inhibitors either inhibited biofilm formation or disrupted mature biofilms on their own. (A) Overview of the experimental setups for the Sustained Inhibition and Disruption Biofilm Assays used for these experiments. 
For the Sustained Inhibition Biofilm Assay, compounds were included during both the 90-min adherence step and the 24-h growth step of a standard biofilm assay. For the Disruption Biofilm Assay, compounds were included during a second 24-h growth step. (B,C) Statistically significant hits from the standalone (B) Sustained Inhibition and (C) Disruption assays with the SCREEN-WELL R Protease Inhibitor Library. Mean OD 600 readings with standard deviations are shown; significant differences from the DMSO solvent control as determined by Welch's t-test (two-tailed, assuming unequal variance) with the Bonferroni Correction are indicated for α = 0.05 (*) and α = 0.01 (**). Although a single repeat is shown, the indicated threshold was met by all of the repeats of each compound shown. (D) Statistically significant hit from the standalone Sustained Inhibition assays with the two aspartyl protease inhibitor libraries. Mean OD 600 readings with standard deviations are shown; significant differences from the DMSO solvent control as determined by Welch's t-test (two-tailed, assuming unequal variance) with the Bonferroni Correction are indicated. A single repeat is shown; the indicated significance threshold was met by two of the three repeats at α = 0.01 while the third repeat did not pass at α = 0.05. The "&" symbol indicates this mixed result. Combination Screens We tested whether any compounds from the three protease inhibitor libraries could inhibit biofilm formation and/or disrupt mature biofilms in the presence of sub-inhibitory concentrations of amphotericin B, caspofungin, or fluconazole (see methods for concentrations). Five compounds from the SCREEN-WELL R Protease Inhibitor library inhibited biofilm formation in the Sustained Inhibition Biofilm Assay when combined with fluconazole (Figure 2A, Supplementary Table S1 "Combination Inhibition"). We did not observe any synergies with amphotericin B or caspofungin in this assay. Two of these five compounds, gliotoxin and TPCK, were also "hits" in the standalone Sustained Inhibition Biofilm Assay described above. The remaining three compounds, lisinopril, Z-Prolyl-prolinal, and NNGH, were unique to the Sustained Inhibition Biofilm assay for synergies with fluconazole. Lisinopril inhibits the metalloprotease angiotensin-converting enzyme (ACE), NNGH inhibits matrix metalloproteinase 3 (MMP-3), and Z-Prolylprolinal inhibits prolyl endopeptidase (a serine protease). Two compounds from the SCREEN-WELL R Protease Inhibitor library, gliotoxin and Dec-RVKR-CMK, disrupted mature biofilms when combined with an antifungal agent (Figures 2B-C, Supplementary Table S1 "Combination Disruption"). Gliotoxin disrupted mature biofilms when combined with fluconazole ( Figure 2C, Supplementary Table S1 "Combination Disruption") while Dec-RVKR-CMK disrupted mature biofilms when combined with caspofungin ( Figure 2B, Supplementary Table S1 "Combination Disruption"). Dec-RVKR-CMK, also known as furin convertase inhibitor, inhibits the subtilisin (Kex2p-like) proprotein convertase (a type of serine protease). We next evaluated 17 aspartyl protease inhibitors in the Sustained Inhibition Biofilm Assay and 26 aspartyl protease inhibitors in the Disruption Biofilm Assay in combination with the same three antifungal agents. 
Seven For panels a-c, mean OD 600 readings with standard deviations are shown; significant differences from the compound without antifungal agent control (e.g., gliotoxin, -fluconazole), as determined by Welch's t-test (two-tailed, assuming unequal variance) with the Bonferroni Correction, are indicated for α = 0.05 (*) and α = 0.01 (**). Significant differences from the antifungal agent without compound control (e.g., DMSO, +fluconazole), as determined by Welch's t-test (two-tailed, assuming unequal variance) with the Bonferroni Correction, are indicated for α = 0.05 (#) and α = 0.01 (##). Data from separate plates are separated by two vertical lines on the x-axis; the DMSO solvent control is shown for each plate. aspartyl protease inhibitors (four HIV-1 protease inhibitors and three macrocycles) inhibited biofilm formation when combined with one or more of the antifungal agents (six with caspofungin, five with amphotericin B, and one with fluconazole) (Figure 3, Supplementary Table S1 "Combination Inhibition"). Specifically, lopinavir and API13 inhibited biofilm formation in combination with caspofungin while API19 inhibited biofilm formation in combination with amphotericin B. Ritonavir, saquinavir, and API15 inhibited biofilm formation in combination with caspofungin and amphotericin B while nelfinavir inhibited biofilm formation in combination with all three antifungal agents tested (Figure 3, Supplementary Table S1 "Combination Inhibition"). Nine aspartyl protease inhibitors (the HIV-1 protease inhibitors atazanavir, indinavir, lopinavir, nelfinavir, ritonavir, saquinavir; and the macrocycles API15, API16, API19) disrupted mature biofilms in combination with caspofungin (Figure 4, Supplementary Table S1, "Combination Disruption"). None of the 26 aspartyl protease inhibitors tested disrupted biofilms in the presence of amphotericin B or fluconazole. We were surprised to find compounds that For panels a-c, mean OD 600 readings with standard deviations are shown; significant differences from the compound without antifungal agent control (e.g., lopinavir, -caspofungin), as determined by Welch's t-test (two-tailed, assuming unequal variance) with the Bonferroni Correction, are indicated for α = 0.05 (*) and α = 0.01 (**). Significant differences from the antifungal agent without compound control (e.g., DMSO, +caspofungin), as determined by Welch's t-test (two-tailed, assuming unequal variance) with the Bonferroni Correction, are indicated for α = 0.05 (#) and α = 0.01 (##). Data from separate plates are separated by two vertical lines on the x-axis; the DMSO solvent control is shown for each plate. (D) Venn diagram illustrating the degree of overlap between the combination aspartyl protease inhibitor Sustained Inhibition Biofilm Assay screens with amphotericin B, caspofungin, or fluconazole. (E) Structure of the aspartyl protease inhibitors API13, API15, and API19. were effective at disrupting mature biofilms, but were not effective at inhibiting biofilm formation, namely atazanavir, indinavir, and API16. We also note that the macrocycle API19 had a synergistic effect with amphotericin B in the Sustained Inhibition Biofilm Assay but with caspofungin in the Disruption Biofilm Assay. FIGURE 4 | Nine aspartyl protease inhibitors disrupted mature biofilms in combination with the antifungal agent caspofungin. (A) Statistically significant hits from the combination Disruption Biofilm Assays with caspofungin. 
For each compound, the wells with caspofungin (+caspofungin) are indicated in yellow and the wells without caspofungin (-caspofungin) are indicated in red. Mean OD 600 readings with standard deviations are shown; significant differences from the compound without the caspofungin control (e.g., atazanavir, -caspofungin), as determined by Welch's t-test (two-tailed, assuming unequal variance) with the Bonferroni Correction, are indicated for α = 0.05 (*) and α = 0.01 (**). Significant differences from the caspofungin without compound control (e.g., DMSO, +caspofungin), as determined by Welch's t-test (two-tailed, assuming unequal variance) with the Bonferroni Correction, are indicated for α = 0.05 (#) and α = 0.01 (##). Data from separate plates are separated by two vertical lines on the x-axis; the DMSO solvent control is shown for each plate. (B) Structure of the aspartyl protease inhibitor API16. DISCUSSION The ability of C. albicans to form biofilms on biotic and abiotic surfaces presents a serious treatment challenge in the clinic as biofilms are typically resistant to all classes of antifungal drugs used to treat planktonic infections. Our results suggest that proteolysis is important for the maintenance of the C. albicans biofilm structure since anti-proteolytic agents contribute to the prevention and disruption of these biofilms. Proteases may play several different roles in C. albicans biofilm formation, an idea supported by the fact that proteases are dynamically expressed throughout the course of C. albicans biofilm formation (Nailis et al., 2010;Fox et al., 2015a). For example, Sap5 and Sap6, two secreted aspartyl proteases that are highly upregulated at certain stages of biofilm formation, are known to mediate adhesion of C. albicans cells to surfaces and possibly of C. albicans cells to one another (Kumar et al., 2015;Winter et al., 2016). Proteases may also contribute to the breakdown and acquisition of nutrients, the processing of molecules important for biofilm formation (e.g., adhesion molecules), quorum sensing, and/or extracellular matrix production throughout biofilm formation and maintenance. Although the involvement of secreted proteases in biofilm formation is a relatively new concept, there is some precedent for this idea in bacterial biofilms, where extracellular proteases were found to be involved in the processing of adhesion molecules during biofilm formation of Staphylococcus species (Koziel and Potempa, 2013;Paharik et al., 2017;Martínez-García et al., 2018). In this study, we identify several protease inhibitors from different classes that are effective at preventing biofilm formation and/or at disrupting established biofilms when combined with caspofungin, fluconazole, or amphotericin B, members of the three major antifungal classes used to treat fungal infections in the clinic. Aspartyl protease inhibitors, in particular those that inhibit HIV-1 protease, were the most effective compounds tested when combined with traditional antifungal agents. Combined with the known dependence on Sap5 and Sap6 for biofilm formation (Winter et al., 2016) and previous reports that aspartyl protease inhibitors affect C. 
albicans in vitro and in vivo (Borgvon Zepelin et al., 1999;Cassone et al., 1999Cassone et al., , 2002Cauda et al., 1999;de Bernardis et al., 1999;Diz Dios et al., 1999;Gruber et al., 1999b,a;Korting et al., 1999;Bektic et al., 2001;Pichová et al., 2001;Skrbec and Romeo, 2002;Cenci et al., 2008;Tsang and Hong, 2009;Braga-Silva et al., 2010), aspartyl protease inhibitors are potentially promising combination treatments for C. albicans biofilm infections which are recalcitrant to single drug treatments. We note, however, that we screened fewer inhibitors of other classes of proteases than we did for aspartyl proteases. Despite this bias, we succeeded in identifying several inhibitors of two additional classes of proteases, serine and metalloproteases. It may prove rewarding to conduct additional screens of FDAapproved drugs whose mechanisms rely on the inhibition of other classes of proteases with the goal of repurposing these drugs as novel antifungals. Perhaps the most unexpected result from this study was the identification of compounds capable of disrupting mature biofilms that were unable to prevent biofilm formation (Figure 5). Unlike the opposite case, where a compound that could prevent biofilm formation might be unable to penetrate a mature biofilm to have an effect, it is not readily apparent how the capacity to disrupt an established biofilm would not also inhibit the formation of a biofilm. Although we do not understand the basis for this result, it demonstrates that compounds that disrupt biofilms are not simply a subset of those that inhibit formation ( Figure 5). This observation underscores the importance of screening compounds for their antibiofilm capabilities in both types of assays. Although we focused on one type of compound, protease inhibitors, this study raises several points to consider when screening for antibiofilm agents. First, consistent with previous reports (Delattin et al., 2014;De Cremer et al., 2015), our results highlight the importance of screening for synergistic interactions, as we detected more hits and hits with stronger effects against biofilms when existing antifungal agents were present along with the compound of interest (Figure 5). Second, our results highlight the importance of screening using biofilms as opposed to planktonic cultures. For example, in our biofilm assays with saquinavir, amphotericin B showed more synergy than fluconazole whereas the opposite relationship was reported for planktonic cultures (Casolari et al., 2004). We also note that we identified compounds that had effects on their own but not in combination with existing antifungal agents, as well as the FIGURE 5 | A number of compounds had effects in just a subset of the four biofilm assays. Compounds with an effect in either the standalone or the combination versions of the Sustained Inhibition or Disruption Biofilm Assays are indicated. In total, 17 compounds had an effect in at least one of the four assays. reverse. As such, pursuing multiple assays (e.g., planktonic versus biofilm, standalone compounds versus combinations) maximizes the chance of identifying useful compounds. Finally, we note that this study was largely inspired by the discovery of the biofilm defects of the sap5 and sap6 single and double mutant strains (Winter et al., 2016). 
Thus, future compound library screening could be informed by other sets of gene knockouts with biofilm defects; likewise, results from chemical screens could identify genes (and their protein products) required for biofilm formation if the mechanism of action of the chemical compound is known. To further develop the idea of exploiting existing compounds, it should be possible to screen existing C. albicans mutant strain libraries for biofilm defects that arise in the presence of subinhibitory concentrations of traditional antifungal agents. Should biofilm formation by specific classes of mutant strains prove particularly sensitive to traditional antifungal agents, a subsequent combination screen between the traditional antifungal agents and compounds that affect that particular pathway of genes might prove informative. DATA AVAILABILITY STATEMENT All datasets generated for this study are included in the article/Supplementary Material. AUTHOR CONTRIBUTIONS ML, MG, CN, and AJ conceptualized the study. ML and MG worked on the methodology and carried out the investigation. ML contributed to validation, formal analysis, the data curation, and writing the original draft. ML and MG worked on the methodology and carried out the investigation. CC, CN, and AJ helped with the resources. ML, CC, MG, CN, and AJ reviewed and edited the manuscript. ML and CN helped with the visualization. ML, CN, and AJ were responsible for the supervision and project administration. ML, CC, CN, and AJ acquired the funding. FUNDING This work was supported by the National Institutes of Health (NIH) grants R43AI131710 (to ML), P50AI150476 (to CC), R01AI083311 (to AJ), and R35GM124594, and R41AI112038 (to CN). This work was also supported by the Kamangar family in the form of an endowed chair (to CN). The content is the sole responsibility of the authors and does not represent the views of the funders. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; and in the decision to publish the results. SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmicb. 2020.01027/full#supplementary-material TABLE S1 | Compiled data and statistics from the standalone and combination Sustained Inhibition and Disruption Optical Density Biofilm Assays as well as the BIC Sustained Inhibition Optical Density Biofilm Assay. For each compound, the concentration used, average OD 600 , average OD 600 of relevant control(s), and value(s) for Welch's t-test versus the relevant control(s) are provided. Whether the average OD 600 was below the average OD 600 of the relevant control(s) and whether the difference from the relevant control(s) remains significant following the Bonferroni Correction (α = 0.05) are also indicated. The sensitivity of SNY425 to amphotericin B, caspofungin, and fluconazole in planktonic and biofilms assays are also indicated. A list of the 80 compounds from the three protease inhibitor libraries tested in this study and a summary of the hits from the standalone and combination Sustained Inhibition and Disruption Optical Density Biofilm Assays are also included.
v3-fos-license
2022-01-31T14:39:38.016Z
2022-01-31T00:00:00.000
246413140
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://globalizationandhealth.biomedcentral.com/track/pdf/10.1186/s12992-022-00797-6", "pdf_hash": "ee0b53c53ee41bd451d7db9b067098ee9eb1031e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42569", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "sha1": "ee0b53c53ee41bd451d7db9b067098ee9eb1031e", "year": 2022 }
pes2o/s2orc
Assessing the effects of mining projects on child health in sub-Saharan Africa: a multi-country analysis Background The African continent hosts many industrial mining projects, and many more are planned due to recent prospecting discoveries and increasing demand for various minerals to promote a low-carbon future. The extraction of natural resources in sub-Saharan Africa (SSA) represents an opportunity for economic development but also poses a threat to population health through rapid urbanisation and environmental degradation. Children could benefit from improved economic growth through various channels such as access to high-quality food, better sanitation, and clean water. However, mining can increase food insecurity and trigger local competition over safe drinking water. Child health can be threatened by exposure to mining-related air, noise, and water pollution. To assess the impact of mines on child health, we analyse socio-demographic, health, and mining data before and after several mining projects were commissioned in SSA. Results Data of 90,951 children living around 81 mining sites in 23 countries in SSA were analysed for child mortality indicators, and 79,962 children from 59 mining areas in 18 SSA countries were analysed for diarrhoea, cough, and anthropometric indicators. No effects of the launch of new mining projects on overall under-five mortality were found (adjusted Odds Ratio (aOR): 0.88; 95% Confidence Interval (CI): 0.68–1.14). However, activation of mining projects reduced the mortality risk among neonates (0–30 days) by 45% (aOR: 0.55; 95% CI: 0.37–0.83) and risk for a child to develop diarrhoeal diseases by 32% (aOR: 0.68; 95% CI: 0,51–0.90). The timing analysis of observed changes showed that there is a significant decline in the risk for childhood diarrhoea (aOR: 0.69; 95% CI: 0.49–0.97), and the mean height-for-age z-scores by 28 percentage points, during the prospection and construction phase; i.e., within four years to the initiation of extraction activity. No effects were found for cough and weight-for-height. Conclusion The results presented suggest that the impacts of mining on child health vary throughout the mine’s life cycle. Mining development likely contributes positively to the income and livelihoods of the impacted communities in the initial years of mining operations, particularly the prospection and construction phase; these potential benefits are likely to be at least partially offset by food insecurity and environmental pollution during early and later mining stages, respectively. Further research is warranted to better understand these health impacts and to identify policies that can help sustain the positive initial health impacts of mining projects in the long term. Supplementary Information The online version contains supplementary material available at 10.1186/s12992-022-00797-6. Introduction The African continent holds one-third of global natural resources [1,2] and hosts more than 2000 industrial mining projects at different development stages [2,3]. This number might further increase with the growing demand for various minerals to promote a low-carbon future [1]. While the extraction of natural resources represents an opportunity for countries rich in natural resources in sub-Saharan Africa (SSA), the impact of large-scale mining projects on the health of young children remains unclear. On the one hand, mining projects can positively influence determinants of health and, thus, improve child health. 
For example, the development of mining projects has the potential to increase the share of workers with regular incomesincluding women of reproductive age [4,5] and, thus, improves households' capacity to buy healthier foods, access health care, and send the children to school [5][6][7]. Furthermore, mining projects can improve housing conditions, including proper sanitation and safe water [8,9]. In turn, better housing, sanitation, and water conditions can reduce the incidence of environment-related diseases such as respiratory infections, diarrhoeal diseases, malaria, and undernutrition [10][11][12]. On the other hand, it has been reported that mining activities can have adverse effects on child health and development [13,14]. For example, mining activities can negatively affect local and regional agricultural production through environmental degradation and changes in land use [15,16]. Consequently, food insecurity can increase, which is of particular concern for young children and pregnant women [6,17,18]. Additionally, mining projects have high energy and water demand, potentially triggering local competition over existing resources, including access to safe drinking water [19][20][21]. In the contexts where natural resources are extracted, adverse environmental impacts such as air, noise, and water pollution are a significant concern for child health [20,22,23]. Studies found that exposure to environmentally poor conditions during the early stages of human life, including in-utero exposure, can result in long-term adverse effects on cognitive abilities, respiratory functions, and nutritional status [14,17,24]. Estimated impacts of mining projects on child health outcomes, such as diarrhoea, respiratory infections, and child mortality, have been highly heterogeneous to date [6,25,26]. One reason for the high heterogeneity seen in the empirical literature is the often differential focus on early (opening phase) vs. late (extraction phase) of mining [27,28]. It also seems plausible that the heterogeneity of the currently available results is due to the narrow focus of current studies either on just one country or one mineral (such as gold) or both [6,7,14,29]. This paper aims to understand the impacts of mining activities on child health using data from 81 mining projects launched across the sub-Saharan African region between 2002 and 2019. More specifically, we pursued the following research questions: (i) What is the effect of mine opening on child morbidity and mortality in sub-Saharan African countries? (ii) How many years before or after the launch of extractive activities can health impacts be detected? Data sources and management This study was conducted by combining two different georeferenced data sources, namely: (i) the sociodemographic and health data from Demographic and Health Survey (DHS) and (ii) mining data from the Standard & Poor's Global Market Intelligence (S&P GMI) Mining Database [3]. Both data sets were restricted to SSA. Socio-demographic and health data The DHS program conducts nationally and regionally representative household survey data in over 70 lowand middle-income countries. The DHS surveys are conducted following a two-stage cluster random sampling strategy, randomly selecting households within randomly selected enumeration areas. In most countries, DHS surveys are conducted every 4-6 years. The survey datasets are available on request on the website of the DHS program (www.dhsprogram.com). 
For this study, we use data from all DHS standard surveys from SSA for which geographic coordinates were available as of March 2020 (see Fig. 1, panel A). All household and child datasets were combined with the corresponding geographic data to merge with the mining data. Of note, the DHS program introduced random noise to the cluster coordinates to ensure the privacy of the respondents: in urban settings, clusters' coordinates are shifted up to 2 km (km), and in rural areas, clusters are typically displaced by 5 km. Mining data The proprietary mining dataset was accessed through a subscription to the S&P Global Market Intelligence platform (www.spglobal.com) [3]. The mining data comprises four primary indicators: geographic point location (Global positioning system, GPS) coordinates, extracted commodities, and historic mining activities between 1980 and 2019 (e.g., mine opening and closure years). We set the year of mine activation (i.e., initiation of exploration and evaluation activities) at 10 years before the reported extraction onset, i.e., the earliest year of the operation phase with reported extraction or production. We did this, aiming to include the prospection and construction phase of the project. We created a sub-sample of mines that opened within the period during which DHS data were available (i.e. 1986-2019). Finally, mines located closer than 20 km from another mine were excluded to avoid overlapping impact areas (see Fig. 1, panel B). Panel A of Fig. 1 shows the 81 mines analysed by primary commodity extracted (coal (N = 5), diamonds (N = 7), metals (N = 59), and other mines (N = 10)). Merging of datasets by spatial analysis strategy The GPS coordinates for each DHS survey cluster and the mine point locations were used to match all surveyed households and children to one or several mines. DHS clusters within 50 km of the distance of each mine were selected. Based on previous studies showing that impacts are centralised within 10 km from a mining, project we set the treatment group within this distance range [4, 6-8, 14, 17, 30, 31]. Hence, clusters within 10 km from the mine were classified as "impacted clusters" (or treated), while clusters at 10-50 km distance were classified as "comparison clusters" (or controls). To assess the impact of mine opening events on child health outcomes, we restricted our analysis to mines with DHS records before and after the mine opening year. Figure 1 exemplifies the selection of data around mining projects in Sierra Leone. Figure 2 summarises the overall data set construction process. Data merging was done using Arc-GIS Pro (Version 2.2.4, Environmental Systems Research Institute, Redlands, CA, USA). Study design This is a quasi-experimental difference-in-difference (DiD) study comparing child health outcomes in areas directly surrounding mines to more distant locations from the same regions before and after mine activation [32,33]. The primary parameter of interest is the interaction term between the DHS cluster's proximity to a mine and the post, i.e., observations made after the mine was activated. The interaction term estimates the additional change (improvement) in health outcomes seen in areas close to the mines relative to other areas nearby but outside of the direct influence of the mines. 
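The cluster-to-mine matching described above was carried out in ArcGIS Pro. Purely as an illustration of the underlying distance logic, and not of the authors' actual workflow, the following sketch classifies a DHS cluster as "impacted" (within 10 km of its nearest mine) or "comparison" (10–50 km away) using great-circle distances; the field names, data layout, and coordinates are hypothetical placeholders.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def classify_cluster(cluster, mines, treat_km=10.0, max_km=50.0):
    """Assign a DHS cluster to its nearest mine and label it.

    Returns (mine_id, distance_km, group), where group is 'impacted' (<= 10 km),
    'comparison' (10-50 km), or None if the cluster lies beyond max_km of every mine.
    """
    nearest_id, nearest_d = None, float("inf")
    for mine in mines:
        d = haversine_km(cluster["lat"], cluster["lon"], mine["lat"], mine["lon"])
        if d < nearest_d:
            nearest_id, nearest_d = mine["id"], d
    if nearest_d <= treat_km:
        return nearest_id, nearest_d, "impacted"
    if nearest_d <= max_km:
        return nearest_id, nearest_d, "comparison"
    return None, nearest_d, None

# Hypothetical example with made-up coordinates:
mines = [{"id": "mine_A", "lat": 8.50, "lon": -11.90}]
cluster = {"id": "cl_001", "lat": 8.55, "lon": -11.88}
print(classify_cluster(cluster, mines))
```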
The resulting difference-in-difference estimates can be given a causal interpretation as long as the common trend assumption holds; i.e., as long as the treatment (within 10 km) and control areas (10-50 km from mines) would have experienced the same changes in health outcomes in the absence of the mining project. Selected variables Outcome variables In the present study, we focus on three primary child health outcomes. Firstly, we analysed child mortality indicators. All DHS surveys record all children born to the mothers in the last five years and the time point of any child death. Based on the information for age-at-death included in the DHS data, we computed a dummy variable indicating age-specific survival status (i.e. died or alive) for neonates (0-30 days), post-neonates (1-11 months), and children (12-60 months). While we kept the original DHS definition for under-five and child mortality [34], we computed neonatal and post-neonatal mortalities as children who died before reaching the age of 1 and 12 months, respectively. To calculate post-neonatal and child mortality rates, we only included children that had survived the first month or the first year, respectively. Missing data for children's age at death were imputed using a hot deck approach by taking the same age at death as the last child encountered with the same birth order in the data file [35]. Secondly, we analysed child morbidity indicators. The DHS datasets include morbidity data for all children under-5 years living at the survey time. We used information on whether a child experienced diarrhoeal or cough episodes in the last two weeks before the survey date. Of note, "don't know" responses were recoded into "missing values". Thirdly, we analysed child anthropometric data: to compute the z-scores of height-for-age, weight-for-height, and weight-for-age, DHS surveys collect data on height (in centimetres) and weight (in kilograms) for all living children aged under five years in the household and the age of the child in months. Height-for-Age (HAZ), Weight-for-Age (WAZ), and Weight-for-Height (WHZ) z-scores were then calculated using standardised reference growth curves [35]. Exposure variables The primary exposure variable in our analyses was the interaction of the distance to the mine (impacted and comparison clusters) and the mine's activity status at the year of childbirth (for child mortality) and the year of DHS survey (for morbidity and anthropometric indicators). Two variable definitions were used to determine the mine's activity status. For the primary analyses, mine activation (including the planning, exploration, prospection, and construction activities) was assumed to be at 10 years before the launch year (year zero) of mineral extraction (from now on referred to as "extraction onset"). Therefore, children born or surveyed less than 10 years before the extraction onset or later were considered exposed to an active mine, while children born/surveyed before were used as the reference group. The active mining phase was further divided into four phases corresponding to 5-year intervals for secondary analysis. These phases were defined relative to the year of extraction onset, namely: (i) the planning phase, 9 to 5 years before the extraction onset; (ii) the prospection and construction phase, 4 years before to the extraction onset year; (iii) the early extraction phase, 1 to 5 years after the extraction onset; and (iv) the advanced extraction phase, more than 5 years after the extraction onset.
The last phase was summarised in one category due to the low sample size. As for the dichotomous temporal categories, the time before mining activation (i.e., 10 years or more before the extraction onset) was used as the reference group. Covariates Many covariates were included in the analysis to adjust for child, maternal, and household characteristics. Child-level covariates included sex, age in completed months, twin birth, and a child's birth order. Child age and birth order variables were recoded into 5 and 6 categories, respectively. At the maternal level, the included covariates were the highest education level, maternal age in five-year groups, and the total number of children born to the mother. We merged the "higher education" with "secondary education" responses and dichotomised the number of children at a cut-off value of five and above. Lastly, the household characteristics included were wealth index quintile and household location (i.e., rural vs urban). Beyond covariates, we included mine fixed effects in all models to account for spatial (i.e., mine location) variability and year fixed effects to account for temporal (i.e., year of the survey and year of childbirth) variability. Statistical analysis The descriptive statistics for child health outcomes and covariate variables were double stratified by mine activation status and the distance between the DHS cluster and the mine. Logistic maximum likelihood models for binary outcome variables (i.e., mortality, diarrhoeal, and cough episodes) and ordinary least-squares linear regression models for continuous outcome variables (i.e., anthropometric z-scores) were estimated. The regressions control for child-, maternal-, and household-level factors. In addition, mine and year (childbirth year for mortality outcomes and survey year for morbidity and anthropometric outcomes) are included as fixed effects, respectively. We assume that there are similar trends in the outcome variables across years in the absence of a causal effect induced by the mine activation [6,7,14,33] and that the location of the mine projects and their activity status are not systematically correlated with other factors affecting our main outcome variables [33]. We tested this assumption by plotting child health outcomes stratified by the DHS cluster's proximity to the mine and mine activity status against the mine life stages. Main specification In the main analysis, we investigated the child health impact of mine activation using the interaction between the clusters' distance to the mine and the dichotomous mine's activity status at the year of childbirth for the mortality analysis and the DHS survey year for the morbidity and anthropometric analysis (i.e., active vs non-active mine). This approach allowed us to compare the change in the prevalence of child health outcomes between the treatment group (interaction term takes the value one) and the control group (interaction term takes the value zero). Alternative specification For the secondary analysis, an alternative specification was used to investigate child health impact throughout the mine life stages (time-varying effects of mine exposure). For this purpose, the interaction term between the clusters' distance to the mine and the four-phased mine's activity status (planning, prospection and construction, early extraction, and advanced extraction phases) was used.
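The regressions were run in Stata 14.2 (as noted below). Purely as an illustration of the main (dichotomous) difference-in-difference specification, the following minimal sketch estimates a logistic model with mine and birth-year fixed effects and survey-cluster-robust standard errors; all variable and file names are hypothetical placeholders rather than the authors' actual code. The alternative specification would simply replace the binary post-activation indicator with the four-level mine-phase variable.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged dataset: one row per child, with a binary outcome
# (e.g., neonatal death), 'near_mine' (1 if the DHS cluster lies within 10 km
# of the mine), 'post' (1 if the child was born after mine activation),
# child/maternal/household covariates, and identifiers for fixed effects and
# clustered standard errors. Column and file names are assumptions.
df = pd.read_csv("merged_dhs_mines.csv")

formula = (
    "died_neonatal ~ near_mine * post"
    " + C(sex) + C(twin) + C(birth_order_cat) + C(parity_5plus)"
    " + C(mother_edu) + C(mother_age_grp) + C(wealth_quintile) + C(urban)"
    " + C(mine_id) + C(birth_year)"  # mine and year fixed effects
)

model = smf.logit(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cluster_id"]}
)

# The difference-in-difference parameter of interest is the interaction term;
# exponentiating gives an adjusted odds ratio comparable to those in Table 3.
print("aOR:", np.exp(model.params["near_mine:post"]))
print("95% CI:", np.exp(model.conf_int().loc["near_mine:post"]).values)
```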
In this alternative specification, the prevalence of child health outcomes of each treatment group (interaction terms take values between 1 and 4) is compared against a unique control group (interaction term takes the value zero). Sensitivity analysis Given that mines may affect populations beyond the predefined 10 km boundary, we explore alternative exposure definitions in our sensitivity analysis. Specifically, we exclude these areas from the analyses by introducing an increasingly large buffer of potentially affected areas (i.e. 10-15 km, 10-20 km, and 10-25 km) around our treatment areas. This should also reduce misclassification concerns related to the up to 5 km random noise added to DHS cluster coordinates. The regression models were estimated using the statistical software STATA version 14.2 (Stata Corporation, LLC, College Station, TX, USA). Statistics are reported as Odds Ratios (OR; logistic regression) and beta coefficients (linear regression) where applicable, with 95% Confidence Intervals (95% CI) clustered at the survey-cluster level. P-values lower than 0.05 were considered significant. Descriptive statistics Two separate datasets were constructed and used in the study: (i) a data set focusing on child mortality and (ii) a data set containing all available information on childhood morbidity and anthropometric indicators (Fig. 2). Below, the descriptive statistics of the childhood mortality dataset are outlined, while the descriptive statistics of the childhood morbidities and anthropometrics are given in the Additional File 1. The final child mortality dataset contains a subset of data from 72 cross-sectional DHS datasets from 23 out of 34 SSA countries (67.6% coverage) (see Table 1). Ninety thousand nine hundred fifty-one children from 46,997 households around 81 mining projects were included. The Additional File 2 shows the complete list of included mines, country location, the year of extraction onset, the primary extracted commodities, and the total number of observations before and after mine activation. Some clusters with few observations were included in some countries, such as Gabon and South Africa, most probably located near a mine in a neighbouring country. The main descriptive statistics for child, mother and household-level characteristics are presented in Table 2. Most children (70.1%; N = 63,790) were born after mine activation. Table 2 also shows some differences among comparison and impacted groups in pre and post-mine activation periods. Overall, 3.4% (N = 3095) of children were born close to active mines. Child mortality was similar in impacted and comparison areas before mine activation and improved over time (it was, on average, lower after mines opened). Child mortality The time and spatial trends of under-five and age-specific crude mortality rates (deaths/1000 live births) in impacted and comparison groups are illustrated in Fig. 3. Similar mortality rates before mine activation are seen for under-five (panel A) and child mortality (panel D). An overall positive impact of mine activation is observed for all mortality indicators. Indeed, a noteworthy drop in the crude under-five mortality rate is observed during the advanced extraction phase in areas close to active mines (see panel A, Fig. 3). The same effect is observed during the prospection and construction phase and the advanced extraction phase for neonatal mortality (panel B, Fig. 3). A similar trend is observed for older children (Fig. 3, panels C and D), except for an observed considerable decline during the planning and early extraction phases for the child and post-neonatal crude mortality rates, respectively. Table 3 shows the average impact on child mortality indicators. On average, mine activation is associated with lower odds for neonatal mortality (aOR: 0.55, 95% CI: 0.37-0.83; column (2)). No statistically significant changes were observed for under-five (column (1)), post-neonatal (column (3)), and child (column (4)) mortality overall. Table 4 shows estimated impacts stratified by time relative to the start of mine extraction, i.e., the year of extraction onset. For neonatal mortality, we find the largest reductions during the prospection and construction phase (aOR: 0.43, 95% CI: 0.25-0.75), as well as in the advanced extraction phase (aOR: 0.10, 95% CI: 0.02-0.61; column (2)). Post-neonatal and child mortality appear to increase slightly, but the rise is not statistically significant. Figure 4 shows the relative change in diarrhoea (panel A) and cough (panel B) prevalence over time (years before and after mine activation) and cluster proximity (0-10 km and 10-50 km) at the year of the DHS survey. Overall, diarrhoea prevalence in impacted areas declined considerably after mine activation compared with diarrhoea cases in comparison areas. Although similar in both areas over mine stages, a decline is also observed for cough prevalence.

Table 2 notes: ¥ live children only; sd, standard deviation. Descriptive statistics are stratified by time to mine activation (i.e., ten years before the extraction) and the DHS clusters' distance to the mining sites. Data are from 72 Demographic and Health Surveys from 23 SSA countries. The included DHS data were collected between 1992 and 2018 and restricted to clusters within 50 km from isolated mines (i.e., mines separated by a minimum distance of 20 km from each other). All measures represent unweighted sample proportions.

Child morbidity The impacts of the mine activity status on the anthropometric mean z-scores are shown in panels C-E of Fig. 4. The trend of the mean HAZ is shown in panel C and ranges between −1.6 and −0.9 standard deviations (SDs) over the study period. When compared to children in comparison areas (i.e., 10-50 km away), mine activation seems to trigger a rapid decrease in the mean HAZ, followed by an increase, with otherwise similar trends between the two groups. The mean WAZ decreases after mine activation among children living nearby (0-10 km) but remains similar to that of children living far from active mines. Of note, the mean WAZ increases substantially soon after the extraction onset. Moreover, the mean WAZ stays above −1.2 but below −0.4 SDs, an indication of a low probability of underweight and overweight children over the study period (Fig. 4, panel D). The anthropometric WHZ seems to be much more positively affected than the HAZ and WAZ measures (see Fig. 4, panel E), ranging from −0.5 to 0.2 SDs over the study period. Again, mine activation seems to trigger an increase in the mean WHZ.
It is important to note that the mean WHZ among children living close to an active mine remains higher over the study period than that of children in comparison areas, but is nevertheless similar between the period before mine activation and the advanced extraction phase. The mean WHZ is notably increased soon after the extraction onset.

Table 3. Estimates of association between mine exposure and child mortality indicators using the main specifications. Notes: * p < 0.05, ** p < 0.01. $ Interaction term between clusters' proximity (0-10 km) and the mine activity status at childbirth year; † model including the interaction term only; ‡ model adjusted for gender, twin births, birth order, number of children ever born to mother, maternal age, maternal education, residence, wealth index, mine, and birth year. The treatment group corresponds to children born within 10 km from active mines. The reference group (control) comprises children born within a distance radius of 10 km before mine activation and those born 10-50 km away regardless of mine activity status. The estimates are relative to the year of childbirth using logistic regression models. The reported estimates are crude and adjusted odds ratios (OR), and the 95% confidence intervals (CIs) are shown in parentheses and are clustered at the survey-cluster level.

Table 5 shows the logistic regression results for diarrhoeal and cough episodes (columns (1) and (2)) and the linear predictions for anthropometric z-scores (i.e., HAZ, WAZ and WHZ) (columns (3-5)). We found an indication of significant protection against diarrhoea among children living near active mines. The risk for diarrhoea significantly decreases by 32% soon after mine activation (aOR: 0.68, 95% CI: 0.51-0.90) (column (1)). Although not significant, children living in mining areas seem to experience a decrease in their mean HAZ (column (3)) but increases in their WAZ and WHZ z-scores (columns (4) and (5)). The results for the time-specific variation of the interaction term between mine proximity to the survey cluster and the period of mine activity are illustrated in Table 6. We investigate the effect in four periods of 5 years each across the mine life stages. While the risk for diarrhoea decreases over time, the significant effect of the interaction on the risk for a child experiencing diarrhoea episodes is seen during the prospection and construction phase (aOR: 0.69, 95% CI: 0.49-0.97) (see Table 6, column (1)). Conversely, the risk for a child to experience cough episodes among those living close to an active mine is seen to increase over the study period, particularly during the advanced extraction phase, although not statistically significantly (aOR: 1.51, 95% CI: 0.80-2.86). While the effect of the interaction on children's nutritional indicators over time does not show a clear pattern, we found a significant decrease of 28 percentage points in the mean HAZ during the prospection and construction phase, an indication of an increased rate of children shorter for their age in this mining phase across mining areas (Table 6, column (3)). It is worth noting that the mean weight-for-age and weight-for-height z-scores increase over the study period, although the changes are statistically insignificant (Table 6, columns (4) and (5)). Sensitivity analysis Results of the regression model sensitivity analysis are presented in Fig. 5. In all comparisons, the first bar (green diamond) represents the baseline point estimates from Table 3 and Table 5.
The remaining bars show results when excluding the 10-15 km (red dot), 10-20 km (blue triangle) and 10-25 km (red square) areas. We do not observe significant changes in the estimated impacts on either outcome. Discussion This paper assessed the impact of 81 large-scale mining projects launched in 23 SSA countries between 2002 and 2019. We found that mine activation lowered the risk for neonatal mortality by 45% and the risk for childhood diarrhoea by 32% among children born and living within 10 km of an active mine compared to children living further away. However, no immediate impact on cough and nutritional status was seen. Looking more closely at the timing of observed changes in health outcomes, we observed that the risk for neonatal mortality reduced significantly during the early mining phases (by 53% in the prospection and construction phase and 90% during the advanced extraction phase). The odds for childhood diarrhoea decreased by 31% and the mean height-for-age z-scores reduced by 28 percentage points during the prospection and construction phase.

Table 4. Estimates of association between child mortality indicators and the interaction of mining proximity (0-10 km vs 10-50 km) and the mine life stages using alternative specifications. Notes: * p < 0.05, ** p < 0.01. $ Interaction term between clusters' proximity (0-10 km) and the mine activity status at childbirth year. All models are adjusted for child sex, twin births, maternal age, maternal education, residence, wealth index, birth order, number of children ever born to mother, mine, and birth year. The treatment group corresponds to children born within a distance radius of 10 km from active mines, categorised in four mine life stages. The reference group (control) comprises children born within 10 km before mine activation plus those born 10-50 km away regardless of the mines' activity status. All logistic regression estimations are stratified by mine life stages and compared against the reference comprised of the interaction between clusters located at 10-50 km and all periods of mine life stages. The reported estimates are crude and adjusted odds ratios (OR), and the 95% confidence intervals (CIs) are shown in parentheses and are clustered at the survey-cluster level.

The reduced neonatal mortality close to mines is in line with other studies using DHS data [4,7], which have documented similar decreases in infant mortality in the first 6 and 12 months of children's life [4,7]. The present study showed the primary benefits during the neonatal period, while no impacts on post-neonatal mortality were found. Another study investigating infant mortality around gold mines across SSA reported mixed effects, with impact heterogeneity primarily driven by mine location [6]. Our analysis shows that the lack of effect on post-neonatal mortality may come from the large and almost instant drops in mortality rates experienced by communities far away from mines, while no significant decline in the vicinity of the mines at the time of mine activation is observed.

Fig. 4 caption: Morbidity and child anthropometrics trends in impacted and comparison areas. Panel A, diarrhoea; panel B, cough; panel C, height-for-age z-scores; panel D, weight-for-age z-scores; and panel E, weight-for-height z-scores. Temporal comparison is relative to the mine extraction period (x-axis), and spatial comparison is relative to the cluster's proximity to the mine (impacted, 0-10 km vs comparison, 10-50 km areas). The yellow shade illustrates the baseline period used in the regression models. Error bars show standard errors clustered at the survey-cluster level.

Drops in the risk for neonatal mortality around recently opened mines are often linked with increases in local welfare and women empowerment [4,7]. Studies have reported that women living close to mines are more likely to have formal education, have better jobs, earn more income and live in wealthier households [17,31]. These changes could contribute to reducing neonatal mortality in mining areas, as observed in our study. Our results suggest that the impact on neonatal mortality risk likely differs substantially across the mining life stages. We found mortality reductions mainly during the prospection and construction phase and the advanced extraction phase. The pre-extraction period typically corresponds to the mine investment period, which generates local employment and consequently household economic growth [6,17,36]. The post-extraction effects are more surprising, as mine-related pollution from extraction activities might increase over time and offset the positive employment and income effects seen in the initial stages [23].

Table 6. Estimates of association between child health outcomes, anthropometrics, and the interaction of mining proximity (0-10 km vs 10-50 km) and the mine life stages using alternative specifications (interaction: proximity*mining phase$). Notes: * p < 0.05, ** p < 0.01. $ Interaction term between clusters' proximity (0-10 km) and the mine activity status at survey year. All models are adjusted for child sex, twin births, maternal age, maternal education, residence, wealth index, birth order, number of children born to mother, mine and birth year. The treatment group corresponds to children located within a distance radius of 10 km from active mines at the DHS survey year, categorised in four mine life stages. The reference group (control) comprises children located within a distance radius of 10 km before mine activation plus those born 10-50 km away regardless of the mines' activity status at the DHS survey year. All regression estimations are stratified by mine life stages and compared against the reference comprised of the interaction between clusters located at 10-50 km and all periods of mine life stages. Logistic regression models are used for estimating the odds ratios for diarrhoeal and cough episodes (columns (1) and (2)) and linear regression models are used for anthropometric indicators (columns (3), (4), and (5)). The reported estimates for morbidities (i.e., diarrhoea and cough) are crude and adjusted odds ratios (OR), and those for the child's anthropometrics are crude and adjusted beta coefficients. The 95% confidence intervals (CIs) are shown in parentheses and are clustered at the survey-cluster level.

Table 5 notes: * p < 0.05, ** p < 0.01. $ Interaction term between clusters' proximity (0-10 km) and mine activity status at survey year; † model including the interaction term only; ‡ adjusted for gender, child age, twin births, maternal age, maternal education, residence, wealth index, birth order, number of children ever born to mother. The treatment group corresponds to children located within a distance radius of 10 km from active mines at the DHS survey year. The reference group (control) comprises children located within a distance radius of 10 km before mine activation and those born 10-50 km away regardless of the mines' activity status at the DHS survey year. Logistic regression models are used for estimating the odds ratios for diarrhoeal and cough episodes (columns (1) and (2)) and linear regression models are used for anthropometric indicators (columns (3), (4), and (5)). The reported estimates for morbidities (i.e., diarrhoea and cough) are crude and adjusted odds ratios (OR), and those for the child's anthropometrics are crude and adjusted beta coefficients. The 95% confidence intervals (CIs) are shown in parentheses and are clustered at the survey-cluster level.

During the latter stages of the mining projects, improvements can be explained by further economic development or health promotion activities supported by mines. In line with this hypothesis, it is argued that the size of primary exports by mines increases in later stages during the resource extraction phase [36,37]. Consequently, they may contribute more to local and national economic growth and potentially ramp up corporate social responsibility activities, such as investments in local water and sanitation infrastructures [8,36,38,39]. Overall, these results suggest that the impact of mining projects on local development [6,40,41] might be an opportunity for African countries to work towards the ambitious target of the 2030 Agenda for Sustainable Development to curb infant and under-five mortality [42]. Only a few studies examine the effect of mines on child morbidity and malnutrition [8,14,17]. Using similar data, Dietler and colleagues found no effect of mining activities on diarrhoea prevalence [8]. Our findings suggest that mine activation reduces the risk for a child experiencing diarrhoeal episodes if living within 10 km. In addition, the impact of mining projects on childhood diarrhoea may be more prominent during the prospection and construction phase of the mine. Fluctuations of impacts according to the stage of mine development have been reported by other studies [27,28]. In our study, the differential effect size depending on the mining life stage may explain the absence of consistent findings in other studies [6,43]. Although curable, diarrhoeal diseases remain a common cause of death for young children in SSA countries [6,41]. Many of these deaths are attributed to poor water and sanitation infrastructures [39,[44][45][46][47]. The opening of many large-scale mining projects in the last two decades represents an excellent opportunity for lowering both the prevalence of and mortality due to diarrhoea [1,2,6]. The planning, construction, and early extraction periods are capital and investment-intensive. In addition, intense corporate social responsibility interventions on water and sanitation and job creation characterise these project phases and, thus, are more likely to decrease waterborne diseases such as diarrhoea [8,38]. Further, this economic development is concurrent with women's empowerment, which can facilitate investments in child health and, thus, disease prevention [4,30,43]. Further investigation is warranted to illustrate how economic growth translates into health gains at the local level in industrial mining areas. Contrary to diarrhoea, there was a sharp increase in the risk for cough episodes in impacted areas over the study period, particularly later in the advanced extraction phase, although not statistically significant. These results are similar to those of previous studies reporting an increased likelihood of respiratory-related diseases in children living in mining communities [48,49] and may reflect the increased levels of air pollution around active mines found in other studies [50][51][52][53].
Additionally, mining-related and other environmental pollution have been associated with poor child health outcomes, including respiratory diseases [23,54,55]. Our findings, however, point to a need for further research to better understand the distribution pattern across countries and different types of mines. In addition, managing air pollution around recently active mining projects could help reduce the respiratory-related disease burden among young children. We found no evidence of an effect of mining on any nutrition indicator when using the baseline specifications. However, when exploring the timing of observed changes, children's growth appears more limited during the prospection and construction phase of the mine.

Fig. 5 caption: Estimates are adjusted Odds Ratios of under-five and age-specific mortality rates (panel A) and child morbidities (panel B), and adjusted beta coefficients of child anthropometrics (panel C). The baseline specification model (control group is the entire 10-50 km area) is included for comparison. Error bars show 95% confidence intervals clustered at the survey-cluster level. bef, before; yrs., years.

Our findings are in line with those reported in other studies conducted in low- and middle-income countries, which found that mining activities were associated with an increased rate of poor nutritional status among children born to mothers living close to large mines [17,56]. However, these findings do not corroborate those reported in a similar study conducted in the context of gold mines in Colombia [14]. Romero and Saavedra reported that living near active gold mines did not affect either low birth weight or stunting in newborns [14]. However, contrary to our findings, recent evidence suggests lower stunting and underweight rates in children living in mining communities [9]. Mixed effects of mine operations on anthropometric indicators have also been reported in three SSA countries [6,43]. These differences may partially be explained by differences in the empirical approach used across studies. We use a different temporal exposure definition (i.e., only records from less than ten years before the mine extraction phase are considered to be exposed to mine activities). Our strategy may have affected our results in two ways: [1] the selection strategy for the control group led to a smaller sample size and reduced overall statistical power, and [2] more positively, by analysing only changes over time and abstracting from cross-sectional relationships between mining locations and general population and health characteristics. The rapid change in land use during the prospection and construction phase of mining projects, including land-grabbing by mining companies, environmental degradation, and structural shifts in income-generating activities, can lead to food insecurity and, thus, poor nutritional status, particularly for young children [16,27,57]. This change in land use and its effects are in line with our results showing an increase in the prevalence of stunted children during the prospection and construction phase. This period is usually considered the baseline period by most studies reporting improvement in the nutritional status of children living close to recently opened mining projects [8,17,43]. These differences can be explained by the fact that such a definition of temporal exposure can affect the estimated effect by allocating more stunted children to the control group and thus changing the estimated effect's direction and size.
Our findings point to a need for further research to assess the temporal variations in childhood nutritional status in mining areas. This study was guided by a well-known and extensively discussed methodology [4, 6-8, 14, 17, 30] to explore mine-induced changes in child health outcomes. The main contribution to the existing scientific literature is that the modified identification strategy and alternative specifications better investigate causal effects over the mine life stages. Nevertheless, our findings have several limitations. Firstly, our temporal exposure definition reduced the sample size and, thus, the statistical power of most of the performed analyses. Specifically, our strategy resulted in a smaller sample size before mine activation, which did not allow us to see trends in the prevalence of health outcomes in the absence of the mining projects. Similarly, a small sample size was also obtained for the period five or more years after the onset of extraction activities, limiting our analysis of disaggregated observations in this period, i.e., between 5 and 10 years and more than ten years after the onset of extraction activities. At the same time, it allocated more children with 'positive' health outcomes to the treated group, which may have changed the direction and size of the estimates. Secondly, we focus on large-scale mining projects; however, a substantial proportion of the health-related effects may derive from artisanal and small-scale mining activities, which are often found in proximity to industrial mining projects [25,45]. Thirdly, we did not exclude large cities from our sample, which could introduce some bias. Many factors may play a role in child health, and substantial differences exist between city and non-city settings. We could not adjust for several factors such as population density and urbanisation. Furthermore, self-reported data such as diarrhoea and cough are prone to recall and reporting bias. Lastly, the inaccuracy of mine GPS data and the coordinate reallocation by the DHS could have introduced errors and reduced our statistical power. Conclusion and recommendations The results presented in this paper suggest that the impact of mines on child health is complex and likely nonlinear over time; i.e., significant effects can be found in some mine life stages but not in others. We find evidence that the launch of industrial mining projects accelerates the improvement of neonatal survival and the reduction in the risk for childhood diarrhoea in SSA countries, with significant contributions during the prospection and construction and the advanced extraction phases. While the launch of industrial mining projects seems not to have any impact on childhood cough and nutritional status, our evidence points to an increase in the stunting rate before the launch of extraction activities and an increased rate of respiratory disease symptoms once extraction starts, reflecting increases in food insecurity and environmental pollution, respectively. Therefore, health management plans with an emphasis on maintaining positive health impacts throughout the mining life stages and addressing the identified risks to respiratory and nutritional health in children are advisable. On the other hand, the varying effects of industrial mining on child health outcomes throughout the mining life stages may reflect differential mine-related contributions to economic growth and community development over time.
Further research aiming to provide more insights into the temporal effects of mine impacts and, thus, a better understanding of these complex dynamics of health impacts is recommended. These future studies should be powered by longitudinal data to determine whether the association between these health outcomes and mining varies based on the mining setting (e.g., type of resource extracted, country location of the mine, preventative measures taken by the company). The studies should include health monitoring data, which should be part of the mine's health mitigation and monitoring plan.
v3-fos-license
2018-12-06T20:23:57.140Z
2017-11-10T00:00:00.000
55866692
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.ccsenet.org/journal/index.php/apr/article/download/70995/39223", "pdf_hash": "454e4e8e97a104d4f614c308f46cbacb85400937", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42570", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "454e4e8e97a104d4f614c308f46cbacb85400937", "year": 2017 }
pes2o/s2orc
Microbial Purification in Well-Water Using UV-Vis LEDs and Monitoring Using Laser-Induced Fluorescence Well-water contaminated by microbial bacteria has been purified using LEDs radiating in the ultra-violet and visible spectral regions. The contaminated water sample was exposed to the LEDs in specially constructed chambers and the purification process carried on for 3 days. The efficiency of the process was determined using a 445 nm diode laser to induce fluorescence (LIF) and the monitoring of coliform bacteria count (Total coliform, Fecal coliform and Escherichia coli) as well as Total Heterotrophic Bacteria (THB). The LIF peak fluorescence intensities at 526 nm (Raman water peak) and dissolved organic matter fluorescence intensity at 550 nm were determined. Using the fluorescence intensity of purified drinking water as reference, a fluorescence intensity ratio was calculated. A decrease in this ratio with time, at the two wavelengths indicated a proof of purification. Using the values of the slopes, the red and green LEDs proved most efficient while the UV was the least. From the counts of coliform bacteria and THB, the light sources registered zero after the first day of purification, but counts were recorded thereafter for some of the bacterial for some light sources. This may be attributable to bio-films formation on internal surfaces of the purification chamber due to excessive temperatures. A reduction in fluorescence intensity observed in the sample stored in dark environment could be attributed to the stationary and logarithmicdecline phases of the growth curve of bacterial population. This purification technique is inexpensive and can easily be adapted for domestic water purification for reducing waterborne bacteria. Introduction Water is associated with life, and indeed needed by all living things for survival.Its deficiency in active cells results in dehydration and has caused many deaths and adverse health situations.Its quality is a powerful determinant of good health, and has therefore gained a critical attention.This is because it is often contaminated by micro-organisms such as pathogenic enteric bacteria, viruses, intestinal parasites and other unwanted chemical substances leading to a reduction in its quality.Water of poor quality is a health hazard, with the most predominant waterborne disease, diarrhea, having an estimated annual incidence of 4.6 billion episodes and causing 2.2 million deaths every year.It results in malnutrition, weakness, and an increase in vulnerability to diseases which can be life threatening.In Africa, about 42% of healthcare providers do not have access to clean water, while globally, 663 million people (about 10%) do not have access to safe drinking water (WHO/UNICEF JMP Report, 2015).If authorities do not take measures to resolve such fundamental human need for safe and portable water, about 135 million people will die from water-related diseases by 2020 (Gleick, 2002). 
The wholesomeness of water means absence of suspended solids, inorganic solids and pathogens which are disease-causing organisms.Ideally, water must contain only its molecules without any contaminants.The Safe Drinking Water Act defined contaminant as any physical, chemical, biological, or radioactive substance or matter present in water (Contaminant Candidate List, 2016).The exponential growths in the population of most developing countries have put pressure on the quality of water sources due to the rise in waste production without the requisite increase in waste management techniques and facilities to handle the situation. Ghana is endowed with water resources from rainfall, surface water (rivers, lakes, ponds and streams) and groundwater sources (springs and wells), but the latter has proved to be cost-effective and a long-lasting source of water supply.This has led to the drilling of over 10,000 boreholes and over 45,000 hand-dug wells in the country.50% of the entire population use water from groundwater sources while the rural communities, which form about 70% of the total population, rely mostly on groundwater source for all chores (ISODEC, 2011). One way of assessing the quality of groundwater is by monitoring dissolved organic matter (DOM) as its concentration can reflect the possibility of contamination (Leenheer et al., 1974;Barcelona, 1984).Humic and fulvic acids of DOM affect the solubility of organic pollutants in groundwater and can contribute to the long-range transport of harmful chemicals (Chiou et al., 1986) which in groundwater can lead to production of carcinogenic disinfection byproducts (DBPs) during drinking water treatment (Singer, 1994;Chomycia et al., 2008).According to Tebbute (1992), traces of pathogenic organisms are found in groundwater as a result of poor well-construction techniques which are associated with bedrock aquifers in which large openings afford direct connection between the surface and groundwater Poorly constructed, cracked or unsealed wells can provide an avenue for coliform bacteria to enter groundwater.Coliform bacteria are a group of microorganisms found in soils, surface water and on plants, but can be washed into groundwater by rain.Such bacteria are used as pointers in water tests as their presence indicates that pathogens could also be in the water.Analysis is conducted for three different groups of coliform bacteria, with each having a well-defined level of risk to human health.Total coliform bacteria are generally harmless and may have the environment as its source.Fecal coliform bacteria (sub-group of total coliform bacteria) appear in great quantities in the feces of people and animals.Escherichia coli (E.coli) are found in great quantities in the intestines of warmblooded animals including humans.Some strains can cause illness, and their detection indicates fecal contamination of a well and poses a high risk for illness from disease-causing organisms (Coliform Bacteria, 2016;Water Fact Sheet, 2009).The regular monitoring and testing for coliform bacteria in groundwater is necessary to ensure the supply of safe water to consumers. 
Another quality assessment considered an accessory indicator in the measurement of coliform in water is the heterotrophic plate count (HPC).This is an aerobic-anaerobic bacteria test used for monitoring general bacteriological water quality, but cannot be used in fecal contamination test (Hsu et al., 1995).Heterotrophic bacteria are not indicators of pathogenic conditions, but can cause infections in skin, lung and gastroenteritis (Bartram et al., 2003;Chopra & Houston, 1999, Helmer et al., 1997).Heterotrophic bacteria are present in all water types (Foulquier et al., 2011;Griebler & Lueders, 2009) and drinking water treatment does not remove or inactivate all heterotrophic organisms.Some are resistant because they may be in a spore or vegetative form with an impervious membrane, or may be protected from treatment as part of an aggregate (Geldreich, 1996).Their concentration in water varies from less than 1 CFU/ml (colony forming unit per milliliter) to over 10000 CFU/ml (Payment, 1999;Pepper et al., 2004;Stine et al., 2005) and depends on temperature and the amount of absorbable organic material.The maximum permissible level of heterotrophic bacteria in potable water is 500 CFU/ml (Amanidaz et al., 2015).In groundwater, under the direct influence of surface water, their concentrations can be highly variable.Such variations can be sudden or gradual over time and may indicate a change in water quality. Water purification techniques which aim at removing all contaminants in order to make it safer for use have existed, while new and more efficient ones have recently been discovered and employed.Some basic purification methods in use include filtration (eliminates most of suspended particles), oxygenation (allows atmospheric oxygen to dissolve in the water killing microbes), solar disinfection, chlorination and reverse osmosis.Some of these techniques may be expensive and saddled with disadvantages.Chlorination water treatment, for instance, leaves behind disinfection by-products (DBPs) which may have adverse birth defects and outcomes and causes bladder cancer (Villanueva et al., 2006). Purification using light sources involves exposing organism in contaminated water to radiations mainly within the ultra-violet (UV) and visible (Vis) parts of the electromagnetic spectrum.This results in inactivating both pathogenic and non-pathogenic bacteria (Hamamoto et al., 2007) as the rays strike and penetrates the outer cell membrane, going through the cell body and disrupting the DNA -preventing reproduction (Here, nothing is being added except the energy which does the killing).Such radiation has the potential of causing destruction to a lot of bacteria species; mold spores, algae, virus, and yeast based on the dose of energy delivered to the organism by the radiation and do not form any significant DBPs (Oppenheimer et al., 1997).With respect to the use of light sources, LEDs are preferred and have been receiving tremendous attention amongst researchers over the past few years. They use electricity more efficiently, do not require any warm up-period and it is possible to adjust their wavelengths to supply desirable radiations (Jo, 2013;Nelson et al., 2013;Crook, 2011;Vilhunen et al., 2011).UV LEDs in particular are used to inactivate chlorine-resistant pathogens within a relative short contact time without producing undesirable DBPs during water treatment (Ibrahim et al., 2013;Bowker et al., 2011;Crawford et al., 2005). 
Simple and non-destructive spectroscopic methods using various light sources have been shown to be useful in the structural and functional studies of DOM (Chin et al., 1994). Lasers and LEDs have been used for disinfection in water (Adeboye, 2014;Gondal et al., 2009;Copa & Gitchel, 1981) while induced fluorescence using these same sources (LIF for lasers) has been used to estimate water pollution and probe the composition of DOM in watersheds (Sharikova, 2009). The objective of this study is to assess the efficiency of purification in groundwater harvested from a heavily patronized hand-dug well located near the University of Cape Coast by determining its fluorescence spectra and measuring the count of coliform bacteria (Total coliform, Fecal coliform and Escherichia coli) and Total Heterotrophic Bacteria (THB) before and after treating with different LED sources within the UV-Vis spectral regions. The extent of purification using each source is also analyzed. Study Area The University of Cape Coast is located in the Central Region of Ghana (05°06′00″N, 01°15′00″W) with no elevation above sea level. Within the immediate environs of the University is Amamoma, a settlement where a large number of the students of the university live. Availability of water is usually a problem in Cape Coast and its environs and therefore most inhabitants tend to rely on hand-dug wells for their daily chores (Figure 1). Duplicate dilutions of 0.1 mL and 1 mL of each sample were inoculated on plate count agar using the spread and pour plate techniques, respectively, and incubated at 37°C for 48 hours. All colonies were counted, and the average of the duplicate samples was recorded as the THB count/mL (CFU/milliliter) for the sample. Similarly, 2 duplicate dilutions of 0.1 mL and 1 mL of each sample were plated on Eosin Methylene Blue agar; one was incubated at 37°C for 48 hours to observe for TC and the other duplicate was incubated at 44°C for 48 hours to observe for FC. All purple colonies were counted, and the averages of the duplicate samples were recorded as the TC and FC counts/mL (CFU/mL), respectively, for the sample. For Escherichia coli, each of the presumptive colonies (metallic green sheen colonies on the FC plates) was sub-cultured in 10 mL of Peptone Water (Oxoid) for biochemical testing. Each colony was grown in peptone water and incubated at 44°C for 24 hours. A drop of Kovac's reagent was then added to the tube of peptone water. All the tubes showing a red ring color development after gentle agitation indicated the presence of indole and were recorded as a confirmation of Escherichia coli. All colonies of that morphological type were then enumerated and recorded.
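Purely as an illustration of how plate counts from the duplicate dilutions can be converted into the reported CFU/mL values (averaging the duplicates after correcting for the plated volume and dilution), the sketch below uses invented colony counts; it is not taken from the study's data.

```python
def cfu_per_ml(colony_count, volume_plated_ml, dilution_factor):
    """Convert a single plate count to CFU/mL of the original sample."""
    return colony_count / (volume_plated_ml * dilution_factor)

def mean_cfu(plates):
    """Average duplicate plates, each given as (colonies, volume_ml, dilution)."""
    values = [cfu_per_ml(c, v, d) for c, v, d in plates]
    return sum(values) / len(values)

# Hypothetical example: duplicate THB plates from an undiluted sample
# (dilution factor 1), with 0.1 mL and 1 mL plated and 52 and 480 colonies.
duplicates = [(52, 0.1, 1.0), (480, 1.0, 1.0)]
print(f"THB = {mean_cfu(duplicates):.0f} CFU/mL")
```

With these invented counts, the two plates give 520 and 480 CFU/mL, averaging 500 CFU/mL, which happens to equal the maximum permissible heterotrophic count quoted earlier.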
Purification Chamber A vacuum thermos flask (1 liter volume) was used as the purification chamber and was chosen because it could provide and ensure an efficient and easily-maintained system. A glass tube was driven through a rubber cork, and LEDs arranged on strips were wound round the tube. The terminals of the LEDs were passed through the rubber cork. A specially fabricated white glass tube made of quartz was then lowered to the base of the cork to serve as an outer shield to protect the LEDs. This is as shown in Figures 2a and 2b. The shielded LEDs on the glass tube were then lowered into a flask, which served as the purification chamber, as shown in Figure 2c. The material used to protect the LEDs needed to be transparent in the UV-Vis spectral regions. A white quartz glass, which is transparent in the visible region, was therefore used, as a previous study had confirmed its transparency in the UV region (Sackey et al., 2015). Design Involving LEDs Five different LEDs (green, red, ultra-violet, white and blue) along with five flasks, respectively, were used. The reflecting inner surface of each flask ensured an even distribution of light while at the same time confining the heat generated within the purification environment. The LEDs were powered by a 12 V DC supply. Figure 2b shows the powered LEDs. Results and Discussions The spectra of the LEDs used were first taken using a USB 4000 spectrometer to determine the wavelength at which each exhibited its maximum intensity. The graphs of their normalized spectra are shown in Figure 4. Temperature Analysis on Purification Chamber Two distinct analyses involving temperature were carried out on each of the purification chambers. The first was a study conducted before the commencement of the purification process to determine the rate of heat loss from each flask. This was done by filling each flask with water at a temperature of 95 °C and measuring the temperature every hour for 4 hours. The temperature variations, which are represented in the vertical column graph in Figure 5, indicate a similar rate of heat loss for all 5 purification chambers. The second analysis involved a determination of the temperature within the purification chamber each time data was to be collected. As expected, the LEDs warmed up with time; therefore this measurement provided the temperature of the sample at each time of measurement. The findings of these measurements for each LED are represented in Figure 6. The initial temperature reading for all the samples was 27 °C.
LIF DOM Measurements The LIF from the samples was determined for each light source used and for the 3 days of purification. Figure 7 is a composite graph showing the LIF after each day of purification. For each plot, a comparison is made with the natural drinking mineral water (represented as 'purified') and the original non-purified water sample (represented by 'NT'). The plot labeled 'Dark' is for the control untreated sample kept in a completely dark environment throughout the study period. Comparing the plots with the plot for natural drinking mineral water (purified), it is noticed that both the peak fluorescence intensity at 526 nm (Raman water peak) and the DOM fluorescence intensity at 550 nm reduced from Day 1 to Day 3 for all the LEDs used. This can be assigned to photo-degradation. Additionally, the fluorescence intensity for the sample stored in the dark environment (control) also decreased with time, meaning that something else aside from photo-degradation was taking place. This observation is explained under section 3.3: Analysis Based on Colony Formation before and after Purification. No shoulder was observed at 550 nm on the graph for the natural drinking water (purified), which can be assigned to the fact that the natural drinking water contains no DOM. DOM is only observed in contaminated water samples and was therefore only observed in the water samples collected from the well. Using the fluorescence intensity of the purified drinking water as reference, a fluorescence intensity ratio (FIR) was calculated after each day of purification and for each light source at 526 nm and 550 nm. This was calculated using the equation below:

FIR = FI_S / FI_NDW

where FI_S is the fluorescence intensity of the sample at a given time (Day) of purification, for a given LED and emission wavelength (526 nm or 550 nm), and FI_NDW is the fluorescence intensity of the natural drinking mineral water, which was used as the reference. Figure 8 is a composite graph for the variation of the FIR for the different LEDs with time; Figure 8a represents the ratios at 526 nm, while Figure 8b represents the ratios at 550 nm. It is clear from Figure 8 that the DOM substances in the water sample appear to be more sensitive to the visible light sources than to the UV light source, and photo-degradation was therefore more pronounced with the visible light sources. Generally, the fluorescence intensity ratios (FIR) calculated for the various light sources at 526 nm and 550 nm decreased with time, indicating an improvement in the quality of the water and confirming some purification. Using the data obtained from the FIR, a line of best fit was drawn, a linear equation was established for each light source (526 nm and 550 nm), and the slope of each line was determined and used as an indicator of the rate and efficiency of purification. Higher slope values indicated a faster and more efficient purification process. Table 1 gives the values of the slopes extracted from Figure 8 and the ranking of the LEDs at 526 nm and 550 nm. It can be deduced that at both 526 nm and 550 nm, the red and green LEDs proved most efficient in the DOM purification process, while the UV was the least efficient. This observation with respect to the UV conforms to a previous study finding that drinking water purification using UV LEDs is not an efficient technique (Adeboye, 2014).
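The FIR and the slope-based ranking can be reproduced in a few lines of code. In the sketch below, only the formula FIR = FI_S / FI_NDW and the linear fit of FIR against the day of purification follow from the text; the intensity values themselves are illustrative placeholders, not the measured data.

```python
import numpy as np

# Sketch of the FIR calculation and the slope-based ranking. The intensity values are
# illustrative placeholders; only the formula FIR = FI_S / FI_NDW and the linear fit of
# FIR against the day of purification follow from the text.

FI_NDW = 1.0                              # reference: natural drinking mineral water
days = np.array([1.0, 2.0, 3.0])
FI_S = np.array([2.6, 2.1, 1.7])          # hypothetical sample intensities at 550 nm for one LED

FIR = FI_S / FI_NDW                       # fluorescence intensity ratio for each day

# Line of best fit FIR = slope * day + intercept; a steeper (larger-magnitude) decline in FIR
# is taken here to indicate a faster, more efficient purification process.
slope, intercept = np.polyfit(days, FIR, 1)
print(f"FIR per day: {FIR}")
print(f"Slope of FIR vs. day: {slope:.2f}")
```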
Analysis Based on Colony Formation before and after Purification There was a count of coliform bacteria (Total coliform, Fecal coliform and Escherichia coli) and Total Heterotrophic Bacteria (THB) before purification and a regular periodic count on a daily basis. These counts, measured in Colony Forming Units per milliliter (CFU/mL), before purification (NT) and for the 3 days of purification using the various light sources, are presented in Table 2. The count for the sample stored in the dark is also presented. As shown in Table 2, all the light sources registered a reduction to a zero (0) count after Day 1 of purification for all the coliforms and bacteria studied. Some counts were, however, recorded after Days 2 and 3 of purification for some of the light sources and some of the coliforms and bacteria studied. It is known that purification effectiveness is a function of the contact time, and an increase in temperature beyond the value appropriate for drinking water influences the rate of purification reactions. Temperatures exceeding the required temperature might have led to the formation of bio-films on internal surfaces (Ainsworth, 2004). Bio-films are known to contain several living heterotrophic bacteria, fungi, protozoa, nematodes and crustaceans. It can be deduced from the temperature graph in Figure 6 that the temperature within the purification chamber increased after Day 1, which could be the reason for the counts observed thereafter. Also, the microbial re-growth observed at the end of Day 2 may be due to the organic matter present in the water samples (as depicted in Figure 8b). With respect to the control sample, it will be observed that although it was kept in a dark environment there was a reduction in all the counts with time after Day 1. This means that other phenomena aside from photo-degradation might be taking place. According to a study (Tortora, Funke, & Case, 2010), bacterial populations follow a sequential series of growth phases: lag, exponential growth, stationary and logarithmic decline. The observed reduction may be attributed to the stationary and logarithmic-decline phases of their growth curve. At the later stage of the stationary phase, increased cell density often causes an accumulation of organic acids and other toxic bio-chemicals as a result of the depletion of nutrients and oxygen. In the logarithmic-decline phase, limiting factors intensify and the population declines, with some cells going into dormancy; they remain viable but do not grow. Some cells enter a starvation mode that helps them resist the lack of nutrients (Talaro & Chess, 2015).
Conclusion In this study, the use of LEDs radiating in the ultra-violet and visible spectral regions for the purification of handdug well-water samples have been demonstrated to have a great potential.The UV LED radiated at 396 nm while the visible LEDs, blue, green, red and white radiated at peak wavelengths of 447 nm, 510 nm, 629 nm and 442 nm respectively.The white LED also exhibited a broad band ranging from 520 nm and 626 nm.The purification was carried out for three (3) days in specially constructed chambers, and the efficiency and quality of purification periodically monitored using two techniques: laser-induced fluorescence spectroscopy and bacteria and Total Heterotrophic Bacteria count.The laser-induced fluorescence of the samples was determined for each light source and the peak fluorescence intensities at 526 nm (Raman water peak) and DOM fluorescence intensity at 550 nm determined.Using the fluorescence intensity of purified drinking water as reference, the fluorescence intensity ratio was calculated at these two wavelengths after each day of purification and for each light source. The decrease in the fluorescence intensity ratio with time at the two wavelengths for the various light sources generally indicated the extent of photo-degradation of the water samples, characterized by the slope of the line. The steeper the slope the greater is the rate and extent of photo-degradation.Comparing the values of the slopes at both wavelengths, it was clear that the red and green LEDs proved most efficient in the degradation process, while the UV was the least efficient due to the characteristics of the DOM. Samples were taken before and after exposure to the LEDs and were analyzed with culture techniques.A reduction in the coliform bacteria and Total Heterotrophic Bacteria counts was observed throughout the study.In particular, counts reduced to zero after the first day of irradiation for all light sources, after which microbial re-growths was observed for some light sources.We posit that this re-growth may be due to the presence of some DOM remaining and subsequent over-heating may have led to the formation of bio-films within the water samples.The sample stored in a dark environment also exhibited a reduction in its fluorescence intensity and coliform bacteria and Total Heterotrophic Bacteria counts with time.The observed reduction may be attributed to the stationary and logarithmic-decline phases of the growth curve of bacterial population where the intensification of limiting factors leads to a population decline. Light radiation based water purification has a great potential as nothing is added except the energy which does the killing of the bacteria and therefore does not form any purification by-products.This technique is therefore amendable for domestic water purification as it is easy to use, inexpensive and dependable for reducing waterborne pathogens. is a composite figure in which (a) is a map of Ghana highlighting the Central Region, (b) shows the study area Amamoma and (c) shows area details within Amamoma indicating the investigated well. Figure 1 . Figure 1.(a) Map of Ghana highlighting the Central Region, (b) Study area Amamoma, (c) Details within Amamoma showing the investigated Figure 2 : Figure 2: (a) Fabricated tube showing the LED's housed and protected within the quartz glass material.(b) The various LED's powered.(c) The individual purification chambers for the various LED's Figure 3 . Figure 3. LIF setup for DOM measurements Figure 4 . Figure 4. 
Normalized spectral emissions of the different LED sources used.
Figure 5. Temperature variations with time within each purification chamber, indicating a similar rate of heat loss for all 5 flasks from an initial temperature of 95 °C.
Figure 7. Composite graph showing the LIF of the samples after purification using the various LEDs, compared with the natural drinking mineral water ('purified') and the original non-treated water sample kept in a completely dark environment ('NT').
Figure 8. Changes in fluorescence intensity ratios for the different LEDs with time at (a) 526 nm and (b) 550 nm.
Table 1. Slope values and ranking of the LEDs at 526 nm and 550 nm, respectively.
Table 2. Microbial population (CFU/mL) before and after purification with the various light sources and for the 3 days of treatment.
Per (Facts on Drinking Water, 2016), the maximum acceptable level for fecal coliform, total coliforms and E. coli is 'none detectable per 100 mL'. This simply means that for every 100 mL of drinking water tested, no coliforms should be detected.
v3-fos-license
2024-02-25T05:21:23.027Z
2024-02-01T00:00:00.000
267847302
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "81ca5e7529d9021dd2ba033ab4eff9a7f11a2880", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42574", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "sha1": "81ca5e7529d9021dd2ba033ab4eff9a7f11a2880", "year": 2024 }
pes2o/s2orc
Pulsed Hyperoxia Acts on Plasmatic Advanced Glycation End Products and Advanced Oxidation Protein Products and Modulates Mitochondrial Biogenesis in Human Peripheral Blood Mononuclear Cells: A Pilot Study on the “Normobaric Oxygen Paradox” The “normobaric oxygen paradox” (NOP) describes the response to the return to normoxia after a hyperoxic event, sensed by tissues as an oxygen shortage, up-regulating redox-sensitive transcription factors. We have previously characterized the time trend of oxygen-sensitive transcription factors in human PBMCs, in which the return to normoxia after 30% oxygen is sensed as a hypoxic trigger, characterized by hypoxia-induced factor (HIF-1) activation. On the contrary, 100% and 140% oxygen induce a shift toward an oxidative stress response, characterized by NRF2 and NF-kB activation in the first 24 h post exposure. Herein, we investigate whether this paradigm triggers Advanced Glycation End products (AGEs) and Advanced Oxidation Protein Products (AOPPs) as circulating biomarkers of oxidative stress. Secondly, we studied if mitochondrial biogenesis was involved to link the cellular response to oxidative stress in human PBMCs. Our results show that AGEs and AOPPs increase in a different manner according to oxygen dose. Mitochondrial levels of peroxiredoxin (PRX3) supported the cellular response to oxidative stress and increased at 24 h after mild hyperoxia, MH (30% O2), and high hyperoxia, HH (100% O2), while during very high hyperoxia, VHH (140% O2), the activation was significantly high only at 3 h after oxygen exposure. Mitochondrial biogenesis was activated through nuclear translocation of PGC-1α in all the experimental conditions. However, the consequent release of nuclear Mitochondrial Transcription Factor A (TFAM) was observed only after MH exposure. Conversely, HH and VHH are associated with a progressive loss of NOP response in the ability to induce TFAM expression despite a nuclear translocation of PGC-1α also occurring in these conditions. This study confirms that pulsed high oxygen treatment elicits specific cellular responses, according to its partial pressure and time of administration, and further emphasizes the importance of targeting the use of oxygen to activate specific effects on the whole organism. Introduction The appropriate availability and release of oxygen is fundamental to ensure cell functions, including cellular metabolism and growth.A relative decrease in O 2 supply, or hypoxia, may induce acute and chronic pathologies such as cancer, cardiovascular disease, chronic obstructive pulmonary disease (COPD), metabolic disorders, and other stress responses [1,2].However, some interesting positive benefits have been recently published using intermittent or "pulsed" hypoxia, showing interest in anti-aging, mitochondrial, or wellness and training effects [3][4][5][6][7][8][9][10]. Other research focuses on the hyperbaric side of hyperoxia, finding many benefits and positive outcomes in numerous diseases and other similar outcomes such as hypoxia in aging, post-traumatic stress disorders, or training, among others [11][12][13][14][15][16][17][18][19][20][21].Conversely, chronic oxidative stress can be a consequence of excessive O 2 exposure characterized by a persistent condition of an imbalance between the generation of ROS and the ability of the endogenous antioxidant system to detoxify them [22,23]. 
Oxidative stress induces glycoxidation reactions and modifications of free amino groups in proteins, resulting in the generation of Advanced Glycation End Products (AGEs) and Advanced Oxidation Protein Products (AOPPs).Both AGEs and AOPPs are considered circulating markers of oxidative stress in several pathologic conditions when an imbalance among oxidant and antioxidant responses occurs [24]. Although not intuitive, hypoxia also favors the formation of reactive oxygen species (ROS) and in the long run leads to an increase in oxidative stress, a condition that induces a strong prolonged inflammatory response [25][26][27].In parallel, situations such as intense aerobic exercise require high amounts of O 2 consumption, which consequently leads to an increased metabolism, an increase in ROS, and oxidative stress [28].During light-intensity exercise, breathing oxygen-enriched mixtures, or, on the contrary, hypoxic air, produces similar reactions [29,30]. To date, it is known that cells respond to O 2 fluctuations by activating two transcription factors: hypoxia-inducible factor-1α (HIF-1α) and Nrf2, which activate the transcription of multiple target genes [31].Related to this, we demonstrated in previous works that pulsed hyperoxia induces a "hypoxic like" response, defined as a "normobaric oxygen paradox" (NOP) [32][33][34][35][36].In particular, we observed that the return to a condition of normoxia after the induction of both mild and severe hyperoxia leads to an increase in HIF1 levels in human peripheral blood mononuclear cells (PBMCs) [33].Furthermore, we also observed an increase in Nrf2 activation in the same experimental conditions [33].Indeed, Nrf2 is activated in response to different types of stress related to a high flow of O 2 , inducing the repair or degradation of damaged macromolecules [37].As mentioned before, Nrf2 is primarily known as an important key modulator of cellular metabolism and its target genes are involved in GSH synthesis, scavenging mitochondrial ROS (peroxiredoxin 3, Prx3), xenobiotic metabolism (NAD(P)H, quinone oxidoreductase 1, NQO1), and drug elimination (glutathione S-transferase, GST), thus maintaining ROS homeostasis [38,39]. More recently, new functions for Nrf2 have been highlighted concerning its ability to activate the transcription of genes involved in mitochondrial biogenesis.Mitochondria are double-membrane organelles that provide a dynamic and multifaceted role in cell signaling and metabolism.They also play an important role in cellular redox homeostasis through their involvement in ROS metabolism as one of the main sites of ROS production in the cell [40].Mitochondria are organelles capable of dividing and growing in mass and size through the mechanism of mitochondrial biogenesis, which can be influenced by different factors, such as temperature, oxidative stress, and O 2 fluctuations [41].Mitochondrial biogenesis can be regulated by oxidative-stress-sensitive transcription factors such as NRF1, NRF2, and mitochondrial transcription factor A (TFAM) [23].The synthesis of the latter is regulated by Nrf2.The mechanism is considered as follows: when Nrf2 binds to the EpRE sequences of NRF1, TFAM, together with the transcriptional coactivator peroxisome proliferator-activated receptor-c coactivator-1 α (PGC-1α), elicits the synthesis of new mitochondria [42]. 
Moreover, a regulatory mechanism between PGC-1α and Nrf2 has been shown, which is important to the enabling of physiological mitochondrial functions and oxidative metabolism in different tissues.In particular, upon metabolic stress (fasting) and/or oxidative imbalance (GSH depletion), p53 binds to the PPARGC1A sequence within the promoter of both human and mouse genes and positively regulates PGC-1α expression, which in turn co-activates Nrf2 gene expression and stimulates the up-regulation of antioxidant genes mitochondrial SOD2 and γGCS.Based on these premises, we hypothesize that PGC-1α and Nrf2 contribute together to counteract oxidative stress and induce mitochondrial biogenesis [43].In fact, the protective roles of PGC-1α and Nrf2 in various pathologies have been highlighted [44].To the best of our knowledge, in the context of molecular signaling induced by O 2 fluctuations, there are no data available regarding the PGC-1α-Nrf2 pathway and TFAM. Here, we report that NOP induces the activation of a redox-mediated PGC1-α-NRF2 pathway, intersects mitochondrial-protein activation (TFAM), and modulates mitochondrial cellular adaptive responses to redox imbalance. Pulsed Hyperoxia Increases AGEs and AOPP Plasma Levels in Humans We investigated whether O 2 fluctuations induced oxidative stress markers.The plasma levels of AGEs and AOPPs were measured in healthy subjects after one hour of exposure to mild (MH), high (HH), and very high (VHH) hyperoxia, corresponding to 30%, 100%, and 140% O 2 , respectively.The exposure to 30% and 100% O 2 significantly affected plasma levels of AGEs, with a clear increase at 3 h and 24 h after oxygen exposure (Figure 1a,b).Conversely, VHH exposure increased the plasma level of AGEs, peaking at 0.5 h after the return to normoxia (Figure 1c).The same trend was observed in AOPP plasma levels, with HH inducing a plasmatic increase at 3 and 24 h after the return to normoxia (Figure 1e), but also in the case of VHH, where a significant increase in AOPP occurs earlier (Figure 1f) but continues for a longer period, up to 24 h.These results confirm the induction of oxidative stress in human plasma during the NOP effect, with different responses in time.
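The data handling behind Figure 1 (percent change from each subject's baseline, tested with a repeated-measures ANOVA) can be sketched as follows. The values are synthetic (4 subjects, 4 time points) and Dunnett's post hoc comparisons are not reproduced; the sketch only illustrates the transformation and the time-effect test described in the figure legend.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Sketch of the analysis behind Figure 1: values are expressed as percent change from each
# subject's baseline, and the time effect is tested with a repeated-measures ANOVA
# (Dunnett's post hoc test is not reproduced here). All numbers are synthetic.

raw = pd.DataFrame({
    "subject": [1, 2, 3, 4] * 4,
    "time_h":  [0.0] * 4 + [0.5] * 4 + [3.0] * 4 + [24.0] * 4,
    "AGE_au":  [10.0, 11.0, 9.5, 10.5,    # baseline
                11.0, 12.5, 10.0, 11.5,   # 0.5 h
                13.0, 14.0, 12.0, 13.5,   # 3 h
                14.5, 15.0, 13.0, 14.0],  # 24 h
})

baseline = raw[raw.time_h == 0.0].set_index("subject")["AGE_au"]
raw["pct_change"] = 100.0 * (raw["AGE_au"] - raw["subject"].map(baseline)) / raw["subject"].map(baseline)

print(raw.groupby("time_h")["pct_change"].mean())    # mean percent change per time point
print(AnovaRM(raw, depvar="AGE_au", subject="subject", within=["time_h"]).fit())
```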
Pulsed Hyperoxia Regulates Peroxiredoxin 3 Levels Nrf2 directly regulates mitochondrial ROS homeostasis by promoting detoxification of mitochondrial peroxides through Prx3 [39].Hence, to determine the role of ROS in the induction of Nrf2-mediated Prx3 expression, we determined the effect of 30%, 100%, and 140% O 2 administration on human PBMCs at the same time intervals indicated above.The Western blot analysis reported in Figure 2 shows that Prx3 was first diminished after 3 h and 30 min, respectively, for 30% and 100%, then showed a strong increase up to 24 h.The VHH exposure triggered a different reaction, showing a remarkable and unique peck 3 h post hyperbaric oxygen, reaching similar levels as mild oxygen exposures, but this did not last.These results suggest that the ROS-mediated activation of Nrf2 triggers the up-regulation of Prx3 protein expression, resulting in the protection of PBMCs from oxidative stress associated with the hyperoxic stimuli in a different way from normobaric oxygen levels, showing a prolonged, significant increase present after 24 h, while after 140% oxygen exposure, a single significant increase appears after 3 h. Pulsed Hyperoxia Leads to PGC-1α Up-Regulation, but Only MH Results in TFAM Activation We analyzed the protein content of PGC-1α by Western blot analysis at 0.5, 3, and 24 h after 30%, 100%, and 140% O2 administration.As shown in Figure 3, PGC-1α significantly increased after 30 min, up to 24 h in MH (Figure 3a).HH treatment induced PGC-1α nuclear translocation only at 3 h (Figure 3b).VHH exposure was associated with an increase similar to what was observed after MH exposure (Figure 3c).These results indicate that the activation of PGC-1α precedes that of Nrf2, which is activated only at 3 h, suggesting a possible synergy between the two transcription factors.Similarly, HH treatment always causes an induction of PGC-1α at 30 min that goes along with the activation of Nrf2, which begins to increase at 30 min but peaks at 3 h.All together, these data suggest that Nrf2 is involved in the transcription and subsequent activation of PGC-1α in our experimental conditions. 
Even though all tested oxygen exposures were associated with a significant activation of PGC-1α, only MH exposure resulted in a significant TFAM release and in the activation of mitochondrial biogenesis (Figure 4). Materials and Methods This study was conducted in accordance with the Declaration of Helsinki [45] and approved by the Academic Ethical Committee of Brussels (B200-2020-088).Every participant was fully informed of the procedures and was able to quit at any step of the procedure; written informed consent was obtained. Experimental Protocol After the obtention of full, written informed consent, twelve healthy non-smoking subjects (4 females and 8 males) enrolled for the experiment.These were physiotherapy students aged 21.8 ± 2.3 and 21.25 ± 2.1 years old (mean ± SD), with 1.75 m ± 6.6 height and 69.0 ± 8.7 kg weight.Participants were prospectively randomized into three groups, each comprising 4 persons, and exposed to different oxygen PO 2 levels for 1 h.The first group received 30% O 2 (0.3 bar; 300 hPa PO 2 ) by means of an orofacial non-rebreather mask with a reservoir; the breathing gas flow (from a pressurized gas tank with the appropriate mixture) was set at 10 L/min, with care being taken to fit and tighten the mask on the subject's face.Group two received 100% O 2 (1.0 bar, 1000 hPa PO 2 ) from an oxygen concentrator (NewLife Intensity, CAIRE Inc., Ball Ground, GA, USA) with a similar non-rebreathing mask setup.Group three received 140% O 2 (1.400 bar, 1400 hPa PO 2 ), using a one-person hyperbaric chamber (Biobarica, Buenos Aires, Argentina); the subject was breathing pure oxygen, 10 L/min, from a non-rebreathing mask inside the pressurized chamber.
Venous blood samples were collected at baseline (before oxygen exposure), 30 min, 3 h, and 24 h after exposure.Subjects were instructed not to take any medication or perform strenuous physical exercise 24 h before and, stay in altitude up to 2 weeks before and during the entire study protocol and until blood collection was complete. Fifteen milliliters of blood were collected in ethylenediaminetetraacetic acid (EDTA).Human PBMCs were isolated from whole blood using a standard Histopaque-1077 (Sigma-Aldrich, Burlington, MA, USA) precipitation protocol, according to the manufacturer's instruction, before oxygen breathing (time 0), as well as at 0.5, 3, and 24 h after exposure to hyperoxia.The absence, of hemolysis in plasma was confirmed by measuring the absorbance of plasma at 414 nm, using an absorbance of 0.2 as a cut-off. Plasma Analysis of AGEs and AOPPs Determination of AGEs was based on the spectrofluorimetric detection as previously reported [46].Briefly, blood plasma was diluted 1:50 with phosphate-buffered saline (PBS) pH 7.4 and fluorescence intensity was recorded at the emission maximum (~440 nm) upon excitation at 350 nm (spectrofluorometer, Shimadzu, Carlsbad, CA, USA).The serum concentration of AGEs was normalized to the total protein amount, determined by the Bradford assay and expressed in arbitrary units (AU) per gram of protein (AU/g prot). Determination of AOPPs was based on spectrophotometric detection, as previously described [46].Blood plasma (100 µL) or the same volume of chloramin T (0-100 µmol/L), for calibration, was diluted 1:5 with PBS pH 7.4.Subsequently, 25 µL of 1.16 M KI and 50 µL of acetic acid were added to the diluted solutions and absorbance was measured immediately at 340 nm (spectrofluorometer, Shimadzu, Carlsbad, CA, USA).The concentration of AOPPs is expressed in chloramine T units (µmol eq Cl T/L). Statistical Analysis All statistical tests were performed using a standard computer statistical package, GraphPad Prism version 9.00 for MacOS (GraphPad Software, San Diego, CA, USA). Normality of data was verified by means of Kolmogorov-Smirnoff or Shapiro-Wilk tests, allowing us to assume a Gaussian distribution.Since each participant was their own control, data were analyzed using repeated measures ANOVA with Dunnett's multiple comparison or Tukey's post hoc test, and if the Gaussian distribution was not ascertained, Friedman with Dunn's post-test was preferred. A threshold of p < 0.05 was considered statistically significant.All data are presented as mean ± standard error on the mean (SEM). Discussion In this study, we identified changes in the plasma levels of AGEs and AOPPs in healthy human subjects after different O 2 concentration exposures.We also examined the involvement of the mitochondrial response to oxygen-fluctuation-induced oxidative stress by the activation of PRX3 and the modulation of mitochondrial biogenesis. Previous observations indicate that breathing 30% and 100% oxygen elicited a significant increase in plasmatic ROS, with a peak at 8 h after oxygen breathing, while the exposure to 140% (1.4 ATA) oxygen was associated with an increase in plasmatic ROS at 2 h after the return to normoxia [33,47,48]. 
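The two plasma read-outs described in the assay paragraphs above reduce to simple arithmetic, sketched below: AGE fluorescence normalized to total protein, and AOPP absorbance converted through a linear chloramine T calibration. All numbers are illustrative assumptions, not measured values.

```python
import numpy as np

# Sketch of the two plasma read-outs described above. All numbers are illustrative.
# (1) AGEs: fluorescence at ~440 nm (excitation 350 nm) normalized to total protein -> AU/g protein.
# (2) AOPPs: absorbance at 340 nm converted to chloramine T equivalents via a linear calibration.

# --- AGEs ---
fluorescence_au = 850.0          # arbitrary fluorescence units of the diluted plasma
protein_g_per_l = 70.0           # total protein from the Bradford assay, g/L
print(f"AGEs: {fluorescence_au / protein_g_per_l:.1f} AU/g protein")

# --- AOPPs ---
std_conc = np.array([0, 25, 50, 75, 100])          # chloramine T standards, umol/L
std_abs  = np.array([0.02, 0.11, 0.20, 0.31, 0.40])  # corresponding absorbance at 340 nm
slope, intercept = np.polyfit(std_abs, std_conc, 1)   # calibration line: conc = slope*abs + intercept

sample_abs = 0.18
print(f"AOPPs: {slope * sample_abs + intercept:.1f} umol chloramine T eq/L")
```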
In line with these results, we observed a consequent increase in oxidation end products in human plasma.In particular, the exposure to 30% and 100% oxygen increased the plasma levels of AGEs (Figure 1a,b) at 3 and 24 h, while 140% oxygen induced a higher significant increase in AGEs faster, with a peak at 0.5 h, and progressively returned close to baseline levels after 24 h (Figure 1c). A similar trend was also observed for AOPPs for 30% and 100% oxygen (Figure 1c,d), while for 140% oxygen a progressive significant increase starting from 0.5 h up to 24 h (Figure 1e) showed an opposite trend to AGEs for the same exposure. Advanced glycation end products (AGEs) are derived from nonenzymatic glycation occurring between the reactive carbonyl group of a reducing sugar and nucleic acids, lipids, or proteins, causing underlying tissue damage [49].In addition to nonenzymatic glycation, AGEs can also be formed through the polyol pathway and lipid peroxidation.AGEs produce reactive oxygen (ROS) and nitrogen (RNS) species, as well as oxidative stress and inflammation [50].We already acknowledged such oxidative stress increases with several markers, but without measuring the Maillard reaction; these results confirm our previous findings of higher oxidative stress for hyperbaric oxygen exposures at 1.4 ATA (140%) and 2.5 ATA after a single session [47] but, in the present results, with a faster recovery for 140%.These results were expected since we already found lipidic peroxidation after all levels of oxygen exposure, from hypoxia to hyperbaric hyperoxia, sometimes still present 48 h post exposure [27,47,51]. Interestingly, Advanced Oxidation Protein Products (AOPPs), a biomarker of oxidantmediated protein damage which can increase ROS levels, follow the same tendency of AGEs except for the 140% (1.4 ATA) exposure.We are unable to fully explain this trend; however, we observed a diminished cellular response compared to other exposures at 1.4 ATA [52].It is known that oxygen exposure elicits "Oxy-Inflammation", a term proposed by Valacchi et al. [53] for a condition characterized by the alteration of systemic inflammation and severely compromised redox balance, and we can understand why several hyperbaric sessions (at 2.5 ATA) are needed to adapt and counteract inflammatory or oxidative stress [54].Lower oxygen levels or even "hyperbaric air" have faster cellular hormetic responses [33,55]. The increase in oxidative stress and its metabolic consequences was confirmed by measuring, in human PBMCs, the cytosolic level of Peroxiredoxin 3 (Prx3), a mitochondrial antioxidant protein which serves as a major antioxidant enzyme and eliminates approximately 90% of H 2 O 2 in mitochondria [56]. We previously demonstrated that the administration of pulsed hyperoxia induces a "paradoxical" hypoxic response characterized by Nrf2 activation [33].In particular, we observed an increase in Nrf2 nuclear protein levels at 3 h when 30% and 140% O 2 were administered.Nrf2 levels remained constant up to 24 h.Conversely, the administration of 100% O 2 was associated with an increase in Nrf2 levels, starting at 30 min after the return to normoxia and levelling back to the baseline at 24 h [33]. The understanding of the roles of Nrf2 and PGC-1α in the regulation of oxidative stress and in maintaining mitochondrial homeostasis could provide novel information to support the treatment of various pathologies associated with O 2 fluctuations or to develop a novel approach to oxygen use [52,57]. 
One of the accepted mechanisms is the following: PGC-1α activates Nrf2 via the inhibition of GSK3b.GSK3b is inactivated by p38, which is positively regulated by PGC-1α.Therefore, the PGC-1α/p38/GSK3b/Nrf2 cascade is the most probable pathway for mitochondrial DNA transcription [58]. It is also possible that Nrf2 and PGC-1α form a feedback loop together [38]; our results show a largely parallel evolution of both (see Figure 5a-c).In our experimental conditions, despite the activation of PGC-1α, we observed a significant activation of mitochondrial biogenesis, in terms of TFAM expression, only following the exposure to mild hyperoxia.This suggests that the ability of PGC-1α to activate NFR2 and TFAM occurs when oxidative stress levels can be handled by antioxidant cellular responses.When oxidative stress levels overcome the cellular capacity to counteract oxidative stress, as previously reported, NF-kB activation takes over [33] and [33]) and PGC-1α in healthy subjects exposed to mild hyperoxia (a,d), high hyperoxia (b,e), and very high hyperoxia (c,f), corresponding to 30%, 100%, and 140% O 2 , respectively, for 1 h.Measurements were taken at baseline (before O 2 exposure), 30 min, 3 h, and 24 h after exposure by means of spectrofluorimetric and spectrophotometric detection.Data are reported in percentual changes from baseline (n = 4 subjects for each time point).Histograms' colors are meant to ease reading and correspond to post exposure time.Measurements were performed in triplicate.* p < 0.05; ** p < 0.01; *** p < 0.001; ns: non-significant; RM-ANOVA with Dunnet's post hoc test. In our experimental conditions, despite the activation of PGC-1α, we observed a significant activation of mitochondrial biogenesis, in terms of TFAM expression, only following the exposure to mild hyperoxia.This suggests that the ability of PGC-1α to activate NFR2 and TFAM occurs when oxidative stress levels can be handled by antioxidant cellular responses.When oxidative stress levels overcome the cellular capacity to counteract oxidative stress, as previously reported, NF-kB activation takes over [33] and the mitochondrial biogenesis response is lost. Moreover, erythropoietin (EPO) was found to activate mitochondrial biogenesis [59].Therefore, EPO can potentially activate both the Nrf2 and PGC-1α cascades.We have shown that in the NOP mechanism, varying oxygen levels below hyperbaric doses can, after one single session, activate EPO production up to 36 h.On the contrary, a single session of hyperbaric oxygen showed a reduction in plasmatic erythropoietin for 24 h [32,34,60,61]. Conclusions This study emphasizes the importance of targeting the use of oxygen to activate specific cellular responses [52,57].Further analysis is needed to understand how several sessions of different levels of oxygen breathing, for different durations, and with different in-between recovery periods, will modulate such responses. The Renaissance physician Paracelsus noted that, "Nothing is without poison-the poison is in the dose".The contemporary interpretation of this statement is that dose and effect move together in a predictably linear fashion and lower exposures to a hazardous compound will generally generate lower risks. Our results show that this "linearity" on reduced risk is not only present on the toxicity side, but also on the elicited response.In fact, it seems that in the first 24 h following a session, lower oxygen concentrations act more positively than higher levels of hyperoxia on mitochondrial biogenesis factors. 
We are aware that the number of subjects is small, and this limits the "power" of our study.Nonetheless, we consider this report a pilot study.Moreover, our data allow a better characterization of the complex spectrum of cell responses to pulsed oxygen concentration at the whole-organism level, resulting in a proof of principle study indicating the involvement of mitochondrial activity in the managing of oxidative stress.Additional studies are surely warranted to corroborate and confirm our observation. When an important production of ROS is present within the cytosol, (left part of the figure), pathways activated by Advanced Glycation End-products (AGEs) and Advanced Oxidation Protein Products (AOPPs) are initiated.AGEs trigger the AGE receptor (RAGE), leading to further formation of ROS and proinflammatory cytokines.AOPPs may induce oxidative stress through NADPH oxidases (Nox).As depicted in the lower left corner, NRF1-2 interaction, along with the transcriptional coactivator PGC-1α, triggers the synthesis of TFAM, facilitating mitochondrial biogenesis (indicated by the green dotted lines).This process indirectly contributes to ROS production since mitochondria serve as a significant ROS source.At the same time when a level of ROS is not to high and intermittent, TFAM maintains mitochondrial ROS balance by increasing the production of Prx3 (green dotted lines) thus facilitating the detoxification of mitochondrial peroxides via Prx3 (shown by the red dotted line from Prx3 to ROS) (Figure 6). gers the synthesis of TFAM, facilitating mitochondrial biogenesis (indicated by the green dotted lines).This process indirectly contributes to ROS production since mitochondria serve as a significant ROS source.At the same time when a level of ROS is not to high and intermittent, TFAM maintains mitochondrial ROS balance by increasing the production of Prx3 (green dotted lines) thus facilitating the detoxification of mitochondrial peroxides via Prx3 (shown by the red dotted line from Prx3 to ROS)(Figure 6).Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Figure 1 . Figure 1.AGE and AOPP production following 1 h hyperoxia.Percentual changes in plasma levels of AGEs and AOPPs in healthy subjects exposed to mild hyperoxia (a, d), high hyperoxia (b,e), and very high hyperoxia (c, f), corresponding to 30%, 100%, and 140% O2, respectively, for 1 h.Measurements were taken at baseline (before O2 exposure), 30 min, 3 h, and 24 h after exposure by means of spectrofluorimetric and spectrophotometric detection.Data are reported in percentual changes from baseline (n = 4 subjects for each time point).Histograms' colors are meant to ease reading and correspond to post exposure time.Measurements were performed in triplicate.* p < 0.05; ** p < 0.01; ns: non-significant; RM-ANOVA with Dunnet's post hoc test. Figure 1 . 
Figure 1.AGE and AOPP production following 1 h hyperoxia.Percentual changes in plasma levels of AGEs and AOPPs in healthy subjects exposed to mild hyperoxia (a,d), high hyperoxia (b,e), and very high hyperoxia (c,f), corresponding to 30%, 100%, and 140% O 2 , respectively, for 1 h.Measurements were taken at baseline (before O 2 exposure), 30 min, 3 h, and 24 h after exposure by means of spectrofluorimetric and spectrophotometric detection.Data are reported in percentual changes from baseline (n = 4 subjects for each time point).Histograms' colors are meant to ease reading and correspond to post exposure time.Measurements were performed in triplicate.* p < 0.05; ** p < 0.01; ns: non-significant; RM-ANOVA with Dunnet's post hoc test. Figure 2 . Figure 2. Mitochondrial Prx3 protein expression following 1 h hyperoxia.(a) Mild hyperoxia (30% O2); (b) high hyperoxia (100% O2); (c) very high hyperoxia (140% O2) before and after the recovery to normoxic conditions.In the above histograms, the picture shows a representative Western blot analysis.The density of immunoreactive bands was calculated using the software Quantity one (Bio-Rad) and data are shown as a ratio of PRX3/B-actin.Results are expressed as percentual change (n = 4) (mean ± SEM) in comparison to baseline (0); Histograms' colors are meant to ease reading and correspond to post exposure time.ns: not significant; *: p < 0.05, **: p < 0.01, ***: p < 0.001; RM-ANOVA with Dunnet's post hoc test. Figure 2 . Figure 2. Mitochondrial Prx3 protein expression following 1 h hyperoxia.(a) Mild hyperoxia (30% O 2 ); (b) high hyperoxia (100% O 2 ); (c) very high hyperoxia (140% O 2 ) before and after the recovery to normoxic conditions.In the above histograms, the picture shows a representative Western blot analysis.The density of immunoreactive bands was calculated using the software Quantity one (Bio-Rad, Hercules, CA, USA) and data are shown as a ratio of PRX3/B-actin.Results are expressed as percentual change (n = 4) (mean ± SEM) in comparison to baseline (0); Histograms' colors are meant to ease reading and correspond to post exposure time.ns: not significant; *: p < 0.05, **: p < 0.01, ***: p < 0.001; RM-ANOVA with Dunnet's post hoc test. Figure 3 . Figure 3. PGC-1α nuclear translocation following 1 h hyperoxia: (a) Mild hyperoxia (30% O 2 ); (b) high hyperoxia (100% O 2 ); (c) very high hyperoxia (140% O 2 ) before and after the recovery to normoxic conditions.In the above histograms, the picture shows a representative Western blot analysis.The density of immunoreactive bands was calculated using the software Quantity One (Bio-Rad) and data are shown as ratio of PGC-1α/Lamin B. Results are expressed as percentual change (n = 4) (mean ± SEM) in comparison to baseline (0); Histograms' colors are meant to ease reading and correspond to post exposure time.ns: not significant; *: p < 0.05, **: p < 0.01, ***: p < 0.001; RM-ANOVA with Dunnet's post hoc test. Int. J. Mol.Sci.2024, 25, x FOR PEER REVIEW 6 of 15 analysis.The density of immunoreactive bands was calculated using the software Quantity One (Bio-Rad) and data are shown as ratio of PGC-1α/Lamin B. Results are expressed as percentual change (n = 4) (mean ± SEM) in comparison to baseline (0); Histograms' colors are meant to ease reading and correspond to post exposure time.ns: not significant; *: p < 0.05, **: p < 0.01, ***: p < 0.001; RM-ANOVA with Dunnet's post hoc test. Figure 4 . Figure 4. 
TFAM activation following 1 h hyperoxia.Mild hyperoxia (30% O2); high hyperoxia (100% O2); very high hyperoxia (140% O2) before and after the recovery to normoxic conditions.In the above histograms, the picture shows a representative Western blot analysis.The density of immunoreactive bands was calculated using the software Quantity One (Bio-Rad) and data are shown as a ratio of TFAM/Lamin B. Results are expressed as percentual change (n = 4) (mean ± SEM) in comparison to baseline (0); Histograms' colors are meant to ease reading and correspond to post exposure time.ns: not significant; *: p < 0.05, **: p < 0.01, ***: p < 0.001; RM-ANOVA with Dunnet's post hoc test. Figure 4 . Figure 4. TFAM activation following 1 h hyperoxia.(a) Mild hyperoxia (30% O 2 ); (b) high hyperoxia (100% O 2 ); (c) very high hyperoxia (140% O 2 ) before and after the recovery to normoxic conditions.In the above histograms, the picture shows a representative Western blot analysis.The density of immunoreactive bands was calculated using the software Quantity One (Bio-Rad) and data are shown as a ratio of TFAM/Lamin B. Results are expressed as percentual change (n = 4) (mean ± SEM) in comparison to baseline (0); Histograms' colors are meant to ease reading and correspond to post exposure time.ns: not significant; *: p < 0.05, **** p < 0.0001; RM-ANOVA with Dunnet's post hoc test. Figure 6 . Figure 6.The bar graph in the upper right corner demonstrates the effect of NOP on redox-mediated PGC1α-NRF1-2 signaling, assessed by the upregulation of TFAM.It displays the relative expression levels of TFAM following 60 min of exposure to 30%, 100%, and 140% oxygen concentrations at various intervals (baseline, 0.5, 3, 24 h); **** p < 0.0001; ns: non-significant.Author Contributions: All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication: Conceptualization: C.B., D.F., F.V., S.B. and S.M.-S.; Investigation: S.B., C.B. and D.F.; Formal analysis: C.B., M.S., D.F., F.V. and S.M.-S.; Writing: F.V., C.B., D.F. and M.S.; Review and editing: S.B., M.S., F.V., D.F., C.B. and S.M.-S.All authors have read and agreed to the published version of the manuscript.Funding: This research received no external funding.Institutional Review Board Statement: This study was conducted in accordance with the Declaration of Helsinki and approved by the Bio-Ethical Committee for Research and Higher Education, Brussels, under reference B 200-2020-088 on 10 October 2020.
v3-fos-license
2022-11-05T06:17:04.792Z
2022-11-03T00:00:00.000
253303214
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40520-022-02286-7.pdf", "pdf_hash": "e870bc0f6817cf2c12252610babd727324792b3a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42576", "s2fieldsofstudy": [ "Medicine" ], "sha1": "6c1d680d76d40e487a575776c619d30ac29d7c8c", "year": 2022 }
pes2o/s2orc
Radiofrequency echographic multi-spectrometry and DXA for the evaluation of bone mineral density in a peritoneal dialysis setting Background The aim of this real-life cross-sectional explorative study was to compare radiofrequency echographic multi-spectrometry (REMS) with dual-energy X-rays absorptiometry (DXA) in the BMD assessment of patients receiving peritoneal dialysis (PD). Furthermore, we investigated the relationship between lumbar aortic calcifications (AOCs) and the DXA lumbar measurements. Methods Consecutive patients referring to the PD clinic of our hospital were included. Lumbar spine and femur scans were acquired with both techniques (including lumbar laterolateral DXA scans). The risk assessment of two fracture risk algorithms (FRAX® and DeFRA®) were compared. Cohen’s k coefficients were used to assess the inter-technique agreement in the classification of patients as osteoporotic. Lumbar AOCs were estimated semi-quantitatively on laterolateral DXA scans. Results 41 patients were enrolled. No significant differences were documented between the BMD T-scores measured through DXA or REMS at the femur. At the lumbar spine, the DXA anteroposterior mean T-score (− 0.49 ± 1.98) was significantly higher than both the laterolateral DXA (− 1.66 ± 0.99) and the REMS (− 2.00 ± 1.94) measurements (p < 0.01 vs both). No significant differences were found between the DXA and REMS fracture risk estimates with both algorithms. The inter-technique Cohen’s k coefficient (for the worst T-score, any site) was 0.421, p < 0.001. The discrepancy between the DXA laterolateral and anteroposterior lumbar T-score was positively associated with the AOCs extent and severity (r = 0.402, p < 0.01). Conclusions Our data showed a promising agreement, in a real-life PD setting, between DXA and REMS BMD assessment and in the consequent fracture risk estimation and confirm the AOCs interference on the diagnostic accuracy of lumbar DXA. Supplementary Information The online version contains supplementary material available at 10.1007/s40520-022-02286-7. Introduction Chronic kidney disease (CKD) is associated with a wide range of bone mineral and endocrine disturbances known as mineral and bone disease (CKD-MBD), a condition characterised by an increased risk of fragility fractures [1]. The fractures' impact on morbidity and mortality is especially burdensome among patients on renal replacement therapy (RRT) [2,3] as well as cardiovascular (CVD) complications [4,5]. Extensive vascular calcifications, a complication often seen in CKD patients, have been related to low bone mineral density (BMD) in this population [6,7]. Furthermore, also the specific RRT modality seems to play a role in this troublesome liaison. In terms of fracture risk, peritoneal dialysis (PD) has shown less detrimental effects than haemodialysis (HD) on early BMD changes [8]. On the other hand, the influence of the RRT modality on the development of vascular calcifications is unclear, with recent data suggesting that vascular calcifications might develop more in PD than HD [9]. Dual energy X-rays absorptiometry (DXA) is currently considered the gold-standard for the measurement of BMD in the clinical practice, and the Kidney Disease Improving Global Outcomes (KDIGO) 2017 recommendations suggest BMD testing to assess for fracture risk in CKD patients [1]. 
However, the DXA technique is affected by significant limitations, such as cumbersome machinery, use of ionising radiations with the necessity of shielded environments, and analytic limitations due to ectopic calcifications [10,11] or pathologic bone formation [12]. The bone densitometry by radiofrequency echographic multi-spectrometry (REMS) is a novel ultrasound-based technique that can reliably assess BMD at the lumbar spine, femoral neck and total hip. REMS has been already validated in post-menopausal osteoporosis [13] and it has been endorsed as a possible alternative to DXA [14]. However, data on populations affected by CKD-MBD are lacking. The aim of this real-life cross-sectional explorative study was to compare radiofrequency echographic multispectrometry (REMS) with DXA in the BMD assessment of patients receiving peritoneal dialysis (PD). Furthermore, we also explored the fracture risk estimates of two different algorithms when calculated with a DXA and REMSbased T-scores. Finally, we investigated the relationship between the extent and severity of lumbar aortic calcifications (AOCs) and their contribution in the overestimation of the DXA-derived lumbar spine BMD. Materials and methods For this study, we enrolled all the patients referring to the peritoneal dialysis clinic of the Nephrology Unit of our hospital between June and September 2021 who accepted to participate. Given the exploratory nature of the study, no sample size estimation was determined. The study was conducted within the protocol 1483CESC approved by our local Ethics Committee, in accordance with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Written informed consent was obtained from all participants included. Clinical and laboratory variables Data on the history of CKD, PD, bone-related medications, and fragility fractures were obtained by interviewing all patients during medical examinations and from the electronic medical records. The start date of CKD diagnosis was established after the observation of kidney damage or glomerular filtration rate < 60 mL/min/1.73 m 2 for three months or more, irrespective of the cause. Venous blood samples were drawn in the morning after an overnight fast. Measurements of serum calcium, phosphorus, and bone-specific alkaline phosphatase (BAP) were obtained using standard laboratory procedures at the central laboratory. Serum bone biomarkers Venous blood samples were drawn in the morning after an overnight fast. Serum samples were collected from all patients at the time of study recruitment, centrifuged, separated, and stored at −80 °C until measurement. An expert laboratory technician, who was blinded to patients' clinical details, measured 25-hydroxyvitamin D3 [25(OH)D] and intact parathyroid hormone (PTH). Specifically, 25(OH)D and PTH were measured using the IDS-ISYS Multi Discipline Automated Analyzer (Immunodiagnostic System, Boldon, UK) employing immuno-chemiluminescent technology on the fully automated microplate analyser Personal LAB (Adaltis, Rome, Italy). The intra-assay coefficients of variation (CV), in our laboratory, were 3% for PTH (interassay CV 6%), and 6% for 25(OH)D (inter-assay CV 9%). DXA A DXA scan was performed in all patients using the GE Lunar iDXA 194 system (GE Healthcare Lunar, Madison, WI, USA) by a single expert operator (R.N.), who was blinded to patients' clinical details. The employed DXA scanner underwent daily quality control and regular maintenance for the whole study period. 
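Returning briefly to the assay precision quoted for PTH and 25(OH)D, the coefficient of variation follows the standard definition CV% = (SD / mean) × 100, computed on replicate measurements of the same sample. A minimal sketch with invented replicate values:

```python
import statistics

# Quick illustration of the intra-assay coefficient of variation (CV%) quoted above.
# Replicate values are invented; a real CV would come from repeated measurements of one sample.

replicates = [41.8, 43.2, 42.5, 41.1, 42.9]   # e.g., PTH, pg/mL
cv_percent = statistics.stdev(replicates) / statistics.mean(replicates) * 100
print(f"Intra-assay CV: {cv_percent:.1f}%")
```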
We obtained BMD measurements expressed as T-scores and Z-scores at both the anteroposterior (AP) lumbar spine (L1-L4) and femur (neck and total hip). Trabecular bone score (TBS) was obtained as well (GE TBS INsight 3.0.3.0). Latero-lateral scans (LL) for BMD measurement were performed at the lumbar spine (L2-L3), with the obtainment of T-scores and Z-scores. A T-score ≤ −2.5 was considered for the diagnosis of densitometric osteoporosis, while a Z-score < −2 was considered for the diagnosis of BMD below the expected range for gender and age. Vertebral fracture analysis (VFA) was performed in all patients in order to detect the presence of vertebral fractures. To score the AOCs extent, we used the score described by Kauppila et al. [15] and applied it at the LL lumbar spine scans acquired with DXA. As described in the original paper, lesions were graded as follows: 0, no aortic calcific deposits; 1, small scattered calcific deposits filling less than 1/3 of the longitudinal wall of the aorta; 2, one third or more, but less than two-thirds of the longitudinal wall of the aorta calcified; 3, two thirds or more of the longitudinal wall of the aorta calcified. The semiquantitative score is applied to both the anterior and posterior walls for each of the four vertebrae (L1-L4), thus giving a final score from 0 to 24. Supplementary Fig. 1, panel A, summarizes the score, while in panel B we report an example from one of the subjects included into the study. To obtain an estimate of the contribution of the AOCs when measuring BMD with the DXA lumbar spine AP scan, we calculated the difference between the AP T-score and the LL T-score at the lumbar spine. REMS A REMS scan performed by a trained expert operator (M.G.), who was blinded to patients' clinical details, using EchoStation (Echolight Spa, Lecce, Italy) was obtained for all the patients at the lumbar spine, femoral neck, and total hip, and BMD and T-score measurements were obtained at each site. Given the real-life setting of this study, all reports were included: no patient was excluded from the analysis. Fracture risk algorithms Two different fracture risk assessment tools, the Fracture Risk Assessment Tool (FRAX ® ) [16], and the FRAX-Derived Fracture Risk Assessment (DeFRA ® ), an algorithm derived from FRAX ® and based on data on fracture risk in the Italian population [17], was calculated for each patient, with the BMD data obtained from the DXA and REMS. For the calculation of the FRAX ® values, for subjects younger than 40 years old, the age of 40 was selected. In addition, for all patients, the variable n. 10, namely secondary osteoporosis ("disorder strongly associated with osteoporosis"), was selected. The femoral neck BMD for DXA and REMS were considered, and, only for DXA, both the TBS-adjusted and unadjusted values were obtained. For the calculation of the DeFRA ® ) values, for subjects younger than 50 years old, the age of 50 was selected. The worst T-score at either the AP lumbar spine, LL lumbar spine (only for DXA), femoral neck or total hip for both DXA and REMS was entered. Statistical analysis Given the exploratory nature of the study, a sample size of at least 40 subjects was established, primarily based on clinical judgment and practical considerations and not on formal statistical reasoning. Normality for all variables was tested by Shapiro-Wilk test. 
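Two of the simple calculations described in the Methods (the Kauppila AOC score and the selection of the inputs for the fracture-risk tools) can be illustrated with the short sketch below; all grades and T-scores are hypothetical.

```python
# Sketch of two calculations described in the Methods; all values are hypothetical.

# (1) Kauppila AOC score: anterior and posterior aortic wall grades (0-3) for each of L1-L4,
#     summed into a 0-24 total.
anterior  = {"L1": 0, "L2": 1, "L3": 2, "L4": 2}
posterior = {"L1": 0, "L2": 1, "L3": 1, "L4": 2}
aoc_score = sum(anterior.values()) + sum(posterior.values())
print(f"AOC score: {aoc_score}/24")

# (2) AOC contribution estimate (AP minus LL lumbar T-score) and the worst T-score among sites,
#     which is the value entered into DeFRA.
t_scores = {"AP lumbar (DXA)": -0.4, "LL lumbar (DXA)": -1.8,
            "Femoral neck (DXA)": -2.1, "Total hip (DXA)": -1.9}
ap_ll_discrepancy = t_scores["AP lumbar (DXA)"] - t_scores["LL lumbar (DXA)"]
worst_site, worst_t = min(t_scores.items(), key=lambda kv: kv[1])
print(f"AP - LL T-score difference: {ap_ll_discrepancy:+.1f}")
print(f"Worst T-score ({worst_site}): {worst_t:.1f}")
```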
To assess the inter-technique (DXA vs REMS) agreement for the diagnosis of densitometric osteoporosis at each site and at the worst site, we calculated Cohen's k coefficient. We considered values between 0.21 and 0.40 as fair, 0.41-0.60 as moderate, and 0.61-0.80 as substantial agreement [18]. Intraclass correlation coefficients (ICC) for single and average measures were also calculated for the agreement between the lumbar spine DXA AP and REMS T-scores, the lumbar spine DXA LL and REMS T-scores, and the femoral neck DXA and REMS T-scores. A one-way repeated measures analysis of variance (ANOVA), with Greenhouse-Geisser correction in case of violation of the sphericity assumption (as assessed through Mauchly's test) and with Bonferroni-adjusted post hoc comparisons, was used to compare the BMD data and fracture risk estimates acquired with DXA and REMS at the lumbar site (DXA AP and LL and REMS scans). A two-sided paired samples Student's t-test was used to compare the T-scores, Z-scores and FRAX® and DeFRA® values at the total hip and femoral neck measured with DXA and REMS. Differences in the AOCs score between the subgroups with and without vertebral fractures were tested through the Mann-Whitney U test. Spearman's rho was run to explore correlations between the AOCs score and the REMS and DXA T-scores. Two-sided p values of 0.05 or less were considered statistically significant. Data were analysed using SPSS software, Version 22 (SPSS, Inc., Chicago, IL, USA). Results We enrolled 41 patients. The anthropometric characteristics of the sample, biochemical parameters, and the data on fractures and medications are reported in Table 1. T-scores and Z-scores comparisons The mean values with 95% confidence intervals (CI) of the T-scores at the AP and LL lumbar spine measured by DXA and REMS are depicted in Fig. 1A, while the T-scores at the femoral neck and total hip are reported in Fig. 1B. At the lumbar spine, we found a statistically significant difference between the T-score of the AP DXA scan and both the LL DXA and REMS T-scores, while no difference was found between the LL DXA and REMS scans. No statistically significant difference was found between the DXA and REMS T-scores at either the femoral neck or total hip. When all sites were considered, 51.3% of the patients satisfied the criterion for densitometric osteoporosis when measured by DXA (the prevalence decreased to 43.6% when the LL scan was excluded) and 32.4% with REMS. At the lumbar spine, the mean DXA AP Z-score was 0.08 ± 1.16, the mean DXA LL Z-score was −0.62 ± 1.70, while the mean REMS Z-score was −0.75 ± 0.78. The one-way repeated measures ANOVA among the three Z-scores at the lumbar spine (DXA AP, DXA LL and REMS) was statistically significant (p = 0.006), and at the post hoc analysis we found significant differences between the DXA AP Z-score and the REMS Z-score (p = 0.004) and between the DXA AP Z-score and the DXA LL Z-score (p = 0.002). At the femoral neck, the mean DXA Z-score was −0.78 ± 1.01, while the mean REMS Z-score was −0.63 ± 0.65, p = NS. At the total hip, the mean DXA Z-score was −0.51 ± 1.0, while the mean REMS Z-score was −0.74 ± 0.68, p = NS. When all sites were considered, 15.4% of the patients satisfied the criterion for BMD below the expected range when measured by DXA (12.8% when the LL measurements were excluded), and 7% with REMS. TBS The mean TBS was 1.290 ± 0.146 (min-max: 0.985-1.583).
The TBS T-score was correlated with the T-scores measured through DXA at all sites (AP lumbar spine R² = 0.31, p < 0.01; LL lumbar spine R² = 0.21, p < 0.01; femoral neck R² = 0.27, p < 0.01; total hip R² = 0.31, p < 0.01), while it was not correlated with the REMS T-score at any site. TBS was not significantly different between fractured and non-fractured subjects and was moderately negatively correlated with the AOCs score (Spearman's rho = −0.407, p = 0.01). The TBS T-score was not correlated with disease duration. Risk assessment tools comparison No statistically significant differences in the DeFRA® or FRAX® outputs (both raw and TBS-adjusted) were found when calculated with the data from DXA or REMS (Fig. 2A, B, respectively). Agreement measures The Cohen's k coefficients for the diagnosis of densitometric osteoporosis were as follows: at the lumbar spine, 0.321 between REMS and LL DXA (p = 0.026, fair agreement) and 0.19 between REMS and AP DXA (p = NS); at the femoral neck, 0.445 (p < 0.01, moderate agreement); and at the total hip, 0.784 (p < 0.001, substantial agreement). When we tested the agreement considering the worst T-score among all the different sites, the coefficient was 0.421 (p < 0.001, moderate agreement). The Cohen's k coefficient for the diagnosis of BMD below the expected range (worst site considered) between DXA and REMS was 0.633, p < 0.01 (substantial agreement). The calculated ICCs were in line with the Cohen's k analysis, showing the strongest association between DXA and REMS at the femoral neck and the weakest association at the lumbar spine between the AP DXA and the REMS measurements (Supplementary appendix). No significant difference was found in the AOCs score between the fractured and non-fractured subjects. Correlations between BMD (DXA- and REMS-measured) and AOCs We found a statistically significant positive correlation of moderate strength between the total calcification score and the difference between the DXA AP T-score and the DXA LL T-score at the lumbar spine (p < 0.01, Spearman's rho = 0.402), Fig. 3. On the other hand, we found a significant negative correlation between the total AOCs score and the T-score at all sites (both DXA and REMS); the only exception was the AP DXA (Table 2).
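For readers wishing to reproduce this type of agreement analysis, the sketch below shows how Cohen's k, Spearman's rho and the Mann-Whitney comparison described in the statistical analysis section can be computed in Python with scikit-learn and SciPy. The arrays are invented placeholders, not the study data, and the ICC step is only indicated.

```python
# Illustrative re-creation of the agreement statistics; the data below are
# invented placeholders, not the study measurements.
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-patient classifications (1 = densitometric osteoporosis, 0 = not)
dxa_osteoporosis  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
rems_osteoporosis = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 1])
kappa = cohen_kappa_score(dxa_osteoporosis, rems_osteoporosis)
print(f"Cohen's kappa: {kappa:.3f}")  # interpreted with the 0.21/0.41/0.61 bands [18]

# Hypothetical continuous scores for the correlation analyses
aoc_score   = np.array([2, 5, 0, 12, 7, 3, 9, 1, 15, 6])
rems_tscore = np.array([-1.8, -2.6, -0.9, -3.1, -2.2, -1.5, -2.9, -1.1, -3.4, -2.0])
rho, p = stats.spearmanr(aoc_score, rems_tscore)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")

# Mann-Whitney U test for AOC scores in fractured vs non-fractured subgroups
fractured, non_fractured = aoc_score[:4], aoc_score[4:]
u, p = stats.mannwhitneyu(fractured, non_fractured, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
# ICCs for the T-score agreement can be computed analogously, e.g. with
# pingouin's intraclass_corr (not shown here).
```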
For these reasons, REMS may help overcome these limitations in special populations such as CKD patients. In addition, the positive correlation between the calcification score and the DXA LL-AP T-score discrepancy supports the role of AOCs as culprits for the artefactual BMD overestimation with DXA. Though we did not observe a significant difference in the calcification score between the fractured and non-fractured subjects, arguably because of the underpowered sample size, we did find a significant negative correlation between the calcification score and the BMD measured with both imaging techniques. Our data therefore confirm the association between systemic skeletal involvement and AOCs also in patients undergoing PD. [Fig. 3 caption: Scatter plot reporting the relationship between the difference between the AP and LL T-scores (ΔT-scores AP-LL; y-axis) and the aortic calcification score (x-axis); significance and R² refer to the Pearson's correlation.] This finding is in line with other clinical data reporting the worsening of CKD-MBD with the severity of the kidney damage and its disease duration [21]. Future studies and longitudinal data should focus on specifically testing this hypothesis and on assessing the sensitivity to change (over time and after treatment with bone-acting agents) of the REMS technique, comparing it to the current gold standard. Interestingly, we confirmed a negative correlation between the AOCs score and TBS. This is in line with a previous study on dialysis patients that used similar methods [22], and corroborates the potential usefulness of techniques assessing bone quality in CKD patients. Finally, in our cohort, only a minority of patients (6%, N = 2) showed increased levels of PTH. This is somewhat difficult to explain, as more than 50% of subjects receiving PD usually show increased PTH serum concentrations [23]. Presently, we do not think that this observation could have significantly influenced our findings in terms of imaging comparisons, though further studies are warranted to investigate the relationship between serum and imaging biomarkers of osteometabolic health in CKD/PD patients. Our study has its limitations. First, we emphasize that this is an exploratory study with a limited sample size, not sufficient to run a validation process, and the absence of a control group represents a major limitation. In addition, the AOCs score adopted was originally developed for X-rays and not for DXA. However, previous studies have already applied it to LL DXA evaluation, with good correlation coefficients with X-rays [24] and reproducibility data in patients receiving haemodialysis [25]. Clearly, longitudinal designs are needed to scrutinize the REMS sensitivity to change over time and after treatment. In conclusion, this study shows a promising agreement, in a real-life PD setting, between the DXA and REMS BMD values and in the consequent fracture risk assessment. The availability of a novel technique for the assessment of BMD, characterised by nimble machinery, absence of ionising radiations and good robustness to measurement artifacts, could be extremely useful in everyday clinical practice.
Healing the wounds of marine mammals by protecting their habitat Important marine mammal areas (IMMAs) — ‘discrete habitat areas, important for one or more marine mammal species, that have the potential to be delineated and managed for conservation’ (IUCN Marine Mammal Protected Areas Task Force 2018, p. 3) — were introduced in 2014 by the IUCN Marine Mammal Protected Areas Task Force to support marine mammal and wider ocean conservation. IMMAs provide decision-makers with a user-friendly, actionable tool to inform them of the whereabouts of habitat important for marine mammal survival. However, in view of their non-prescriptive, evidence-based and biocentric nature, the conservation effectiveness of IMMAs is strictly dependent on politicians’ willingness to make use of them. It has been the customary task of advocacy non-governmental organisations to lobby decision-makers to stimulate respect for environmental law, but the scientific community is increasingly joining this effort. Scientists can effectively strengthen a healthy relationship between scientific objectivity and political advocacy without damaging the credibility of conservation science. Thus, those undertaking the identification of IMMAs can be among those responsible for strongly advocating the implementation of IMMAs and other conservation initiatives. INTRODUCTION The oceans are being denuded of the life that supports and determines what the ocean is and what it means to humans. Many marine species may be going extinct even before they are named or properly studied (Snelgrove 2010). The problems of habitat degradation and destruction, chemical and noise pollution, disturbance from ship traffic and the escalating signs of climate disruption define our age (Halpern et al. 2008, Notarbartolo di Sciara 2015. The Living Planet Index for marine organisms, monitoring 5829 populations from a complex of 1234 mammal, bird, reptile and fish species, shows a decline of 49% based on trends observed between 1970 and 2012 (Tanzer et al. 2015). The level of marine species' extinctions caused by human activities remains lower than that of terrestrial species, yet it is impossible to avoid concluding that humans have profoundly affected life in the oceans through the widespread alteration of ecosystem structure and functioning (McCauley et al. 2015, Payne et al. 2016. Marine mammals, including cetaceans, pinnipeds, sirenians, sea otters and polar bears, are particularly vulnerable to human impacts due to their life history traits (Lotze et al. 2017). Fishermen use dolphins to find tuna and other fish, incidentally catching and killing the dolphins as part of fisheries bycatch. Others hunt whales and dolphins directly; run into them with ships; or displace or kill them with navy sonar, seismic exploration, oil-and-gas extraction and discharges of harmful substances at sea (Avila et al. 2018). Out of a total of 130 marine mammal species, the status of 38 (29.2%) is classified in the IUCN's Red List of Endangered Species in a threatened category ('Critically Endangered', 'Endangered', or 'Vulnerable'), while 31 (23.8%) are still listed as 'Data De ficient' (IUCN 2020). Most populations of some large whales, such as humpback whales Megaptera novaeangliae, grey whales Eschrichtius robustus and southern right whales Eubalaena australis, have shown substantial recoveries in the decades following the decrease in hunting pressure (Clapham 2016, Zerbini et al. 
2019), yet most other marine mammal populations persist only as fragments of their former abundance. As the awareness of threats to marine mammals and other ocean species increased, the imperative emerged in multilateral environmental agreements to adopt targeted conservation policies. Marine mammals have benefitted, perhaps beyond their intrinsic conservation importance, with the consideration that these species are potent conservation icons (Parsons et al. 2015, Mazzoldi et al. 2019. Marine mammals also function as flagship and umbrella species, extending the benefits of their protection to other less visible species, thus enhancing the value of conservation action (Kalinkat et al. 2017). Agreements directed toward conserving marine mammals, amongst other species, include the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES, signed in 1973), the Convention on the Conservation of Migratory Species of Wild Animals (Bonn Convention of 1979) and the Convention on Biological Diversity (CBD of 1992), followed by many other agreements and regulations having regional scope, and reflected in turn in widespread national legislation. On paper, most of the world's marine mammal species are protected from being killed or harassed, or having their parts traded across national borders. Yet, even this protection means little when it comes to the exploitation of commercial fish and interaction with fishing gear, which are often the cause of marine mammal entanglements, incidental catch and depletion of their prey. (Brownell et al. 2019, Jefferson 2019; and populations of the dugong Dugong dugon across wide portions of its range (Marsh et al. 2011). Not only is all life in the ocean under full-scale attack, but also the complex conditions for life in the sea, the habitats and ecosystems, are being altered such that one day it may be impossible to reconstruct the conditions for a healthy ocean as we once knew it. Concerned by the results of scientific investigations revealing the plight of many marine mammal species and populations, most governments and members of the public today generally want to save marine mammals. Still, translating what is on paper into action to mitigate negative effects is proving difficult in view of the 'out-of-sight, out-of-mind' nature of fisheries interactions, shipping and tourism impacts, environmental contaminants and climate disruption. There are occasional localised successes, but in most cases, these are insufficient to counter widespread population declines (Marsh et al. 2003). Conservation failures stem in large part from the false perception by decision makers that by the sole fact of having adopted conservation policies they have solved the problem (Bearzi 2020). Several methods have been adopted to implement marine mammal conservation efforts (Twiss & Reeves 1999). Amongst these, resorting to habitat protection has been increasingly attempted (Hoyt 2011. Place-based conservation can be an effective tool for addressing threats to marine mammals (e.g. Gormley et al. 2012), although to date most marine protected area (MPA) efforts have been ecosystem-, not species-, oriented (Hoyt 2018). When it comes to whales, the prospect of protecting vast areas that cross national boundaries and span oceans beyond national jurisdictions is often impractical. One alternative is to employ protected area networks covering critical breeding or feeding habitat (Notarbartolo di Sciara et al. 
2016), which often leaves migratory corridors devoid of specific protection. Aside from these issues, a more fundamental problem with marine mammal habitat protection is that vast amounts of data remain unpublished, and indeed are fairly inaccessible for use in defining habitat (Hoyt 2018). For these reasons, marine mammals remain under-represented in place-based conservation solutions around the world (Hoyt 2011). To facilitate the consideration of a place-based approach to marine mammal conservation, to inform decision making and to integrate conservation activities into strategic planning, a user-friendly tool was developed. The tool aimed to identify habitats using criteria based on data gathered and peer-reviewed as part of a scientifically robust process. Such a tool, serving the purpose of outlining special areas that might be beneficial to the protection of specific marine mammal populations, and addressing the wide range of challenges to marine mammal conservation when planning human activities at sea, was launched in 2014 by the IUCN Joint Species Survival Commission/ World Commission on Protected Areas (SSC/ WCPA) Marine Mammal Protected Areas Task Force (hereafter the 'Task Force', www.marinemammal habitat .org). The first such tools were identified in the Mediterranean Sea in 2016. The Task Force adopted the name and matching acronym, inspired by that used for important bird and biodiversity areas, the so-called IBAs (Donald et al. 2019). Thus, the important marine mammal area (IMMA) was born (Hoyt & Notarbartolo di Sciara 2014). IMPORTANT MARINE MAMMAL AREAS IMMAs are designed to identify 'discrete habitat areas, important for one or more marine mammal species, that have the potential to be delineated and managed for conservation' (IUCN Marine Mammal Protected Areas Task Force 2018, p. 3). IMMAs are meant to integrate other marine spatial assessment tools, including the CBD's ecologically or biologically significant areas, biologically important areas as set out by the US government (Ferguson et al. 2015) and Australian legislation (Commonwealth of Australia, https://bit.ly/3anSZkU) and key biodiversity areas (KBAs) identified through the global KBA Standard (IUCN 2016). IMMAs have the potential to function as a marine mammal layer, indicative of biodiversity and ecosystem health in the world's oceans and relevant inland water bodies, for consideration by governments, inter-governmental organisations, conservation groups, industry, conservation scientists and the general public (Agardy et al. 2019). Thus, IMMAs are purely advisory; not prescriptive: they are not MPAs, although they could subsequently be used in marine spatial planning and specific conservation planning that might result in MPAs or other initiatives being taken. IMMAs are identified through a biocentric expert process that is independent of any political and socio-economic pressure or concern. Covering at this time most of the Southern Hemisphere and a small portion of the Northern Hemisphere (IUCN Marine Mammal Protected Areas Task Force 2020) ( Fig. 1), the present 158 IMMAs have been identified through international expert consultation and consensus. 
Regional workshops have been held in a number of regions, beginning with the Mediterranean (2016) and the Pacific Islands (2017). IMMAs are identified on the basis of 4 main criteria (species or population vulnerability; distribution and abundance; key life cycle activities; and special attributes such as distinctiveness and diversity) designed to capture critical aspects of marine mammal biology, ecology and population structure (IUCN Marine Mammal Protected Areas Task Force 2018). Only one criterion is needed to identify an IMMA, but that criterion must be supported by robust data. The identification of IMMAs is performed on a region-by-region basis. Initially, a call for submissions of 'Areas of Interest' (AoIs) in a particular region is broadly advertised and solicited within the scientific and conservation practitioners' communities; submissions are accepted from any person or institution. If the AoIs have one or more criteria that can be defended by data, they are then elaborated as candidate IMMAs (cIMMAs) during the regional expert workshops. Having undergone independent peer review after the regional workshop, if the criteria have been successfully met, cIMMAs formally become IMMAs and are posted on the e-Atlas (Fig. 1) and in an online searchable database. A cIMMA that has not reached a sufficient level of robustness but only needs minor adjustments to become an IMMA (adjustments that can be made without resubmission to a successive workshop) maintains its status as a cIMMA and is shown as such on the e-Atlas (Fig. 1); in contrast, cIMMAs that need substantial additional knowledge and a reassessment, including new collective scrutiny through a workshop and review, are posted on the e-Atlas as AoIs (Fig. 1). To address the concern that unstable environmental conditions (including climate change), distributional changes in a population and increasing ecological knowledge are likely to render original IMMA designations less useful and potentially obsolete (Agardy et al. 2019), a region-based review is built into the identification process with the aim that this should recur every 10 years. BEYOND DRAWING BOUNDARIES ON THE WORLD MAP Michelangelo sculpted his Pietà with a chisel. Yet, with only a chisel untouched by Michelangelo's hand, humans would have been deprived of the expression of his artistic genius. Like Michelangelo's chisel, IMMAs are but a tool, and can contribute little to the effort of enhancing marine mammal conservation status unless they are used by policy makers, decision makers and the various marine stakeholders. Unless IMMAs are used as a conservation tool, they are condemned to join other tools in the junk closet of 'marine conservation on paper' (Bearzi 2020). However, despite peer-reviewed and approved IMMAs having been around for only 3 years, their buy-in by a diverse array of stakeholders has been encouraging. As of early 2020, the Task Force had received 79 requests for IMMA shapefiles and metadata. Such requests, per se, are not proof of use, but they are an indicator of potential conservation action. Amongst requesters, 35% were from universities or other academic researchers, 22% were from non-governmental organisations (NGOs), 20% from industry or business, 18% from government and 5% from inter-governmental organisations. Most of the requests (41%) were motivated by a declared conservation intent, though the stated research and educational purposes, as well as some of the commercial purposes, would have conservation outputs as well.
Positive conservation examples include 2 proposed MPAs in Vietnam that received helpful support and acknowledgment of their value through IMMA identification. Yet, there are also cases where IMMAs have been failing in their conservation intent. A large area of southern and southwestern Greece, in coastal and offshore waters, was recently granted by the Greek government as a concession to the oil and gas industry for hydrocarbon exploration and exploitation (Fig. 2). The concession area extends from the northeastern Ionian Sea to south of Crete, covering approximately 56 000 km² (WWF-Greece 2019). This area overlaps to a significant extent with 2 IMMAs identified by the Task Force in 2016: the first, the 'Ionian Archipelago IMMA' (Fig. 2), was identified for Mediterranean monk seals and common dolphins Delphinus delphis. The second, the 'Hellenic Trench IMMA' (Fig. 2), contains core habitat for Cuvier's beaked whales Ziphius cavirostris and for the last remaining sperm whales Physeter macrocephalus in the eastern Mediterranean Sea (Notarbartolo di Sciara 2016). All of these marine mammal subpopulations are assessed as Endangered in the IUCN Red List, with the exception of Cuvier's beaked whales, which are rated 'Vulnerable'. As these species are listed in Annex IV of the Habitats Directive 92/43/EEC, which includes 'animal and plant species of community interest in need of strict protection', it is a clear obligation for Greece to ensure good conservation status of these species in their habitats included within the country's jurisdictional waters. Another example of government-sanctioned activities occurring in, or planned to occur in, an IMMA can be found in the 'Bazaruto Archipelago to Inhambane Bay IMMA', which was identified for dugongs along the southern coast of Mozambique. Listed globally as 'Vulnerable' in the IUCN Red List, the dugong population off East Africa likely merits the more extreme 'Endangered' status (H. Marsh pers. comm.). The species has almost completely disappeared from much of East Africa and Madagascar, persisting today in small, scattered numbers from Egypt to Mozambique, numbers too small to maintain a healthy population and thus likely to disappear in the near future. Only the waters of the Bazaruto Archipelago extending north to Inhambane Bay harbour a still healthy population, estimated at between 250 and 350 individuals (Cockcroft et al. 2018). This is now the last known viable dugong population in East Africa. In part recognizing the need for dugong protection as well as the value of ecotourism, the Mozambique government protected the Bazaruto Archipelago in 1971 as a national park, which is administered by African Parks. However, the scientific conclusion from the 'Bazaruto Archipelago to Inhambane Bay IMMA' identification is that 70% of the area's dugongs live outside of the boundaries of the national park, where they are (1) at extreme risk of entanglement in illegal gillnets and (2) within the footprint of proposed oil and gas developments. Should the Bazaruto dugong population become extinct, dugongs would likely soon be extirpated from the entirety of Africa's coastal waters. In both of the above cases, governments are faced with a choice between respecting their own environmental obligations and promoting productive activities, such as oil and gas exploration and exploitation at sea, that they believe will have a positive outcome on their economies.
Marine hydrocarbon development, in addition to well-known impacts on climate, causes detrimental effects to marine mammals by inducing behavioural disruption (habitat exclusion) and even physiological damage, due to active introduction of noise into the marine environment as well as adding to marine traffic, which poses a significant risk from ship collisions (Weilgart 2007). Yet, Greece, Mozambique and many other governments may persist with all-out development as if the presence of protected species' habitat there, and of an IMMA designation, were unknown to regulators. A VISION EXTENDING BEYOND HABITAT PROTECTION Moving the identification of IMMAs to implementation on the ground, from science to actual conservation, was anticipated as a slow but deliberate process. As noted above, there is some movement, yet the speed at which it is happening is ill-matched to the rate at which some of the marine mammal populations in IMMAs are declining. Dissecting the problem, we find that there is a complex transition from the non-prescriptive IMMA identification to the incorporation by the relevant authorities of IMMA-derived ecological knowledge into management planning and eventually the implementation of effective designations or laws. This transition can take years, particularly when the establishment of an effective marine protected area is part of the process. To achieve the ultimate goal of having a real effect on the lives and habitats of the relevant marine mammals, careful coordination, even choreography, is needed among all players on the local to the national and even the global level. The players, or active members of human societies, can be deconstructed into 3 broad components: scientists, politicians, and civil society. Scientists, focussing on nature or conservation issues, detect problems, indicate solutions and offer tools to implement solutions. In the case of IMMAs, the role of the Task Force is to put the IMMAs on the world map based on a rigorous and transparent process, thereby making relevant scientific knowledge available in an actionable, userfriendly fashion to politicians and managers. Politics is the art of compromise. The politicians' challenge is to strike an often-elusive balance between development and conservation (Apostolopoulou & Adams 2015), ensuring (1) that management occurs sustainably, (2) that human activities at sea that have the potential of harming marine mammals are regulated, and (3) that rules are enforced. All are re quired in order to minimise environmental damage and ensure that the legal obligations concerning protected species are respected. In an ideal world, with societal agreement on the essential values of nature, and the stakeholders welcoming a triage-like process for rapid implementation, the system could work well, harnessing a proactive interaction between science and politics. Science identifies the habitats authoritatively, in this case IMMAs, and the government establishes legal MPAs or imposes zoning or other restrictions to address threats to the environment and improve the conservation outcomes. However, history has shown how political compromises bend, twist and distort this interaction. Indeed, in many cases, the science is ignored completely by the politicians. The results swing dramatically towards the development side, with token regard for the conservation of nature, if that. 
The result, in the worst cases, is in effect the abandonment of a species, or population, by allowing a habitat to be exploited beyond recognition, or condemned to a slow death by degrees over time. The loss of opportunity can be avoided by intervention from the third societal component -civil society -including those stakeholders working in support of non-governmental advocacy organisations that act as watchdogs of government action in defence of the environment. Even in the young IMMA world, we see them at work in the growing number of IMMA metadata requests received by the Task Force from NGOs. Clearly motivated by conservation, these requests provide an indicator of the willingness of a portion of civil society to use IMMAs to support marine mammal place-based conservation. At the same time, however, industry, led by well-funded lobbyists, marketers and executives, will sometimes try to negate the efforts of NGOs. The matter then gets handed to government to make decisions, but increasingly conservation scientists (including ecologists, conservation biologists, taxonomists, botanists and zoologists) have also been called upon, as well as voluntarily agreeing, to take an advocacy role. In terms of conservation advocacy for species and ecosystems, things began to change in the 1980s when leading scientists such as Thomas Lovejoy, with WWF-US, Peter Raven from the Missouri Botanical Garden, Oxford's Norman Myers, Harvard's E. O. Wilson and others started using the word 'biodiversity' to focus theoretical debate on conservation issues (Franco 2013). Biologist Daniel H. Janzen, part of this group but the only one living full time in the tropical forest (in Costa Rica), went further. In an article entitled 'The Future of Tropical Ecology' and in a legendary address to biologists, Janzen urged scientists to undertake much needed political activism for conservation, with the stated threat that if they failed to take on a role beyond only science there would be no species left to study (Janzen 1986, Franco 2013. Today, a participatory approach and advocacy role by subjects who are involved as scientists in designing or implementing an MPA experience growing acceptance within the scientific community (Gray & Campbell 2009). Conservation policy is a vital link between science and politics, and maintaining a healthy relationship between scientific objectivity and political advocacy without damaging conservation science's credibility is becoming an accepted practice (Horton et al. 2016). Natural or conservation scientists may refer to this dual role as 'wearing different hats'. This concept is well exemplified by the IMMA process. IMMAs are identified and defined strictly on the basis of data and on how well the available evidence satisfies the scientific criteria. Adherence to scientific objectivity in the identification of IMMAs is further ensured by having candidate IMMAs proposed by regional expert workshops followed by independent review before approval. Once the process is complete, however, and an IMMA is engraved on the world map, nothing prevents those who have been involved in its identification from changing hats and now advocating application, so that the primary goal of ensuring that marine mammals can benefit from their right to habitat is realised and their survival is secured. IMMA identification captures what nature generally, and marine mammals more specifically, need us to hear: there are priority areas that require our immediate conservation attention. 
Through the science-mediated interpretation of empirical observations made in marine mammal habitat, many scientists feel that they are entrusted with this information. It therefore becomes an imperative that carries a moral obligation to do everything possible to ensure that marine mammal spaces are not only identified, but respected, even preserved. This may be partly fuelled by a sense of guilt for the damage that our species is inflicting on the planet that hosts us and all other forms of life, but pushes most of these species off the cliff. At the same time, conservation scientists today recognize the importance of extending the conservation focus from populations to individual animals. This shift in conservation emphasis is driven by an increasing understanding of the conservation value of the animals' social learning and cultural transmission of behaviour, particularly in those cases in which the survival of key individuals, depositaries of special knowledge, confers unique conservation value to the community (Brakes et al. 2019). A special focus on the individual may have grown out of photographic identification (photo-ID), the starting point for hundreds of studies on marine mammals beginning in the 1970s (Wells 2018). The importance of individuals to the survival of the group is revealed especially when there are low numbers of individuals comprising a population (e.g. the cases of the North Atlantic right whale Eubalaena glacialis and the southern community of killer whales Orcinus orca), such that all individuals are known and followed through their life history. Eventually, ethical considerations may spring naturally from a focus on the individual. Many conservation scientists and practitioners today strive not merely to ensure that species and populations can survive within their habitats and ecosystems, but also to grant individual marine mammals a right to their habitat based on a lifestyle unperturbed by human effects, and where they can enjoy an environmental quality as close as possible to the conditions they have evolved to live in (Hoyt 2017). It has been argued that marine mammals, and cetaceans in particular, meet the requirements for possessing 'personhood, e.g., being alive; being aware; having positive and negative sensations, emotions, and a sense of self; controlling one's own behaviour; recognizing other 'persons' individually and treating them appropriately: and possessing a variety of sophisticated cognitive abilities' (White 2007), and that if this argument is accepted, then cetaceans 'are persons and should be given rights somewhat similar to those of humans' (Whitehead & Rendell 2015). We live today in a world divided by those people exploiting species and ecosystems and those embracing widespread societal changes resulting from urgent concerns for the state of the planet. The imperative of acting in remediation, and an increasing empathy for the plight of non-human animals, has moved into the mainstream. Perhaps Aldo Leopold's (1993) lamented loneliness ('one of the penalties of an ecological education is that one lives alone in a world of wounds') is finally becoming a thing of the past, with natural scientists no longer afraid of expanding their remit from being descriptors of the planet's diseases, to more boldly assuming the role of doctors engaged in implementing the cure. In such a scenario, the IMMAs can be taken as a case in point. 
Yes, they are an expression of a highly specialised science-based undertaking, but they can point the way to addressing one aspect of the planetary malaise. Although they address only one aspect, IMMAs are tied together with other specialised conservation tools and initiatives which, taken together and supported by scientists and stakeholders, have the potential of helping to achieve a comprehensive, multidisciplinary and ultimately effective global conservation strategy.
Elasticity problems of beams on reaction-driven nonlocal foundation A challenging task in nonlocal continuum mechanics consists in formulating constitutive relations leading to well-posed structural problems. Several strategies have been adopted to overcome issues inherent applicability of Eringen’s pure nonlocal theory to nanostructures, such as local/nonlocal mixtures of elasticity and integral models involving modified averaging kernels. These strategies can be applied to the ill-posed problem of flexure of a beam on Wieghardt nonlocal foundation without considering any fictitious boundary forces of constitutive type. A consistent formulation of nonlocal elastic foundation underlying a Bernoulli–Euler beam is thus conceived in the present paper by requiring that transverse displacements are convex combination of reaction-driven local and nonlocal phases governed by Winkler and Wieghardt laws, respectively. The proposed integral mixture is proven to be equivalent to a more convenient differential problem, equipped with nonlocal boundary conditions, which can be effectively exploited to solve nonlocal problems of beams resting on mixture reaction-driven continuous foundation. Effectiveness of the developed nonlocal approach is illustrated by analytically solving simple elasto-static problems of structural mechanics. Introduction Interaction of beams with surrounding elastic continuous media is an engineering problem of current interest both in theoretical and applied mechanics. The theoretical interest lays in detecting suitable mathematical models for such interactions which can be exploited to formulate well-posed structural problems. From an applicative point of view, such a problem can be technically relevant in transportation, civil and geotechnical engineering if surrounding elastic media are characterized by soils or foundations. In mechanical and aerospace engineering, the role of surrounding elastic medium can be played by a part of texture of a structured continuum. For instance, a cylindrical shell of revolution might be seen as an ordered arrangement of woven beams along meridian and parallel lines. Ideal beams along parallels act as a linear elastic soil with respect to ideal beams along meridians. An elastic beam subjected to transversally distributed loading proportional to its deflection was first considered by Winkler [1] and, then, it was exploited to represent railway tracks on continuous linear elastic foundation [2]. Winkler and Zimmermann's methodology quickly had followers, due to its simplicity and easy mathematical treatment since the soil is modeled as continuous bed of independent linear elastic onedimensional springs with uniform stiffness. In the framework of soil models described by two material parameters, the one proposed by Filonenko-Borodich [3] assumed that a membrane under tension is interposed between beam and springs while the one conceived by Pasternak [4] supposed a shear interaction among springs modeling the soil. Then, a foundation model consisting of two spring layers interconnected by a shear layer was proposed by Kerr [5]. By virtue of their convenient mathematical formulation, Pasternak and Kerr foundation models have been recently adopted in [6,7] for buckling and vibration problems of nanobeams lying on elastic foundation. Moreover, Pasternak foundation model has been also exploited to simulate microtubules embedded in cell cytoplasm [8] adopting nonlocal integral formulations of internal elasticity with mixed-type normalized kernel [9,10]. 
A discussion on formulation of beam-soil and plate-soil interaction can be found in the review by Wang et al. [11], but one will be surprised not to find any reference to Wieghardt there. On the other hand, Wieghardt [12] remarked that Winkler's soil reactions are not physically reliable since they predict sharp discontinuities in the beam-soil profile at beam ends which are not actually present in real phenomena. Then, Wieghardt proposed a strategy in which the deflection at each point of the beam depends on the response of the entire contact region through an integral of soil reactions weighted by a suitable averaging kernel. The mathematical model thus depends on a stiffness factor and on an additional nonlocal parameter entering the kernel. This problem was reconsidered later by Prager [13] and Neményi [14] for two-dimensional foundations. Actually, the difference between one-and two-parameter soil models and Wieghardt's model is that the latter is of nonlocal nature. This aspect makes it different from all the others models which are local, viz. the response at a point depends only on the strain at that point. As a consequence, the differential equation for the nonlocal problem is of higher order than the classical (local) elastic model. Thus, additional boundary conditions have to be prescribed to close the relevant elastic problem. Such an issue was highlighted by Wieghardt himself and analysed in contributions by Van Langendonck [15], Sollazzo [16], Ylinen and Mikkola [17]. In these models, fictitious reactive concentrated forces exerted by the soil are introduced at the beam end points and additional boundary conditions are used to evaluate such reactive forces. Many problems have been addressed using the Wieghardt elastic foundation in order to get outcomes of technical interest. The contribution of a soil modeled by Wieghardt's elastic foundation was then considered also in some problems of static instability by compression for the beam, for instance in papers by Smith [18], in its extension [19] and in the contribution by Anderson [20]. Wieghardt's foundation was considered also in problems of dynamics and of dynamic instability, see, e.g. the paper by Celep [21]. In order to extend previous standard investigations to the case of fluid-structure interaction, such as beams representing a pipe conveying fluid, we may quote the paper by De Bellis et al. [22,23] where, however, Winkler model has been considered. Motivation of the present paper is in conceiving a well-posed nonlocal integral elastic model by enhancing the classical Wieghardt formulation without introducing any fictitious reactive forces at end points of Bernoulli- Euler beams to solve the relevant nonlocal structural problem. Specifically, a two-phase integral mixture of elastic foundation is formulated by convexly combining Winkler local and Wieghardt nonlocal contributions. The constitutive equivalence theorem proved in [48] for strain-driven models and in [49] for modified nonlocal gradient methods is applied to Wieghardt integral formulation of elastic foundation in order to obtain a simpler (but equivalent) differential problem, equipped with nonlocal foundation boundary conditions, which can be effectively implemented to solve soil-beam interaction problems of engineering interest. The plan is the following. The nonlocal model of Wieghardt elastic foundation is recalled in Sect. 2. The elastic equilibrium problem of a Bernoulli-Euler beam on Wieghardt foundation is formulated in Sect. 3. 
Then, the modified model of Wieghardt foundation with introduction of fictitious reactive forces for the solution of the nonlocal model is presented in Sect. 4. The proposed integral elasticity mixture, considering a convex combination of Winkler (local) and Wieghardt (nonlocal) laws, is illustrated in Sect. 5. The developed mixture model is validated in Sect. 6 by investigating nonlocal effects in case-problems of technical interest. In particular, fully clamped and free beams on Wieghardt foundation under uniform transverse loading are analysed. Closing remarks are outlined in Sect. 7. Nonlocal reaction-driven model of Wieghardt elastic foundation Let us consider a beam of length L laying on the surface of a Wieghardt elastic foundation. The x − y − z coordinates are, respectively, taken along the length, thickness (height) and width of the beam, originating at the cross-sectional elastic centre C (see Fig. 1). The pair {y, z} are principal axes of geometric inertia of the two-dimensional cross section. The classical Winkler theory of a continuous elastic soil supporting a beam, see, e.g. [1,2], considers the foundation composed of a sequence of linear elastic springs unconnected with each other and, at each point, the reaction per unit length is directly proportional to the deflection of the foundation. The elastic medium is characterized by a volumetric density of force β representing the pressure to be orthogonally applied to the surface to get a unit vertical displacement of the foundation. Denoting by b the width of the beam cross section in contact with the foundation, the stiffness of the elastic foundation is given by k = βb. Hence, the transverse displacement v of the surface of the Winkler foundation is linked to the reaction per unit length r by the classical relation [1]: v(x) = r(x)/k (1). We assume that the beam remains in contact with the foundation so that the transverse displacement of the beam coincides with the transverse displacement v of the surface of the foundation. The refinement proposed by Wieghardt [12], afterwards analysed in [15][16][17], consists in assuming that the displacement v of the foundation is obtained by integral convolution between the reaction field r and a suitable averaging kernel φ: v(x, L c ) = ∫_0^L φ(x − t, L c ) r(t)/k dt (2). For simplicity, in the sequel, the explicit dependence of v on the characteristic length L c will be dropped. The smoothing kernel φ depends on the characteristic length of Eringen nonlocal elasticity L c , having the physical dimension of a length, and is given by the bi-exponential averaging function φ(x, L c ) = exp(−|x|/L c )/(2 L c ) (3). The bi-exponential function fulfils the following properties, see, e.g. [50,51]: -Maximum value of φ(x, L c ) attained at x = 0 for any L c and decaying to zero at large distances; -Normalization: ∫_{−∞}^{+∞} φ(x, L c ) dx = 1 (4); -Symmetry and limit impulsivity conditions, that is φ(x, L c ) = φ(−x, L c ) and lim_{L c → 0⁺} φ(x, L c ) = δ(x) (5), where δ(x) is the Dirac unit impulse at point x. The symmetry condition Eq. (5) 1 of the bi-exponential kernel φ Eq. (3) expresses the mechanical assumption that symmetrically placed points of the foundation with respect to the considered point x have the same influence on the displacement v of the surface of the foundation at x. Moreover, the characteristic parameter L c is a measure of how rapidly the influence of the displacement v at a point t decreases with the distance from the considered point x. In the limit L c → 0⁺, by the impulsivity condition Eq. (5) 2 , the kernel tends to the Dirac impulse and the classical Winkler model of elastic foundation, see Eq. (1), is recovered at the internal points of the structural interval.
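Since the display equations of the source were lost in extraction, the following LaTeX fragment sketches what the equivalent differential formulation of the reaction-driven convolution presumably looks like for the bi-exponential kernel; it anticipates the relations referred to below as Eq. (13) and Eq. (14) and follows the standard equivalence result for Helmholtz-type kernels cited in [48], so it should be read as a reconstruction under that assumption rather than as the authors' verbatim equations.

```latex
% Presumed equivalent differential form of the reaction-driven Wieghardt law,
% reconstructed under the assumption of the bi-exponential kernel of Eq. (3).
\[
  v(x) - L_c^2\,\partial_x^2 v(x) = \frac{r(x)}{k},
  \qquad x \in [0, L],
\]
% with the two homogeneous foundation boundary conditions (FBCs)
\[
  \partial_x v(0) = \frac{1}{L_c}\, v(0),
  \qquad
  \partial_x v(L) = -\frac{1}{L_c}\, v(L).
\]
```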
Bernoulli-Euler beam on Wieghardt elastic foundation Let us consider a straight planar Bernoulli-Euler beam whose displacement components are expressed by with v cross-sectional transverse displacement. Symbol ∂ x • is the derivative of the function • along the beam axis x. The rotation ϕ of the beam cross section is ϕ (x) = ∂ x v (x) so that the non-vanishing kinematically compatible deformation is given by the axial strain where is the kinematically compatible flexural curvature of the beam. In the absence of thermal distortions, the kinematically compatible flexural curvature χ coincides with the elastic flexural curvature. Equilibrium is expressed by the following differential condition with M bending moment, q transverse distributed loading and r foundation reaction per unit length. where the second moment of elastic area I E about the z axis is evaluated by considering the distribution of Euler-Young moduli E(y) as Using the differential condition of equilibrium in addition to the definition of flexural curvature χ , we get the beam differential equation in the form The nonlocal elasto-static problem of a beam on Reaction Driven (RD) Wieghardt foundation can be formulated by considering the beam elastic equilibrium Eq. (11) and the constitutive convolution of Wieghardt foundation Eq. (2) as reported in the next Box 1. BOX 1. Elasto-static integro-differential problem of a beam on RD Wieghardt foundation. The integral law Eq. (12) 3 of the nonlocal RD model of Wieghardt foundation can be replaced with an equivalent differential problem with foundation boundary conditions according to the next Proposition proved in "Appendix A". Such a result is a consequence of the choice of the special bi-exponential kernel Eq. (3) and is based on contributions provided in [48]. with x ∈ [0, L], subject to the two homogeneous foundation boundary conditions (FBCs) Hence, the RD convolution Eq. (12) 3 can be substituted with the differential equation (13) and the FBCs Eq. (14). (13) has the same mathematical form of the differential law of elastic foundation proposed by Pasternak in [4]. Equation (13) equipped with the foundation boundary conditions in Eq. (14) is equivalent to the Wieghardt integral law as stated in Proposition 1 and it is coincident with the Pasternak model if and only if the FBCs are satisfied. However, Pasternak differential law in Eq. (13) (setting the shear stiffness as k s := k L 2 c ) may not be able to capture long-range interactions, in general, foundation problems, with exhibition of paradoxical results as those occurring in constitutive differential equations relating stress and elastic strain fields [48]. An exemplar case is provided by a free beam on Pasternak foundation under uniformly or linearly distributed transverse loading; indeed, solution of the relevant elasto-static problem is a uniform or linear displacement field independent of the shear stiffness and coincident with that of a free beam on Winkler foundation. Remark 2 Constitutive equation As will be shown in Sect. 6, the mixture reaction-driven nonlocal model of foundation proposed in the present paper provides stiffening structural responses for increasing characteristic length L c , in agreement with the outcomes recently contributed in [10] where elasto-statics of nanobeams lying on Pasternak foundation is examined. 
Indeed, increase of characteristic length L c corresponds to an increase of shear stiffness k s of Pasternak foundation (for a fixed Winkler parameter k) to which is associated a stiffening structural response, as shown in [10] in the limiting case of nonlocal internal elasticity tending to the local one. The global softening response exhibited in [10] is only due to the predominant softening small-scale effect of nanobeam, modeled by strain-driven integral internal elasticity with modified averaging kernel, with respect to the stiffening effect of the foundation modeled by Pasternak external elasticity theory. To solve the nonlocal elasto-static problem of a beam on RD model of Wieghardt foundation, we can replace the reaction r , obtained by Eq. (13), into Eq. (12) 1 . The differential equation governing the elasto-static problem in terms of transverse displacement v is reported in the next Box 2. BOX 2. Elasto-static differential problem of a beam on RD Wieghardt foundation. Kinematic and static BCs The reaction r follows from Eq. (13) in terms of transverse displacement v as the bending moment is given by In fact, compatibility between FBCs and kinematic boundary conditions of a beam laying on Wieghardt foundation is a necessary requirement for existence of a displacement solution of the elasto-static nonlocal problem. It is apparent that FBCs as Eq. (15) 3−4 impose a peculiar relationship between transverse displacements and rotations of beam cross sections involving also the characteristic length L c . Hence, in general, a displacement v fulfilling the beam kinematic boundary conditions cannot meet the FBCs Eq. (15) 3−4 . As a consequence, the four integration constants following from the solution of Eq. (15) 1 cannot be evaluated by solving the linear system of equations obtained by imposing the four classical constraint conditions coming from Eq. (15) 2 and the two FBCs as Eq. (15) 3−4 . Hence, no solution of a beam laying on Wieghardt elastic foundation does, in general, exist. Remark 3 As an example of applicative interest in which a beam on RD Wieghardt foundation can be successfully solved is provided by the fully clamped beam since the FBCs are compatible with kinematic boundary conditions of the beam as shown in Sect. 6. To solve the nonlocal elasto-static problem of a beam laying on Wieghardt foundation, a modified nonlocal model of Wieghardt elastic foundation has been contributed in the literature, see, e.g. [16]. Such a model is briefly formulated in the next Sect. 4. Modified reaction-driven model of beam on Wieghardt foundation The modified reaction-driven (MRD) nonlocal model defines the transverse displacement v of the surface of Wieghardt elastic foundation by postulating existence of two fictitious forces R 1 and R 2 at end points x = 0 and x = L. Accordingly, the RD convolution Eq. (2) is modified in the following form [16] v The integral formulation Eq. (17) of the MRD nonlocal model can be replaced with an equivalent differential problem and modified foundation boundary conditions (MFBCs) according to the next Proposition 2 proved in "Appendix B". subject to the two homogeneous modified foundation boundary conditions (MFBCs) The nonlocal elasto-static problem of a beam on MRD Wieghardt foundation can be solved by considering the beam elastic equilibrium Eq. (11) with kinematic and static boundary conditions L} and the MRD model of Wieghardt foundation Eq. (17). The nonlocal elasto-static problem is formulated in the next Box 3. BOX 3. 
Elasto-static integro-differential problem of a beam on MRD Wieghardt foundation. The constitutive convolution law Eq. (20) 3 can be equivalently replaced, according to Proposition 2, with the differential equation Eq. (18) and the MFBCs as Eq. (19). Therefore, to solve the nonlocal elasto-static problem of a beam on MRD Wieghardt foundation reported in Box 3, we substitute the reactions r , obtained from Eq. (18), into Eq. (20) 1 . The governing nonlocal differential problem is thus reported in the next Box 4. BOX 4. Elasto-static differential problem of a beam on MRD Wieghardt foundation. Kinematic and static BCs The four unknown integration constants following from the solution of the fourth-order differential equation Eq. (21) 1 and the two fictitious forces can be evaluated by imposing the four BCs coming from kinematic and static BCs in Eq. (21) 2 and the two MFBC Eq. (21) 3−4 . Then, bending moment is given by M(x) = I E ∂ 2 x v (x) and shear force is x v (x). Finally, the reactions r follow from Eq. (18) in terms of transverse displacement v as If the foundation extends outside the beam interval [0, L], the transverse displacement field of the surface of Wieghardt foundation v 1MRD for x ≤ 0 and v 2MRD for x ≥ L can be evaluated by [16] ⎧ It is shown that the elasto-static problem of a beam on MRD Wieghardt foundation subject to applied loads postulates the existence of fictitious forces acting on the beam end points in order to solve the structural nonlocal problem. Such fictitious forces enter in the static boundary conditions of the beam. Based on this observation, a different nonlocal model of beam on Wieghardt foundation that does not postulate the existence of forces at beam end points is provided in Sect. 5. The proposed model is cast in the framework of mixture nonlocal models which are nowadays widely adopted for small-scale structural problems, see, e.g. [52][53][54][55]. Mixture reaction-driven model of beam on Wieghardt foundation An interaction model between the foundation springs can be obtained by resorting to a nonlocal theory where the transverse displacement v is linked to the reactions r by a nonlocal mixture reaction-driven integral (XRD) model. The XRD model provides the transverse displacement v of the surface of the foundation in terms of the following two-phase model defined by convex combination of local and nonlocal phases The phase parameter α belongs to the interval (0, 1] so that the classical (local) Winkler model corresponds to α = 1 and the nonlocal RD model is recovered for α = 0. Hence, a vanishing parameter α cannot be adopted in the XRD model Eq. (24) due to the previous discussion on the nonlocal RD model. The nonlocal XRD model Eq. (24) can be replaced with an equivalent differential formulation and foundation boundary conditions according to the next Proposition proved in "Appendix A". Proposition 3 Equivalence property for the XRD model of Wieghardt foundation. The transversal displacement v obtained from the mixture reaction-driven integral equation (24) with the special kernel Eq. (3) provides the unique solution of the constitutive differential equation of the elastic foundation with x ∈ [0, L], subject to the two homogeneous mixture foundation boundary conditions (XFBCs) The nonlocal elasto-static problem of a beam on a XRD Wieghardt elastic foundation can be solved by considering the beam elastic equilibrium Eq. (11) and the XRD integral of the Wieghardt foundation Eq. (24) as reported in the next Box 5. BOX 5. 
Elasto-static integro-differential problem of a beam on XRD Wieghardt foundation. ⎧ To solve the nonlocal elasto-static problem of a beam on a XRD Wieghardt foundation reported in Box 5, we consider the equivalent nonlocal differential formulation Eqs. (25)-(26) of the XRD integral law Eq. (27) 3 . Substituting the reactions r and its second derivative obtained from Eq. (27) 1 into Eq. (25), we provide the nonlocal differential problem reported in the next Box 6. BOX 6. Elasto-static differential problem of a beam on XRD Wieghardt foundation. Kinematic and static BCs The sixth-order differential equation Eq. (28) 1 , equipped with four BCs following from the kinematic and static BCs in Eq. (28) 2 and the two XFBCs in Eq. (28) 3−4 , can be solved and the transverse displacement v is obtained. Then the bending moment is given by Transverse displacements of XRD Wieghardt foundation outside the beam interval If the XRD Wieghardt elastic foundation extends outside the beam interval [0, L], we can evaluate the transverse displacement field of the surface of the Wieghardt elastic foundation v 1X R D , for x ≤ 0, and v 2X R D , for x ≥ L according to the XRD model as where the reactions r are the solution of the model reported in Box 6. Note that using Eq. Therefore, continuity of the displacement field at the beam end points x = 0 and x = L is fulfilled. Remark 4 The nonlocal elasto-static problem of a beam on XRD Wieghardt elastic foundation does not require to postulate the existence of fictitious forces at the beam end points, as in the MRD model, in order to obtain a well-posed nonlocal model. Numerical applications In this section, we will show some numerical results of technical interest to illustrate the effectiveness of the proposed XRD model for the analysis of Bernoulli-Euler beams on Wieghardt foundation. Hence, free-beam, fully clamped beam and simply supported beam under uniform load are considered. The results are presented in tabular and graphical forms. The solution of the nonlocal elasto-static problem for a beam on Wieghardt foundation can be obtained using the nonlocal RD differential problem reported in Box 2, the nonlocal MRD differential problem reported in Box 4 and the nonlocal XRD differential problem reported in Box 6. Fully clamped beam on Wieghardt foundation under uniformly distributed load Let us consider a fully clamped beam on a Wieghardt elastic foundation subjected to non-dimensional uniform transverse load q * y = −1. Accordingly, the fourth-order non-dimensional differential equation (33) and the non-dimensional classical kinematic boundary conditions Eq. (34) 1÷4 provide the solution of the fully clamped beam on RD Wieghardt foundation. Solution of the fully clamped beam using the XRD model The solution of the fully clamped beam on XRD Wieghardt foundation using the differential approach can be got by solving Eq. (28) 1 of Box 6 rewritten in the non-dimensional form equipped with the classical kinematic boundary conditions at the beam end points ξ = {0, 1} following from Eq. (28) 2 and the XFBCs following from Eq. (28) 3−4 of Box 6 in the non-dimensional form The non-dimensional reactions r * follow from Eq. (22) in terms of non-dimensional transverse displacement v * . Non-dimensional bending moment and shear force are given by A stiffening response is exhibited by the RD and XRD methods for increasing the nonlocal parameter λ or for increasing values of the non-dimensional Winkler parameter k * , see Table 1. 
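For orientation, the structure of the constitutive laws employed above can be summarized as follows; this is a minimal sketch assuming the standard bi-exponential (Helmholtz-type) averaging kernel, so signs and prefactors may differ from the exact statements of Eqs. (2), (15) and (24)–(26).

```latex
% Sketch of the RD and XRD constitutive structure (prefactors may differ).
% Bi-exponential averaging kernel with characteristic length L_c:
%   phi(x; L_c) = exp(-|x|/L_c) / (2 L_c)
\begin{align*}
  \text{RD:}\quad
    & v(x) = \frac{1}{k}\int_0^L \phi(x-\xi;L_c)\, r(\xi)\,\mathrm{d}\xi
      \;\;\Longleftrightarrow\;\;
      v - L_c^2\, v'' = \frac{r}{k},\\
    & \text{FBCs:}\;\; v'(0) = \tfrac{1}{L_c}\, v(0), \qquad v'(L) = -\tfrac{1}{L_c}\, v(L),\\[4pt]
  \text{XRD:}\quad
    & v(x) = \alpha\,\frac{r(x)}{k}
      + \frac{1-\alpha}{k}\int_0^L \phi(x-\xi;L_c)\, r(\xi)\,\mathrm{d}\xi ,
      \qquad \alpha \in (0,1],\\
    & v - L_c^2\, v'' = \frac{1}{k}\left( r - \alpha\, L_c^2\, r'' \right),\\
    & \text{XFBCs:}\;\;
      \Big(\partial_x - \tfrac{1}{L_c}\Big)\!\Big[v - \alpha \tfrac{r}{k}\Big]_{x=0} = 0, \qquad
      \Big(\partial_x + \tfrac{1}{L_c}\Big)\!\Big[v - \alpha \tfrac{r}{k}\Big]_{x=L} = 0 .
\end{align*}
```

Written in this form, α = 1 reduces to the local Winkler model and α → 0 recovers the RD law, consistent with the role of the phase parameter described above.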
The non-dimensional transverse deflection v * for increasing the nonlocal parameter λ is plotted in Fig. 2a for the RD method and Fig. 2b for the XRD method with k * = 10. The midpoint non-dimensional displacement v * (1/2) is plotted in terms of the nonlocal parameter λ for increasing values of the non-dimensional Winkler parameter k * in Fig. 3a for the RD method and in Fig. 3b for the XRD method. These results show that the XRD method is softer than the RD method for a given value of the nonlocal parameter λ or for a given value of the non-dimensional Winkler parameter k * . The non-dimensional reactions r * by the RD and XRD methods increase for increasing the nonlocal parameter λ or for increasing values of the non-dimensional Winkler parameter k * , see Table 2. The plot of the non-dimensional reactions r * for increasing the nonlocal parameter λ is reported in Fig. 4a for the RD method with k * = 10 and in Fig. 4b for the XRD method with k * = 10. The inversion of the sign of the reactions r near the end points of the beam, in both models, is acceptable by remembering the hypothesis of bilaterality of the continuous constraints. The midpoint non-dimensional reactions r * (1/2) are plotted in terms of the nonlocal parameter λ for increasing values of the non-dimensional Winkler parameter k * in Fig. 5a for the RD method and in Fig. 5b for the XRD method. The non-dimensional bending moment M * by the RD and XRD methods decreases for increasing the nonlocal parameter λ for a given k * or for increasing values of the non-dimensional Winkler parameter k * , see Table 3. The plot of the non-dimensional bending moment M * for increasing the nonlocal parameter λ is reported in Fig. 6a for the RD method with k * = 10 and in Fig. 6b for the XRD method with k * = 10. The midpoint non-dimensional bending moment M * (1/2) is plotted in terms of the nonlocal parameter λ for increasing values of the non-dimensional Winkler parameter k * in Fig. 7a for the RD method and in Fig. 7b for the XRD method. The non-dimensional bending moment M * (1/2) by the XRD method is greater than the corresponding value of the RD method. It is worth noting that the XRD model provides stiffening transverse displacements v * for increasing nonlocal parameter λ. This result is related to the Wieghardt integral convolution (i.e. second term of the constitutive law in Eq. (24)) since increasing λ lowers the peak of the averaging kernel and extends its support. The predominant effect of the peak reduction leads to a decrease of the foundation elastic compliance 1/k * that motivates the stiffening mechanical behavior. Accordingly, bending moment M * and shear force T * decrease for increasing nonlocal parameter λ. Moreover, it is apparent from the parametric plots that increasing k * provides a reduction of beam structural responses and an increase of reactions r * due to the increase of the foundation elastic stiffness. Table 3 Fully clamped beam under non-dimensional uniform load q * y = −1 Solution of the free beam using the RD model The solution of the free beam on RD Wieghardt foundation using the differential approach follows from Eq. (15) 1 of Box 2 rewritten in non-dimensional form subject to the non-dimensional classical static boundary conditions from Eq. (15) 2 and the FBCs Eq. (15) 3−4 given by The RD model has no solution since the FBCs Eq. (40) 5−6 are incompatible with the fact that transverse displacements and rotations of beam end points at ξ ∈ {0, 1} are free. 
Hence, the fourth-order differential equation Eq. (39), equipped with six boundary conditions Eq. (40), cannot be solved. Solution of the free beam using the MRD model The solution of the free beam on MRD Wieghardt foundation using the differential approach can be got by solving Eq. (21) 1 of Box 4, rewritten in the non-dimensional form Recalling the assumption of existence of two fictitious forces R * 1 and R * 2 at the points ξ = {0, 1} of the surface of Wieghardt foundation, the non-dimensional static boundary conditions Eq. The four integration constants associated with the differential equation (41) and the two fictitious forces R * 1 , R * 2 can be obtained by solving the MFBCs as Eq. (42). Hence, the non-dimensional transverse displacement v * is obtained. The non-dimensional reactions r * follow from Eq. (22) in terms of non-dimensional transverse displace- Non-dimensional bending moment and shear force are The transverse displacement fields of the surface of Wieghardt foundation outside the beam interval [0,1] are v * 1MRD , for ξ ≤ 0, and v * 2MRD , for ξ ≥ 1 , and can be directly evaluated using Eq. (23) in the following non-dimensional form Solution of the free beam using the XRD model The solution of the free beam on XRD Wieghardt foundation using the differential approach can be provided by solving Eq. (28) 1 of Box 6 rewritten in the non-dimensional form The non-dimensional reactions r * are given by Eq. (22) in terms of transverse displacement v * . Nondimensional bending moment and shear force are M * (x) = ∂ 2 ξ v * (ξ ) and T (x) = −∂ 3 ξ v * (ξ ). being r * the reactions obtained by the XRD model. The maximum non-dimensional transverse displacements v * , reactions r * and bending moments M * at the midpoint ξ = 1/2 of the free beam subject to uniform transverse load q * y = −1 are presented in Tables 4, 5 and 6 using the MRD and XRD models for several values of non-dimensional Winkler parameter k * and length scale parameter λ. The mixture parameter in the XRD model is α = 0.3 . The MRD and XRD methods yield the classical solution of a beam on a Winkler foundation by letting λ → 0 + . The non-dimensional fictitious forces R * 1 and R * 2 of the MRD method are coincident, i.e. R * 1 = R * 2 , and are reported in Table 7 in terms of non-dimensional Winkler parameter k * and length scale parameter λ. The fictitious forces increase for increasing the length scale parameter λ and decrease for increasing the Winkler parameter k * . The non-dimensional transverse displacement v * of the beam in the interval [0.5, 1] and of the surface of Wieghardt foundation outside the beam in the interval [1,3] are reported in Table 8 for MRD and XRD models for increasing Winkler parameter k * and of the length scale parameter λ = 0.5. A stiffening response is exhibited by MRD and XRD methods for increasing length scale parameter λ or the non-dimensional Winkler parameter k * , see Table 1. Note that the displacements of the surface of Wieghardt foundation obtained by the XRD method are greater than the corresponding ones provided by the MRD method for a given λ and k * , see Table 8. The non-dimensional transverse displacement v * obtained by the MRD method is plotted in Fig. 8a for increasing the length scale parameter λ with k * = 10. The function v * presents a slope discontinuity at the beam end points ξ = 0 and ξ = 1. The non-dimensional transverse displacement v * obtained by the XRD method is reported in Fig. 
8b for increasing the length scale parameter λ with k * = 10 where a zoom of the beam deflection is reported. The midpoint non-dimensional displacement v * (1/2) is plotted in terms of the length scale parameter λ for increasing values of the non-dimensional Winkler parameter k * in Fig. 9a for the MRD method and in Fig. 9b for the XRD method. The plot of v * (1/2) in terms of the non-dimensional Winkler parameter k * for increasing values of the length scale parameter λ is reported in Fig. 9c for the MRD method and in Fig. 9d for the XRD method. The plot of r * by the MRD and XRD methods is reported in terms of the length scale parameter λ in Fig. 10a, b with k * = 10 . The resultant of the reactions r * for the XRD method is equal to the resultant of the applied load for any value of λ and k * . On the contrary, the resultant of the reactions r * for theMRD method is equal to the resultant of the applied load plus the fictitious forces applied to the beam. The midpoint non-dimensional reactions r * (1/2) are plotted in terms of length scale parameter λ for increasing non-dimensional Winkler parameter k * in Fig. 11a, b for the MRD and XRD method respectively. The non-dimensional reactions r * (1/2) by the XRD method decrease for increasing the length scale parameter λ ∈ 0 + , 0.1, 0.2, 0.3 and then increase for λ ∈ {0.4, 0.5} for any value of k * . Moreover, the non-dimensional reactions r * (1/2) by the XRD method increase for increasing the non-dimensional Winkler parameter k * , see Table 5. The non-dimensional bending moment M * and shear force T * by the MRD method are plotted in Fig. 12a, b with k * = 10. The fictitious forces at the beam end points yield a non-vanishing value of the shear force at ξ = 0 and ξ = 1. The non-dimensional bending moment M * and shear force T * by the XRD method are plotted in Fig. 13a, b with k * = 10 and vanishing values of T * at the beam end points are provided. The maximum value of T * is attained at an interior point of the beam depending on λ. As shown by the obtained results, stiffening transverse displacements v * are got by the XRD model for increasing nonlocal parameter λ. This result is due to the special properties fulfilled by the kernel that lead to a decrease of the foundation elastic compliance 1/k * in the integral convolution of the constitutive law Eq. (24). Bending moment M * and shear force T * consequently decrease for increasing nonlocal parameter λ. Moreover, it has been shown that increasing the foundation elastic stiffness k * provides a reduction of the beam displacements, bending moments and shear forces and an increase of reactions r * . Concluding remarks The main results of the present paper may be summarized as follows. 1. Wieghardt's nonlocal theory of elasticity has been extended by developing a well-posed local/nonlocal mixture able to model effectively reaction-driven foundations underlying Bernoulli-Euler beams. Unlike classical reaction-driven strategies, such as [16], no fictitious boundary reactive forces have been formulated in the proposed model to ensure mathematical consistency. 2. The mixture local/nonlocal integral model of Wieghardt theory has been proven in Prop. 3 to be equivalent to a simpler differential formulation, equipped with non-classical foundation constitutive boundary condi- tions, which can be conveniently exploited to analytically solve applicative problems of beams resting on elastic foundation. 3. 
The elasto-static problem of a fully clamped Bernoulli-Euler beam resting on classical Wieghardt nonlocal foundation has been shown to be well-posed, detecting thus an exception regarding applicability of classical Wieghardt theory to structural mechanics. This result, implication of Proposition 3, is due to the special kinematic boundary conditions of fully clamped beams that do not conflict with the foundation constitutive boundary conditions as Eq. (14). 4. The proposed nonlocal methodology has been applied to evaluate parametric solutions of exemplar case problems of soil-beam interaction. with x ∈ [0, L] , is equivalent to the differential relation subject to the following two foundation boundary conditions (XFBCs) Proof Since the bi-exponential averaging function is given by and the integral convolution Eq. (48) can be explicitly rewritten in the form a direct evaluation provides the first derivative of the transverse displacement v Analogously, the second derivative of the convolutions Eq. (48) follows from Eq. (53) to get Recalling Eq. (48) and rearranging the terms in Eq. (54), the equivalent differential equation (49) and the XFBC in Eq. (50) 1 is recovered. Analogously, setting x = L in Eq. (53) we get and the XFBC in Eq. (50) 2 is recovered. The uniqueness of the solution of Eq. (49) is consequent to the fact that the homogeneous differential problem (r (x) = 0), with the FBCs, admits only the trivial solution. Appendix B Let us prove that the integral formulation Eq. (17) of the MRD nonlocal model can be replaced with an equivalent differential formulation and foundation boundary conditions, involving the fictitious forces R 1 and R 2 , according to the next Proposition. Proposition Equivalence property for the model of Wieghardt foundation. The following nonlocal constitutive law Eq. (17) equipped with the bi-exponential kernel with x ∈ [0, L] , is equivalent to the differential relation subject to the following two modified foundation boundary conditions (MFBCs) Proof Recalling that the expression of the bi-exponential averaging function is given by and the integral convolution Eq. (57) can be rewritten in the form a direct evaluation provides the first derivative of the transverse displacement v Analogously, the second derivative of Eq. (57) follows from Eq. (62) to get Recalling Eq. (57) and rearranging the terms in Eq. (63), the equivalent differential equation (58) and the MFBC in Eq. (59) 1 is recovered. Analogously, setting x = L in Eq. (62) we get and the MFBC in Eq. (59) 2 is recovered. Uniqueness of the solution of Eq. (58) is consequent to the fact that the homogeneous differential problem, with the MFBCs, admits only the trivial solution.
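As a numerical baseline for the parametric results of Sect. 6, the local Winkler limit λ → 0⁺, which the MRD and XRD models recover, can be checked with a short boundary-value computation. The Python sketch below assumes the non-dimensional form ∂⁴_ξ v* + k* v* = q*_y on ξ ∈ [0, 1] with clamped ends; it is an illustrative consistency check rather than the solution procedure used for the nonlocal models.

```python
import numpy as np
from scipy.integrate import solve_bvp

K_STAR = 10.0    # non-dimensional Winkler parameter k* (value used in the tables)
Q_STAR = -1.0    # non-dimensional uniform load q*_y

def rhs(xi, y):
    # y = [v, v', v'', v''']; local (Winkler) limit: v'''' + k* v = q*_y
    return np.vstack([y[1], y[2], y[3], Q_STAR - K_STAR * y[0]])

def bc(ya, yb):
    # fully clamped beam: v = dv/dxi = 0 at xi = 0 and xi = 1
    return np.array([ya[0], ya[1], yb[0], yb[1]])

xi = np.linspace(0.0, 1.0, 101)
sol = solve_bvp(rhs, bc, xi, np.zeros((4, xi.size)))
print("v*(1/2) in the Winkler limit:", sol.sol(0.5)[0])
```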
Relaxation of electrons in quantum-confined states in Pb/Si(111) thin films from master equation with first-principles-derived rates Atomically thin films of Pb on Si(111) provide an experimentally tunable system comprising a highly structured electronic density of states. The lifetime of excited electrons in these states is limited by both electron-electron (e-e) and electron-phonon (e-ph) scattering. We employ the description by a master equation for the electronic occupation numbers to analyze the relative importance of both scattering mechanisms. The electronic and phononic band structures, as well as the matrix elements for electron-phonon coupling within deformation potential theory were obtained from density functional calculations, thus taking into account quantum confinement effects. For the relaxation dynamics, the contribution of impact ionization processes to the lifetime is estimated from the imaginary part of the electronic self-energy calculated in the GW approximation. By numerically solving rate equations for the occupations of the Pb-derived electronic states coupled to a phononic heat bath, we are able to follow the distribution of the electronic excitation energy to the various modes of Pb lattice vibrations. While e-e scattering is the dominant relaxation mechanism, we demonstrate that the e-ph scattering is highly phonon-mode-specific, with a large contribution from surface phonons. At electron energies of about 0.3 eV above the Fermi surface, a 'phonon bottleneck' characteristic of relaxation in nanostructures with well-separated electronic states is observed. The time scales extracted from the simulations are compared to data from pump-probe experiments using time-resolved two-photon photoemission. I. INTRODUCTION The thermalization of hot carriers in metals after optical excitation is accomplished both by the Coulomb scattering among the carriers (electron-electron (e-e) interactions) and by the scattering of electrons and holes by lattice vibrations (electron-phonon (e-ph) interaction). In a well-established picture 1,2 , the relaxation can be understood as a two-step process: at early times (t < 0.3 ps), e-e scattering dominates and brings the electrons to a thermal (or possibly non-thermal) distribution. At later times (> 0.3 ps) the e-ph interaction establishes equilibrium between the electronic distribution and the lattice temperature. In this second stage, the high density of excited carriers close to the Fermi energy (within an energy interval corresponding to few phonon quanta) is thought to be responsible for most of the energy flow between the electronic and the phononic system. If so, the e-ph coupling inferred from thermalization experiments should relate directly to the microscopic e-ph coupling constant that governs electric resistivity or the superconducting transition temperature 3 . In this prevailing view, the role of e-ph interactions already in the early stages of relaxation is usually ignored. However, this simple picture is questioned by studies, both experimental and theoretical 4,5 , suggesting overlapping timescales of e-e and e-ph-driven thermalization. Moreover, there is little knowledge how the electrons far above the Fermi level (several tenth of eV) interact with the phonons. For instance, long-lived population of such states, e.g. at the Pb-covered Si(111) surface, has been observed in photoemission experiments 6 . 
The situation at high energies is in contrast to the e-ph interaction in close vicinity to the Fermi surface, which is crucial for a variety of physical phenomena such as electrical resistivity or supercon-ductivity induced by electron-phonon coupling in thin films 7 , and is quite well explored utilizing the concept of the Eliashberg function (for an overview, see Ref. 8). In conclusion, there is a need for more studies of the eph interaction at energies further away from the Fermi energy. In this paper, we attempt to obtain a better understanding of the relative importance of e-e and e-ph interaction in highly excited states of a metal and their respective contributions to the early stage of relaxation. To introduce our approach, we have chosen thin multilayer Pb films on Si(111). The fact that this materials system shows a highly structured electronic density of states due to confinement effects 9,10 has been a great advantage for analyzing the energy-dependent lifetime of the excited electrons using time-resolved pumpprobe spectroscopy. 11 The experimental results were rationalized in Ref. 11 in terms of e-e interaction only, and it was concluded that the electronic lifetime closely follows the behavior expected from Landau's theory of Fermi liquids. 12, 13 Yet, a contribution of e-ph scattering to the lifetime cannot be excluded completely based on the achieved level of agreement between experiment and theory. Therefore, we aim at a detailled analysis of the role of e-ph scattering for the features observed in photoemission. Since ample experimental and computational data are available for the Pb/Si(111) films, we consider this system a good test case for quantitative studies of electronic relaxation dynamics. In a previous paper 14 by us, we have worked out a realistic atomistic description for multilayer Pb films on Si(111) and have carried out first-principles calculations of the electronic and phononic band structure and of e-ph coupling in electronic states far away from the Fermi level. While the e-ph interaction in bulk solids has become accessible to first-principles calculations by using density functional perturbation theory together with Wannier interpolation methods to enhance the number of reciprocal-space sampling points [15][16][17][18] , thin films on a substrate are still difficult to treat on a microscopic level because the adequate supercell typically contains tens to hundreds of atoms and computational costs are high. For the Pb films on Si(111), for instance, the complex phase diagram 19,20 results in various reconstructions requiring large supercells for their description 21 . In the present work, we constructed a √ 3 × √ 3 unit cell of Si(111) matched with a (2 × 2) unit cell of Pb(111) to describe the atomic structure consisting of 40 Pb and 30 Si atoms. 14 While the two-dimensional Brillouin zone of reconstructed surface plus interface is smaller than the Brillouin zone of a bulk material, the supercell contains a large number of bands, both in the electronic and phononic spectra. Therefore, a thoughtful selection of bands will be required to arrive at a tractable model for e-ph coupling. The approach via density functional perturbation theory and the calculation of the Eliashberg function would be too cumbersome for large supercells. In this paper, building upon the knowledge of our previous work 14 , we elaborate on the consequences of these microscopic data for the e-ph scattering rate using a kinetic master equation. 
The detailed modeling of e-ph scattering is combined with a description of the e-e interaction at the level of Fermi liquid theory. This combination allows us to simulate the temporal evolution of electronic populations on the relevant scales and to make contact with experimental observations. II. THEORY The general problem of an excited electronic system coupled to lattice degrees of freedom can be approached from various perspectives. If one is satisfied with a classical description of the atomic positions and velocities and their dynamics can be described in the trajectory approximation, carrying out non-adiabatic molecular dynamics simulations (e.g. with the methodology described in 22,23 ) is the method of choice. As an advantage, this approach can handle large deviations of the atomic positions from their ground state, and the forces acting on the atoms are calculated directly within the first-principles electronic structure framework. Thus, it is suitable for systems with very strong and non-linear electron-phonon coupling, as encountered e.g. in two-dimensional materials 24 . In this work, we emphasize the quantum nature of the phonons, while the weak coupling of the electrons to phonons and to external fields can be treated in first-order perturbation theory. Casting the problem into the form of a model Hamiltonian, it reads H = H 0 + H int with H 0 being the ground-state Hamiltonian with phonons described in the harmonic approximation, The creation and annihilation operators c nk , c † nk and b IQ , b † IQ obey the usual anticommutator relations for fermions and commutator relations for bosons, respectively. The first, integer index n specifies the band, while the second index k describes the crystal momentum in the form of a two-dimensional vector within the Brillouin zone of a thin slab. Capital letters are used to index phonon modes, whereas small letters refer to electronic bands. In contrast to the molecular dynamics approach mentioned at the beginning of this paragraph, the full quantum treatment is best suited when the coupling terms in the interaction Hamiltonian H int are weak, and the model Hamiltonian H 0 provides already a good starting point for the coupled dynamics. Utilizing model Hamiltonians for describing electronic dynamics is a well-established technique in the field of ultrafast soild-state optics, see e.g. 25,26 . In semiconductor bulk materials and quantum wells, the dispersion ε nk entering the Hamiltonian can be approximated as being quadratic (and sometimes as being linear, e.g. for graphene 27 ), and a full solution of the relaxation dynamics for various scattering mechanisms has been achieved in these cases. Here, we are interested in a realistic description of the ground state of a particular system. For this reason, all the band energies and phonon frequencies entering H 0 are determined by density-functional theory calculations. The VASP code 28 with the settings described in Ref. 14 has been employed for this purpose. The electronic single-particle energies ε nk are taken to be equal to the Kohn-Sham eigenvalues obtained with the GGA-PBE exchange-correlation functional 29 . The phonon frequencies Ω IQ and the corresponding eigenmodes are obtained from DFT calculations using the method of finite atomic displacements within a supercell, as detailed in Ref. 14 and 30. 
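The ground-state Hamiltonian H_0 introduced above presumably has the standard form of non-interacting Bloch electrons plus harmonic phonons; the sketch below assumes this form and may differ in notation from eq. (1).

```latex
% Assumed standard form of the unperturbed Hamiltonian H_0
% (cf. the operator definitions given in the text).
H_0 \;=\; \sum_{n\mathbf{k}} \varepsilon_{n\mathbf{k}}\,
          \hat c^{\dagger}_{n\mathbf{k}} \hat c_{n\mathbf{k}}
      \;+\; \sum_{I\mathbf{Q}} \hbar\Omega_{I\mathbf{Q}}
          \Big( \hat b^{\dagger}_{I\mathbf{Q}} \hat b_{I\mathbf{Q}} + \tfrac12 \Big).
```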
In case of the Pb/Si(111) films, such a detailed first-principles description is considered necessary in view of the experimental findings: The two-photon photoemission spectra show peaks at certain intermediate-state energies of the electrons that are referred to as quantum well states (QWS). These are energies where the electronic density of states is high and/or where the excited electrons are long-lived. For a correct prediction of the energetic position of the QWS, the (1 × 1) periodicity of a free-standing Pb(111) films is not sufficiently accurate. 31 . It is required to take the larger ( √ 3 × √ 3) periodicity enforced by the Si(111) substrate into account. As major achievement of the first-principles calculations in Ref. 14, we were able to reproduce quantitatively the dependence of the energetic position of the QWS on the number of Pb layers in the film, as well as the very small dispersion of the occupied QWS in the films with an odd number of Pb layers. On this basis, the present work is addressing the role of the elec-tronic lifetime in the QWS for the experimentally detected peaks. The interaction Hamiltonian H int contains any further interactions required to describe the problem at hand. These interactions could e.g. be the electron-electron interactions beyond the effective mean-field description of density functional theory (see below). Moreover, the interaction with an external electromagnetic field, e.g. of a laser pulse, can be considered as part of H int . Most importantly for the present study, H int contains a term H ep describing in linear order the coupling of the electrons to quantized phonons, The term in parentheses is linear in each phonon coordinate. In principle it is possible to describe the quantum nonequilibrium dynamics under the action of H exactly by a density matrix. Schemes for evolving the density matrix in time have been worked out 32 , and applications to surfaces and low-dimensional systems can be found in the literature. 27,[33][34][35] However, since the system we want to describe is quite complex, we resort to a simpler description of the dynamics which is appropriate if the coherent excitation by an optical pulse and the subsequent relaxation take place on separable time scales. While quantum coherence is important during the interaction of the system with the light field, electron-electron scattering usually leads to a fast loss of coherence. 36 For Pb films, an example of calculations taking the effects of coherence into account can be found in 6 . In the limit of vanishing coherence, only the diagonal elements of the density matrix, the populations f nk of states indexed by n and the wave vector k, are important. For the investigation of the ultrafast population dynamics in our system, the quantities which we have to look at are the electronic occupation numbers f nk = ĉ † nkĉ nk and the phononic occupation numbers n IQ = b † IQb IQ . For the latter, we employ a bath approximation In the numerical calculations presented below, we will use different baths, one for each high-lying optical mode of the Pb film (Ω IQ ≥ 2 THz) with temperature T I , and a common one for all low-frequency phonons of the Pb film (Ω IQ < 2 THz) with temperature T 0 . More details are given in the appendix. Using the Markov approximation and the second-order Born approximation for the transitions, it is possible to derive from the density-matrix equations a set of coupled differential equations that can be cast into the form of a master equation (cf. Ref. 37). 
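The linear coupling term H_ep, the bath approximation for the phonon occupations, and the master equation for the populations are expected to have the following structure; the coupling constants g and the precise index conventions are assumptions made for illustration.

```latex
% Sketch of the expected structure of H_ep (eq. (2)), the bath approximation,
% and the gain-loss master equation (eq. (3)); g denotes the e-ph coupling
% matrix element whose explicit form enters via eq. (8).
\begin{align*}
  H_{\mathrm{ep}} &= \sum_{m n I}\sum_{\mathbf{k}\mathbf{Q}}
      g^{\,I\mathbf{Q}}_{m\mathbf{k}+\mathbf{Q},\,n\mathbf{k}}\;
      \hat c^{\dagger}_{m\mathbf{k}+\mathbf{Q}} \hat c_{n\mathbf{k}}
      \big( \hat b_{I\mathbf{Q}} + \hat b^{\dagger}_{I,-\mathbf{Q}} \big), \\
  \langle \hat n_{I\mathbf{Q}} \rangle &\approx
      n_{I\mathbf{Q}}(T_I) =
      \big[ \exp\!\big(\hbar\Omega_{I\mathbf{Q}}/k_B T_I\big) - 1 \big]^{-1}, \\
  \frac{\mathrm{d} f_{n\mathbf{k}}}{\mathrm{d} t} &=
      \big(1 - f_{n\mathbf{k}}\big)\,\Gamma^{\mathrm{in}}_{n\mathbf{k}}
      \;-\; f_{n\mathbf{k}}\,\Gamma^{\mathrm{out}}_{n\mathbf{k}} .
\end{align*}
```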
The expressions for the rates, both for scattering into and out of the state nk, are made up of an electronic and a phononic contribution each, i.e., Γ_nk = Γ_nk^(ee) + Γ_nk^(ep). This holds for both Γ^in and Γ^out, which each consist of two terms owing to electron-electron scattering and electron-phonon scattering, cf. Eqs. (4) and (5). Exploiting conservation of crystal momentum parallel to the film, k − k' = ±Q, with the sign depending on phonon emission or absorption, the electron-phonon scattering rates originating from the Hamiltonian (2) can be expressed according to Fermi's golden rule as given in Eq. (6). These expressions include processes where the electron absorbs a phonon as well as phonon emission processes. This is denoted by the ± signs in the equations, where the minus sign stands for absorption, and the plus sign for both spontaneous and induced emission, proportional to n_IQ + 1. It is our goal to calculate the contribution of e-ph scattering to the lifetime of specific quantum well states (QWS) in Pb/Si(111) films. In a very simple picture, the conduction band electrons of Pb with crystal momentum normal to the surface or interface of the Pb(111) films are confined, similar to the quantum-mechanical particle-in-a-box problem. In an atomistic picture, these conduction band states are derived from the 6p_z orbitals of the Pb atoms and their wavefunctions extend both above the surface and into the Si(111) substrate, see Ref. 14. As described in the experimental paper 11, there are significant differences between the lifetimes in films with an even and an odd number of Pb layers. Therefore, we study two representative systems, a Pb film with 4 monolayers (ML) and one with 5 ML on Si(111). Side views of the corresponding slabs are depicted in Fig. 1. Motivated by the experimental focus on excited electrons in unoccupied bands, we include e-ph scattering rates for the electrons excited into QWS. Since the population of the valence bands was not analyzed in these experiments, the hole states are treated in less detail, and only Coulomb scattering, as described in Section III B, will be considered among the holes. To solve the rate equations, we need explicit expressions for the quantities D_{nk,IQ}^{mk−Q} and n_IQ(T) in eq. (6) entering the decay rates Γ^in,(ep) and Γ^out,(ep). Both quantities depend on the phonon branches Ω_IQ.
[Caption of Fig. 1: The bands with large Pb 6p_z character are highlighted by the red symbols and thick red lines. The dashed ellipses mark the regions of quantum well states (QWS) whose lifetime under e-ph scattering is shown in Fig. 2(a). For 5 ML Pb, the occupied quantum well resonance at ∼ −0.25 eV has been included in the plot. The electronic eigenvalues are represented on a 32 × 32 Monkhorst-Pack grid. The plots are shown along a diagonal cut K' - Γ - K through the Brillouin zone of the (√3 × √3) supercell.]
Of all phonon modes Ω_IQ of the supercell obtained with our first-principles approach 30, those with Pb character are taken into account, see Fig. To keep the number of individual scattering processes at a tractable level, we also restrict ourselves to a subspace of the electronic bands: since we are interested in the electron-phonon coupling in QWS in Pb, only those electronic bands that have a significant overlap with the Pb 6p_z orbitals, as indicated by the VASP calculation, are retained in the Hamiltonian H_ep in eq. (2). The electronic states belonging to a specific Pb-derived band are grouped together into subsets indexed by α(k) ∈ {n} of all band indices n.
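Before specifying the retained bands, the structure of a golden-rule rate of the type of eq. (6) can be illustrated numerically. The sketch below assumes a single dispersionless phonon branch, a Gaussian broadening of the energy-conserving δ-function, and hypothetical arrays of final-state energies and squared coupling strengths; Pauli blocking of the final states is omitted for brevity.

```python
import numpy as np

HBAR = 6.582119569e-16   # eV*s
KB   = 8.617333262e-5    # eV/K

def bose(omega_eV, T):
    """Thermal occupation n_IQ(T) of a phonon mode with energy omega_eV at temperature T."""
    return 1.0 / np.expm1(omega_eV / (KB * T))

def gamma_out_ep(eps_i, eps_final, g2, omega_eV, T, sigma=0.01):
    """
    Golden-rule out-scattering rate (1/s) of an electron at energy eps_i (eV)
    into the final-state energies eps_final (array, eV), with squared e-ph
    coupling strengths g2 (array, eV^2) and a single dispersionless phonon
    energy omega_eV.  The energy-conserving delta function is broadened by a
    Gaussian of width sigma (eV).
    """
    n = bose(omega_eV, T)

    def delta(x):
        return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    emission   = (n + 1.0) * delta(eps_i - eps_final - omega_eV)   # spontaneous + induced emission
    absorption = n * delta(eps_i - eps_final + omega_eV)           # phonon absorption
    return (2.0 * np.pi / HBAR) * np.sum(g2 * (emission + absorption))
```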
To be specific, we used the five (six) lowest-lying conduction bands with appreciable Pb 6p_z character for the 4 ML and 5 ML Pb film, respectively, i.e. α = 1, . . . , 5 (6). These 5 (6) bands are displayed in Fig. 1 by the thick red lines and symbols, together with the full band structure (dashed lines) that is also shown (over a wider range of energies and wavevectors) in Figs. 3 and 4 of Ref. 14. Due to the use of a supercell and backfolding of the bands, these band structures are different from the band structure of Pb(111)(1 × 1) slabs that had been used previously 11 in the experimental data analysis. For evaluating the electron-phonon scattering rates, eq. (6), we use techniques based on deformation potential theory that allow us to obtain Γ^out,(ep)_nk from first-principles calculations of the phonon spectrum, the electronic wavefunctions, and Kohn-Sham eigenvalues, with only a few approximations. As the most significant one, we neglect the Q-dependence of the deformation potential, while keeping its dependence on band index n and crystal momentum k. This is a good approximation for optical phonons and corresponds to keeping the leading (constant) term in an expansion in powers of Q, cf. Ref. 38. In the energy-conserving δ-function in eq. (6), we retain the finite phonon energy Ω_IQ ≈ Ω_I0, but neglect the dispersion of the optical phonon branches. This is justified since the dispersion remains small (cf. Fig. 6 in Ref. 14) due to the large real-space unit cell, and hence small Brillouin zone, of the Pb films. Within these approximations (see appendix for the derivation), the matrix element for electron-phonon scattering in eq. (6) can be replaced by the simplified expression given in eq. (8), where A_supercell is the area of the (√3 × √3) supercell used to model the Pb/Si(111) film, and M_Pb and v^atom_Pb are the atomic mass and atomic volume of Pb. D_nk,I is the deformation potential of the n-th electronic band under the phonon mode I. The D_nk,I have been obtained from DFT calculations 14 by evaluating the electronic eigenvalue shift under finite displacements of the atomic positions given by the corresponding mode eigenvector of the phonon. The two δ-symbols reflect conservation of crystal momentum in e-ph scattering, and the projection of H_ep onto the finite electronic subspace, as described above. The matrix elements I^{mk'}_{nk} account for the difference between intra-band (n = m) and inter-band scattering (n ≠ m), and for the dependence on both the initial and final electron momenta k and k' = k − Q. They are obtained from the overlap of the corresponding DFT wave functions. More details of the derivation are given in the appendix. In summary, this approach allows us to arrive at a simplified and computationally tractable, yet parameter-free description of e-ph scattering even for such a complex system as an overlayer on a substrate.
III. RESULTS
In metals, e-ph and e-e scattering are closely intertwined, since the vast majority of phonons is emitted by secondary electrons and holes rather than by the charge carriers initially excited by the light pulse. This is because e-e scattering quickly generates an avalanche of secondary electron-hole pairs with small energies around the Fermi level. Since these secondary electrons and holes are produced with high density and their energy still exceeds typical phonon energies, they play a major role in determining the rate at which the energy is dissipated from the electronic system into the lattice. Nevertheless, we start our discussion by considering the contribution of both e-ph and e-e scattering separately.
A. Relaxation due to e-ph scattering
First we investigate how the population of a QWS decays under the sole effect of e-ph scattering. For this purpose, we initially populate a single QWS at the Γ-point and let the population evolve according to the master equation (3), using only the rates Γ^in,(ep) and Γ^out,(ep). The results are shown in Fig. 2(a). At comparable energies of the QWS of ∼ 0.5 eV, the decay is much faster in the 4 ML than in the 5 ML Pb film. This is to be expected from the different size of the deformation potentials in the two films reported in Ref. 14. The relaxation rate increases with the temperature of the phonon heat bath, which is indicative of the role of stimulated emission of phonons. By decreasing the phonon temperature from 400 K to 100 K, the lifetime of the QWS in the 4 ML film increases from 1.3 to 2.7 ps. For the 5 ML film, the lifetimes fall between 13 and 37 ps.
B. Relaxation due to e-e scattering
The lifetime of hot electrons due to e-e scattering can be described by a self-energy formalism, as discussed in Ref. 39. The loss term Γ^out,(ee)_nk in eq. (4) is given by Γ^out,(ee)_nk = −2 Im Σ(ε_nk)/ħ. The self-energy Σ is obtained from a GW calculation of bulk Pb. Here, G stands for the electronic Green function, and W for the screened Coulomb interaction. These quantities are calculated from the DFT wave functions and Kohn-Sham eigenvalues using the built-in capabilities of VASP 40. To be specific, an 11 × 11 × 11 k-point mesh is used, and the denominator in G is evaluated with a small shift of the transition energy away from the real axis, η = 0.08 eV, much smaller than typical values used in GW calculations of band structures. The result obtained for −2 Im Σ in the conduction band is fitted to the α(ε_nk − E_F)² dependence expected from Landau's theory of the Fermi liquid. Our result α = 0.022 (eV)⁻¹ is in excellent agreement with earlier GW calculations of bulk Pb 41. Finally, we obtain the expression for Γ^out,(ee)_nk plotted in Fig. 2(b). In the relaxation of highly excited electrons and holes, the energy is dissipated to secondary electron-hole pairs. This process is very efficient in metals, since, in contrast to semiconductors, there is no energy gap preventing the generation of secondary particles. These effects are included in the scattering-in term Γ^in,(ee)_nk of eq. (5). Although this term can be obtained from the master equation 39 as well, we choose for computational convenience a simpler treatment in our present study. The gain term is assumed to factorize into an energy-dependent and a time-dependent factor, Γ^in,(ee)_nk = Φ(x)N(t). The distribution function Φ describes the secondary electrons and holes produced via impact ionization by a relaxing high-energy electron. Following the work of Baranov and Kabanov 5, we use for Φ a stationary solution of the Boltzmann equation with a Coulomb scattering kernel, where the electronic temperature T_el = 650 K was chosen in accordance with the energy of E_dep = 0.1 eV = (π²/6) g(E_F)(k_B T_el)² deposited by the laser and the electronic heat capacity of Pb. The time-dependent factor N(t) for the creation of secondary electrons and holes is determined by energy conservation in the e-e scattering. Our simplified treatment assumes that the initial electron in state nk ends up at the Fermi energy, transferring all its initial energy to secondary electron-hole pairs. This motivates the choice of N(t), in which g(ε) is the electronic density of states.
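The Fermi-liquid loss rate quoted in this subsection, Γ^out,(ee) = α(ε_nk − E_F)²/ħ with α = 0.022 eV⁻¹, translates directly into energy-dependent lifetimes, as in the following sketch; the three sample energies correspond to the peak positions discussed below, and the printed values are only indicative.

```python
HBAR_FS = 0.6582119569   # hbar in eV*fs
ALPHA   = 0.022          # 1/eV, from the GW fit for bulk Pb quoted in the text

def gamma_ee_out(eps_minus_ef):
    """Fermi-liquid out-scattering rate (1/fs) at energy eps_minus_ef (eV) above E_F."""
    return ALPHA * eps_minus_ef**2 / HBAR_FS

def tau_ee(eps_minus_ef):
    """Corresponding e-e lifetime in fs."""
    return 1.0 / gamma_ee_out(eps_minus_ef)

for e in (0.33, 0.58, 1.21):
    print(f"eps - E_F = {e:4.2f} eV  ->  tau_ee ~ {tau_ee(e):6.0f} fs")
```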
Both g(ε) and N (t) are evaluated numerically using as input the DFT band structure of slab models for Pb/Si(111)( √ 3 × √ 3) multilayer films. C. Competition between e-e and e-ph scattering In this Section, we compare simulation results for Pb films of 4 ML and 5 ML thickness as representatives of films with an even and odd number of layers studied by optical pump-probe experiments in Ref. 11. While these experiments measure the total probability for two-photon photoemission, our simulations model the population of the intermediate electronic states that are reached by the electron after applying the pump pulse and subsequent relaxation. The second step of the two-photon photoemission, which kicks the electron into the vacuum, is not modeled. Provided that the probability of ionization by the probe pulse is a smooth function of energy, the measured yield can be considered approximately proportional to the population of the intermediate state. The initial distribution is chosen such that it describes the response of our specific system, multilayers of Pb on Si(111), to a short optical pulse with frequency centered around hν = 1.9 eV. This corresponds to the photon energy of the pump laser used in the experiment 11 . The polarization of the electric field, denoted by the unit vector e, is chosen parallel to the Pb film surface. Before the laser pulse arrives, the system is described by a Fermi-Dirac distribution with low temperature, T el → 0; hence f nk (t < 0) = Θ(E F − ε nk ). To be specific, we evaluate dipole matrix elements 42 In the numerical evaluation 43 , a broadening of the δfunction by 0.02 eV is used. The proportionality factor A 0 is chosen such that the energy of excited electrons and holes deposited in the Pb films amounts to ∼ 0.1 eV per supercell area, equivalent to 3.7 µJ/cm 2 . By solving the master eq. (3) numerically, we are able to follow the relaxation of the excited electrons in real time. We define an energy and time dependent population of the intermediate state Fig. 3 shows on a logarithmic scale the energy distribution P (ε, t j ) of the excited electrons for various times t j after the excitation. For plotting the results, the δfunction in eq. (11) has been replaced by a rectangle with a width of 0.06 eV. We start with a discussion of the initial distribution, shown by the thick black line, calculated according to the transition dipole strength, eq. (10). For the 4 ML Pb film ( Fig. 3(a)), the distribution is highly structured with a sharp maximum at 0.58 eV and a broad peak around 1.21 eV. For the 5 ML Pb film (thick black line in Fig. 3(b)), only the peak at 1.21 eV (and possibly a short-lived peak at higher energies) remain visible, while the low-energy peak is much less pronounced. These results are in excellent agreement with the experimental observations of Ref. 11. In this work, a high-energy peak in the range of 1.1 to 1.2 eV was observed for films with an odd number of Pb monolayers, whereas the peak at 0.6 eV was dominant in Pb films with an even number of layers. Note that, due to experimental limitations of the probe laser energy, excited electrons with energies lower than ∼ 0.5 eV could not be detected in Ref. 11. Next, we analyze the relaxation of the energy distributions for later times. From Fig. 3 it is obvious that all distributions develop a low-energy part corresponding a quasi-thermal distribution of secondary electrons, showing up as an exponentially decreasing function of energy. 
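The energy-resolved population P(ε, t) of eq. (11), with the δ-function replaced by a 0.06 eV wide rectangle as described above, can be accumulated as in the following sketch; treating P as the occupation change relative to the initial Fermi-Dirac step is an assumption made for illustration.

```python
import numpy as np

def population_spectrum(eps_nk, f_nk, f0_nk, energies, width=0.06):
    """
    Energy-resolved excited-electron population P(eps, t): the occupation change
    f_nk - f0_nk is summed over all states whose energy lies within a rectangle
    of the given width (eV) around each sampling energy, mimicking the broadened
    delta function used for plotting.  All inputs are flat arrays over (n, k).
    """
    excess = f_nk - f0_nk
    P = np.empty_like(energies, dtype=float)
    for i, e in enumerate(energies):
        mask = np.abs(eps_nk - e) < 0.5 * width
        P[i] = excess[mask].sum()
    return P
```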
The high-energy part of the initial spectrum decays mainly due to e-e scattering, thereby creating secondary electrons via impact ionization. Therefore, the highenergy tails decay quickly, simultaneously accompanied by an increasing weight of the secondary-electron distribution. Now turning to longer time scales, we observe that the low-energy part in the 4 ML Pb film and the population in the energy window between 0.4 and 0.6 eV decay more slowly. At the same time, a broad shoulder on top of the secondary electron distribution builds up at ∼ 0.3 eV (marked by the arrow in Fig. 3(a) and magnified in the inset). Both phenomena can be traced back to the effect of e-ph scattering: Excited electrons of the initial distribution at energies of 0.4 to 0.6 eV 'glide down' the electronic band structure (cf. Fig. 1(a) ), thereby emitting phonons. The relatively slow rate at which this occurs leads to a 'phonon bottleneck', i.e., to the buildup of the shoulder centered at ∼ 0.3 eV. This effect, related to the discreteness of electronic states, is quite common for the e-ph relaxation in nanostructures, and has been observed e.g. in quantum dots 44 as well as in two-dimensional layered semiconductors 45 . While there is still continuous energy dissipation to the crystal lattice by very low-energy secondary electrons, the 'phonon bottleneck' affects electrons at higher energies and results in additional, but delayed production of phonons emitted by these electrons 'gliding down' the conduction bands. A similar, but weaker 'phonon bottleneck' can be observed on top of the secondary electron distribution in the simulations for the 5 ML Pb film in Fig. 3(b), marked by the arrow and magnified in the inset. In Fig. 4, we analyze the time scales associated with the electronic population decay at selected energies where peaks had been found in the populations in Fig. 3. The lines without symbols in Fig. 4 show the decay according to the full relaxation dynamics, including both e-e and e-ph scattering. All populations show a nearly exponential decay, albeit with different decay rates. The lifetimes for the population maxima at 1.21 eV and 0.58 eV, extracted via exponential fits, are 21 fs and 101 fs, respectively. For the lowest energy of 0.46 eV, we find an initial rise of the population due to scattering-in from electrons at higher energies, followed by a population decay after about 30 fs, corresponding to a lifetime of 183 fs. The rather broad (in time) maximum of the 0.46 eV curve results from a compensation of the rates of incoming electrons from higher energies, mostly originating from e-e scattering at these high energies, and losses due to both e-e and e-ph scattering, the latter one gaining in relative importance as we go to lower energies. Interestingly, the described broad maximum of the population evolution at low energies is also seen in the experimental data 11 . In the computer simulation, we can deliberately turn off the e-e scattering channel after a very short initial time interval. The e-e scattering was permitted only in the very early times, somewhat arbitrarily chosen to be less than 6 fs, since some mechanism is required to establish a realistic smoothened electron distribution including an appropriate low-energy secondary-electron part. The result of these runs are displayed in Fig. 4 by the lines with circular symbols. If the relaxation after 6 fs proceeds by e-ph scattering only, the population at 0.46 eV initially decays on a time scale of 350 fs (green symbols in Fig. 
4(a)), which can be taken as an estimate of the e-ph scattering rate in this energy range. This initial decay is followed by a much slower decay over several picoseconds. At the higher electron energies of 0.58, 0.81 and 1.21 eV, the scattering-in events of electrons from higher energies are equally probable or even more frequent than the scattering-out events, and hence a net contribution of e-ph scattering to the decay is not detectable on the time scale shown in Fig. 4. A similar analysis has been carried out for the 5 ML Pb film, see Fig. 4(b). For the peak energies at 1.21, 0.81 and 0.52 eV, overall lifetimes of 21, 47 and 126 fs are obtained from exponential fits to the full relaxation dynamics. Again, it is possible to estimate the relative importance of e-ph scattering by watching the population decay after the e-e scattering has been 'turned off'. From the slopes of the curves marked by the circular symbols in Fig. 4(b), characteristic times of 24 ps and 4.1 ps are obtained for 0.81 and 0.52 eV electron energy, respectively. At the highest electron energy of 1.21 eV, again we find that the contribution of e-ph scattering is too small to be detectable on the time scale shown in Fig. 4. From this analysis, we learn that the contribution of eph scattering to the total lifetime of the peaks at energies larger than 0.5 eV is much smaller compared to the e-e contribution. This finding confirms the original analysis of the experimental data by Kirchmann et al. 11 where e-ph scattering had been disregarded. Analyzing the experimental data for many Pb film thicknesses, they concluded that the low-energy peak is clearly observed in films with an even number of atomic layers and has a lifetime of 115 ± 10 fs, while the high-energy peak is visible only in the odd-layer films and has an energydependent lifetime which turns out to be 10 ± 5 fs for 5 ML Pb. Our simulation results of 101 fs and 21 fs are in reasonable agreement with their experimental findings, in particular if it is taken into account that the e-e scattering rate is very sensitive to the precise energetic position of the peak. The lifetimes extracted from our simulations are summarized by the circular symbols in Fig. 2(b). Despite the additional decay channel of e-ph scattering being taken into account, the simulated lifetimes lie above the lifetime of isolated electrons due to e-e scattering alone. This is because the simulations describe a realistic distribution of excited electrons, and the incoming flux from higher-lying electronic states effectively 'conserves' the population of the lower lying states over longer times. D. Excitation of lattice vibrations Although the contribution of e-ph scattering to the lifetime of the quantum well states was found to be small, the low-energy states populated by the secondary electrons couple significantly to the lattice vibrations. At these low energies, the e-ph scattering as loss mechanism even dominates over e-e scattering, since the lifetime due to e-e interactions rises above 300 fs for electrons below 0.33 eV according to eq. (9). With the help of the simulations, it is possible to follow the energy transferred from the electrons to each of the phonon modes separately. Fig. 5 shows the increase in time of the excess vibrational energy (in addition to the thermal energy corresponding to the initial substrate temperature) in the various Pb vibrational modes. 
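The mode-resolved excess vibrational energy of Fig. 5 can be accumulated during the time integration as sketched below; the per-mode net emission rates are assumed to be available from the e-ph terms of the master equation at each time step, and the numbers in the usage example are hypothetical.

```python
import numpy as np

def accumulate_mode_energy(excess_energy, omega_eV, net_emission_rate, dt_fs):
    """
    Update the excess vibrational energy per Pb phonon mode (eV).
    omega_eV[I]          : phonon quantum of mode I (eV)
    net_emission_rate[I] : phonons of mode I emitted minus absorbed per fs,
                           summed over all electronic states at this time step
    """
    excess_energy += omega_eV * net_emission_rate * dt_fs
    return excess_energy

# hypothetical usage for three Pb modes over one 1 fs step
dE = np.zeros(3)
omega = np.array([0.0050, 0.0084, 0.0093])   # eV (roughly 1.2, 2.0, 2.25 THz)
rate  = np.array([0.002, 0.010, 0.001])      # net phonons emitted per fs
dE = accumulate_mode_energy(dE, omega, rate, dt_fs=1.0)
print(dE)
```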
Summation over all modes yields a total energy transfer between the electronic and the lattice degrees of freedom of 8.2 meV/ps for the 4 ML Pb film and 0.79 meV/ps for the 5 ML Pb film, respectively. The smallness of these quantities (a few meV compared to 0.1 eV of electronic energy in the film) gives an a posteriori justification for the perturbative expression used for the interaction Hamiltonian H_int. It is seen from Fig. 5 that the energy transfer is highly mode-selective 46. One particular surface phonon mode receives a major part of the energy. The dominance of this single mode depends on film thickness; in the 4 ML film it is clearly more pronounced than in the 5 ML film, where several modes participate in the energy uptake. In the 4 ML Pb film, this is a mode with frequency 2.26 THz (labeled 47 in our previous publication 14), while in the 5 ML Pb film it is a phonon mode at 2.03 THz. In experiments using a high laser fluence, the strong coupling to a specific mode has indeed been observed; in the 5 ML Pb film, it was detected experimentally by a periodic shift of the quantum well energy with a frequency of 2.0 ± 0.1 THz 47, which is in excellent agreement with the frequency of 2.03 THz identified in our simulation. For both film thicknesses, the modes at frequencies below 2 THz, which are similar to phonon modes in bulk Pb, receive only a much smaller amount of energy on average. As seen from Fig. 5, the increase of the energy over long time scales, e.g. over 4 ps, is sub-linear, and saturation is expected at even longer times. We attribute this slow response to the rather long time required by the electrons to relax from high energies down to E_F via multiple phonon emission processes. Due to the 'phonon bottleneck' described above, this takes longer than expected from Allen's formula 3. This formula requires as sole input the electron-phonon coupling constant of the bulk material, which can e.g. be determined from ultrafast reflectivity measurements 48. Finally, we comment on the possibility of directly observing the excitation of the lattice experimentally. The excitation of low-lying Pb modes results in irregular displacements of the atomic positions on medium to large length scales that can be followed in the Debye-Waller factor of a time-resolved electron diffraction experiment. Preliminary experimental data for Pb films 49 indicate that the low-energy phonons are indeed getting excited on the time scale of a few picoseconds. Interestingly, recent diffraction experiments were able to demonstrate the mode-selective energy transfer to phonons even for bulk materials such as aluminium 50 or nickel 51. Apparently, phonon excitation by strongly excited charge carriers opens up the exploration of a new class of nonequilibrium phenomena, not only in bulk metals, but also in semiconductors 52 and nanostructures. Due to the widely different timescales of e-ph interaction and phonon-phonon interaction, the non-equilibrium distribution of phonons created by the hot carriers can persist over a time span of several picoseconds. Thus, non-thermal phonon distributions could be a more widespread phenomenon than previously thought, and further research along these lines could be fruitful.
IV. CONCLUSION
Simulations of electronic relaxation have been performed for a realistic system, metallic multilayer Pb films on Si(111), with the help of a parameter-free approach based on density functional theory.
Electron-electron (e-e) and electron-phonon (e-ph) scattering were both included in the master equation. Not surprisingly, e-e scattering was found to dominate over e-ph scattering for short times and for highly excited electrons more than 0.5 eV above the Fermi level. Taking the fastest e-ph relaxation found in the present simulations, with a characteristic time of 350 fs, as a marker for the cross-over between e-e and e-ph scattering, we conclude that e-ph scattering significantly contributes to the relaxation of electrons in Pb at energies below 0.3 eV. Indeed, the simulations show a population pile-up around 0.3 eV due to the cross-over of the e-e and e-ph scattering time scales at this energy. After 300 fs, it is fair to describe the electronic population by a thermal distribution, however only up to an excess population of electrons getting stuck in the 'phonon bottleneck'. Remarkably, this by no means implies that the phonon populations could be described by a temperature as well. On the contrary, the simulations show that even up to 4 ps after excitation, high-frequency surface vibrational modes are preferentially excited by strongly mode-selective phonon emission. In summary, our simulations enable us to disentangle the contributions of e-e and e-ph scattering at short times, < 0.3 ps after optical excitation. Although a first glance at the data is compatible with electronic thermalization by e-e scattering, a contribution of e-ph scattering can be observed already in this early stage, in particular at low electron energies. The phonon system requires much longer (several picoseconds) to equilibrate. Additional simulations beyond the scope of the present work are desirable to gain an improved understanding of the energy transfer between the two subsystems in the later stages of relaxation.
Appendix: Derivation of the electron-phonon matrix element
The derivation of the form of the electron-phonon matrix element, eq. (8), starts from the Hamiltonian H_ep brought to its real-space representation, eq. (A1). The physical interpretation of ΔV_IQ(r) is the additional perturbing potential in the Kohn-Sham equation due to the presence of a phonon of mode I and wave vector Q. The vector notation refers to vectorial quantities in three-dimensional (3D) space. In a very general setting, Fermi's golden rule requires us to calculate matrix elements of the form of eq. (A2), involving 3D real-space integration. We now switch to the situation of interest, a two-dimensional film with periodic boundary conditions in the x, y coordinates. Within deformation potential theory 38, the integration over the coordinates x and y in the film plane is carried out to yield a (spatially independent) shift of the Kohn-Sham eigenvalue, denoted by D_nk,IQ, multiplied by a δ-symbol of parallel momentum conservation. The bold symbols denote two-dimensional vectors in the film plane; the '+' sign refers to absorption, the '−' to emission of a phonon of wavevector Q. Technically speaking, the deformation potentials are obtained in the following way: a phonon eigenvector labeled I, as obtained from the phonopy code, is scaled with √M_Pb and then added or subtracted from the Cartesian positions of the Pb atoms in the unit cell. For the two geometries obtained in this way (called + and −), static DFT calculations are carried out, yielding Kohn-Sham energy eigenvalues ε⁺_nk,I and ε⁻_nk,I. The eigenvalue shift determines the deformation potential. Note that the (optical) deformation potential D_nk,I0 has by definition the physical unit of energy/length.
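The eigenvalue-shift construction described above amounts to a central difference with respect to the two displaced geometries; the following sketch assumes a displacement amplitude Δu as the normalization, since the exact convention tied to the √M_Pb-scaled eigenvector may differ.

```python
def deformation_potential(eps_plus, eps_minus, delta_u):
    """
    Central-difference estimate of the optical deformation potential D_{nk,I}
    (eV/Angstrom) from the Kohn-Sham eigenvalue of band n at k computed for the
    structures displaced by + and - the scaled eigenvector of phonon mode I.
    delta_u is the amplitude (Angstrom) of the imposed displacement pattern;
    this normalization is an assumption.
    """
    return (eps_plus - eps_minus) / (2.0 * delta_u)
```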
Now we return to the evaluation of the matrix element in eq. (A2). We work out the equations for general wavevectors k and k′ and will enforce momentum conservation in a later step. After squaring the matrix element to obtain the expression for the rate constant, the constant eigenvalue shift can be taken out of the brackets, and φ_nk(z) denotes the Bloch-periodic part of the full 3D wavefunction Ψ_nk(r), suitably averaged over x and y. This expression requires the evaluation of integral(s) over the remaining z coordinate, which denotes the spatial direction normal to the film in which the electrons are confined. To obtain the overall scattering rate due to one specific phonon mode I, the summation over Q in eq. (A1) needs to be carried out, or equivalently an integration over the Brillouin zone of the phonon; the expression for the rate therefore includes a sum over Q_x and Q_y. If we now assume that the deformation potential D is only weakly dependent on Q (which is reasonable for optical phonons in a large supercell), we can take it out of the sums and integrals, set Q = 0 and D_nk,I0 =: D_nk,I, and perform the integration over Q_z prior to the integration over z and z′. Motivated by this procedure, we define a quantity J_n′k′nk (cf. the analogous case for a 1D wire, rather than a 2D film, in the appendix of Ref. 53). As a consequence of the definition of J_n′k′nk in eq. (A8), it follows that the Q_z integration yields δ(z − z′), and the two factors with arguments z and z′ in the expression (A7) turn out to be complex conjugates of each other. Performing the integral over z′ together with δ(z − z′) leaves us with a single integral over z, but now with the squared moduli of the wavefunction in the integrand. In the numerical implementation, we not only integrate over z, but simultaneously perform the spatial average over the supercell by integrating over x and y and dividing the result by the area of the supercell A_supercell. For this expression to be valid, the squared wavefunctions must be normalized appropriately; in other words, Ψ must have the physical unit length^−1/2. Finally, by inserting the expression (A10) for the double integration in eq. (A7) and properly normalizing to A_supercell, we arrive at the result given in eq. (8). In practice, the squared wavefunctions |Ψ_nk(x, y, z)|² are obtained from the DFT calculations, using the PARCHG keyword of the VASP code. To reduce the sheer number of calculations, we calculate I_n′k′nk explicitly for all combinations of band indices n and n′ of the bands in the energy range [E_F, E_F + 2 eV], and for the three combinations of the two k-vectors k_x = (1/4, 0, 0) and k_y = (0, 1/4, 0). For other combinations of k-vectors, we interpolate using the angle θ between the wave vectors as the variable, defined via the scalar product cos θ = k·k′/(|k||k′|), and obtain the interpolated values. Already from inspecting the numerical values of the deformation potential D_nk,I, it becomes clear that the coupling of an excited electron to a phonon can be highly phonon-mode specific. In particular, coupling to the highest-lying Pb modes is strong. We therefore use an independent heat bath for each of the N_opt = 6 highest-lying Pb modes. For the 4 ML Pb/Si(111) film, these six phonons (surface and interface modes) of the Pb layer are in the 2.1 to 2.5 THz range (modes labelled 44, 45, 46, 47, 48 and 49 in Ref. 14). For the 5 ML Pb/Si(111) film, the analogous modes are found at somewhat lower frequencies, in the 2.0 to 2.3 THz range.
All the other, lower-lying modes are taken to constitute a common acoustic phonon bath with temperature T_0. In addition, a constant lattice temperature T_sub of the Si substrate acting as a heat sink is part of the description of the vibrational system. As initial condition, all temperatures are set to a base temperature of 100 K at the beginning of the simulation. In general, the population n_I(t) of a phonon mode I varies slowly on the scale of electronic relaxation. Yet we allow for energy exchange between the various heat baths. Previous simulations 54 using classical molecular dynamics showed that the surface phonon in a Pb/Si(111) monolayer is damped due to mode conversion on a time scale of τ_conv = 30 ps. In the present simulation, we use this time constant to couple each of the baths at temperature T_I(t) to the common acoustic bath at T_0(t). Moreover, the acoustic bath may transfer energy to the Si substrate lattice on an even longer time scale τ_sub = 160 ps. This value was adopted from measurements observing the equilibration of Pb films with the Si substrate by Witte et al. 55 . The above considerations lead to a set of coupled rate equations describing the evolution of the lattice temperatures. The quantities c_V^acu and c_V^opt denote the partial heat capacities of the acoustic phonon bath and of one optical mode, respectively; in the 'classical' approximation, i.e. when the equipartition theorem holds, c_V^opt takes its equipartition value. In the numerical evaluation of the scattering rates, each contribution is weighted by the arc length (in k-space) of the solution, approximated by the cumulated lengths of the line segments, as illustrated in Fig. 6. The right-hand side of the Master equation is thus made up of the sum over m, I, all grid points and pertaining line segments, in total ∼10^6 terms.
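As an illustration, rate equations consistent with the couplings and time constants described above could take the following minimal form (a sketch only, not the published equations; P_I(t), the mode-resolved power transferred from the electronic system into optical mode I, is an assumed symbol):

\[ \frac{dT_I}{dt} \;=\; \frac{P_I(t)}{c_V^{\mathrm{opt}}} \;-\; \frac{T_I - T_0}{\tau_{\mathrm{conv}}}, \qquad c_V^{\mathrm{acu}}\,\frac{dT_0}{dt} \;=\; \sum_{I=1}^{N_{\mathrm{opt}}} c_V^{\mathrm{opt}}\,\frac{T_I - T_0}{\tau_{\mathrm{conv}}} \;-\; c_V^{\mathrm{acu}}\,\frac{T_0 - T_{\mathrm{sub}}}{\tau_{\mathrm{sub}}}, \]

with T_sub held fixed at the substrate temperature: each optical bath relaxes towards the acoustic bath with τ_conv = 30 ps, and the acoustic bath towards the substrate with τ_sub = 160 ps.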
v3-fos-license
2018-10-02T13:03:53.362Z
2018-10-02T00:00:00.000
52902498
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2018.00423/pdf", "pdf_hash": "abdcb7f21336a34e8cdd7fd70cb4abb47a0a5d21", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42581", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "abdcb7f21336a34e8cdd7fd70cb4abb47a0a5d21", "year": 2018 }
pes2o/s2orc
Lack of Association of a Functional Polymorphism in the Serotonin Receptor Gene With Body Mass Index and Depressive Symptoms in a Large Meta-Analysis of Population Based Studies The serotonin receptor 5-HTR2C is thought to be involved in the function of multiple brain structures. Consequently, the HTR2C gene has been studied extensively with respect to its association with a variety of phenotypes. One coding variant in the HTR2C gene, Cys23Ser (rs6318), has been associated with depressive symptoms and adiposity; however, these findings have been inconsistent. This mixed picture may be due to low statistical power or to other factors, such as failure to account for possible interacting environmental factors like psychosocial stress. Further, the literature around this polymorphism is marked by limited inclusion of persons of African ancestry. The present study sought to overcome these limitations and definitively determine the relationship of this polymorphism with depressive and obesity phenotypes in a large sample meta-analysis. Thus, we harmonized individual level data from 10 studies including the Women's Health Initiative, CARDIA, ARIC, Framingham Offspring, and the Jackson Heart Study, resulting in a sample of 27,161 individuals (10,457 Black women, 2,819 Black men, 7,419 White women, and 6,466 White men). We conducted a random effects meta-analysis using individual level data to examine whether the Cys23Ser variant—either directly, or conditionally depending on the level of psychosocial stress—was associated with depressive symptoms and body mass index (BMI). We found that psychosocial stress was associated with both depression and BMI, but that Cys23Ser was not directly associated with, nor did it modify the associations of psychosocial stress with depression or BMI. Thus, in the largest study of this polymorphism, we have determined that rs6318 is not associated with depression or BMI. INTRODUCTION The 5-HTR2C serotonin receptor plays critical roles in numerous human neural circuits (Drago and Serretti, 2009) and is coded by the X chromosome gene HTR2C. Variation in the HTR2C gene has been associated with a wide variety of phenotypes: psychological disorders, feeding behavior, antipsychotic medication-induced side effects, clinical response to antidepressant medications, stress-induced mesoaccumbal dopamine release sensitivity (Drago and Serretti, 2009), pathogenesis of major affective disorders (Yildirim and Derksen, 2013), cortisol response to stress (Brummett et al., 2013;Brummett et al., 2014a;Avery and Vrshek-Schallhorn, 2016;Way et al., 2016), adiposity (Westberg et al., 2002;Hu et al., 2003), and risk for incident cardiovascular events (Brummett et al., 2013). The common HTR2C Cys23Ser (rs6318) variant-a non-synonymous coding single nucleotide polymorphism (SNP)-has been the focus of much of this investigation. This biallelic SNP is likely functional, with the Ser23 C allele having been shown to be constitutively more active than the Cys23 G allele (Okada et al., 2004). It has been associated with differences in blood flow in areas of the brain associated with emotional response, such as the anterior cingulate cortex (Drago and Serretti, 2009), and with variation of cerebral spinal fluid monoamine levels (Lappalainen et al., 1999).
Rs6318 also has been associated with two important psychological and biological phenotypes: depressive symptoms (Lerer et al., 2001;Drago and Serretti, 2009;Brummett et al., 2014b) and adiposity (Westberg et al., 2002;Hu et al., 2003;Praschak-Rieder et al., 2005;Drago and Serretti, 2009). In our own work, this SNP was associated with multiple phenotypes including depressive symptoms, cortisol response to stress, and risk of incident cardiovascular events. An in-depth discussion of the role of the 5-HTR2C serotonin receptor and the rs6318 SNP appears in Drago and Serretti (2009). Despite the relatively extensive background suggesting the role of this functional SNP with brain and cardiometabolic diseases, there have been inconsistencies in the observed relationships between rs6318 and those phenotypes. Further, variation in this SNP has not been evaluated in adequate sample sizes of African Americans, nor has the role of brain- and heart-related environmental factors as effect modifiers been evaluated sufficiently. We attempted to address these challenges by conducting the largest analysis to date of rs6318, using a large harmonized dataset from 10 population based studies (four from studies conducted at Duke and six from publicly available dbGaP data; Singh et al., 2018), for a total of 27,161 individuals, of whom 13,276 self-identified as Black. This large data set afforded the possibility of generating estimates of association that were more likely to be robust to sample instability, and also to more fully examine these associations including men and women of both European and African ancestry. In addition to confirming prior associations of rs6318 with depression and obesity, we hypothesized the presence of a gene-environment interaction between rs6318, chronic psychosocial stress, and gender. Study Populations We used ten datasets in this study, including 6 large public-access datasets and four Duke University Medical Center (DUMC) datasets. Each of the public-access datasets was obtained from the data depository dbGaP/database of Genotypes and Phenotypes/National Center for Biotechnology Information, National Library of Medicine (NCBI/NLM)/https://www.ncbi.nlm.nih.gov/gap (Mailman et al., 2007) through an authorized access. Below is a brief description of all the contributing studies. All subjects, in each study, gave written informed consent in accordance with the Declaration of Helsinki. The Women's Health Initiative (WHI) The Women's Health Initiative (WHI) is a long-term national health study dedicated to developing prevention strategies for heart disease, breast and colorectal cancer, and osteoporotic fractures in postmenopausal women (The WHI Study Group, 1998). The available data from this study included only Black and Hispanic participants. Our analysis uses data from only the Black participants from this dataset. The Coronary Artery Risk Development in Young Adults Study (CARDIA) The Coronary Artery Risk Development in Young Adults Study (CARDIA) was designed to study the etiology and natural history of cardiovascular disease beginning in young adulthood (Friedman et al., 1988). CARDIA contains roughly equal numbers of individuals in the subgroups of race, gender, and education.
Atherosclerosis Risk in Communities Study (ARIC) The Atherosclerosis Risk in Communities Study (ARIC) is a prospective epidemiologic study designed to investigate the etiology and natural history of atherosclerosis and demographic variation in cardiovascular risk factors, medical care, and disease (The ARIC Study Group Investigators, 1989). The study examined atherosclerosis by direct observation and by use of modern biochemistry. The components of the study included identification, investigation, and diagnosis of clinical events through home interviews, clinic examinations, and annual telephone follow-ups. Framingham Offspring Cohort We used the Generation 2 (or Offspring) dataset from the Framingham Heart Study Cohort for this work (Feinleib et al., 1975) because of the availability of psychosocial measurements and genetic data. The second-generation cohort included adult children (and their spouses) of the original participants. The cohort is almost entirely White; we therefore elected to use only White participants from this dataset. Because the sample included related participants, we maintained only a single participant (the case with the lowest ID number) from each family cluster. Multi-Ethnic Study of Atherosclerosis (MESA) The Multi-Ethnic Study of Atherosclerosis (MESA) was designed to study cardiovascular disease (CVD) risk factors that predict progression of clinically observable or subclinical cardiovascular disease (Bild et al., 2002). Jackson Heart Study (JHS) The Jackson Heart Study (JHS) is a large, community-based, observational study that was designed to explore reasons for the prevalence of cardiovascular disease among African Americans (Sempos et al., 1999). The study participants were recruited from urban and rural areas of the Jackson, MS metropolitan statistical area (MSA). Community Health and Stress Evaluation (CHASE) Study The DUMC CHASE Study was designed to determine the role of psychosocial and biobehavioral factors in the etiology of CVD. The study included a lumbar puncture procedure along with a medical and psychosocial information survey of Black and White participants from a wide range of socioeconomic status (Burroughs et al., 2003). Studies of a Targeted Risk Reduction Intervention Through Defined Exercise (STRRIDE) We used baseline data from two of the three DUMC STRRIDE clinical trials: STRRIDE-Aerobic Training/Resistance Training (AT/RT) and STRRIDE pre-diabetes (PD). The STRRIDE AT/RT study was designed to compare the effects of aerobic training (AT), resistance training (RT), and the full combination (AT/RT) on central ectopic fat, liver enzymes, and fasting insulin resistance by homeostatic model assessment (HOMA) (Slentz et al., 2011). The purpose of the STRRIDE-PD study was to compare the effects of different amounts and intensities of exercise training programs without diet to an exercise and diet program modeled after the first six months of the Diabetes Prevention Program (DPP) (Slentz et al., 2016). During the course of analysis, we observed that in both Black and White men in the STRRIDE AT/RT study there was insufficient variability in the stress measure to model the stress by SNP interaction. Thus, data from this study were excluded from the analysis of males. Duke Caregiver Study (DCS) This DUMC study included data from family caregivers of a relative with Alzheimer's disease or other dementia and a noncaregiving comparison group. The study was designed to examine the interaction of stress and genetic markers as predictors of CVD.
Duke Family Heart Study (DFHS) This DUMC study was designed to examine the effect of genetic variation on the relationship between psychosocial and cardiovascular risk factors. As with the Framingham sample above, the initial DFHS sample included related individuals. We maintained only a single participant from each family cluster. The public datasets are available upon request to the database of genotypes and phenotypes; dbGaP/database of Genotypes and Phenotypes/National Center for Biotechnology Information, National Library of Medicine, NCBI/NLM; https://www.ncbi.nlm.nih.gov/gap. For the use of Duke datasets, the authors are willing to establish collaboration subject to the approval from the respective Study Committee and Duke IRB. Genotyping The ABI 7900 Taqman genotyping system (Applied Biosystems) was used to genotype SNPs in DCS, DFHS, and CHASE; Taqman (Life Technologies) and the QuantiFast Multiplex PCR+ROX kit (Qiagen) were used for STRRIDE-PD; Affymetrix Mapping250K (Nsp and Sty) Arrays and Mapping50K (Hind240 and Xba240) Arrays were used for the Framingham Cohort; and the Affymetrix Genome-Wide Human SNP Array 6.0 was used in MESA, CARDIA, WHI, ARIC, and JHS. We chose the candidate HTR2C SNP rs6318 from the genotyping data of these studies. If the SNP was not available in a study, we identified a proxy SNP, rs2428722, in high linkage disequilibrium (R² ≥ 0.93 in all 1000 Genomes subpopulations) with the HTR2C SNP (Singh et al., 2018). Phenotypes and Stress Score For the measurement of depressive symptoms, some form of the Center for Epidemiological Studies Depression Scale (CES-D; Radloff, 1977) was available for 6 of the 10 studies used in this analysis. For studies that did not include the CES-D, we used either an alternative measure specifically designed to assess depressive symptoms or a measure that could serve as a suitable proxy given that it is known to be correlated with formal depression measures. In the CHASE Study, the Beck Depression Inventory and Obvious Depression Scale were available. We converted these two scores to z-scores and calculated the mean of those scores. In the ARIC study, the Maastricht Vital Exhaustion Score (Appels et al., 1987) was used, while in the STRRIDE-PD study, the mental health scale (with scoring reversed) from the SF-36 (Ware, 1993) served as the depression measure. All measures were then standardized to z-scores (mean of 0 and standard deviation of 1) within each respective study to create a common depressive symptom variable (Singh et al., 2018). BMI was available in all studies, and was calculated as kg/m². For the chronic stress measure, out of the 10 studies included in the present analysis only two, MESA (Shivpuri et al., 2012) and JHS (Johnson et al., 2016), had self-rated stress measures. In our prior work on data harmonization (Singh et al., 2018), in the eight studies that lacked a self-rated stress measure we constructed a stress variable using an algorithm (Singh et al., 2015) based on proxy indicators of five stress domains: financial, marital, work, health of spouse, and one's own health. These domains were based on the chronic burden items from the MESA study (Shivpuri et al., 2012) that were derived from a composite stress measure originally developed in the Study of Women's Health Across the Nation (Troxel et al., 2003). Briefly, our algorithm (Singh et al., 2015) searched for proxy indicators of each stress domain and scored each proxy item as 1 = stressful, 0 = not stressful.
The item scores were then summed, resulting in a single score. In some instances, not all indicator domains were available, resulting in varying possible score ranges across studies. In order to harmonize the differently scaled measures, we standardized the score within each study by transforming it to z-scores (mean of zero and a standard deviation of one). For additional details regarding this measure, see Singh et al. (2015, 2018). Statistical Analyses The background characteristics were described using means and standard deviations for continuous variables, and frequencies and percentages for categorical variables. The primary analyses were carried out using random effects models as implemented in SAS Proc Mixed (SAS Institute, Cary, NC, United States), adjusting a priori for age. Because Cys23Ser is X-linked, and because two studies were exclusively one race (Jackson Heart, Black; Framingham Offspring Study, White), we estimated separate models for each race and sex combination, and for each phenotype (depressive symptom score and BMI). Preliminary analyses with restricted cubic splines also found a strong non-linear association between age and BMI. An additional quadratic term for age served as a reasonable approximation of the non-linear form in all models. Study source was specified as a random effect (intercept only), and the phenotype of interest (the standardized depressive symptoms score or BMI) as the response variable. The models proceeded by first evaluating a stress by SNP interaction, including subordinate main effects in the model. If the interaction term was not statistically significant, we re-estimated a model with only main effects. The fixed portion of the interaction model, then, took the following form: Phenotype_ij = β_0j + β_1j Age + β_2j Age² + β_3j SNP + β_4j Stress + β_5j SNP × Stress, and the random portion was: β_0j = γ_00 + u_0j. The first equation represented the fixed effects component of the model, where the phenotype was either depressive symptoms or BMI, β_0 the model intercept, and each β represented the slope coefficient for each predictor term in the model. The second equation was the random effects component, in which γ represented the grand mean of the phenotype for each study when the predictors had a value of zero (i.e., the mean of the study intercepts); u was the variance of the intercepts for all studies around the overall mean intercept. Given the relatively large sample size, we also pre-specified the effect sizes that would be considered clinically meaningful. The depression score scale varied considerably across studies and was thus standardized for the analysis. Therefore, the pre-specified effect size was also expressed in terms of a standardized score: we required an effect to be at least 0.5 SDs to be considered clinically significant for the depressive symptom score. BMI, in contrast, was available in the original metric of kg/m² in all studies: we therefore specified the clinical significance threshold in those original units. Based on our prior clinical and investigative experience with BMI, we selected 0.5 kg/m² as the clinical significance criterion. Age, BMI, depressive symptoms and the stress score were modeled as continuous variables. Prior work with the stress score supports this approach (Singh et al., 2014). Based on an examination of model residuals, the standardized depressive symptom variable was transformed using the square root after adding a constant of 2 to each z-score.
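For illustration, a rough equivalent of this random-intercept model in Python/statsmodels (the original analyses were run in SAS Proc Mixed; the column names and input file here are hypothetical) could look as follows:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per participant, with hypothetical columns
#   dep_z  - standardized depressive symptom score within study
#   age, snp, stress, study
df = pd.read_csv("harmonized_pheno.csv")      # placeholder input file
df["dep_sqrt"] = np.sqrt(df["dep_z"] + 2)     # square-root transform after adding a constant of 2

model = smf.mixedlm(
    "dep_sqrt ~ age + I(age**2) + C(snp) * stress",  # fixed effects incl. the SNP x stress interaction
    data=df,
    groups=df["study"],                              # random intercept for study source
)
result = model.fit()
print(result.summary())

If the interaction term were not significant, the model would be re-estimated with main effects only, and the same form would be fitted separately with BMI as the response and within each race and sex combination.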
For the models with females, the genotype was coded as a 3-level factor: Ser/Ser, Cys/Ser, and Cys/Cys. The major allele homozygotes (Cys/Cys) were used as the reference group. The 2-degree-of-freedom test was used to interpret results involving the genotype. As males have only one copy of a given allele and therefore only two possible genotypes, genotype was coded as binary, Ser/- and Cys/-, with the major allele hemizygotes (Cys/-) as the reference. We used Bonferroni correction for all tests of regression coefficients, applying the same correction to each term in a given model. We based the correction on the number of models. Thus, with 8 separate models (2 phenotypes × 2 sexes × 2 races), the resulting required alpha was 0.006. We conducted two types of sensitivity analyses. In the first we estimated the same models described above for each study that had ancestry markers available (ARIC, Framingham, MESA, WHI), adding the principal component weights to the model as adjustment variables. In the second sensitivity analysis, we re-estimated the primary random effects models after converting depressive symptom scores and BMI to ranks, rendering the results parallel to a non-parametric analysis. Finally, we used Cochrane's Q statistic and forest plots from the metafor package 1 in R to assess study heterogeneity for the unstandardized SNP by stress interaction coefficients. Coefficients for the heterogeneity analysis were derived using SAS PROC GLM, estimating a separate model for each study within each race and gender. Descriptive Analyses Descriptive statistics of the background characteristics are displayed in Table 1. The genotype frequencies we observed were consistent with those known for each race and sex (Lappalainen et al., 1995). Similarly, consistent with prior literature, Black women had the greatest BMI, White women the lowest. Black men and women exhibited greater stress scores compared to Whites, and women of both races had higher depressive symptom scores compared to men. Model Results Figure 1 displays the model-fitted mean and 95% confidence interval of the depressive symptom score (Panel A) and BMI (Panel B) for each sex/race combination. The p-values and R-squares for the model terms appear in Table 2. Given our adjusted p-value criterion, there were no statistically significant genotype by stress interactions for the outcomes of depression or BMI, nor main effect relationships of the rs6318 genotype with either depression or BMI within any race/sex group. Similarly, for all race/sex groups there were no main effect associations between the genotype and depressive symptoms or BMI. The most pronounced effect involving Ser23Cys was an interaction with stress predicting BMI in Black women. In this case, Ser homozygotes under high stress had lower BMI levels compared to Cys homozygotes under high stress. Conversely, however, Ser homozygotes under low stress had greater BMI compared to Cys homozygotes under low stress. However, the observed differences were below our threshold for clinically important differences, and as noted above, the p-values exceeded the pre-specified alpha level of 0.006. The most consistent result among the models was the main effect for stress on both depressive symptoms and BMI. A strong association was noted in all but one race and sex group, with greater stress predicting more severe depressive symptoms, and higher BMI. The one exception was among Black men, for whom stress was only very weakly related to BMI.
Expressed in standardized effect sizes, a one standard deviation increase in the stress score was associated with increases in the depressive symptom score of 0.33 standard deviations in White females, 0.23 in Black females, 0.28 in White males, and 0.29 in Black males. In the parallel model for BMI, a one standard deviation increase in the stress score was associated with a 0.75 standard deviation increase in BMI in White females, 0.95 in Black females, 0.16 in White males, and 0.03 in Black males. Sensitivity analyses adjusting for ancestry markers, where available, did not materially alter the results. Additional supplementary analyses using non-parametric tests also were consistent with the parametric results reported above. Further, using the Q-test, we found no strong evidence of study heterogeneity with respect to the rs6318 by stress interaction for BMI for women and men of either race (Black women, p = 0.39; White women, p = 0.14; Black men, p = 0.15; White men, p = 0.11). The test for heterogeneity was also not significant for depressive symptoms for women of either race (Black women, p = 0.34; White women, p = 0.50). We did observe heterogeneity for both Black males (p < 0.0001) and White males (p < 0.007) with respect to depressive symptoms. The effects are displayed for each phenotype by race/sex group in the forest plots in Figure 2. As a check, we also tested a SNP by stress by study interaction term using the random intercepts model. As would be expected, the results were consistent with the above study-by-study heterogeneity analyses. DISCUSSION The present study suggests that the Cys23Ser rs6318 polymorphism is not associated with depressive symptoms or BMI regardless of the presence or absence of psychosocial stress. The lack of association between Cys23Ser and depressive symptoms or BMI is consonant with at least some prior work: results from a 2017 systematic review and meta-analysis (Gonzalez-Castro et al., 2017) that evaluated the role of the variant Cys23Ser (rs6318) in the pathogenesis of suicidal behavior also found no association. Similarly, a recent comprehensive review and meta-analysis (Gressier et al., 2016) found no association for weight gain when examining the prototypic antipsychotic clozapine. Given the small effect sizes we observed for terms involving this single SNP, it also may be that the effects exist, but were simply too small to detect with our sample sizes given our a priori criteria for clinical and statistical significance. It is possible that extending our work to a polygenic model may yield more clinically meaningful effect sizes; multiple SNPs in HTR2C may impact the regulation and function of HTR2C, resulting in variable phenotypic expression. For example, one SNP, rs1414334, which is in high LD with rs6318 (R² = 0.92 in European-Americans), was associated with metabolic syndrome related to antipsychotic drug use (Mulder et al., 2009;Risselada et al., 2012). Indeed, two of the largest genome-wide association studies on depression have recently reported 44 (Wray et al., 2018) and 17 loci (Howard et al., 2018) significantly related to depression. In the largest GWAS of major depression to date, Wray et al. (2018) reported odds ratios on the order of 1.03-1.04, indicating very small effect sizes and requiring very large samples to detect these SNP effects. These studies did not report analyses of the X-chromosome and thus the relative importance of rs6318 in these large analyses is unknown.
Further, our observation of possible differential effects of rs6318 by race and gender adds to the complexity of the genetic model. The effects we report for women, in particular, suggest that it may be worthwhile to continue investigation of Cys23Ser in future studies, and also highlight the potential importance of considering sex when evaluating SNP effects. This observation, coupled with the observed genetic architecture for depression, makes it plausible that this coding SNP could have effects on brain-related phenotypes that would require much larger sample sizes to detect. The potential for additional functional variants coupled with the negative findings in the harmonized dataset suggests the need for additional genetic analysis of HTR2C. Regardless, this is the largest study of rs6318 to date and thus provides at least some clarity around the inconsistency of the associations. In contrast to the lack of relationship of rs6318 with depressive symptoms and BMI, we found consistent and relatively strong associations of chronic stress with both depressive symptoms and BMI. Indeed, with the exception of Black males with respect to BMI, higher chronic stress was associated with higher depressive symptom scores and higher BMI within each race and sex subgroup. While it is generally understood that chronic psychosocial stress is an important risk factor with regard to physical and mental health outcomes (Juster et al., 2010), it has also been suggested that chronic stress may play a key role in racial/ethnic health disparities through both biological and psychological mechanisms (Jackson et al., 2010). Body weight has been proposed as a potential mediator in these relations (Kim et al., 2009); in related work we have shown that associations between adiposity and blood pressure vary for Whites and Blacks (Brummett et al., 2012). Although not the primary focus of the current study, we note that while the magnitude of the association between stress and BMI was fairly large in Blacks and Whites, the overall levels of BMI were observed to be higher for Blacks as compared to Whites. The present study has several limitations. First, as with any meta-analysis, using different measures to represent a single common underlying phenotype, as was the case for our measures of depressive symptoms and psychosocial stress, assumes that the measures are similar enough to adequately represent the same underlying phenotype. Although this assumption is not directly testable using standard psychometric techniques (e.g., common factor analysis) in the data available to us, the depressive symptom measures in each study behaved as expected: average scores for females were consistently higher than those for males in all but one study (Duke Family Heart Study), with White men having the lowest scores across all groups. The content of the psychosocial stress measure also differed across studies. Psychometric analyses conducted on this measure in our prior work (Singh et al., 2015) demonstrated that the various indicators of stress were congeneric, meaning that they were essentially exchangeable. We also observed that the correlation between the stress measure and depressive symptoms was positive and similar in magnitude in all but the CARDIA study (Singh et al., 2018). Although the psychosocial stress measure represents structural, objective stressors, we cannot know whether these resulted in the subjective perception of stress. This concern is somewhat allayed given the consistent association with depressive symptoms. In addition, we observed significant heterogeneity across studies for both Black and White males with respect to the SNP by stress interaction and depressive symptoms. Thus, there is less certainty regarding the model estimates in these two groups. Finally, ancestry markers were not available for several of the studies. We therefore were unable to include population stratification adjustment in our primary analyses. We did, however, conduct a sensitivity analysis in which only the studies that did include ancestry markers were evaluated and observed that the pattern of results was essentially unchanged with this adjustment. In conclusion, the current findings do not support the hypothesis that the Cys23Ser rs6318 polymorphism is related to depressive symptoms or BMI, regardless of the level of psychosocial stress. However, these data do support prior work indicating that chronic stress is significantly related to both depressive symptoms and adiposity. FIGURE 2 | Forest plot showing the unstandardized coefficient for the interaction between Ser/Ser vs. Cys/Cys and the stress measure for each study for the depressive symptom and BMI outcomes for females (A) and males (B). Cochrane's Q test for between-study heterogeneity was statistically significant only for White males and Black males on the depressive symptom outcome. Coefficients were generated for each study separately using a general linear model. Results were consistent with tests of a SNP by stress by study interaction term in a random intercepts model. In the study-by-study analysis, data were too sparse in the STRRIDE AT/RT dataset to estimate the SNP by stress interaction, and this dataset was thus not included in the heterogeneity analysis. ETHICS STATEMENT This study was carried out in accordance with the recommendations of the Duke Institutional Review Board with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Duke Institutional Review Board. AUTHOR CONTRIBUTIONS RW, IS, BB, EH, MB, RJ, AS, SS, and WK contributed to the conception and design of the study. BB wrote the initial draft of the manuscript. MB, EH, AS, and SS wrote sections of the manuscript. MB performed the statistical analyses. AS, MB, BB, and RJ organized the databases. All authors contributed to the manuscript revision, read, and approved the submitted version.
v3-fos-license
2021-05-28T13:26:26.579Z
2021-05-25T00:00:00.000
235219259
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/05/25/2021.05.24.445360.full.pdf", "pdf_hash": "e84a90f22df0fe5ad8a7f5ef68c512ef2df82f46", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42582", "s2fieldsofstudy": [ "Biology" ], "sha1": "ea2887df1abbe11e569057ed285c1eee5ef72e34", "year": 2021 }
pes2o/s2orc
CellMap: Characterizing the types and composition of iPSC-derived cells from RNA-seq data Induced pluripotent stem cell (iPSC) derived cell types are increasingly employed as in vitro model systems for drug discovery. For these studies to be meaningful, it is important to understand the reproducibility of the iPSC-derived cultures and their similarity to equivalent endogenous cell types. Single-cell and single-nucleus RNA sequencing (RNA-seq) are useful to gain such understanding, but they are expensive and time consuming, while bulk RNA-seq data can be generated more quickly and at lower cost. In silico cell type decomposition is an efficient, inexpensive, and convenient alternative that can leverage bulk RNA-seq to derive more fine-grained information about these cultures. We developed CellMap, a computational tool that derives cell type profiles from publicly available single-cell and single-nucleus datasets to infer cell types in bulk RNA-seq data from iPSC-derived cell lines. Introduction Owing to their undifferentiated state and the theoretical ability to differentiate into any cell type under appropriate experimental conditions, iPSCs and their differentiated derivatives represent an attractive alternative to traditional in vitro models that rely on cancer cell lines or rodent material. Barriers that once limited the use of iPSCs, such as costly reagents, complicated culture protocols, or restricted access to high quality, well-characterized iPSC lines, have diminished over the last decade. It has also become increasingly apparent that human biology can diverge significantly from rodent and even non-human primate biology, thus necessitating the use of human cells 3,4 . For in vitro studies using iPSC-derived cells to be meaningful, it is important to understand the relationship of the in vitro generated cells to endogenous cell types. This is particularly true during the development of novel iPSC differentiation protocols, when seemingly small changes in culture conditions can lead to divergent cell fates. Evaluating expression of only a few canonical proteins by immunocytochemistry or genes by qPCR may not provide an adequate representation of what is in the cell culture dish, and it can be technically challenging by these methods to test for the presence of off-target cell fates. A similar issue arises once a differentiation protocol has been established and one would like to ensure the reproducibility of specific cell type production within the same iPSC line and across different iPSC lines differentiated with the same protocol. While iPSC-derived cell types need not be perfect replicas of the endogenous cell type to be useful for disease modeling, they must reproduce the mechanism of the endogenous function to be evaluated in an appropriate cellular context to be physiologically relevant. A variety of cell types exist in biological tissues performing different functions. When a biological function is altered or deficient, we need to understand the origin and mechanism of this aberration to devise ways of correcting it. However, most of the readily available biological samples are composed of a mixture of cell types. Experimentally separating these cell types and performing single-cell sequencing on them is cumbersome and costly. Therefore, developing computational approaches for cell type deconvolution from bulk RNA-seq data has been a popular and fruitful endeavor during the past decade 5,6 .
The earlier methods relied on cell type marker genes or cell-specific signatures obtained from prior publications or derived from low-throughput experiments, such as gene expression profiling on FACS-sorted cells. CellMix was one such useful toolset that provided access to multiple deconvolution methods, allowing the user to select the best approach based on the available data 7 . Unfortunately, the support of this public tool has been discontinued and it is incompatible with the latest R libraries. More complex and computationally demanding approaches have also been designed to characterize engineered cells based on inference of tissue-specific gene regulatory networks, first from microarray data and then from bulk RNA sequencing data 8,9 . The aspiration to eliminate the dependence on prior knowledge of cell type markers or expression signatures led to efforts to develop de novo deconvolution algorithms. One such example is CellDistinguisher, which mathematically identifies cell type specific patterns in bulk expression data obtained from multiple heterogeneous samples 10 . CellDistinguisher can identify the genes that best distinguish a defined number of cell types or biological processes in the input data. This type of unsupervised deconvolution, however, cannot tell whether the patterns detected are a result of the presence of multiple cell types or subpopulations undergoing different biological processes, e.g., apoptosis, different phases of the cell cycle, etc. Adding even a limited amount of prior knowledge in the form of marker gene sets can guide such methods into the desired direction. With the advances of single-cell and single-nucleus (sc/sn) RNA sequencing, generating the much-needed prior knowledge to characterize cell types is gradually becoming a reality 11,12 . These sequencing approaches are still quite expensive and arduous. Consequently, most laboratories cannot afford to apply them regularly to characterize iPSCs. But, given even a limited number of such datasets for the cell types of interest, they can be leveraged to mathematically characterize many samples for which the much easier and more cost-effective bulk RNA-seq data can be generated. There are multiple deconvolution methods that rely on single-cell information to generate cell type expression profiles. Providing a thorough review of the field is beyond the scope of this work. Here we focus on a few representative examples that we used to benchmark our approach, the first of which is MuSiC 13 . One limitation of this approach is that the query bulk needs to have the same cell types as the samples from which the scRNA-seq data is derived. A more practical method would allow cell type profiles to be derived from sc/sn RNA-seq datasets of samples with disparate compositions of cell types. Bisque is designed to perform a very fast decomposition using nonnegative least squares (NNLS) regression based on one single-cell dataset serving as a reference 14 . The constraint with this method, like with MuSiC, is that the user should have a single-cell dataset with cell types matching the bulk samples available before applying it. The most recent deconvolution method, SCDC, derives expression profiles from multiple scRNA-seq datasets, adopting an ensemble framework to implicitly address the batch effects inherent in datasets coming from different sources 15 .
It achieves this by applying different weights to different datasets. The reference data most similar to the bulk data overall will have a higher weight. A drawback of this method is that all reference datasets need to have the same cell types. Meeting this condition is unlikely when using publicly available datasets from multiple sources. For accurate disease modeling using human iPSCs, we needed a deconvolution tool designed to overcome the limitations encountered in existing tools. The new tool should be able to characterize a variety of cell types while making it possible to expand the list as needed. It should easily incorporate new reference datasets as they become available, allowing the user to retrain and retest the tool without modifying the code. To mitigate the batch effects or other biases inherent in individual datasets, a requirement has been imposed to have each cell type represented in at least three datasets, but we do not impose a constraint that each reference dataset need contain all cell types of interest. CellMap was developed to meet these requirements. It was aimed mainly at characterizing iPSC-derived cells in terms of their cell type composition, their similarity to previously characterized primary cells or other iPSC-derived datasets, as well as assessing batch-to-batch variability. One important feature of this tool is the ability to regenerate the cell type profiles easily as newer or better-quality single-cell and single-nucleus datasets are produced and published. Here we demonstrate that besides iPSC-derived cells, CellMap can naturally be applied to the deconvolution of any complex samples whose constituent cell types are represented in the provided reference datasets. Workflow CellMap employs NNLS regression to decompose a bulk sample into cell type proportions based on the gene expression values of a query bulk sample and the cell type specific expression profiles, as shown in Eq (1), where the quantity being estimated is a vector of cell type proportions. The workflow consists of two parts: generating the cell type profiles and deconvolution of the query bulk sample (Fig. 1). While deriving the cell type profiles is the more computationally expensive part, it only needs to be carried out once for a given set of input sc/sn datasets. Generating pseudo-bulk samples The cell type expression profiles were derived from publicly available sc/sn RNA-seq datasets with cell type annotations provided by the authors (Supplementary Table 1). For skeletal muscle, we could only find bulk RNA-seq data and used those as if they were pseudo-bulk samples generated from single cells to derive the profiles. Since no two cells are identical, even if they are labeled as the same type, we attempted to capture this inherent biological heterogeneity by generating multiple pure cell type pseudo-bulk samples from each dataset. Cells were randomly selected from the existing pool of a given type and their expression values were summed up across the cells for each expressed gene. Similarly, mixed cell type pseudo-bulk samples with known cell type compositions were also generated from each dataset for the profile selection training.
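As a sketch of the pseudo-bulk construction just described (helper and variable names are illustrative and not CellMap's actual code):

import numpy as np
import pandas as pd

def make_pseudobulk(counts, cell_types, target_type, n_cells=500, seed=None):
    """Sum raw counts over a random draw of cells annotated as target_type.
    counts: genes x cells DataFrame; cell_types: per-cell labels aligned with the columns."""
    rng = np.random.default_rng(seed)
    pool = np.where(np.asarray(cell_types) == target_type)[0]
    picked = rng.choice(pool, size=min(n_cells, pool.size), replace=False)
    return counts.iloc[:, picked].sum(axis=1)

def make_mixture(counts, cell_types, proportions, n_cells=500):
    """Mixed pseudo-bulk with known composition, e.g. proportions={'neuron': 0.7, 'astrocyte': 0.3}."""
    parts = [make_pseudobulk(counts, cell_types, ct, int(round(n_cells * frac)))
             for ct, frac in proportions.items()]
    return pd.concat(parts, axis=1).sum(axis=1)

Repeating the random draws several times per dataset yields the multiple pure and mixed pseudo-bulk samples used in the steps below.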
After the pseudo-bulk samples were normalized to 1M reads, genes with low expression were filtered out. Low expression genes were defined as having less than 4 counts in more than 20% of pseudo-bulk samples of any cell type. The expression metric used for full length transcript datasets was TPM (Transcripts Per Million), while UMI (Unique Molecular Identifier) counts were used in the case of datasets obtained with 3' RNA sequencing. Normalization across datasets ComBat normalization was used to eliminate batch effects among pure pseudo-bulk samples across reference datasets that can occur due to the differences in sequencing library preparation and sc/sn sequencing platforms 16 . Depending on the extent of overlapping cell types among datasets, two different strategies were applied. If shared cell types among datasets were sparse (Fig. 2A), ComBat was applied to the pseudo-bulk pure samples independently for each cell type across their source datasets. However, in the preferred scenario, when the overlap of cell types among the datasets was high (Fig. 2B, C), the normalization was performed across all pseudo-bulk samples generated for the pure cell types in one step. This approach was taken to avoid removing true cell type differences while performing batch correction for unbalanced datasets. Cell type profile genes To increase the sensitivity of cell type detection, only cell type specific genes were included in the cell type profiles. Such genes were more highly expressed in one cell type relative to all others, and they were identified by performing pairwise differential expression analysis with edgeR on the batch-corrected pseudo-bulk data 17 . For a balanced approach, an attempt was made to maintain a similar number of profile genes for all cell types within a group (see description of cell type groups under Stepwise Deconvolution below). Thus, fewer genes were kept for the CNS6 profiles than those of Major9, because the more similar cell types in CNS6 resulted in fewer differentially expressed genes. Training The goal of the training was to generate a collection of profiles of pseudo-bulk cell types used in the composition estimation procedure (Eq. 1). The training was performed in iterations until either the desired performance or the maximum number of iterations was reached. Two sequential training steps were taken in each iteration: (1) training on mixed pseudo-bulk samples and (2) training on true bulk samples. Deconvolution An ensemble method was adopted to integrate the deconvolution results from different sets of profiles to optimize the match between the profiles and the query data. For each query bulk sample, a final cell type composition was calculated as a weighted average of the proportions of each cell type across the top N estimated compositions, ranked by their gene expression goodness-of-fit RMSEs; the weight w_j of the j-th estimated composition was calculated based on the goodness-of-fit RMSE f_j of the corresponding set of profiles (i.e., the RMSE between observed and fitted values of gene expression in Equation (1)). The results presented here were generated with N = 5, that is, the five compositions with the lowest RMSE values from all profile sets. Stepwise deconvolution Attempting to decompose samples into a very large number of cell types can be difficult as multiple small fractions are hard to predict accurately.
To avoid this pitfall, we envisioned a multi-step process: in the first step, the major cell types of interest would be queried, followed by more refined deconvolution steps querying a narrower set of cell types or subtypes based on the outcome of the first step. Following this design, we curated two groups of datasets to include the cell types of interest. The first group consisted of 17 datasets to account for 9 major cell types (Major9), including astrocytes, cardiomyocytes, endothelial cells, hepatocytes, macrophages, neurons, oligodendrocytes, pancreatic and skeletal muscle cells. These are the main cell types of interest for neurodegenerative disease research as well as unintended types that might arise as off-target cell fates. The second group of 9 datasets focused on cell types specific for the central nervous system (CNS). From this dataset, we derived two sets of profiles: the CNS6 that included neurons, astrocytes, endothelial cells, microglia, oligodendrocytes and pericytes, as well as Neuron3 representing neuronal progenitors, inhibitory and excitatory neurons (Fig. 2). Comparison to other deconvolution approaches The mixed pseudo-bulk samples with known cell type proportions that were generated as part of the CellMap pipeline were also used to compare the performance of CellMap to three publicly available methods: MuSiC 13 , SCDC 15 and Bisque 14 . Even though multiple sc/sn datasets were used for the profile generation, the input to the CellMap deconvolution step was a reduced data bundle comprised of the cell type profiles. The other three methods (MuSiC, SCDC and Bisque) work directly with sc/sn datasets instead of the pre-generated profiles. To avoid loading all input sc/sn datasets into memory for the comparisons, an expression matrix was created by merging subsets of cells from them. At least half of the cells were randomly selected for each cell type from each dataset, not to exceed 20M cells of a certain type. Three such input sc/sn expression matrices were created corresponding to the three CellMap profile sets. The implementations of MuSiC, SCDC and Bisque were slightly modified from their published versions deposited in GitHub. The changes included adjustments to the output format and enabling parallel computing. The modified applications are also available from GitHub together with the expression matrices of the pseudo-bulk samples used for the benchmarking. The comparison was performed for each cell type separately. The RMSE values were calculated across each cell type by comparing the expected and predicted compositions of the pure and mixed pseudo-bulk samples. Training datasets The sc/sn RNA-seq datasets (bulk RNA-seq for muscle cells) used to generate the cell type profiles were also used for the pseudo-bulk training (Supplementary Table 1). Multiple rounds of random selections of cells minimized the overlap between the cells used for profile generation (from pseudo-bulk pure samples) and training (on pseudo-bulk mixed samples). Diseased samples were excluded when appropriate annotations were provided.
For consistency, we reran the RNA-seq pipeline on these datasets when the raw data was available, as noted in Supplementary Table 1. In addition to these pseudo-bulk samples, true bulk samples from public repositories were used for the true bulk training. Testing datasets To evaluate the performance of CellMap, we assembled a collection of true bulk RNA-seq datasets of purified primary cells and iPSC-derived cell lines, independent from the training sets used in CellMap. For the datasets with primary cells, the information provided by the authors about the cell type composition of the samples was accepted to be the ground truth. In the case of the iPSC-derived cells, the entire target cell type was used as the expected composition for the purpose of computing an RMSE. Even though these RMSE values reflected more on the deviation of the cell line from the target cell type than on the performance of CellMap, they were deemed to be useful for detecting the changes in composition and similarity of iPSC-derived cells relative to the primary cell types. In addition, we used bulk RNA-seq data from brain tissues of the ROSMAP dataset with matching immunohistochemistry (IHC) and snRNA-seq data as ground truth 18,19 (Supplementary Table 2). Normalization across datasets As revealed by principal component analysis (PCA), the pure pseudo-bulk samples without any prior normalization tended to cluster by data sources rather than by cell types (Fig. 3A, B, Supplementary Figures S1-S2). This was not surprising, given the differences in sequencing library preparation and various sc/sn sequencing platforms employed to generate the datasets. This strong batch effect would have adversely affected the performance of the decomposition algorithm. Depending on the presence of overlapping cell types among datasets, two different ComBat normalization strategies were deployed to eliminate batch effects. The overlap of cell types among the reference datasets used to generate the Major9 cell type profiles was sparse (Figure 2A). In this scenario, ComBat was applied to the pseudo-bulk pure samples for each cell type separately across their source datasets. The pseudo-bulk samples, which originally clustered by sequencing platform, became clearly grouped by cell types after this batch correction (Fig. 3D). This strategy proved to be necessary because a normalization across all datasets and all cell types tended to remove cell type specific expression patterns (Fig. 3C). On the contrary, the datasets used for the CNS6 and Neuron3 profiles had highly overlapping cell types (Fig. 2B, C). In this case, ComBat normalization across all pseudo-bulk samples in one step was sufficient to remove the batch effects without reducing signals due to real cell type differences. As shown on the corresponding PCA plots, the samples that originally grouped by datasets and sequencing technologies were properly clustered by cell types after normalization (Supplementary Figures S1-S2).
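A minimal sketch of the two strategies follows (a crude location-only adjustment stands in for ComBat's empirical-Bayes location/scale model, and all function and variable names are illustrative):

import pandas as pd

def center_per_batch(expr, batch):
    """Subtract each batch's gene-wise mean and add back the global mean.
    expr: genes x samples DataFrame; batch: Series of dataset labels indexed by sample name.
    This is only a stand-in for a full ComBat implementation (e.g., sva::ComBat in R)."""
    global_mean = expr.mean(axis=1)
    out = expr.copy()
    for b in batch.unique():
        cols = batch.index[batch == b]
        out[cols] = expr[cols].sub(expr[cols].mean(axis=1), axis=0).add(global_mean, axis=0)
    return out

def correct_batches(expr, batch, cell_type, sparse_overlap):
    """If cell types overlap only sparsely across datasets, correct each cell type separately
    so that true cell type differences are not removed; otherwise correct all samples at once."""
    if not sparse_overlap:
        return center_per_batch(expr, batch)
    pieces = []
    for ct in cell_type.unique():
        cols = cell_type.index[cell_type == ct]
        pieces.append(center_per_batch(expr[cols], batch[cols]))
    return pd.concat(pieces, axis=1)[expr.columns]

In the actual pipeline, a ComBat call would replace center_per_batch; the branching logic is what the two figures illustrate.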
Performance on the training data

Using the Major9 profiles, CellMap predicted the composition of the pseudo-bulk mixed samples with median RMSE values below 0.1 for all 9 cell types (Fig. 4A). The lowest performance was observed on endothelial cells and macrophages. When computed across cell types in each dataset, the median RMSE was below 0.1 for the pseudo-bulk samples generated from all but one dataset (Fig. 4B). The median RMSE was highest for the true bulk samples (Fig. 4B, last column). The latter is not surprising because, even though these true bulk samples contain purified cell types, they are rarely 100% pure and homogeneous.

The predictions using the CNS6 profiles were slightly less accurate on average. Having to combine single-cell and single-nuclei datasets for the CNS6 profile set increased the variability of the expression profiles. As a result, predicting the neuronal fractions accurately proved to be more challenging, with RMSE values reaching 0.2 (Fig. 4C). The prediction accuracy across datasets was relatively similar, since the datasets tended to contain most of the CNS6 cell types: four datasets had median RMSE below 0.1 and five datasets had median RMSE between 0.1 and 0.2. The median RMSE of the predictions of the true bulk samples was well below 0.1, but the individual RMSE values fell into a wide range from 0.01 to slightly over 0.3 (Fig. 4D).

Predicting the neuronal subtypes in Neuron3 proved to be the most difficult task. The median RMSE values ranged from 0.02 to 0.15 for the three subtypes: inhibitory, excitatory and progenitors (Fig. 4E). On a dataset-by-dataset level, 4 datasets yielded accurate predictions with median RMSEs at or below 0.1, while the other 5 had higher RMSE values with a larger spread (Fig. 4F). We attribute this increased difficulty to the often subtle differences between the expression profiles of inhibitory and excitatory neurons, which sometimes result in inconsistent labeling of these neuronal subtypes in different datasets. Cell type identification in scRNA-seq dataset clusters, whether done manually or by automated annotation tools, is a challenge because the clusters are not completely homogeneous. Slight differences in the selection of cluster marker genes can lead to discrepancies in cell type annotations, especially when these cell types are very similar to each other, as pointed out by authors working on methods for automated annotation of cell types in sc/sn RNA-seq data 20,21. Furthermore, the neuronal progenitors correspond to a continuum of cells at different stages of maturity, which makes their characterization difficult. This is reflected in the variability of the prediction accuracy of these cell types across datasets (Fig. 4E).

Testing results

The performance of CellMap was evaluated on independent bulk RNA-seq datasets by computing the RMSE between the predicted and expected compositions. We performed the tests with both sets of profiles that contained the cell types expected to be found in the test datasets, Major9 and CNS6.
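A minimal sketch of this RMSE computation is shown below; the data frame layout (samples by cell types, fractions summing to 1 per row) and the example numbers are illustrative only, not data from the study.

```python
import numpy as np
import pandas as pd

def rmse_by_cell_type(expected: pd.DataFrame, predicted: pd.DataFrame) -> pd.Series:
    """Root-mean-square error between predicted and expected fractions,
    computed separately for each cell type (column)."""
    common = expected.columns.intersection(predicted.columns)
    diff = predicted[common] - expected[common]
    return np.sqrt((diff ** 2).mean(axis=0))

# Example with made-up numbers: a purified sample expected to be 100% neurons
# expected  = pd.DataFrame({"neuron": [1.0], "astrocyte": [0.0]})
# predicted = pd.DataFrame({"neuron": [0.93], "astrocyte": [0.07]})
# rmse_by_cell_type(expected, predicted)   # -> neuron 0.07, astrocyte 0.07
```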
The calculated RMSE values were grouped by purified primary cell types, bulk samples from the ROSMAP dataset, and iPSC-derived cell types. Given the lack of quantitative information about the true cell type content, the expected composition of the iPSC-derived cells was set to 100% of the target cell type in order to allow the calculation of the RMSE. On the primary cells, the performance of CellMap, as measured by RMSE, was comparable to the performance on the training datasets. Generally, higher RMSE values were obtained on the iPSC-derived cell lines, reflecting their imperfect resemblance to the target primary cell types. It might also be the case that such cell cultures include cells that are not fully differentiated or have stray fates. Median RMSE values were below 0.2 for the purified primary cell type samples using either set of profiles, except for pericytes, while the median RMSE of iPSC-derived samples was below 0.3, except for a subset of iPSC-derived astrocytes (Fig. 5A, B). The neurons and microglia (or macrophages in the case of Major9) were predicted close to the expected 100% purity. The astrocytes and pericytes proved to be the most challenging. The availability of reference purified pericyte datasets was limited; the ones we identified contained less than 100% pericyte-like cells and expressed genes also considered to be fibroblast and oligodendrocyte markers 22. The differences in performance with the Major9 and CNS6 profile sets confirmed that the input datasets used for generating the cell type expression profiles had a substantial influence on the outcome.

Examples of CellMap applications

The iPSC-derived cardiomyocyte dataset (GSE122380) was generated from a time course with 16 time points and 19 human cell lines capturing differentiation from iPSCs to mature cardiomyocytes 23. Despite variable cardiomyocyte purity and marker gene expression levels across samples, CellMap clearly revealed the increasing cardiomyocyte fraction over time (Fig. 6A).

Microglia were derived from iPSCs in-house, and the effects of different conditions and treatments during differentiation were tested. CellMap was applied to this dataset to determine the optimal time frame for differentiation and to assess whether treatments that such cell lines may be subjected to would alter the outcome. CellMap correctly indicated that the precursors were already microglia-like, and the microglia content increased to nearly 100% by day 7 of the differentiation. Additionally, the cells were resistant to both replating and freeze-thaw cycles, as indicated by the unaltered microglia composition (Fig. 6B).

Another in-house bulk RNA-seq dataset was generated to assess batch-to-batch variability of iPSC-derived NGN2 neurons similar to those described by Schmid et al. 24. The CellMap output indicated low batch-to-batch variability, with the cells reaching fully differentiated states by day 21 (Fig. 6C).
The results showed not only the increasing predicted percentages of neurons as the cultures differentiated, but also the decreasing p-values, indicating that the maturing cells were acquiring transcriptomic profiles more closely resembling those of primary neurons.

We applied CellMap to samples from the ROSMAP dataset that had matching immunohistochemistry (IHC) and snRNA-seq data available, which we used as the ground truth regarding their cell type compositions 18. What is accepted as the ground truth also affects the apparent success of the prediction. In the case of the ROSMAP samples, comparing the predicted composition to the IHC data resulted in better predictions than using the cell type information from the matched snRNA-seq data, as there are cell type biases introduced during nuclei isolation (Fig. 6E, F). Similar observations were made previously by Patrick et al. 19; based on their analysis and the technical variability of snRNA-seq data, we anticipate the IHC proportions to be closer to the true composition. A good overall correlation was achieved between the predicted cell type fractions and the IHC ground truth. CellMap tended to underpredict the neuronal content, while it predicted the astrocyte components quite accurately. The oligodendrocyte and endothelial contents were underestimated, seemingly substituted by other cell types. Endothelial cells are especially easily confounded with other cell types: while they present a set of common features, they also possess considerable heterogeneity depending on their local environment in various organs and tissues 25. Furthermore, differences in size and RNA content of various cell types can also influence the accuracy of the prediction of cell type proportions 26.

Comparison to other deconvolution tools

CellMap performed better than Bisque in each of the categories based on RMSE value, while its performance was very similar to, or slightly better than, that of MuSiC and SCDC in predicting the composition of the pseudo-bulk samples (Fig. 7). The range of RMSE values was smallest for CellMap, indicating more consistent predictions across different reference data platforms and cell types. We attribute this robustness to the normalization applied to the pseudo-bulk samples as part of the cell type profile generation. In the most difficult task of differentiating between neuronal subtypes (Neuron3 set), CellMap outperformed each of the other three methods (Fig. 7C, F). More detailed comparisons by cell type and by input dataset are provided in the supplementary material, separated by the input datasets used for the Major9, CNS6 and Neuron3 profile sets (Figures S3, S4).
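A summary of this kind can be reproduced from any benchmark table that stores one RMSE per method, cell type and sample; the sketch below uses pandas with illustrative column names and is not part of the CellMap package.

```python
import pandas as pd

def summarise_benchmark(bench: pd.DataFrame) -> pd.DataFrame:
    """Median RMSE and spread per method and cell type, mirroring the kind of
    comparison reported for CellMap, MuSiC, SCDC and Bisque.

    `bench` is long-format with columns: method, cell_type, sample, rmse.
    """
    return (bench.groupby(["method", "cell_type"])["rmse"]
                 .agg(median="median",
                      iqr=lambda s: s.quantile(0.75) - s.quantile(0.25))
                 .reset_index()
                 .sort_values(["cell_type", "median"]))
```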
While its performance is on par with other existing methods, CellMap has the major advantage of flexibility in using reference datasets with non-overlapping cell types and the ability to expand the cell type repertoire with more cell types of interest as reliable, good quality sn/sc RNA-seq datasets become available.

Discussion

We have demonstrated that CellMap can be applied with good accuracy to evaluate cell composition in bulk RNA-seq data. A key feature of CellMap is its ability to be readily re-trained as new sc/sn RNA-seq datasets become available and therefore improve the predictions. This information may come "for free", as CellMap is a downstream analysis on RNA-seq data that may have been generated for other purposes (e.g., differential expression or pathway analysis). Discrepancies between cell type labeling in different sc/sn datasets can result in inaccurate cell type profiles, which have a detrimental effect on the deconvolution accuracy 20,21.

CellMap was designed with the characterization of human iPSC-derived cell types in mind. However, it is necessary to acknowledge that iPSC-derived cells may not match primary cells exactly. Even primary cells kept in culture for a relatively short time differ from their freshly isolated counterparts. When we compared the transcriptome profile of our in-house generated iPSC-derived microglia to the primary microglia dataset generated by Gosselin and colleagues, it was not surprising to find that our iPSC-derived cells were more similar to the cultured primary cells than to the freshly isolated ones 29.

Data Availability and Computer Code

The CellMap R package, including the R code to generate the manuscript figures, is available from https://github.com/interactivereport/CellMap. The in-house RNA-seq data generated from iPSC-derived cell lines have been deposited in the Gene Expression Omnibus (GEO tracking number GSE174379). All other datasets are publicly available and are listed in the supplementary material.
v3-fos-license
2014-10-01T00:00:00.000Z
2011-09-11T00:00:00.000
16865439
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/au/2012/903931.pdf", "pdf_hash": "9d6dbf6ac3e2a9fd860d6b7033d0aa2bf9e33faa", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42583", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3ae923f07ae5c5336364d0addfa1c916a39913ab", "year": 2011 }
pes2o/s2orc
Biochemical Alterations in Semen of Varicocele Patients: A Review of the Literature

Oxidative stress is a mechanism underlying different kinds of infertility in human males. However, different results can be observed in relation to the method used for its evaluation. Varicocele patients show a number of biochemical abnormalities, including an altered distribution of coenzyme Q between seminal plasma and sperm cells and also an apparent defect in the utilization of antioxidants. Moreover, an influence of systemic hormones on the seminal antioxidant system was also observed. Finally, the effects of surgical treatment on oxidative stress indexes and the possible usefulness of some medical therapies, like coenzyme Q supplementation, are discussed. In conclusion, published data show a role of oxidative stress in varicocele-related male infertility, but at present we do not know the precise molecular mechanisms underlying these phenomena.

Oxidative Stress in Male Infertility

An excess of reactive oxygen species (ROS) and other oxidant radicals, in the body but in particular at the genital level, has been associated with male infertility [1][2][3][4][5]. The high content of polyunsaturated fatty acids within the spermatozoa plasma membrane and the low concentration of cytoplasmic scavenging enzymes make these cells highly susceptible to peroxidation in the presence of high levels of ROS in seminal fluid [6][7][8]. It has been shown that the time of permanence of spermatozoa in the epididymis is longer in oligozoospermic patients, resulting in a higher exposure to ROS [9][10][11]. The ROS in seminal fluid originate from both sperm cells and infiltrating leukocytes [12]. There is also a correlation between leukocyte concentrations, ROS levels, lipid peroxidation, and functional damage [13]. To counteract the potentially damaging effects of oxidative stress (OS), sperm cells and seminal plasma are endowed with some protective antioxidant systems. Spermatozoa have a rather low level of enzymatic antioxidant defence, including catalase, superoxide dismutase (SOD), and glutathione peroxidase. On the contrary, seminal plasma is well endowed with antioxidant buffer capacity [14]. The role of ascorbic acid and uric acid, among the low-molecular-weight antioxidants in seminal plasma, has also been highlighted [14]. Studies in infertile men have shown an impaired seminal plasma non-enzymatic antioxidant capacity [15]. The total oxyradical scavenging capacity (TOSC) is a recently developed assay measuring the overall capability of biological fluids or cellular antioxidants to neutralize the toxicity of various oxyradicals [16,17]. The TOSC assay can discriminate between different forms of ROS, allowing identification of the role of specific antioxidants in the onset of pathological processes. An early application of the TOSC assay in andrology showed a reduced antioxidant efficiency in the seminal fluid of infertile men, with a significant correlation between the scavenging capacity of hydroxyl radicals and the parameters of sperm cell motility [18]. Another, simpler method to measure the total antioxidant system (TAOS), also called total antioxidant capacity (TAC), was conceived by Rice-Evans and Miller [19]. In this method, the metmyoglobin-H2O2 system is used as a source of radicals, whose interaction with the chromogen 2,2'-azinobis-3-ethylbenzothiazoline-6-sulphonate (ABTS) generates a radical cation that is spectroscopically detectable.
Antioxidants counteract this reaction, so that the lag time in the formation of the ABTS radical cation (ABTS•+) is proportional to the antioxidant content. In a 2003 study, we applied this method for the first time, with minor modifications, to seminal plasma from varicocele infertile patients [20].

Studies with Varicocele Patients

Varicocele (VAR) is a particularly interesting model of male infertility from the andrological point of view, not only for its prevalence (19-41% of infertile men) [21] but also in relation to an open debate on the real need for surgical or sclerotherapic treatment and on its role in causing infertility. Furthermore, different studies suggest that OS is a common mechanism underlying VAR, as well as X-irradiation, exposure to environmental toxicants, or other physical conditions such as cryptorchidism. All these stress conditions can cause changes in testicular microvascular blood flow and endocrine signalling, eventually leading to germ cell apoptosis and hypospermatogenesis [22]. Other studies suggest that VAR patients have high OS even in the case of normozoospermia. The DNA fragmentation index (the percentage of sperm cells with denatured DNA), determined by flow cytometry, and the percentage of TUNEL-positive cells, another estimate of sperm DNA damage, have been found to be significantly greater in VAR, with either normal or abnormal semen profiles, than in control fertile subjects. Likewise, ROS levels were significantly higher in both subgroups of VAR patients [23]. Sakamoto and colleagues showed significantly higher hexanoyl-l-lysine (HEL) concentrations and SOD activity in the seminal plasma of azoospermic and oligozoospermic patients; those with VAR had significantly higher NO, HEL, and SOD activity in plasma. Oligozoospermic patients with VAR presented significantly higher IL-6 in seminal plasma. After varicocelectomy, a significant increase in sperm concentration was found, together with reductions in NO, HEL, and 8-hydroxydeoxyguanosine (8-OHdG) levels, in SOD activity and in IL-6, and a decrease in the percentage of apoptosis-positive sperm, evaluated by sperm DNA fragmentation [24]. A prospective study, before and after varicocelectomy, was also performed in subfertile patients. The parameters for evaluating OS changes were (a) the 4977 bp deletion of mitochondrial DNA in sperm, as detected by PCR, (b) the 8-OHdG content in spermatozoa DNA, measured by an HPLC electrochemical method, and (c) seminal plasma protein thiols and ascorbic acid. Varicocelectomy had a positive effect on seminal parameters in 22/30 patients, with a decrease in the 4977 bp deletion of mitochondrial DNA and in 8-OHdG levels, and an increase in plasma thiols and ascorbic acid. Interestingly, also in the eight patients whose semen quality did not improve after surgery, a significant decrease in 8-OHdG in sperm DNA and an increase in thiols and ascorbic acid were observed [25]. A standardized semen quality score, proposed by Pasqualotto and colleagues by applying principal component analysis to nine semen characteristics, did not show differences between infertile and fertile subjects with VAR; however, the mean quality score was lower than that in healthy controls. Moreover, the authors showed significantly lower ROS-TAC scores in infertile males compared with control subjects, but the scores of non-VAR fertile subjects were not significantly different from those of fertile men with VAR [26]. Therefore, it can be reasonably hypothesized that the fertility potential in VAR can decline due to OS.
A meta-analysis collecting 23 human studies on the topic of OS in VAR-associated infertility has been published [27]. Four of these studies were selected for their similar methods of measuring ROS. The overall estimate showed higher concentrations of ROS and lower total antioxidant capacity in VAR than in controls. In the above-mentioned paper by Meucci et al. [20], we showed a Lag value (TAC) significantly greater in VAR patients than in non-VAR; however, it should be remembered that measuring TAC is different from measuring ROS. Oligospermic VAR patients showed the greatest TAC values. Lag and sperm motility were significantly correlated in normospermic VAR patients, and follicle-stimulating hormone (FSH) showed a significant inverse association with Lag in the same group. These partially unexpected results were interpreted as an ineffective utilization of antioxidants in oligospermic VAR, while in normozoospermic VAR the direct correlation between TAC and motility suggested a protective role toward sperm motility. Finally, in the same subgroup, the inverse correlation with FSH suggested a better utilization of antioxidants with increasing FSH levels, a compensation occurring through still unknown mechanisms. However, in the light of the latest studies, sperm density could be seen as a factor of antioxidant consumption.

Another important antioxidant is the lipophilic molecule coenzyme Q10 (CoQ), also known as ubiquinone for its ubiquitous presence in animals and different tissues. It is a component of the mitochondrial respiratory chain and is therefore potentially involved in spermatozoa function through both its energetic and antioxidant properties. We assayed for the first time CoQ levels both in total seminal fluid and in seminal plasma (obtained by centrifugation) [28]. In this original study, conducted on 77 subjects with normal or pathological findings at a standard semen analysis (according to WHO criteria [29]), CoQ levels, measured by HPLC, showed a significant correlation with sperm count and sperm motility. However, VAR patients represented an interesting exception: in these patients the correlation with sperm concentration was preserved, whereas the correlation with sperm motility was lacking. Moreover, they showed an increased plasma to total seminal CoQ ratio in comparison with the other subjects. We suggested a possible molecular defect in VAR; a relative deficiency or a defective utilization of CoQ in sperm cells could contribute to the respiratory chain defect reported in spermatozoa of these subjects, where a reduction in O2 consumption had been shown [30].

Figure 1: Standard sperm analysis and total antioxidant status (values are medians; data from [33]).

In a following study, we also determined CoQ in the cell pellet of spermatozoa, obtained after centrifugation of semen [31]. We found, once again, a differential pattern in VAR and non-VAR subjects. In non-VAR, a higher concentration of CoQ (expressed as ng per million cells) was present in the spermatozoa of oligo- and asthenozoospermic patients. This relationship was not observed in VAR patients, who also showed slightly lower intracellular absolute values of CoQ. Since CoQ is involved in the cell defence against free radical damage, a higher intracellular concentration may represent a mechanism of protection for spermatozoa. In VAR patients, this mechanism seems to be defective, leading to a higher sensitivity to oxidative damage.
The results were confirmed in extended groups of patients, compared to controls matched with VAR according to seminal parameters (idiopathic oligozoospermia, isolated asthenozoospermia, normal fertile men) [32]. We remarked on the significantly higher proportion of CoQ in seminal plasma; as it reflects an interchange between intra- and extracellular compartments, the different distribution in VAR patients could represent a greater sensitivity to peroxidative damage and could suggest its reduced utilization for energy and therefore a defective motility, even in patients with a normal sperm count. More recently, we studied a group of unselected infertile patients (n = 100, divided into 3 subgroups: VAR, infections, other etiologies) and 31 fertile men, also correlating Lag values (TAC) with circulating hormones: gonadotropins, testosterone, estradiol, fT3, fT4, TSH, and prolactin (PRL) [33]. We confirmed the finding of a higher TAC in VAR than in controls, while lower values were found in patients with inflammation. The regression analysis between hormones and seminal parameters showed an inverse correlation between PRL and sperm motility and a direct correlation of TAC with PRL or fT4, but not with gonadotropins or gonadal steroids. An interesting suggestion was that systemic hormones may play a role in regulating seminal TAC, including hormones, such as thyroid hormones and prolactin, that are not usually tested in the first-level evaluation of male patients with fertility problems. Figure 1 shows the main results found in that study with respect to TAC differences in the various groups. In another study, a multivariate analysis including FSH, Lag, percentage of forward-progressive spermatozoa, oligozoospermia, and VAR showed a strong inverse correlation between FSH and motility (r2 = 0.31, P > F = 0.0007), not modified by Lag (r2 = 0.31, P > F = 0.002), suggesting that the link between FSH levels and sperm motility is probably mediated by the length of the Lag phase and pointing to a compensatory role of FSH in modulating antioxidant systems [34].

Effects of Surgical Treatment

The effect of varicocelectomy on OS is, similarly, not univocally clear. Vitamin E has been demonstrated to be positively affected by surgical VAR repair [35]. The relationship between varicocelectomy and plasma OS has also been studied in children (10-16 years) with left-sided VAR and ipsilateral testicular hypoplasia, by evaluating basal (presurgery) thiobarbituric acid reactive substances and plasma peroxidation susceptibility (lag time and slope) both in peripheral blood samples and in samples from the pampiniform plexus. Peripheral blood samples were reevaluated 1 year after surgery. Both parameters were comparable in peripheral and pampiniform blood and higher compared with controls. After surgery they significantly decreased, again suggesting that surgical varicocelectomy with venous shunt construction reduces OS. The study highlights that OS is present in children and adolescents with VAR [36]. Our group also studied VAR patients in an attempt to verify whether varicocelectomy had an effect on CoQ distribution in seminal fluid [37]. Only a partial reversion was observed: the seminal plasma CoQ/total CoQ ratio (higher in VAR vs. controls) decreased, but the correlation between total CoQ and motility was not restored; instead, a peculiar correlation between cellular CoQ and motility (an inverse correlation, observed before surgery at variance with normal subjects) was no longer detectable in postoperative VAR patients.
Effects of Exogenous CoQ Treatment

The findings mentioned in the previous paragraphs constitute the rationale for treating infertile subjects, particularly VAR patients, with exogenous CoQ. Lewin and Lavon [38] originally reported the effect of CoQ on sperm motility in vitro: a significant increase in motility had been observed in sperm obtained from asthenozoospermic men and incubated with exogenous CoQ, whereas no significant variation was reported in the motility of sperm cells from normal subjects. The same study also reported the effect of exogenous CoQ in vivo, in a group of patients with low fertilization rates after in vitro fertilization with intracytoplasmic sperm injection for male factor infertility. No significant changes were reported in most sperm parameters, but a significant improvement was noticed in the fertilization rate after treatment with CoQ at a dosage of 60 mg/day for a mean period of 103 days.

To investigate a potential therapeutic role, Balercia and colleagues administered CoQ to a group of 22 idiopathic asthenozoospermic infertile patients [39], classified according to WHO-1999 criteria [29]. Patients were given CoQ, 200 mg/day divided into two doses per os, for 6 months. Semen analysis, including computer-assisted sperm analysis of motility (C.A.S.A.), and CoQ and phosphatidylcholine assays were performed at baseline and after 6 months of therapy. A further semen analysis was performed 6 months after interruption of therapy (wash-out). After treatment, an increase in CoQ and phosphatidylcholine concentrations was found both in seminal plasma and in sperm cells. Regarding semen, a significant difference was found in forward motility of sperm cells after 6 months of CoQ oral supplementation. The main result was the improvement of sperm motility, also confirmed by means of computer-assisted determination of kinetic parameters (significant increases in VCL and VSL). A positive dependence, assessed using Cramér's index of association, was evident among the relative variations of seminal plasma or intracellular CoQ content and of the C.A.S.A. kinetic parameters (Cramér's V = 0.4637, 0.3818, 0.3467, and 0.5148, respectively). A significant reduction in sperm forward motility was reported after 6 months of washout, whereas no significant differences were found in sperm cell concentration and morphology. These results constitute the first demonstration that exogenous administration of CoQ increases its levels in seminal plasma and in spermatozoa. The increment was important, especially in seminal plasma, where post-treatment levels were three times higher than basal ones. Similar increases of CoQ concentration (two- to threefold higher than the baseline value) are commonly found in blood plasma after chronic administration of the quinone [40]. As CoQ is a highly lipophilic molecule, we could reasonably hypothesize its diffusion through the phospholipid bilayer of cellular membranes, but we presently do not know whether transport from blood plasma to testicular and accessory male genital glands is passive or involves an active mechanism. Nevertheless, the good degree of association among these variables, according to Cramér's V index of association, supports the hypothesis of a pathogenetic role of CoQ in asthenozoospermia, in agreement with previously reported data [32]. The apparent improvement of the spontaneous pregnancy rate, even though this was not a primary endpoint of the study, also suggests that this therapeutic approach may be beneficial.
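For readers unfamiliar with Cramér's index of association used above, the following short sketch computes Cramér's V from a contingency table; the 2x2 table in the comment uses invented counts purely to show the mechanics and does not reproduce the data of the cited trial.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V for an r x c contingency table of counts."""
    table = np.asarray(table)
    chi2 = chi2_contingency(table)[0]
    n = table.sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k))

# Illustrative only: patients cross-classified by whether seminal CoQ increased
# and whether sperm motility improved (invented counts, not study data)
# cramers_v([[14, 4], [3, 9]])   # roughly 0.5, i.e. a moderate association
```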
These results were confirmed by a double-blind, placebo-controlled clinical trial carried out by the same group [41], using a CoQ dosage similar to that used in the previous open trial on male infertility. The study design was a 1-month run-in, 6 months of therapy (30 patients) or placebo (30 patients), and a further 3-month follow-up. The study confirmed the increase in CoQ levels both in seminal plasma and in sperm cells; CoQH (the reduced form of CoQ, with a higher antioxidant efficacy) showed a similar increase. A significant improvement of sperm cell total motility and forward motility was observed in the treated group after 6 months of CoQ administration. The improvement of sperm cell kinetic parameters was also confirmed after computer-assisted analysis, with a significant increase in both VCL and VSL after treatment. Moreover, patients with a lower baseline value of motility and lower levels of CoQ had a statistically significantly higher probability of responding to the treatment. Wash-out data confirmed the expected treatment-related results. Nine spontaneous pregnancies were achieved during the observation period. After opening the randomisation list, it was found that six of the patients who had impregnated their female partner had undergone CoQ therapy (three after 4 months, one after 5 months and one after 6 months of treatment). Three pregnancies occurred in partners of patients undergoing placebo treatment: one after 2 months of treatment and the other two after 3 months of washout.

A positive effect of CoQ treatment on sperm features was also confirmed in a study by Safarinejad [42], in which 212 infertile men with idiopathic oligoasthenospermia were treated with either CoQ (300 mg/day) or placebo. Treatment lasted 26 weeks and was followed by a 30-week washout. Significant improvements in sperm concentration and motility were found with CoQ therapy. Sperm morphology assessed by the Kruger classification revealed an increase in the percentage of normal forms in the CoQ group. The CoQ group also showed a significant decrease in serum FSH and LH at the end of the 26-week treatment phase. By the end of the treatment phase, the percentage of acrosome reaction had increased in the CoQ group, whereas it remained unchanged in the placebo group. Monitoring the pregnancy rate was not among the aims of this study.

Taken together, these studies show an improved sperm motility upon exogenous CoQ administration, which could be explained on the basis of the well-known involvement of CoQ in mitochondrial bioenergetics and of its widely recognized antioxidant properties. The increased concentration of CoQ in seminal plasma and sperm cells, the improvement of semen kinetic features after treatment, and the evidence of a direct correlation between CoQ concentrations and sperm motility strongly support a direct cause/effect relationship.

Conclusion

Even though it is clear that oxidative stress is present in varicocele as a pathogenic mechanism, the interpretation of analytical data is not simple. In fact, antioxidant values can be influenced by different modulatory factors and, in any case, always represent the balance between synthesis and utilization. A deeper insight into these molecular mechanisms could lead to a greater knowledge of so-called unexplained infertility. However, different aspects of varicocele physiopathology still need to be investigated, including the age-related effect of oxidative stress and the biochemical composition of seminal plasma.
These data await integration with novel approaches to infertility studies, including genetic and proteomic investigations, which can further clarify to what extent varicocele affects sperm function and which predictive values are available to better guide the indication for surgical or medical therapy in this condition.
v3-fos-license
2024-03-31T15:51:53.986Z
2024-03-26T00:00:00.000
268770882
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1660-3397/22/4/146/pdf?version=1711463024", "pdf_hash": "a65b3c12bf3648c4e91df2333f0d567021c26c6e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42586", "s2fieldsofstudy": [ "Environmental Science", "Chemistry" ], "sha1": "8fb083a5becae00fe7dedf468955163ecef46686", "year": 2024 }
pes2o/s2orc
Integrated Process for Schizochytrium Oil Extraction, Enzymatic Modification of Lipids and Concentration of DHA Fatty Acid Esters Using Alternative Methodologies

Marine microalgae Schizochytrium sp. have a high content of docosahexaenoic acid (DHA), an omega-3 fatty acid that is attracting interest since it prevents certain neurodegenerative diseases. The obtention of a bioactive and purified DHA fatty acid ester using a whole-integrated process in which renewable sources and alternative methodologies are employed is the aim of this study. For this reason, lyophilized Schizochytrium biomass was used as an alternative to fish oil, and advanced extraction techniques as well as enzymatic modification were studied. Microalgal oil extraction was optimized via a surface-response method using pressurized liquid extraction (PLE), obtaining high oil yields (29.06 ± 0.12%) with a high concentration of DHA (51.15 ± 0.72%). Then, the enzymatic modification of Schizochytrium oil was developed by ethanolysis using immobilized Candida antarctica B lipase (Novozym® 435) at two reaction temperatures and different enzymatic loads. The best condition (40 °C and 200 mg of lipase) produced the highest yield of fatty acid ethyl ester (FAEE) (100%) after 8 h of reaction, attaining a cost-effective and alternative process. Finally, an enriched and purified fraction containing DHA-FAEE was obtained using open-column chromatography with a remarkably high concentration of 93.2 ± 1.3% DHA. The purified and bioactive molecules obtained in this study can be used as nutraceutical and active pharmaceutical intermediates of marine origin.

Nevertheless, to efficiently increase the amount of DHA present in neuronal tissues, DHA may be incorporated in the diet via nutritional supplements and nutraceuticals, since conversion from α-linolenic acid to DHA in adults is limited [8,9]. In this way, the preferred sources of DHA are fish and krill, but microalgae have emerged as an alternative and ecological source of DHA [8,10] that is in agreement with the sustainability of marine resources. Therefore, bioactive compounds from microalgal oils are gaining importance as a renewable and sustainable source of DHA [8,11,12].

Among the different microalgae that are harvested worldwide and accepted as novel food, Schizochytrium sp. is emphasized due to its high lipid content, especially regarding the high amounts of DHA (around 40% DHA), whose accumulation depends on culture conditions [13,14]. This microalgal oil is advantageous in the industry because of its heterotrophic culture conditions, which enable elevated DHA production in the form of triacylglycerols (TAG) [15]. Moreover, Schizochytrium oil also contains three other principal fatty acids, myristic (C14:0), palmitic (C16:0), and docosapentaenoic acid (C22:5 n-6, DPA), among the main acids found in its composition [13,14]. For their extraction, modern technologies such as pressurized liquid extraction (PLE) arise as an alternative to traditional extraction procedures [16,17]. PLE uses high temperatures and pressures that can extract bioactive compounds extremely quickly, avoiding their oxidation and deterioration in a process that follows the principles of green chemistry [18][19][20][21][22]. Thus, the possibility of modifying these parameters enables the optimization of lipid extraction for its consequent hydrolysis by lipases.
Lipases (EC 3.1.1.3) are used in a wide range of industrial applications [23][24][25] in the pharmaceutical and food industries. Over the last decade, oil hydrolysis and transesterification by lipases have become more important [26][27][28]. However, only a few reports focus on the ethanolysis of microalgae oil using lipases [29][30][31][32]. The esterification of LC-PUFA, such as DHA, is developed by immobilized lipases under mild conditions to maintain the labile structure of the fatty acid. The use of immobilized lipases as industrial biocatalysts is the most suitable method for developing more selective, controlled, and rapid procedures in the industry [33][34][35][36]. Furthermore, immobilization allows for the industrial reuse of the biocatalyst for several cycles as a result of an increase in its stability, and the easy separation of the desired product, yielding cost-efficient procedures [37][38][39][40].

In this study, a novel strategy to obtain an enriched DHA oil from Schizochytrium sp. was proposed. For this aim, different extraction conditions using PLE technology were compared and optimized, and the extracted oil was characterized by GC-MS. The enzymatic ethanolysis of microalgal oil was developed using CALB Novozym® 435 to produce FAEE with the highest concentration of DHA, which can serve as a food supplement or nutraceutical for structured phospholipid synthesis to prevent neurodegenerative diseases. Thus, the hypothesis of this paper was to determine the possibility of producing DHA concentrates from sustainable raw materials such as microalgae using environmentally friendly technologies.

Lipid Extraction from Schizochytrium sp. by Pressurized Liquids Compared to the Soxhlet Method

Extraction of Schizochytrium lipids was compared using two different methods. On the one hand, Soxhlet was used as the traditional procedure for obtaining a reference lipid yield. On the other hand, PLE was proposed as an alternative method for the advanced and fast extraction of bioactive compounds, preserving their bioactivity.

The result obtained with Soxhlet was an extraction yield of 24.04 ± 0.25%. Lipid extraction using pressurized liquids was optimized using surface-response methodology. Parameters such as temperature and solvent polarity (hexane and ethanol) had been evaluated in a previous study on Nannochloropsis lipids [51]. In this study, hexane was used as a non-polar solvent, ethanol as a polar solvent, and a mixture of both (1:1) with mild polarity for lipid extraction, as reported for other microalgae species [52]. Moreover, different extraction times (5, 10 and 15 min) and different temperatures (from 80 to 120 °C) were also evaluated.

Using the Statgraphics 19 program, the surface-response plots shown in Figure 1 were drawn. As can be seen, the influence of different solvent mixtures on the extraction was revealed. Thus, using hexane and ethanol in equal parts (1:1), the yield improved compared to extraction with hexane or ethanol alone for all the times studied. With respect to the static extraction time, there was no difference in the oil yield when this parameter was modified (Appendix A). In addition, an increase in the extraction yield was obtained, as previously found in the literature, with oil yields ranging from 17.13 ± 1.17% to a maximum yield of 29.06 ± 0.12% at 120 °C for 15 min with a hexane-ethanol 1:1 solvent mixture.
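The response-surface optimization reported above was performed in Statgraphics; purely as an illustration of the underlying idea, the following Python sketch fits a second-order (quadratic plus interaction) model to a synthetic 3 x 3 x 3 design and locates its optimum. All numbers in the snippet are invented and are not the experimental data.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in for the design: temperature (degC), static time (min) and
# solvent polarity coded as 0 = hexane, 0.5 = hexane-ethanol 1:1, 1 = ethanol.
grid = np.array(list(product([80, 100, 120], [5, 10, 15], [0.0, 0.5, 1.0])), dtype=float)

# Invented yields with a mild optimum at high temperature and mid polarity.
rng = np.random.default_rng(1)
yields = (15 + 0.08 * grid[:, 0] + 0.1 * grid[:, 1]
          + 12 * grid[:, 2] - 12 * grid[:, 2] ** 2
          + rng.normal(0, 0.5, len(grid)))

# Second-order response-surface model and its predicted optimum on the grid
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(grid), yields)
best = grid[np.argmax(model.predict(poly.transform(grid)))]
print("predicted optimum (temperature, time, polarity):", best)
```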
Thus, extraction with a mixture of hexane-ethanol in a 1:1 proportion was used to scale up the method, with a final volume of 70 mL of solvent. In this case, a 26.15 ± 1.13% extraction yield was achieved, notably similar to that obtained by the standard protocol with less volume. In the scale-up, a solvent-saving system was configured with up to 10 times the amount of microalgae. In both cases, the results were also comparable to the yield found using the Soxhlet method, demonstrating the value of PLE as an alternative to traditional methods by decreasing the amount of non-friendly solvents, such as hexane, as well as the necessary time and energy. Therefore, the scale-up of the extraction method was useful, as it allowed a higher amount of microalgal oil to be produced, enabling the further experiments on enzymatic modification.

Characterization of Schizochytrium sp. Oils

Starting with the HPLC-ELSD characterization (see Section 3.7), all oils used for the experimental development were analyzed, and the results show that they contained only TAG in their composition (see Appendix B), in accordance with the consulted literature [53].
For the characterization of the fatty acids of the TAG by GC-MS, an analysis of Schizochytrium sp. commercial oil and of the microalgae oils obtained using PLE in our laboratory was carried out. To compare their fatty acid profiles, as the oils contained only TAG (no free fatty acids), derivatization in a basic medium is an effective method (see Section 3.5). GC-MS was employed to obtain different fractions of decreasing polarity and separate the fatty acid methyl esters. Fatty acids such as myristic (14:0), palmitic (16:0), stearic (18:0), oleic (18:1), linoleic (18:2), arachidonic (20:4), eicosapentaenoic (EPA), docosapentaenoic (DPA n-3 and n-6), and docosahexaenoic (DHA) were found.

According to the results shown in Table 1, there were some differences between the commercial oil and the oils extracted using PLE with different solvents. In the commercial oil, a lack of myristic acid was found in the fatty acid profile, despite the fact that myristic acid is commonly found in other oils of Schizochytrium sp. described in the literature [13]. In our case, myristic acid (14:0) was present in all microalgae oils obtained using PLE at around 10-13% of the total fatty acid composition.

Table 1. Fatty acid profiles of different Schizochytrium sp. oils (commercial, extracted from biomass by PLE with different solvents, and extracted from biomass using the Soxhlet method) analyzed by GC-MS. Microalgal oil 1 refers to Schizochytrium oil extracted with PLE using hexane. Microalgal oil 2 refers to Schizochytrium oil extracted with PLE using hexane-ethanol (1:1). Data were calculated as a percentage of fatty acid composition relative to the total ± standard deviation (SD).

Moreover, the composition of palmitic acid was similar in all studied oils (around 20% of the total composition), as was that of DPA n-6 (around 15% in all cases). Even though the DHA percentage in the composition was higher in the commercial oil, there was an impact on the whole fatty acid profile, as it did not contain myristic acid; therefore, the representativeness of DHA was different, and the results cannot be directly compared. Accordingly, the oil with the highest amount of DHA among all the extracted oils tested was that produced by PLE with 1:1 hexane-ethanol. More than 50% of the total fatty acid composition corresponded to DHA, exceeding that extracted by Soxhlet by 5%. This result may be due to the methodology applied, since the Soxhlet method uses high temperatures for several hours. In the case of PLE, the sample is in an environment free of oxygen and light, unlike the Soxhlet method, so there is no oxidation of PUFA during the short extraction time, even when temperatures as high as 120 °C are used [54][55][56].

Regarding the fatty acid composition of Schizochytrium sp. oils extracted with different solvent mixtures, the solvent used in PLE influenced the extraction yield (see Section 2.1), but it did not have much impact on the fatty acid composition of the extracted oil.

Enzymatic Ethanolysis of Schizochytrium Oil

To optimize the enzymatic ethanolysis reactions of Schizochytrium oil and produce ethyl esters of the fatty acids, reaction kinetics at different temperatures (30 °C and 40 °C) and different loads of the commercial lipase Novozym® 435 (CALB) were studied.
The results represented in Figure 2 show that the conversion of TAG into FAEE was similar at both reaction temperatures, 30 °C and 40 °C. However, remarkable differences between temperatures in the conversion of the initial TAG were observed over the first two hours of reaction. At 30 °C (Figure 2A), there was 64.35% TAG compared to 20.68% FAEE. Comparing these results with those obtained at 40 °C (Figure 2B) at the same reaction time, there was 54.24% TAG versus 30.18% FAEE. From this point, there was a higher conversion into FAEE at 40 °C than at 30 °C: after 6 h of ethanolysis at 40 °C, the yield of FAEE reached 84.47%, while the yield at 30 °C was 69.84%. When considering the 24-h aliquot, a 100% conversion of TAG into FAEE was achieved in both cases. In all cases, no relevant differences in reaction intermediates (DAG and MAG) were observed, either at 30 °C or 40 °C.

Therefore, the increase in temperature favors the course of the enzymatic reaction, as already discussed in the literature on Novozym® 435 lipase [30]. Subsequently, enzymatic ethanolysis was also performed at 40 °C by adding twice the amount of the enzyme to study the influence of this parameter.

As seen in Figure 3, the reaction with a double load of CALB evolved more rapidly than the reaction with a normal load at the same temperature. After 4 h, a yield of FAEE of 66.58% was achieved with the normal enzymatic load (Figure 2B), whereas with a double amount of enzyme the yield obtained was 91.03%. Moreover, 100% FAEE was reached after only 8 h of reaction with a double enzymatic load, while it took 24 h to obtain this yield with a single load. Therefore, by adding twice the enzymatic load at 40 °C and 200 rpm, 100% FAEE was produced after 8 h, so the time was considerably reduced, although more enzyme was used.

With this process, Schizochytrium oil composed of TAG was enzymatically modified in a mild process at 40 °C to produce FAEE that can be separated to produce concentrates of DHA for different purposes.
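To give a feel for how such conversion kinetics can be summarized, the sketch below fits a simple pseudo-first-order saturation model to the approximate FAEE yields quoted above for 40 °C with the standard enzyme load; this is an illustrative analysis, not the one used by the authors, and the 0 h point is assumed to start at 0%.

```python
import numpy as np
from scipy.optimize import curve_fit

# Reaction time (h) and FAEE yield (%) at 40 degC, standard lipase load,
# approximate values taken from the text
t = np.array([0.0, 2.0, 4.0, 6.0, 24.0])
faee = np.array([0.0, 30.18, 66.58, 84.47, 100.0])

def first_order(t, k):
    """Pseudo-first-order approach to complete conversion."""
    return 100.0 * (1.0 - np.exp(-k * t))

(k_fit,), _ = curve_fit(first_order, t, faee, p0=[0.2])
print(f"apparent rate constant ~ {k_fit:.2f} per h; "
      f"time to 95% conversion ~ {np.log(20) / k_fit:.1f} h")
```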
Open-Column Chromatography

Open-column chromatography was then performed to separate and purify DHA from the rest of the FAEE produced in the enzymatic ethanolysis described above. The sample was divided into different fractions that were eluted with different mixtures of solvents.

The first four fractions were discarded, as the FAEE were retained inside the column because of their affinity. A second round of hexane-ethyl acetate (95:5), corresponding to fraction 5 (F5), was needed for FAEE elution. F5 was analyzed by GC-MS and the corresponding chromatogram is represented in Figure 4 (blue). It can be seen that a large amount of saturated fatty acids (myristic acid and palmitic acid), as well as squalene and a low amount of DPA, were eluted.
Then, fraction 6 (F6) was eluted with hexane-ethyl acetate in a 90:10 proportion and analyzed by GC-MS (Figure 4, red). In this case, the following values of FAEE were obtained: 5.0 ± 0.8% myristic acid, 6.3 ± 1.3% palmitic acid, 1.7 ± 0.1% EPA, 14.9 ± 0.1% DPA, and 72.0 ± 2.2% DHA. The initial proportion of fatty acids is explained in the characterization of microalgal oil 2 (see Section 2.2). Overall, a fraction with almost 90% LC-PUFA, consisting mostly of DHA, was isolated. Moreover, in the next fraction of the open-column chromatography (F7), there was no peak corresponding to FAEE. Therefore, all the DHA extracted from Schizochytrium sp. was eluted in fraction 6, with maximum recovery.

In order to attain a DHA-enriched fraction, the protocol of open-column chromatography was slightly modified, as described in Section 3.9. The first four fractions were discarded. Moreover, fractions P5 and P6 of the purification protocol (performed in duplicate) were composed of myristic and palmitic acid, squalene, a large part of the DPA, and a low fraction of DHA. Subsequently, modifying the hexane to ethyl acetate ratio from 95:5 to 92:8 and using two cycles of elution under the same conditions enabled the total elution of certain FAEE. However, part of the DHA was also eluted in this fraction, preventing the full recovery of DHA.

Furthermore, in the last step of purification, hexane-ethyl acetate in a 90:10 proportion was used. This protocol was also carried out in duplicate, obtaining a fraction with the following values of FAEE: 2.0 ± 0.4% EPA, 4.8 ± 0.9% DPA, and 93.2 ± 1.3% DHA (Figure 5). In this case, a purity higher than 90% DHA was achieved.

Hence, a fraction of 75.0 ± 14.9 mg with a purity of 93.2% DHA was obtained from the 0.3 g of FAEE that was introduced into the open column and produced by enzymatic ethanolysis. When the recovery of each of the duplicates was calculated in terms of average and standard deviation, a yield of 44.72 ± 6.99% DHA was obtained. Taking into account that the percentage of DHA extracted with PLE using hexane-ethanol (1:1) was 51.2% of the total fatty acid composition, almost 90% of the DHA was recovered from Schizochytrium sp. with high purity.
Traditional Lipid Extraction by Soxhlet Method

For Soxhlet extraction [57], 4.0 g of lyophilized Schizochytrium sp. was used, and 150 mL of hexane was added in a Soxhlet extractor with continuous boiling and condensation cycles of the solvent for 6 h (matrix/solvent ratio 1:37.5). Subsequently, the extracts were evaporated with a rotary evaporator (Heidolph Hei-Vap Value HB/G3, Berlin, Germany) at 35 °C, followed by a nitrogen stream to constant weight. Lipid content was determined gravimetrically and calculated as a percentage by weight of dry biomass (Equation (1)).

Lipid content (%) = [Evaporated extract weight (mg) / Dry biomass weight (mg)] × 100    (1)

The extracts obtained were stored under a nitrogen atmosphere in the dark at 4 °C until analysis. In all cases, the experiments were carried out at least in triplicate.

Pressurized Liquid Extraction of Microalgal Biomass

PLE was carried out with a DIONEX ASE 350 extractor (Sunnyvale, California) equipped with stainless steel extraction cells (10 mL volume). Lyophilized Schizochytrium sp. was weighed (1 g) and loaded into the extraction cell, adding 2 g of sand at both ends of the cell. The extraction cell was then filled with the different solvents used: hexane, ethanol, and a mixture of both (1:1); different temperatures (80, 100, and 120 °C) and different static extraction times (5, 10, and 15 min) were also tested. For this purpose, a surface-response experimental design was created by introducing all of the variables mentioned above, using the Statgraphics 19 statistical program.
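For orientation, the screened factor space can be enumerated explicitly. The sketch below simply lists every solvent-temperature-time combination; the actual response-surface design and its statistical evaluation were generated with Statgraphics 19, so this exhaustive listing is only an illustrative assumption, not the program's output.

from itertools import product

solvents = ["hexane", "ethanol", "hexane-ethanol (1:1)"]
temperatures_c = [80, 100, 120]
times_min = [5, 10, 15]

conditions = list(product(solvents, temperatures_c, times_min))
print(len(conditions))    # 27 possible combinations; each tested condition was run in duplicate
for solvent, temperature, time in conditions[:3]:
    print(f"{solvent}, {temperature} °C, {time} min")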
The volume of solvent used was 20-25 mL, depending on cell temperature and pressure. Finally, the extract was recovered under a stream of nitrogen in 50 mL vials and kept at 4 °C until analysis. Each of the extraction conditions was performed in duplicate [51].

Scale-up of Pressurized Liquid Extraction

The extraction procedure was scaled up using 100 mL stainless steel cells. In this case, 20.00 g of lyophilized Schizochytrium sp. was weighed and loaded into the extraction cell, adding sand at both ends of the cell. A solvent-saving mode with a flow rate of 1.5 mL/min was used. The static extraction time was 15 min and the volume of solvent used was 69-75 mL. The scale-up was performed in triplicate.

Characterization of Schizochytrium sp. Oils by GC-MS

The fatty acid profile of Schizochytrium sp. oil was determined in duplicate, both from the commercial oil and from the different extractions carried out by PLE. Derivatization in a basic medium was performed following the ISO TC34/SC 5 standard method (see Appendix C for explanation) [58]. The obtained fatty acid methyl esters (FAME) were analyzed by gas chromatography coupled to a mass spectrometer (GC-MS) using 100 µL of FAMEs and 400 µL of hexane for GC-MS analysis.

The fatty acid analysis was performed on an Agilent Technologies (Palo Alto, CA, USA) 5975 MSD Series gas chromatograph-mass spectrometer with an automatic injector and He as the carrier gas. An Agilent Technologies HP-88 capillary column was used, with dimensions of 100 m × 0.25 mm × 0.20 µm. The injection temperature was 250 °C. The oven was kept at 175 °C for 8 min. The temperature was then raised at a rate of 3 °C/min to 230 °C, which was maintained for an additional 10 min. The temperature of the detector was 230 °C. The amount of sample injected was 1 µL with a 1:20 split. The mass spectrometer used an ionization potential of 70 eV and an atomic mass range from 30 to 400 u (atomic mass units).
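A small practical aside: the oven program given above implies a total GC run time of roughly 36 min per injection. A minimal sketch of that arithmetic, using only the parameters stated in the text:

initial_hold_min = 8.0                    # hold at 175 °C
ramp_min = (230 - 175) / 3.0              # ramp at 3 °C/min up to 230 °C, ~18.3 min
final_hold_min = 10.0                     # hold at 230 °C
total_run_min = initial_hold_min + ramp_min + final_hold_min
print(round(total_run_min, 1))            # ~36.3 min per injection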
The fatty acids were identified by comparing their retention times and mass spectra (NIST Mass Spectral Library Version 2.0) with those obtained from the standards, expressing the amounts as percentages of the total FA content. The findings were compared with updated bibliographic references.

Enzymatic Ethanolysis of Schizochytrium sp. Oil

First, 200 mg of 3 Å molecular sieves, which remove water to prevent hydrolysis reactions, and 100 mg of commercial CALB were weighed and placed in a 30 mL glass vial. Then, 2.05 mL of hexane, 150 µL of absolute ethanol, and 300 µL of Schizochytrium sp. oil extracted by PLE were added. Both solvents were dehydrated by contact with the molecular sieves. The reaction was carried out in a Heidolph incubator equipped with a platform shaker (Unimax 1010) and a heating unit, with constant agitation (200 rpm) at different temperatures (30 °C and 40 °C). The reaction was also tested at 40 °C with a double enzymatic load, according to the previously described method.

To study the reaction kinetics, 25 µL of the suspension (reaction medium) was taken at different times (0, 20 min, 40 min, 1 h, 2 h, 4 h, 6 h, and 24 h), diluted 25-fold in hexane, and analyzed by high-performance liquid chromatography coupled with a light-scattering detector (HPLC-ELSD). All reactions were conducted in duplicate.

The optimal signal and resolution of the ELSD detector were achieved with the following conditions: evaporator and nebulizer temperature of 30 °C, and evaporator gas (N2) at 1.6 SLM. To identify the different lipids present in the sample, such as TAG, diacylglycerols (DAG), monoacylglycerols (MAG), and FAEE, standards were individually injected and compared with standards already analyzed in previous studies.

Fractionation of Fatty Acid Ethyl Esters by Open-Column Chromatography

First, the filler material (6 g of silica) was added to the burette, which has a glass frit in the lower part so that the silica remains fixed, adding the hexane necessary to slurry the silica into the burette. Then, more hexane was added to prevent the formation of bubbles in the column, and the sample was loaded onto the column (0.3 g of the FAEE obtained from the scale-up of the enzymatic ethanolysis). To start separating the sample components into fractions (F), different mixtures of hexane and ethyl acetate were added to the column: 10 mL of hexane (F1); 10 mL of hexane-ethyl acetate (99:1) (F2); 10 mL of hexane-ethyl acetate (98:2) (F3); 15 mL of hexane-ethyl acetate (95:5) (F4); 5 mL of hexane-ethyl acetate (95:5) (F5); 5 mL of hexane-ethyl acetate (90:10) (F6); and another 5 mL of hexane-ethyl acetate (90:10) (F7). Experiments were performed in duplicate. Fractions were kept refrigerated until GC-MS analysis.

Analysis of Fatty Acid Composition by GC-MS

The fatty acid profile of the different fractions obtained by open-column chromatography was analyzed by GC-MS. Fractions were derivatized as reported in Section 3.5, and FAME were analyzed by GC-MS according to the previously described method. In this case, 200 µL of each fraction was collected in 400 µL of hexane for GC-MS analysis.
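The elution scheme used for the open-column fractionation described above can be summarized compactly; the sketch below merely restates the fraction names, volumes, and hexane:ethyl acetate ratios listed in the text.

# (fraction, volume in mL, hexane:ethyl acetate ratio)
elution_scheme = [
    ("F1", 10, "100:0"),
    ("F2", 10, "99:1"),
    ("F3", 10, "98:2"),
    ("F4", 15, "95:5"),
    ("F5", 5, "95:5"),
    ("F6", 5, "90:10"),
    ("F7", 5, "90:10"),
]
total_volume_ml = sum(volume for _, volume, _ in elution_scheme)
print(total_volume_ml)    # 60 mL of eluent in total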
Conclusions

In conclusion, it should be noted that the use of alternative extraction techniques for microalgal biomass, such as pressurized liquids, with subsequent enzymatic ethanolysis of the produced Schizochytrium oil, enables a sustainable and environmentally friendly procedure to effectively generate FAEE of DHA. The ethyl esters were fractionated by open-column chromatography and two objectives were achieved: on the one hand, a DHA-enriched fraction was obtained with complete recovery; on the other hand, it was possible to produce a fraction with high purity and recovery of DHA (93.2% and 86%, respectively), which may be used as a food supplement and to develop nutraceuticals and active pharmaceutical intermediates of marine origin in an integrated process.

Figure 1. Surface-response plots obtained with the Statgraphics 19 program with different extraction times. (A) Corresponds to a static extraction time of 5 min. (B) Corresponds to a static extraction time of 10 min. (C) Corresponds to a static extraction time of 15 min.

Figure 3. Kinetics of enzymatic ethanolysis with a double load of CALB at 40 °C and 200 rpm.

Figure 4. Analysis of FAEE profiles in fraction 5 (blue) and fraction 6 (red) obtained by open-column chromatography using GC-MS.

Figure 5. Analysis of FAEE profile in an enriched DHA fraction isolated by purification in open-column chromatography using GC-MS.
Different transmembrane domains determine the specificity and efficiency of the cleavage activity of the γ-secretase subunit presenilin

The γ-secretase complex catalyzes the intramembrane cleavage of C99, a carboxy-terminal fragment of the amyloid precursor protein. Two paralogs of its catalytic subunit presenilin (PS1 and PS2) are expressed which are autocatalytically cleaved into an N-terminal and a C-terminal fragment during maturation of γ-secretase. In this study, we compared the efficiency and specificity of C99 cleavage by PS1- and PS2-containing γ-secretases. Mass spectrometric analysis of cleavage products obtained in cell-free and cell-based assays revealed that the previously described lower amyloid-β (Aβ)38 generation by PS2 is accompanied by a reciprocal increase in Aβ37 production. We further found PS1 and PS2 to show different preferences in the choice of the initial cleavage site of C99. However, the differences in Aβ38 and Aβ37 generation appear to mainly result from altered subsequent stepwise cleavage of Aβ peptides. Apart from these differences in cleavage specificity, we confirmed a lower efficiency of initial C99 cleavage by PS2 using a detergent-solubilized γ-secretase system. By investigating chimeric PS1/2 molecules, we show that the membrane-embedded, nonconserved residues of the N-terminal fragment mainly account for the differential cleavage efficiency and specificity of both presenilins. At the level of individual transmembrane domains (TMDs), TMD3 was identified as a major modulator of initial cleavage site specificity. The efficiency of endoproteolysis strongly depends on nonconserved TMD6 residues at the interface to TMD2, i.e., at a putative gate of substrate entry.
Taken together, our results highlight the role of individual presenilin TMDs in the cleavage of C99 and the generation of Aβ peptides. γ-Secretase is an intramembrane protease which is known to cleave around 150 different substrates, all of which are type I single-spanning integral membrane proteins (1). Cleavage of C99, a proteolytic fragment of amyloid precursor protein (APP), generates a series of amyloid-β (Aβ) peptides. Since some Aβ peptides are widely believed to cause Alzheimer's disease (AD) (2), C99 cleavage by γ-secretase arguably represents the most intensely investigated case of intramembrane proteolysis. The production of Aβ peptides by C99 cleavage is mainly initiated at alternative ε48and ε49-sites within its TMD (3)(4)(5)(6). This endoproteolytic initial cleavage liberates the C-terminal APP intracellular domain (AICD). Proteolysis continues toward the N-terminus and releases predominantly tripeptides and tetrapeptides by cleaving alternative ζand γsites (7,8). Thus, two alternative product lines can be distinguished, depending on whether cleavage is initiated at the ε48or at the ε49-site. Processive cleavage along these product lines and some cross-over between them (9)(10)(11) generates Aβ peptides of different length and toxicity (12)(13)(14). It had remained unclear, however, how the identity of the PS paralog affected the specificity of ε-cleavage and which protein domains are responsible for the different cleavage activities of PS1 and PS2. The conformational diversity of γ-secretase holding PS1 (30) indicated that PS is a rather dynamic enzyme. Molecular modeling indeed suggested that PS exists in at least two different conformational states distinguished by the distance between both catalytic aspartates located on TMD6 and TMD7, respectively (31,32). Prior to cleavage, a substrate needs to be recognized by the enzyme which is followed by its translocation to the active site aspartates located on TMDs 6 and 7 of PS. The active site contains water required for proteolysis and thus needs to be shielded from the unpolar membrane environment. One of the current challenges in understanding the functional architecture of γ-secretase is to elucidate how its different domains cooperate in recognizing, engulfing, and unfolding of the substrate, thus preparing it for the various cleavage events (discussed in: (33)). Here, we compared several measures of cleavage activity exhibited by γ-secretases harboring PS1 or PS2 in cell-based and cell-free assays. While confirming a lower endoproteolytic activity of PS2 relative to PS1, we also detected a lower Aβ38/Aβ37 ratio of PS2-containing versus PS1-containing γsecretase. We found that the nonconserved amino acids responsible for these differences mainly reside within the transmembrane part of the PS NTF with a minor contribution by the CTF. Further, we identified TMD3 to affect initial ε-cleavage site specificity. By contrast, the efficiency of endoproteolysis is not affected by TMD3, but highly dependent on TMD6, among other TMDs within the NTF. Results The aim of this study was to probe the importance of different PS domains for various aspects of C99 cleavage. 
C99 cleavage by γ-secretase comprises (i) ε-cleavage efficiency, i.e., the yield of the various AICD species produced by endoproteolysis at all ε-sites, (ii) ε-site specificity, i.e., the relative efficiency of initial cleavage at ε-sites, and (iii) processivity, i.e., the relative efficiency of exopeptidase-like proteolysis across ζ-sites to the γ-sites along both product lines, including cross-over events between the lines. Altogether, these parameters determine amounts and diversity of resulting Aβ peptides. They also define the toxicity of the resulting mixture, as toxicity mainly depends on Aβ42 content (34). Furthermore, recent studies showed the association of higher Aβ38 levels in cerebrospinal fluid with lower risk of AD-related changes (35) and suggested the cerebrospinal fluid Aβ37/Aβ42 ratio as an improved biomarker for AD development (36). Thus, these observations indicate a potentially protective role of shorter Aβ peptides. Our approach was to compare various measures of cleavage activity for the PS1 and PS2 paralogs and to identify protein domains responsible for any differences uncovered. In doing so, we hoped to obtain novel insights into the functional architecture of PS. PS domains shaping the relative abundance of Aβ peptides Here, we asked which parts of PS contribute to its ability to produce the major Aβ peptides. To this end, both PS variants were expressed in human embryonic kidney 293 cells stably expressing Swedish mutant APP (HEK293/sw) within a PSfree genetic background (HEK293/sw PS1/PS2 −/− ) (37). All experiments were performed with pooled stable transfectants of a given PS variant, in order to average potential variations in the expression of individual clones. We initially compared the pattern of Aβ peptides secreted by these cells to the pattern produced by endogenous γ-secretase of HEK293/sw cells by combined immunoprecipitation and MALDI-TOF mass spectrometry (IP-MS) from conditioned media. Expression of PS1 results in a pattern comprising a dominant Aβ40 peptide, minor Aβ37 and Aβ38 peptides at similar amounts, plus less prevalent Aβ39 and Aβ42; this is collectively designated here as "PS1 phenotype". After transfection with PS2, we mainly noted a lower Aβ38/Aβ37 ratio, the "PS2 phenotype" (Fig. S1). The HEK293/sw cells expressing endogenous PS1 and PS2 produced similar ratios of Aβ including an Aβ38/Aβ37 ratio close to unity. This suggests not only that heterologously expressed PSs are principally comparable in their C99 cleavage activities to their endogenous counterparts. It also indicates that endogenous PS1 is mainly responsible for the mixture of secreted Aβ peptides by HEK293/sw cells. Residues that are not conserved between PS1 and PS2 are distributed over the entire sequence (Fig. S2). In the following, we probed the importance of different PS domains for producing the different Aβ38/Aβ37 ratios of PS1 and PS2 by testing a range of chimeric constructs ( Fig. 1A and Table S1) by fusing the PS NTF and CTF at the site of endoproteolysis (38)(39)(40). The exemplary mass spectra of Aβ peptides ( Fig. 1B and Table S2) reveal that the PS1 phenotype results after fusing the complete PS1 NTF to the CTF of PS2, as in construct PS1/2. By contrast, fusing the PS2 NTF to the PS1 CTF in PS2/1 retains the PS2 phenotype. Quantifying the mean Aβ38/Aβ37 ratios from the peak heights of mass spectra from multiple samples confirms this picture (Fig. 1C). 
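In practice, the quantification described here amounts to reading the Aβ37 and Aβ38 peak heights from each spectrum, forming the ratio per replicate, and averaging across replicates. A minimal sketch with made-up peak heights (placeholders, not measured values) could look like this:

import statistics

# Hypothetical peak heights (arbitrary units) for n = 4 replicate spectra
replicates = [
    {"abeta37": 210.0, "abeta38": 240.0},
    {"abeta37": 195.0, "abeta38": 230.0},
    {"abeta37": 220.0, "abeta38": 250.0},
    {"abeta37": 205.0, "abeta38": 235.0},
]

ratios = [spectrum["abeta38"] / spectrum["abeta37"] for spectrum in replicates]
mean_ratio = statistics.mean(ratios)
sem_ratio = statistics.stdev(ratios) / len(ratios) ** 0.5   # reported as mean ± SEM
print(round(mean_ratio, 2), round(sem_ratio, 3))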
In order to map the nonconserved residues being responsible for the differential Aβ production of PS1 versus PS2 more finely, we next examined the importance of NTF subdomains by grafting groups of TMDs from PS1 onto the PS2 template ( Fig. 2A). Both, PS2ρTM1-4 and PS2ρTM3-6 clearly confer a PS1 phenotype, prompting an even more granular mapping of the responsible TMDs. While PS2ρTM1-2 and PS2ρTM4-5 retain the PS2 phenotype, PS2ρTM3-4 is equivalent to PS1 (Fig. 2, B and C). At the level of individual TMDs, we find that PS2ρTM3 indeed behaves like PS1, while PS2ρTM4 and PS2ρTM6 exhibit the PS2 phenotype (Fig. 2, B and C). In a technically different approach, immunoprecipitated Aβ peptides from conditioned media were analyzed by immunoblotting after gel-electrophoretic separation using high-resolution Tris-Bicine-Urea SDS-PAGE (41). In line with the mass spectra, PS1-and PS2-containing γ-secretases show distinct production of Aβ37 and Aβ38. While the amount of Aβ38 produced by PS1-containing γ-secretase even exceeds the amount of Aβ37 (Fig. 2D), the inverse is true for PS2 γ-secretase. Focusing on the most relevant chimeric PS constructs described above, immunoblotting confirms a PS1 phenotype for PS2ρTM3-4 and PS2ρTM3. PS2ρTM4 appears to display similar levels of Aβ37 and Aβ38 on the immunoblot (IB) (Fig. 2D). In designing the chimera presented above, we transferred the nonconserved residues of a given TMD plus both adjoining solvent-exposed loop regions; in case of TMD6, we included the C-terminal region up to the endoproteolytic cleavage site (Table S1). To collectively assess the contribution of membrane-embedded versus loop residues in accounting for PS1/PS2 differences, we expressed a construct where we had transferred all TMDs from PS1 to PS2 while maintaining the loop regions (Fig. 3A). Indeed, a construct holding all membrane-embedded amino acids from PS1 and loop residues from PS2 retains the PS1 phenotype. Further, the dominance of the NTF is preserved upon grafting only the nonconserved membrane-embedded NTF residues onto PS2 while grafting the TMDs of the CTF retains the PS2 phenotype (Fig. 3, B and C). We concluded this set of experiments by asking whether the pattern of secreted Aβ peptides produced in cell-based assays (Figs. 1-3) is influenced by differential access of C99 to PS1containing γ-secretase in the plasma membrane relative to PS2-containing γ-secretase residing in intracellular membranes, such as endosomes (22,23). We thus decided to compare the secreted Aβ pattern to the pattern produced in cell-free assays after membrane solubilization of γ-secretase with the detergent CHAPSO (42) by MALDI-TOF mass spectrometric analysis. In detergent, both PS paralogs are expected to have equal substrate access. As a substrate, we used the recombinant C99-based C100-His 6 construct (17). Our results show that the Aβ38/Aβ37 ratio produced by PS1 in detergent is even higher than that in cell-based assays while the inverse is true for PS2 (Fig. 4, A and B). Also, Aβ37 and Aβ38 obtained from cell-free assays are more abundant relative to Aβ40 than in conditioned media. Given the more pronounced Aβ38/Aβ37 discrimination of the PS1 and PS2 phenotypes under cell-free conditions, we further wanted to characterize PS2ρTM3 which behaves like PS1 in the cellbased assays. Again, PS2ρTM3 exhibits an Aβ38/Aβ37 ratio that is close to that produced by PS1, however differs to both wildtype (wt) PSs. 
Parallel measurements of control experiments using the γ-secretase inhibitor LY-411575 (43), which was shown to equally inhibit PS1 and PS2 (27), assured that the observed peaks were γ-secretase-specific (Fig. S3). Taken together, PS2 produces a lower Aβ38/Aβ37 ratio than PS1. These data confirm and extend recent observations of differential Aβ production by PS1- and PS2-containing γ-secretases (44). They also show that the PS NTF specifies the different phenotypes. Importantly, PS1 TMD3 and to some extent PS1 TMD4 appear to confer a PS1-like phenotype when expressed in the structural context of the PS2 template. The phenotypic differences between PS1 and PS2 appear to be similar in cell-based and cell-free assays, although exopeptidase-like proteolysis leading to Aβ37 and Aβ38 relative to Aβ40 appears to be more efficient in detergent than in the natural membrane environment.

PS domains determining ε-site specificity

Next, we asked whether the different Aβ38/Aβ37 ratios produced by the PS paralogs can be traced back to differential ε-cleavage at the origin of C99 proteolysis. Since Aβ48 and Aβ49 peptides are successively converted to shorter Aβ peptides and difficult to detect (8), we examined the corresponding AICD species ε49 and ε48. AICD produced in the cytoplasm of a cell is rapidly degraded, however, and therefore also difficult to detect (45). Thus, we monitored AICD in cell-free assays, having demonstrated a similar behavior of both PSs in cell-free and cell-based assays. AICD peptides ε48 (51 residues), ε49 (50 residues), and ε51 (48 residues) result from cleavages at the ε48-, ε49-, and ε51-sites, respectively (Fig. 5A). Assessing these AICD peptides in a detergent solubilizate showed that PS2 produced more AICDε51 than PS1, at the expense of AICDε48 (Fig. 5B). This manifests itself in a higher mean ε51/ε48 ratio exhibited by PS2 relative to PS1 (Fig. 5C). At the same time, the (ε48+ε51)/ε49 ratios produced by both PS paralogs are indistinguishable (Fig. 5D). Since cleavages at both the ε48- and the ε51-sites result in the Aβ42 product line (9), both PSs enter the Aβ40 and Aβ42 product lines with similar efficiency, although initial PS2 cleavage is partially shifted from ε48 to ε51. Controls with the γ-secretase inhibitor LY-411575 showed the analyzed peaks to be γ-secretase-specific.

Figure 1. The presenilin NTF determines differential Aβ37 and Aβ38 production by PS1 and PS2. A, schematic representation of the constructs. PS1 and PS2 are depicted in dark blue or light blue, respectively. B, exemplary total Aβ contents of conditioned media as analyzed by MALDI-TOF mass spectrometry after immunoprecipitation with antibody 4G8. The intensities of the highest peaks were set to 100% and the counts per second (cps) are given. Arrows mark differences in Aβ38 peak intensity. C, Aβ38/Aβ37 ratios change upon substitution of the presenilin NTF. Ratios were calculated from the respective peak intensities from IP-MS analyses, as shown in part (B). Data in (C) represent means ± SEM, n = 3 to 4. Individual replicates are derived from conditioned media collected from independent cultures from our pools of stably transfected cells. Asterisks indicate significant differences (one-way ANOVA with Dunnett's post-test) relative to PS2 (***p < 0.001). Aβ, amyloid-β; IP-MS, combined immunoprecipitation and MALDI-TOF mass spectrometry; NTF, N-terminal fragment; PS, presenilin.

Figure 2. B, exemplary total Aβ contents of conditioned media were analyzed by MALDI-TOF mass spectrometry after immunoprecipitation with antibody 4G8. The spectra are categorized into 'PS1 phenotype' and 'PS2 phenotype'. Data for PS1 and PS2 are reproduced from Figure 1. C, Aβ38/Aβ37 ratios were calculated from the respective peak intensities from IP-MS analyses (means ± SEM, n = 3-4). Individual replicates are derived from conditioned media collected from independent cultures from our pools of stably transfected cells. Asterisks indicate significant differences (one-way ANOVA with Dunnett's post-test) relative to PS2 (**p < 0.01, ***p < 0.001). We note that substituting TMD3 has the most salient effect of the single TMD substitutions tested here and that the combination of TMDs 3 and 4 in PS2ρTM3-4 has an even stronger effect than TMD3 alone. D, immunoprecipitation of secreted Aβ followed by separation by Tris-Bicine urea SDS-PAGE and immunoblotting confirmed the impact of TMD3 and TMD4 on Aβ generation. Aβ, amyloid-β; IP-MS, combined immunoprecipitation and MALDI-TOF mass spectrometry; PS, presenilin; TMD, transmembrane domain.

Taken together, the partial shift of initial cleavage from the ε48-site to the ε51-site by PS2 does not affect the efficacy by which the PS paralogs enter both product lines. This higher preference of PS2 for the ε51-site seems at least partially determined by TMD3. In addition, the PS1 TMD3 within the PS2 framework enhances ε48 and ε51 cleavages relative to both wt PSs.

PS domains defining ε-cleavage efficiency

In this set of experiments, we compared the efficiency by which γ-secretases holding PS1 or PS2 perform ε-cleavage and tested various chimeric constructs in order to delineate individual domains accounting for differences between both isoforms (Fig. 6A). To this end, we determined total AICD levels obtained after cleavage in CHAPSO-solubilized membranes by immunoblotting (Fig. 6B) and, following quantitation, expressed AICD production by PS2 and chimeric constructs relative to PS1 (Fig. 6C). Figure 6 reveals that PS2 γ-secretase produces only 23% of the AICD of PS1 γ-secretase, thus confirming previous reports having indicated a higher PS1 activity (22). In assessing the importance of the PS NTF versus its CTF, we employed a gain-of-function approach by using the weaker PS2 as a template onto which PS1 domains were grafted (Fig. 6A). This minimizes potential pleiotropic effects of altered primary structure that are often encountered when studying loss-of-function after deleting domains or fusing them to unrelated proteins. We first compared the impact of transferring all TMDs from PS1 to PS2 while maintaining the solvent-exposed loops. AICD production by the PS2-All-PS1TMDs chimera showed 85% of PS1 activity (Fig. 6C), thus revealing the importance of the TMDs for the efficiency of ε-cleavage.

Figure 3. The membrane-spanning domains of presenilin are responsible for differential Aβ38 production. A, schematic representation of the transfected constructs. B, exemplary total Aβ contents of conditioned media were analyzed by MALDI-TOF mass spectrometry after immunoprecipitation with antibody 4G8. Data for PS1 and PS2 are reproduced from Figure 1. C, Aβ38/Aβ37 ratios were calculated from the respective peak intensities from IP-MS analyses (means ± SEM of n = 3-4). Individual replicates are derived from conditioned media collected from independent cultures from our pools of stably transfected cells. Asterisks indicate significant differences (one-way ANOVA with Dunnett's post-test) relative to PS2 (***p < 0.001). Aβ, amyloid-β; IP-MS, combined immunoprecipitation and MALDI-TOF mass spectrometry; PS, presenilin.
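Expressed as a short sketch, the efficiency readout described above boils down to normalizing each AICD band intensity to the corresponding mature nicastrin (NCT m) signal and scaling to the PS1 value; the intensity numbers below are placeholders for illustration only, and the normalization step follows the procedure stated in the Figure 6 legend and the methods.

def relative_aicd_production(aicd, nct_m, ps1_aicd, ps1_nct_m):
    # AICD production as % of PS1 after normalization to mature nicastrin
    return 100 * (aicd / nct_m) / (ps1_aicd / ps1_nct_m)

# Placeholder band intensities (arbitrary units)
print(relative_aicd_production(aicd=2.3, nct_m=1.0, ps1_aicd=10.0, ps1_nct_m=1.0))   # ~23, a PS2-like value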
For a more detailed functional mapping of the PS NTF found to dominate ε-efficiency, we scrutinized several of the chimeric constructs presented above where subdomains of the PS1 NTF are grafted onto the PS2 template. Thereby, we found PS2ρTM1-4 and PS2ρTM3-6 to exhibit 67% or 81% of PS1 efficiency, respectively. Interestingly, PS2ρTM3 produced only 15% AICD, while PS2ρTM6 yielded an impressive 52% AICD (Fig. 6C). Since PS2ρTM3 appears not to account for the increased activity of PS2ρTM1-4, we assessed a potential influence of PS TMD2 on ε-cleavage efficiency. TMD2 in combination with TMD6 was previously proposed to be involved in substrate entry (32,33,46). To test the importance of this putative entry site for PS ε-cleavage efficiency, we combined PS1 TMD2 and TMD6 in PS2ρTM2/6 (Fig. 6A). Substrates had also been proposed to enter between TMD6 and TMD9 (47,48), prompting us to examine a PS2ρTM6/9 construct. However, neither PS2ρTM2/9 nor PS2ρTM6/9 showed ε-cleavage efficiencies above that of PS2ρTM6 with its single TMD substitution (Fig. 6C). To control for maturation and concentration of the different γ-secretase complexes, we visualized their subunits by immunoblotting: fully glycosylated NCT m , as indicated by its higher molecular mass relative to core-glycosylated NCT, the PS NTFs and CTFs as an indicator of autoproteolytic cleavage, and PEN-2 (Fig. S4). Taken together, our results reveal domains that specify the difference in PS1/PS2 cleavage efficiency. The nonhomologous membrane-embedded residues of the NTF are more relevant than those of the CTF, and TMD6 makes a strong individual contribution.

Figure 4. Influence of presenilin TMD3 on Aβ production in the cell-free assay. A, exemplary MALDI mass spectra of Aβ peptides generated in the cell-free assay in CHAPSO-solubilized membrane fractions from cells expressing the different γ-secretase complexes and recombinant C100-His 6 as a substrate. Subsequent to immunoprecipitation with antibody 4G8 (final concentration 4 μg/ml), total Aβ was analyzed by MALDI-TOF mass spectrometry. The intensities of the highest peaks were set to 100%. B, Aβ38/Aβ37 ratios were calculated from the respective peak intensities from IP-MS analyses (means ± SEM, n = 3-4). Replicates originate from individual CHAPSO-solubilized membrane fractions from independent cultures of our pooled stable transfectants. Asterisks indicate significant differences (one-way ANOVA with Dunnett's post-test) relative to PS2 (***p < 0.001). Aβ, amyloid-β; IP-MS, combined immunoprecipitation and MALDI-TOF mass spectrometry; PS, presenilin; TMD, transmembrane domain.

Figure 5. Influence of presenilin TMD3 on the ε-cleavage specificity of γ-secretase. A, the Aβ and AICD products originating from cleavage at ε48, ε49, and ε51 are displayed schematically. B, exemplary MALDI mass spectra of AICD peptides generated in the cell-free assays. Total AICD from the reaction volumes was analyzed by MALDI-TOF mass spectrometry subsequent to immunoprecipitation with antibody Y188. The spectra were recorded with an instrument different from that used for Figures 1-4, resulting in higher overall cps. C and D, peak heights were quantified, and ratios calculated to investigate preferences for either (C) the initial cleavage site within the Aβ42 product line or (D) the Aβ40 versus Aβ42 product lines. Data in (C) and (D) represent means ± SEM, n = 4. Replicates originate from individual CHAPSO-solubilized membrane fractions from independent cultures of our pooled stable transfectants. Asterisks indicate significant differences (one-way ANOVA with Dunnett's post-test) relative to PS2 (*p < 0.05, **p < 0.01). Aβ, amyloid-β; AICD, APP intracellular domain; PS, presenilin; TMD, transmembrane domain.

Figure 6. Relevance of presenilin domains for the efficiency of ε-cleavage. A, schematic representation of the constructs. B, representative immunoblots used for AICD and NCT m quantification from cell-free assays. The cleavage efficiency of recombinant C100-His 6 substrate was analyzed in CHAPSO-solubilized membrane fractions containing the different γ-secretase complexes. Signal intensities of generated total AICD were measured on immunoblots. C, signal intensities were first normalized to signals of NCT m and are expressed relative to the level seen with PS1. Appropriate maturation of the respective γ-secretase complexes was verified prior to analysis (see Fig. S4). Data in (C) represent means ± SEM, n ≥ 3. Asterisks indicate significant differences (one-way ANOVA with Dunnett's post-test) relative to PS2 (*p < 0.05, **p < 0.01, ***p < 0.001). AICD, APP intracellular domain; PS, presenilin; TMD, transmembrane domain.

Discussion

The experimental approach outlined here exploits functional differences between PS1 and PS2 in order to identify protein domains governing various aspects of substrate processing, like substrate binding, engulfing, and cleavage. We reasoned that investigating chimeras of these highly homologous proteins may cause fewer undesired pleiotropic effects on their structure than truncations, deletions, or fusions with sequences from unrelated proteins. A limitation of our approach is that domains that determine functional properties shared by both paralogs may not be identified. We further acknowledge that the type of expressing cells might affect substrate processing by PS. However, with respect to APP processing, HEK293/sw cells produce Aβ species in ratios resembling those observed in brain-derived cells (e.g., (49)(50)(51)). Two major differences between γ-secretase complexes containing PS1 or PS2 form the basis of our strategy. First, γ-secretase complexes holding PS1 or PS2 exhibit a remarkably different sequence-specificity of C99 processing, that is, a reduced production of Aβ38 by PS2 compared to PS1 being paralleled by an increase in Aβ37. While different Aβ38 levels have recently been reported (28,29), differential Aβ37 production has not been described before. Although low signal intensities of Aβ42 prevented us from calculating reliable Aβ42/Aβ40 ratios from IP-MS measurements, qualitatively, Aβ42 levels relative to Aβ40 appeared to be similar for PS1 and PS2. This is in line with studies reporting a similar Aβ42/Aβ40 ratio for both PS paralogs (22,23) but inconsistent with other studies describing a higher Aβ42/Aβ40 ratio for PS2-containing γ-secretase (52,53). For both paralogs, ε49 appears to be the major cleavage site, followed by ε48, which agrees with previous reports (28, 54, 55). Interestingly, we show that PS2 uses the ε51-site more frequently than PS1 and does so at the expense of ε48. To our knowledge, this difference in ε-cleavage has not been shown before. Rather, the ε-cleavage specificity had been reported to be similar for PS1 and PS2 using the less sensitive IB analysis (29). Moreover, AICDε51 had not been determined in some previous studies (28,29).
Our data are consistent, however, with previously reported AICD production by HeLa cells expressing both PS paralogs. There, the plasma membrane generated almost exclusively AICDε49, while endosomes produced mostly AICDε51 (56). Since PS2-containing γ-secretase mostly resides in late endosomes and lysosomes (22,23), AICDε51 production most likely had originated from PS2, in line with our current results. Since Aβ51 is first processed to Aβ48, it also enters the Aβ42 product line (9,10,57). Altogether, PS1 and PS2 thus initiate the Aβ42 line with a slightly higher efficiency than the Aβ40 line (Fig. 5D). In turn, this suggests that it is the downstream processing of the Aβ peptides that leads to the observed differences in Aβ38/Aβ37 ratio between PS1 and PS2. At which stage then does Aβ trimming differ between both PSs to produce the lower Aβ38/Aβ37 ratio exhibited by PS2? Assuming that Aβ42 derives from Aβ48, one would expect Aβ42 to be the major Aβ species. However, consistent with previous studies (22,25,28) both, PS1-and PS2-containing γsecretases produce Aβ40 as the major species, thus suggesting frequent crossover from the Aβ42 product line to the Aβ40 line. To date, the generation of Aβ43 from its precursor Aβ48 is the sole reported crossover point that could allow for this product line switch (9, 10) (Fig. 7). Indeed, PS2 might use the Aβ48→Aβ43 switch more frequently than PS1, as Aβ43 generates Aβ37 via Aβ40. Alternatively, PS2 might be more efficient in a previously reported rare direct conversion of Aβ42 to Aβ37 (9) or use additional, so far unidentified, crossover points. Second, γ-secretase holding PS2 is much less efficient in AICD production than PS1-containing γ-secretase. This was shown by our experiments on cleavage efficiency with detergent-solubilized membranes as enzyme source where both types of γ-secretase are expected to have equal access to the substrate. This confirms previous studies where AICD production by γ-secretase holding PS2 relative to PS1 was somewhat reduced in cellulo using a luciferase-based reporter assay conducted on BD8 cells (58) or strongly diminished in a cell-free assay (22,29). Analyzing PS1/PS2 chimeric enzymes shed light on the structural domains defining both, the specificity and efficiency of C99 cleavage. Cleavage specificity, as expressed by the Aβ38/Aβ37 ratio, is mainly accounted for by the NTF. Furthermore, we found the membrane-embedded nonconserved residues to be mainly responsible for this paralogspecific phenotype. The same holds true for the efficiency of AICD production. By implication, the specificity of Aβ37 and Aβ38 production as well as the efficiency of ε-cleavage appear both to be mainly mediated by interactions of the substrate with TMDs of the NTF of PS. That the NTF is responsible for paralog-specific ε-cleavage efficiency had previously been observed in a cell-based assay, albeit without statistical significance (58). At a more granular structural level, the single TMD exchange resulting in chimera PS2ρTM3 produced a PS1-like Aβ38/Aβ37 ratio. How does TMD3 affect Aβ production? On the one hand, one straightforward explanation would imply a role of TMD3 in Aβ trimming after ε-cleavage. After all, the specificity of ε-cleavage is not the root cause of Aβ38/ Aβ37 ratios differing between wt PS1 and PS2, as noted above (Fig. 5D). On the other hand, we found that exchanging TMD3 had profound effects on the specificity of ε-cleavage since PS2ρTM3 partially mimics PS1 in its ε51/ε48 ratio (Fig. 
5C), albeit this effect is unlikely to affect the Aβ38/Aβ37 ratio. In addition, PS2ρTM3 produced higher levels of AICDε48 plus AICDε51 relative to AICDε49 than both wt PSs (Fig. 5D). Although this effect may partially explain an elevated Aβ38 level resulting from a favored entry into the Aβ42 product line (Fig. 7) (9), it is not suited to explain the elevated Aβ38/Aβ37 ratio of PS1 compared to PS2.

Figure 7. Product line usage of PS1 and PS2. The efficiencies of ε-cleavage as well as of proteolysis along both product lines and crossover events between them, as indicated by previous data (9, 10) and our current results, can explain the differential Aβ38/Aβ37 ratios of PS1 and PS2. The width of the boxes containing the peptide species represents the relative amount of peptide at the start (dashed lines) or at the end (solid lines) of the respective Aβ product line. Arrow width indicates the presumed efficiency of respective cleavage steps. In particular, the more frequent use of the Aβ48→Aβ43 transition might explain the higher Aβ37 production by PS2. Aβ, amyloid-β; PS, presenilin.
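The product-line scheme summarized in the Figure 7 legend can also be written as a simple successor map. The steps below follow the stepwise trimming and the Aβ48→Aβ43 crossover discussed in the text; this is a schematic illustration of connectivity, not a kinetic model, and spelling out the canonical tripeptide/tetrapeptide sequence of steps is an assumption of this sketch.

# Successive trimming steps (precursor -> possible products)
product_lines = {
    "Abeta49": ["Abeta46"],             # Aβ40 product line
    "Abeta46": ["Abeta43"],
    "Abeta43": ["Abeta40"],
    "Abeta40": ["Abeta37"],
    "Abeta48": ["Abeta45", "Abeta43"],  # Aβ42 product line plus the Aβ48->Aβ43 crossover
    "Abeta45": ["Abeta42"],
    "Abeta42": ["Abeta38", "Abeta37"],  # Aβ42->Abeta37 is the rare direct conversion (9)
    "Abeta38": [],
    "Abeta37": [],
}

def reachable_products(start, lines=product_lines):
    # Collect every peptide reachable from a given initial ε-cleavage product.
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in lines.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return sorted(seen)

print(reachable_products("Abeta48"))   # the crossover makes Aβ40 and Aβ37 reachable from the Aβ42 line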
Furthermore, the importance of TMD3 for Aβ production is underlined by the fact that 13% of all familial AD mutations with assumed and confirmed pathogenicity are located in TMD3 (https://www.alzforum.org/mutations) albeit this TMD comprises less than 7% of all PS1 amino acids. Among these is the very aggressive L166P mutation, which is associated with an early onset of AD (66). Photoaffinity mapping showed L166P to site specifically alter the efficiency of crosslinking between the PS1 NTF and the ε-sites of C99 (67,68), indicating that TMD3 functionally interacts with the initial cleavage region. Further, TMD3 contacts TMD4 which itself does not contact the substrate in the cryo-EM structure (30,63). Notably however, a residue near the C-terminus of TMD4 can be crosslinked to residues 383 or 387 bordering the catalytic D385 of TMD7 (69). Thus, TMD4 appears to visit the catalytic site at least transiently, which may explain its apparent cooperation with TMD3 in our current study. Surprisingly, TMD3 appears to have no discernible impact on ε-cleavage efficiency as indicated by the PS2-like AICD production exhibited by PS2ρTM3 in cell-based assays. It follows that the final substrate positioning that is thought to govern ε-site specificity is uncoupled from the efficiency of ε-cleavage. Rather, it is likely that an earlier step, such as the ease of substrate translocation toward the catalytic site is crucial for ε-efficiency. Various lines of evidence had previously suggested a role of PS TMDs 2, 6, and 9 in substrate recognition (reviewed in: (33,70)). More specifically, entry gates between TMDs 2 and 3 (71), TMDs 2 and 6 (32,46), and TMDs 6 and 9 (47,48) had been proposed. TMD6 might be a particularly crucial site of initial substrate binding. Exchanging either part of TMD6 or TMD2 of PS for nonrelated TMDs had abolished labeling by a photoprobe based on a peptidic substrate mimic (59). In a comparison of different cryo-EM structures, TMD6 showed the highest tilt angle variations of all TMDs that correlated to the distance between both catalytic aspartates (72). As a small distance between these aspartates is required for cleavage, TMD6 dynamics might explain the ε-cleavage efficiency of PS2ρTM6 which approached that of PS1. Unexpectedly, combining PS1 TMD6 with TMD2 or TMD9 did not further increase ε-efficiency. This result does not challenge a role of TMDs 2, 6, and 9 in substrate recognition. Rather, it is consistent with the idea that TMD2 and TMD9 do not define the differences in PS1 and PS2 activity. It is currently not clear how the PS1 TMDs within PS2ρTM1-4 account for higher-than-PS2 activity of this chimera. A previously reported TMD2/TMD3 entry site (71), a concerted movement of TMD3 and TMD1 (65), or the stabilization of TMD3 by TMD4 during cleavage (62) may help to explain the enhanced efficiency of PS2ρTM1-4. However, the exact mechanism remains elusive and needs further investigation. We conclude that TMD3 is a strong determinant of the choice of the ε-site but not of ε-cleavage efficiency. By contrast, other TMDs, including TMD6, contribute to ε-cleavage efficiency but do not determine the Aβ38/Aβ37 ratio. Thus, TMD6 may have a major impact on substrate acquisition and/or translocation to the active site. It may be noteworthy that previous Michaelis-Menten analyses with detergent-solubilized enzyme uncovered lower K m and V max values of PS2 relative to PS1 (27). 
One aspect by which PS2 differs from PS1 may thus correspond to a higher substrate affinity, which is approximated by K m . Naturally, a higher affinity does not explain the lower PS2 efficiency found here and elsewhere (22,29,58). Another aspect distinguishing both paralogs appears to be a differential substrate turnover, as indicated by the lower V max of PS2. The V max value is influenced by any step downstream of initial substrate binding (73), including the efficiency by which an initially bound substrate TMD is translocated toward the active site. As the nonconserved TMD6 residues are located at the interface between the TMD6 and the TMD2 helices (63), ε-cleavage efficiency may depend on the ease by which this interface transiently breaks to allow substrate translocation between TMDs 2 and 6 (32,46). Alternatively, ε-cleavage efficiency may be governed by the contribution of TMD6 to the active site, where conformational changes of TMD6 were observed after substrate binding (63,74). Future studies on the substrate specificity of PS1 versus PS2 may benefit from the recently solved cryo-EM structure of PS2 (75) which, although nearly identical to that of PS1, may show differences in yet to be determined substrate-bound forms. In any case, by comparing differential C99 processing by both PSs and chimeric variants thereof, our current study helps to identify important determinants of the specificity and efficiency of cleavage. Finally, we note that the physiological roles of PS1 and PS2 involve substrates other than C99. As with C99, differential cleavage of such other substrates by both PS paralogs has been reported and is likely to depend on their differential subcellular localizations (22,23). For example, the late endosome- and lysosome-localized premelanosome protein and tyrosine-related protein are mostly cleaved by PS2, but barely by PS1. By contrast, N-cadherin found at the cell surface is predominantly cleaved by PS1 (23,53).

cDNA constructs and transfection of mammalian cells

cDNA constructs encoding the individual human PS variants, N-terminally tagged with hexahistidine sequences, were generated by Gibson Assembly (80) (New England Biolabs) and cloned into the mammalian expression vector pcDNA4/HisC (Invitrogen). The required cDNA sequences were amplified via standard PCR. Gibson Assembly was performed according to the manufacturer's protocol. Table S1 lists the identities of the fused fragments. The identity of the TMDs corresponds to the respective annotations in the cryo-EM structure (pdb: 5fn3 (30)). The integrity of all cDNA constructs was verified by DNA sequencing. HEK293/sw and HEK293/sw PS1/2 −/− dKO cells (37) were stably transfected and cultured in the presence of the selection antibiotic Zeocin (InvivoGen) to ensure genomic integration as previously described (81). To average out the effects of variable expression of the PS variants on substrate processing, all clones obtained for a given variant were pooled after antibiotic selection and co-cultivated.

Protein analysis and cleavage assays

PS1, PS2, NCT, and PEN-2 were detected in cell lysates by direct immunoblotting as described (82). IP-MS analysis of Aβ peptides generated in cell-based or cell-free assays was done as described previously (76) using antibody 4G8 and the 4800 MALDI TOF/TOF Analyzer (Applied Biosystems). AICDs generated in cell-free assays were analyzed by IP-MS using antibody Y188 and MALDI-TOF mass spectrometry (rapifleX Tissuetyper, Bruker).
MALDI spectra were quantitatively evaluated by determining peak heights and averages were calculated. Individual Aβ species were also analyzed by immunoblotting, using Tris-Bicine-Urea SDS-PAGE (41) after IP with antibody 3552. For separation of Aβ species, we used a 12% stacking gel without urea and an 8% separation gel containing 8 M urea. For cell-free γ-secretase assays, membrane fractions from HEK293/sw PS1/2 −/− cells co-expressing wt PS1, PS2, or chimeric PS variants were prepared as described (3) and subsequently solubilized with 1% CHAPSO [1% CHAPSO, 150 mM sodium citrate pH 6.4, 1x cOmplete protease inhibitor (Roche)]. γ-Secretase activity was assessed as described before (76) except that 1.2 μM LY-411575 (43) (Merck) was used for γ-secretase inhibition. To determine the ε-cleavage efficiency of a given γ-secretase complex, the mixture of AICDs generated in cellfree assays was separated from other proteins on Tris-Tricine gels (83) with subsequent immunoblotting using the Penta-His antibody. Quantification of signal intensities from IBs was performed using a Western Blot imager (Fusion FX, Vilber Lourmat) and the Image Studio Lite Ver. 5.2 software (LI-COR). When quantifying band intensities, we sought to minimize potential variations of AICD production that may result from different levels of functional γ-secretase. Thus, we normalized AICD staining intensity to that of the mature, fully glycosylated NCT m determined in parallel. To determine ε-cleavage specificity, the quantity of individual AICD species was investigated as described above. Likewise, Aβ peptides generated in cell-free assays were investigated as described above. Signal averages were calculated from data obtained from multiple membrane preparations. Statistical analysis Data are presented as the mean value ± SEM and were analyzed using one-way analysis of variance (ANOVA) and post hoc Dunnett's test. Statistical significance was defined as p < 0.05. The data were analyzed using GraphPad Prism 9 (GraphPad Software). Data availability All source data required for determination of mean Aβ and AICD ratios or the cleavage efficiency will be made available upon request. Conflict of interest-The authors declare they have no conflicts of interest with the contents of this article.
Atypical Creutzfeldt-Jakob disease with PrP-amyloid plaques in white matter: molecular characterization and transmission to bank voles show the M1 strain signature Amyloid plaques formed by abnormal prion protein (PrPSc) aggregates occur with low frequency in Creutzfeldt-Jakob disease, but represent a pathological hallmark of three relatively rare disease histotypes, namely variant CJD, sporadic CJDMV2K (methionine/valine at PRNP codon 129, PrPSc type 2 and kuru-type amyloid plaques) and iatrogenic CJDMMiK (MM at codon 129, PrPSc of intermediate type and kuru plaques). According to recent studies, however, PrP-amyloid plaques involving the subcortical and deep nuclei white matter may also rarely occur in CJDMM1 (MM at codon 129 and PrPSc type 1), the most common CJD histotype. To further characterize the phenotype of atypical CJDMM1 with white matter plaques (p-CJDMM1) and unravel the basis of amyloid plaque formation in such cases, we compared clinical and histopathological features and PrPSc physico-chemical properties between 5 p-CJDMM1 and 8 typical CJDMM1 brains lacking plaques. Furthermore, transmission properties after bioassay in two genetic lines of bank voles were also explored in the two groups. All 5 p-CJDMM1 cases had a disease duration longer than one year. Three cases were classified as sporadic CJDMM1, one as sporadic CJDMM1 + 2C and one as genetic CJDE200K-MM1. Molecular mass, protease sensitivity and thermo-solubilization of PrPSc aggregates did not differ between p-CJDMM1 and classical CJDMM1 cases. Likewise, transmission properties such as incubation time, lesion profile and PrPSc properties in bank voles also matched in the two groups. The present data further define the clinical-pathologic phenotype of p-CJDMM1, definitely establish it as a distinctive CJD histotype and demonstrate that PrP-plaque formation in this histotype is not a strain-specific feature. Since cases lacking amyloid plaques may also manifest a prolonged (i.e. > than one year) disease course, unidentified, host-specific factors likely play a significant role, in addition to disease duration, in generating white matter PrP-amyloid plaques in p-CJDMM1. Electronic supplementary material The online version of this article (10.1186/s40478-017-0496-7) contains supplementary material, which is available to authorized users. Introduction Prion diseases are a group of neurodegenerative disorders of humans and other mammals characterized by misfolding of the cellular prion protein (PrP C ). In the disease, PrP C is structurally converted into a pathogenic isoform, called scrapie prion protein (PrP Sc ), showing an increase in β-sheet content and a partial resistance to proteases in its C-terminal region [27]. As a consequence of PrP C conversion, oligomers and amyloid fibrils of aggregated PrP Sc accumulate in the CNS, leading to neurodegeneration. Sporadic Creutzfeldt-Jakob disease (sCJD), the most common prion disease in humans, can be classified into 6 major phenotypic variants, according to molecular, histopathological, and clinical features [21,24,25]. These variants or histotypes largely correlate at molecular level with the genotype at the polymorphic PRNP codon 129, encoding for methionine (M) or valine (V), and the relative molecular mass of PrP Sc core fragment generated after proteolytic digestion, which can be 21 (type 1) or 19 kDa (type 2) [22]. These are C-terminal fragments that differ from each other for an epitope spanning residues 82-96, which is present in type 1 and removed in type 2. 
Other physico-chemical properties distinguishing PrP Sc aggregates among sCJD variants, associated with either type 1 or type 2, include the relative amount of the truncated C-terminal fragments, named CTF12-13 based on their molecular mass, and the socalled glycoform ratio, that is the ratio among the three differently glycosylated (e.g. di-, mono-, and unglycosylated) PrP Sc forms [20,22,32]. Five out of six of these major sCJD variants were shown to propagate in syngeneic hosts as distinct prion strains [2,17,23]. These are defined as natural isolates of infectious prions characterized by distinctive clinical and neuropathological features, which are faithfully recapitulated upon serial passage within the same host genotype [3,4]. As the only exception, sCJDVV2 and MV2K converged to a single phenotype/strain after experimental transmission [15,23], suggesting a host-genotypic effect determined by codon 129. Interestingly, the strain isolated from sCJDMV2K and VV2, currently designated as V2, has also been associated with kuru as well many iatrogenic cases of CJD secondary to contaminated growth hormone or dura mater grafts (d-CJD) [13,23,28]. Moreover, at variance with sCJD, iatrogenic CJD patients linked to the V2 strain include subjects carrying MM at codon 129 in addition to those carrying VV or MV [13,14,28]. PrP-amyloid plaques represent a distinctive histopathological feature in CJD since they show a strong correlation with both prion strain and PRNP genotype. The presence of florid plaques is a well-documented signature of vCJD (BSE strain) [31], while kuru-type plaques are the hallmark of the CJD V2 strain, although only in subjects carrying MV or MM at PRNP codon 129, since they are virtually lacking in those carrying VV despite the widespread focal PrP plaque-like deposits [24,28]. Experimental transmissions have linked sCJDMM1 to a distinctive prion strain, named M1 [2,23], which is typically associated with a diffuse, synaptic type of PrP deposition rather than with focal plaque-like protein aggregates. As a significant exception, however, Kobayashi et al. [12] described 3 sCJD cases, all with a relatively long disease duration and quite severe pathology, resembling the MM1 subtype in most features but the presence of PrP-amyloid plaques in both subcortical and deep nuclei white matter. This observation raises questions about the origin of this phenotype, namely the role of disease duration, prion strain and host genetic background in the formation of white matter PrP plaques. To contribute to answering these questions, in this study we report the clinical, histopathological and PrP Sc biochemical characterization of five European MM1 cases with white matter plaques and the results of the experimental transmission to bank voles of one of these cases. Results are compared to those obtained in the typical MM1 subtype. Patients and tissues We studied 5 subjects affected by CJDMM1 associated with PrP Sc plaque-type deposits in white matter (hereafter indicated as p-CJDMM1) and 8 cases affected by typical CJDMM1 (hereafter indicated as np-CJDMM1). All cases were referred for diagnosis to the Laboratory of Neuropathology, University of Bologna, Italy between 2005 and 2016 as part of the National Surveillance program on CJD and related disorders or (one p-CJDMM1) in the context of a collaborative effort with the Dutch Surveillance Centre for Prion Diseases on the molecular characterization of autopsy confirmed prion cases [10]. 
The 8 selected np-CJDMM1 control cases were representative of the spectrum of clinical and histopathologic features of the sCJDMM1 subtype [21,25] including disease duration (range 1-14 months). Brains were obtained at autopsy, one half, or tissue blocks from representative areas, were immediately frozen at −80°C, whereas the rest was fixed in formalin. Clinical and diagnostic evaluation We collected and reviewed all available medical information from hospital reports, including results of neurologic examination(s), cerebral magnetic resonance imaging (MRI) studies and electroencephalographic (EEG) recordings. We defined the date of disease onset as the time when unexplained progressive neurological or psychiatric symptoms first occurred, and as 'onset symptom(s)' the first neurological disturbance(s) complained by the patient. We measured total tau (t-tau) protein levels in the cerebrospinal fluid (CSF) by quantitative ELISA (INNOT-EST hTAU Ag, Innogenetics) according to the manufacturer's instructions, considering as an optimal cut-off value 1250 pg/mL on the basis of receiver operating characteristic curve analysis, as previously described [16]. Semi-quantitative detection of CSF 14-3-3 protein was performed by western blotting, as previously described [16]. Genetic analysis Genomic DNA was extracted from blood or frozen brain tissue. Genotyping of the PRNP coding region was performed as described [10]. Neuropathology We semi-quantitatively evaluated gray matter spongiform change and astrogliosis in 10 brain regions on hematoxylin and eosin stained sections, as reported [21]. For PrP immunohistochemistry, paraffin sections from formalin-fixed and formic acid treated blocks were processed using the monoclonal antibody (mAb) 3F4 (1:400, Signet Labs), according to published protocols [11,22], with some modifications. Briefly, after de-waxing and rehydration, sections were incubated for 15 min in 8% hydrogen peroxide solution in methanol to block endogenous peroxidase. Sections were then washed, immersed in 98% formic acid for 1 h, rewashed and microwaved in 1.5 mM HCl for 25 min, incubated with reagent A of Histostain-Plus IHC Kit (Thermo-Fisher Scientific) for 10 min and then probed overnight with mAb 3F4. After two sequential incubations with reagent B and C of Histostain-Plus IHC Kit interspersed with washing steps in TBS 1X, sections were treated with Romulin AEC Chromogen (Biocare Medical) for 5 min and Mayer's hematoxylin for 15 s before being dehydrated, cleared and coverslipped. For LFB staining, slides were immersed overnight in LFB solution (final concentration, 0.1% solvent blue and 0.5% acetic acid in 95% alcohol) at 60°C. After immersion in 95% alcohol and washing, sections were immersed 5 s in 0.05% lithium carbonate and rewashed. The latter steps were repeated until suitable gray matter discoloration. The obtained sections were then processed for PAS staining through immersion in periodic acid for 10 min and, after a washing step in deionized water, incubation in dark condition with Schiff 's reagent for 15 min. Subsequently, slides were washed, incubated with Mayer's hematoxylin for 1 min, immersed in warm water and rewashed. Transmission to bank voles Brain tissue from the p-CJDMM1 index case (case #1 described below) and from 4 control cases without plaques (three sCJDMM1 and one sCJDMV1) were homogenized at 10% (w/v) concentration in phosphate buffered saline (PBS) and stored at −80°C. 
Two genetic lines of bank voles, Bv109M and Bv109I carrying methionine or isoleucine homozygosity at PRNP codon 109, were injected by the intracerebral route (20 μl) into the left cerebral hemisphere under ketamine anesthesia. Beginning one month after inoculation, voles were examined twice per week until the appearance of neurological signs, and evaluated daily thereafter. The animals were sacrificed with carbon dioxide when they reached the terminal stage of the disease. Survival time was calculated as the interval between inoculation and sacrifice, attack rate as the number of animals developing disease with respect to the total number of inoculated animals [18]. The lesion profile was based on the severity of vacuolation, with a score from 0 to 5, in nine grey-matter brain areas on hematoxylin and eosin-stained sections, as previously described [7]. Vacuolation scores derived from at least 6 individual voles per group and were reported as mean ± standard error (SEM). Preparation of white matter total homogenates and PK digestion Frontal and parietal cortical white matter was obtained from case #1 (p-CJDMM1), and one np-CJDMM1. PrP Sc was purified from 350 mg of white matter, following a previously published protocol [29] and re-suspended in 200 μl of lysis buffer at pH 6.9. PK digestion was carried out at a final concentration of 4 U/ml for 1 h at 37°C. PrP deglycosylation N-Linked glycans were removed by using a peptide-Nglycosidase F kit (New England Biolabs) according to the manufacturer's instructions. PK titration curves Grey matter tissues were homogenized (10% w/v) in lysis buffer at pH 8. Total protein concentration was measured by means of a standard colorimetric method based on bicinchoninic acid (Pierce) and then adjusted to a final value of 4200 μg/ml. Samples were digested using serial dilutions of PK activity ranging from 2 to 256 U/ ml, for 1 h at 37°C. Digested samples were treated as previously described. Western blot Samples were run in a 7 or 15 cm long separating gel and transferred to Immobilon-P membranes (Millipore). After blocking in 10% non-fat milk in Tween-Tris-buffered saline, membranes were probed overnight with the monoclonal antibody 3F4 with epitope at PrP residues 108-111 at 1:30000 working dilution (human samples). Immunoblots from bank voles samples were incubated overnight at 4°C with the monoclonal antibody 9A2 (1:8000, PrP residues 99-101) [26] instead of 3F4. In addition, all immunoblots were probed with the C-terminal antibody SAF60 (1:2000, PrP residues 157-161) [20] in order to detect the CTF13. After four washings in Tween-Tris-buffered saline, membranes were incubated for 1 h at room temperature with an anti-mouse secondary antibody conjugated to horseradish peroxidase (GE Healthcare; working dilution, 1:4000) and washed again four times in Tween-Tris-buffered saline. The immunoreactive signal was visualized by enhanced chemiluminescence (Immobilon Western, Millipore) on an LAS 3000 camera (Fujifilm). Quantitative analysis of protein signal Densitometric analysis was performed using the software AIDA (Image Data Analyzer v.4.15, Raytest GmbH). For PK titration, a semi-logarithmic curve was obtained by plotting the percentage of protein remaining after digestion (with respect to the sample digested with 2 U/ml) against the corresponding PK concentration. The ED 50 (i.e. 
the PK concentration needed to digest 50% of PrP Sc ) for each sample was calculated by means of the equation of the straight line that best fitted the linear portion of the curve (r 2 ≥ 0.95). For TSA, the percentage of protein solubilized after heating treatment (with respect to the sample treated at 95°C) was plotted against the corresponding heating temperature. The T 50 (i.e. the temperature needed to solubilize 50% of PrP Sc ) for each sample was calculated from the equation describing the sigmoidal curve that best fitted the data (r 2 ≥ 0.95). Statistical analyses All statistical analyses were performed with SigmaPlot 12.5 (Systat Software Inc.). Depending on the data distribution, Student's t test or Mann-Whitney test were used to detect differences between two groups, while one-way analysis of variance (ANOVA), followed by Dunn's or Holm-Sidak post hoc tests, was applied for three or more groups comparisons. P value <0.05 was considered statistically significant. Clinical findings and diagnostic investigations Results are summarized in Table 1. The mean age at disease onset in p-CJDMM1 was 58.2 ± 9.9 years (range 48-70) and the mean disease duration 22.0 ± 7.8 months (range 13-34). Clinical features were quite heterogeneous, reflecting the multifocal cerebral involvement, and included severe cognitive impairment, myoclonus along with cerebellar, pyramidal and visual symptoms and signs. Akinetic mutism appeared on average after 8.6 ± 3.8 months from onset. EEG examination revealed periodic sharp-wave complexes in 4 out of 5 subjects (80%), while brain MRI showed typical hyperintense cortical and/or basal ganglia abnormalities on fluid attenuated inversion recovery (FLAIR) or diffusion weighted (DW) sequences in 3 of the 5 cases. Western blot assay for 14-3-3 was positive in 4 out of 5 (80%) cases, while t-tau CSF levels were above the 1250 pg/ml threshold in all the 4 subjects that were tested. Genetic analysis and PrP Sc typing All 5 p-CJDMM1 cases carried MM at PRNP codon 129; in addition case #4 carried the E200K mutation while sequencing of the PRNP coding region excluded pathogenic mutations in the other four. PrP Sc typing showed type 1 in all cases with the co-occurrence of type 2 in case #5 (Fig. 1). Notably, there were no significant differences in PrP Sc electrophoretic mobility between p-CJDMM1 and np-CJDMM1 cases in both gray ( Fig. 1) and white matter (Additional file 1: Figure S1A). Neuropathology The histopathological features largely overlapped with those previously reported for the np-CJDMM1 histotype. Specifically, the pathological changes mainly involved the cerebral cortex, striatum, thalamus and cerebellum, whereas the hippocampus and brainstem were relatively spared (Additional file 2: Table S1). Consistently with the relatively long disease duration, the histopathological changes were, in most cases, rather severe. Indeed, marked atrophy and status spongiosus were the main findings in the most affected areas, whereas typical spongiform change with microvacuolation was best seen in less affected structures (i.e. the hippocampus) (Fig. 2 a,b). As the only exception, the case with the shortest duration (case #5) showed typical spongiform change and, overall, moderate rather than severe histopathological lesions in most affected gray matter areas (Additional file 2: Table S1). Immunohistochemical analysis of PrP Sc showed the typical synaptic deposition pattern in the molecular layer of cerebellum and in the cerebral cortex in all cases (Fig. 2c). 
At variance with typical CJDMM1, however, all five cases also displayed a variable number of PrP Sc plaquelike deposits in the white matter, not correlating, overall, with the degree of white matter damage (Tables 2 and 3), in virtually all brain regions analyzed (Fig. 2 c,d and Table 2). Furthermore, with the only exception of the case carrying the E200K PRNP mutation (case #4), PAS staining revealed the presence of PrP-amyloid plaques (Fig. 2 e,f ), quantitatively reflecting the number of PrP Sc plaque-like deposits. While in three cases the plaques were immediately noticed given their number and the typical kuru-type morphology, they were much rarer and limited to a small core in the case with the longest duration and the most severe pathology (case #3). Notably, the three cases with the highest number of PrP plaques also showed numerous coarse focal PrP deposits and tract-like PrP deposits in gray matter areas delimiting the white matter (data not shown). These types of deposits were seen in the striatum in proximity of the internal capsule and in the lateral thalamus. Besides plaques, no other PrP deposits were seen in the white matter either intra-or extracellularly. Finally, the sections stained with the anti-APP and anti-synaptophysin antibodies revealed that the deposits of these proteins in the damaged white matter of p-CJDMM1 were, occasionally, co-localizing with PrPamyloid plaques (Fig. 2 e,f ). Biochemical comparison between p-CJDMM1 and np-CJDMM1 Besides the presence of a typical type 1 fragment, the overlap of biochemical PrP Sc properties between p-CJDMM1 and np-CJDMM1 extended to the CTF13, as revealed by antibody SAF60. As the only exception, in case #5, as expected given the mixed types 1 + 2 molecular phenotype [20], the ratio between PrP27-30 and CTF13 differed from that of the other cases (Additional file 3: Figure S2; Additional file 4: Table S3). Analysis of PrP Sc glycoform ratio demonstrated a slightly higher percentage of diglycosylated PrP Sc in p-CJDMM1 than in np-CJDMM1, whilst confirming a similar glycopattern (Additional file 5: Table S3). As in grey matter, the analysis of CTF13 in white matter did not show any significant difference between p-CJDMM1 and np-CJDMM1 cases (Additional file 1: Figure S1B). PK titration curves, performed on gray matter homogenates at pH 8 in both sporadic and genetic (E200K) cases, did not exhibit consistent differences between the two groups. P-CJDMM1 and np-CJDMM1 groups showed an ED 50 (expressed as mean ± standard deviation) of 7.87 ± 1.09 and 9.14 ± 2.92, respectively. No statistically significant differences were observed (Fig. 3 a,c). Case #5 was excluded from this analysis, since MM2-PrP Sc is associated with a higher PK-resistance than MM1-PrP Sc [29]. TSA, performed on THs at pH 6.9, also revealed comparable values among the analyzed samples. The calculated T 50 (expressed as mean ± standard deviation) was 79.68 ± 0.72 in p-CJDMM1 and 80.38 ± 2.90 in np-CJDMM1 (Fig. 3 b,d). Transmission to bank voles We have previously shown that sCJDMM(V)1 and gCJD E200K are transmissible to Bv109M with short incubation time and low or absent transmission barrier [18]. Case #1 (p-CJDMM1) and control case c (np-sCJDMM1) were thus inoculated in Bv109M and their results compared with those previously obtained from np-sCJDMM1 (case a), np-gCJD E200K-MM1 (case b) and np-sCJDMV1 (case d) transmissions. As the PRNP polymorphism at codon 109, coding for methionine or Table 4. 
Overall, these experiments confirm and extend previous evidence of a very low or absent transmission barrier for CJDMM(V)1 in bank voles, which also applies to case #1 (p-CJDMM1). Interestingly, in both p-CJDMM1 and np-CJDMM(V)1 the survival time was shorter in Bv109M than in Bv109I. Furthermore, in both lines of bank voles the survival time was generally similar for p-CJDMM1 and np-CJDMM(V)1, although with some variations. Case #1 showed the shortest survival time in the 1st passage, and the longest one in the 2nd passage, in both Bv109M and Bv109I. Statistically significant differences were sometimes observed in survival times between case #1 and various np-CJD cases (Table 4). However, given that the differences were generally not conserved between the 1st and 2nd passage and, above all, that even the comparisons between np-CJDMM(V)1 cases were sometimes statistically significant, the reported differences more likely reflect the PrP Sc amount in the inoculum or other factors rather than a strain-specific feature. PrP Sc extracted from the brains of infected bank voles was subjected to Western blot analysis to detect possible differences induced by the two types of inocula. As for PrP Sc in the CJD brains, PrP Sc fragments were indistinguishable between bank voles inoculated with p-CJDMM1 or np-CJDMM(V)1 (Additional file 6: Figure S3A). At variance with the human brain, vole PrP Sc was characterized by a predominance of the diglycosylated form, as previously reported [18]; however, likewise in CJD-inoculated samples, the PrP Sc glycoform ratio in voles did not show any statistically significant difference related to the inoculum (p-CJDMM1 or np-CJDMM(V)1) (Additional file 6: Figure S3B). Similarly, a comparable amount of CTF13 [26] was detectable, after sample deglycosylation, in voles infected with the two inocula (Additional file 7: Figure S4).
Fig. 3 Analysis of PrP Sc PK-resistance (a, c) and thermo-solubility (b, d). Representative WBs of case #3 (upper) and a np-CJDMM1 case (lower) are shown in a and b. Membranes were incubated with the primary antibody 3F4. Relative molecular masses are expressed in kDa. c PK digestion profiles. ED 50 represents the PK concentration needed to digest 50% of PrP Sc (expressed as mean ± standard deviation). d Thermo-solubilization profiles. T 50 represents the temperature needed to solubilize 50% of PrP Sc (expressed as mean ± standard deviation). No statistically significant differences were observed in either ED 50 or T 50 between p- and np-CJDMM1 (ED 50 , Mann-Whitney Rank Sum Test, P = 0.570; T 50 , t-test, P = 0.306).
Neuropathology in bank voles
The lesion profiles of case #1 and inoculated controls showed an identical distribution of spongiform change (Fig. 4). In Bv109M, in both first and second passage, spongiform change was more prominent in the superior colliculi, thalamus, hippocampus and retrosplenial and cingulate cortices. The other areas presented no or only a few scattered vacuoles. The Bv109I line displayed a slightly more pronounced spongiform change in the medulla, hypothalamus and septum, which was also maintained over the two passages. Immunohistochemistry revealed a synaptic pattern of PrP deposition, while no plaque-like deposits were seen in the affected animals of either line.
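The densitometric read-outs behind Fig. 3 (ED 50 from the linear portion of a semi-logarithmic PK-digestion curve, T 50 from a sigmoidal thermo-solubilization curve) can be reproduced with a short fitting script. The sketch below is illustrative only: the array values, the choice of a simple three-parameter logistic for the TSA curve, and the index range taken as the "linear portion" are assumptions, not part of the original analysis (which used SigmaPlot).

```python
import numpy as np
from scipy.optimize import curve_fit

# --- ED50 from a PK titration curve (semi-log plot) ---
# Percentage of PrP remaining (relative to the 2 U/ml digest) vs PK activity (U/ml).
# Values are made-up placeholders for illustration.
pk_units = np.array([2, 4, 8, 16, 32, 64, 128, 256], dtype=float)
pct_left = np.array([100, 90, 55, 30, 18, 10, 6, 4], dtype=float)

# Fit a straight line to the roughly linear portion of pct_left vs log10(PK),
# mirroring the paper's requirement of r^2 >= 0.95 on that portion.
linear = slice(1, 5)                       # indices judged to be "linear" (assumption)
slope, intercept = np.polyfit(np.log10(pk_units[linear]), pct_left[linear], 1)
ed50 = 10 ** ((50.0 - intercept) / slope)  # PK activity digesting 50% of PrPSc
print(f"ED50 ~ {ed50:.1f} U/ml")

# --- T50 from a thermo-solubilization (TSA) curve ---
# Percentage solubilized (relative to the 95 degC sample) vs temperature.
temp_c = np.array([25, 45, 55, 65, 75, 85, 95], dtype=float)
pct_sol = np.array([2, 5, 10, 25, 60, 90, 100], dtype=float)

def logistic(t, top, t50, width):
    """Three-parameter sigmoid rising from ~0 to 'top'; t50 is the midpoint."""
    return top / (1.0 + np.exp(-(t - t50) / width))

params, _ = curve_fit(logistic, temp_c, pct_sol, p0=[100.0, 70.0, 5.0])
print(f"T50 ~ {params[1]:.1f} degC")
```

Group-level ED 50 and T 50 values obtained this way would then be compared between p-CJDMM1 and np-CJDMM1 with the parametric or non-parametric tests named in the Methods.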
Discussion The present data (i) add to previous studies reporting the rare occurrence of white matter PrP amyloid plaques in patients with an otherwise classic sCJDMM1 phenotype; (ii) originally report the occurrence of white matter PrP plaque-like deposits in genetic CJDMM1 and the results of the experimental transmission of p-sCJDMM1 to bank voles, and (iii) further address the issue of the molecular basis of amyloid plaque formation in CJD by providing an extensive characterization of the physicochemical properties of PrP Sc aggregates in p-CJDMM1. No data are available on the relative frequency of this peculiar phenotype in the CJD population. Since the 5 cases described here were observed over a 15-20 yearperiod of diagnostic activity involving approximately 1000 CJD-affected brains from Italy and the Netherlands, we can estimate an incidence of p-CJDMM1 in western Europe around 0.5%. A critical unsolved issue concerning the occurrence of white matter kuru-type amyloid plaques in sCJD carrying MM at codon 129 is whether or not this peculiar phenotype is linked to a specific prion strain. Our systematic analyses of PrP Sc properties combined with the results of the experimental transmissions strongly argue for both the classic np-CJDMM1 and the atypical p-CJDMM1 phenotypes being linked to the same (M1) prion strain. Accordingly, amyloid plaque formation in such cases represents a host-derived, likely genetic, effect. To consider an alternative possibility, one would postulate the unlikely scenario of the co-occurrence in p-CJDMM1 of a distinct prion strain besides M1, not inducing a distinctive cerebral grey matter pathology, not affecting PrP Sc properties, and not transmissible to bank voles. Besides the presence of amyloid plaques, another interesting feature, distinguishing the p-CJDMM1 reported by us and Kobayashi et al. [12] from the np-CJDMM1 cases, is their significantly longer mean disease duration (22 months) in comparison to typical np-CJDMM1 cases (4 months). However, disease duration and the associated advanced pathology, although notoriously favoring the extent of plaque formation, cannot be the only causal factors since it is well established that most CJDMM1 patients with prolonged disease duration do not develop plaque-type depositions in the white matter. Moreover, the observations by Gelpi et al. and Berghoff et al. [1,9] in cases characterized by a short disease course, combined with our findings in case #5 and in a similar p-CJDMM1 case we recently obtained, also characterized by mild white matter changes (P. Parchi personal communication), clearly indicate that white matter amyloid plaques may develop early in the disease course and independently from a severe white matter damage. Interestingly, in our p-CJDMM1 cases, the onset and progression of clinical symptoms, including akinetic mutism, seem to be significantly delayed compared to np-CJDMM1 patients with similar disease duration. Taken together, these data support the hypothesis of a protective role of PrP amyloid, possibly by sequestering PrP Sc into large fibrils and partially preventing the molecular interaction between monomeric PrP C and PrP Sc , that is essential for conversion and prion propagation. Since the mechanism of amyloid deposition seems to include the incorporation of lipid molecules into the aggregates [30], white matter appears even more suitable for PrP amyloid plaque formation than the grey matter. 
In this regard, it is noteworthy that plaque-like PrP deposition in sCJDVV2 and MV2K is often best observed at the boundaries between gray and white matter. Despite an intensive search, we failed to demonstrate a difference in the physico-chemical PrP Sc properties between p-CJDMM1 and np-CJDMM1 that would correlate with plaque formation. Similarly, PrP Sc properties did not differ between bank voles injected with the two CJD inocula. These data, combined with the lack of PrP amyloid plaques or plaque-like deposits in the bank voles inoculated with p-CJDMM1, further point to a non-PrP factor of the host affecting PrP aggregation and fibrillation. It is well established that PrP Sc spreads within the peripheral and central nervous systems by axonal transport, although the cellular mechanism of prion transport in axons and into peripheral tissue is largely unresolved. Thus, one possibility would be a modified molecular interactome for PrP Sc during axonal transport favoring PrP Sc aggregation and amyloid plaque formation. Since PrP-amyloid plaques in p-CJDMM1 cases sometimes colocalize with APP, a well-established marker of axonal damage, PrP Sc deposition in the white matter appears to eventually disrupt axonal integrity. The opposite scenario, namely axonal damage favoring PrP amyloid plaque formation, previously suggested by Kobayashi et al. [12], seems unlikely given the observation of plaque formation in cases with short disease duration and/or lack of significant white matter damage [1,9] (and present case #5).
Fig. 4 legend (fragment) [18]: Brain-scoring positions include medulla (1), cerebellum (2), superior colliculus (3), hypothalamus (4), thalamus (5), hippocampus (6), septum (7), retrosplenial and adjacent motor cortex (8), and cingulate and adjacent motor cortex (9). Strong similarities characterize the lesion profiles of all these transmissions.
Conclusions
The present study further establishes the existence of a rare CJD subtype, occurring in approximately 0.5% of CJD cases, designated as p-CJDMM1. The novel histotype largely overlaps with sCJDMM1 but shows, as a very distinctive feature, the presence of kuru-type PrP-amyloid plaques in both subcortical and deep nuclei white matter. Like typical CJDMM1, p-CJDMM1 can also be observed in sCJD cases showing the co-occurrence of PrP Sc types 1 and 2. Moreover, plaque-like PrP deposits in the white matter can be a feature of genetic CJD. Most significantly, p-CJDMM1 shares both PrP Sc and transmission properties with classic CJDMM1, strongly pointing to a host-dependent causal factor for amyloid plaque formation in this phenotype.
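The "approximately 0.5%" figure quoted above corresponds to 5 plaque-bearing cases among roughly 1000 examined CJD brains; a binomial confidence interval conveys how loose such an estimate necessarily is. The snippet below is an illustrative calculation, not part of the original study; the exact (Clopper-Pearson) method and the round 1000-brain denominator are assumptions taken from the text.

```python
from scipy.stats import beta

cases, brains = 5, 1000                     # p-CJDMM1 cases among examined CJD brains
point = cases / brains                      # ~0.5% relative frequency

alpha = 0.05                                # 95% exact (Clopper-Pearson) interval
lower = beta.ppf(alpha / 2, cases, brains - cases + 1)
upper = beta.ppf(1 - alpha / 2, cases + 1, brains - cases)

print(f"frequency = {point:.2%}  (95% CI {lower:.2%} - {upper:.2%})")
```

With these assumed inputs the interval spans roughly 0.2% to 1.2%, which is worth keeping in mind when the 0.5% figure is cited as a population frequency.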
Essences in Metabolic Engineering of Lignan Biosynthesis Lignans are structurally and functionally diverse phytochemicals biosynthesized in diverse plant species and have received wide attentions as leading compounds of novel drugs for tumor treatment and healthy diets to reduce of the risks of lifestyle-related non-communicable diseases. However, the lineage-specific distribution and the low-amount of production in natural plants, some of which are endangered species, hinder the efficient and stable production of beneficial lignans. Accordingly, the development of new procedures for lignan production is of keen interest. Recent marked advances in the molecular and functional characterization of lignan biosynthetic enzymes and endogenous and exogenous factors for lignan biosynthesis have suggested new methods for the metabolic engineering of lignan biosynthesis cascades leading to the efficient, sustainable, and stable lignan production in plants, including plant cell/organ cultures. Optimization of light conditions, utilization of a wide range of elicitor treatments, and construction of transiently gene-transfected or transgenic lignan-biosynthesizing plants are mainly being attempted. This review will present the basic and latest knowledge regarding metabolic engineering of lignans based on their biosynthetic pathways and biological activities, and the perspectives in lignan production via metabolic engineering. Introduction A tremendous increase in the number of elderly individuals has caused a rapid escalation of medical care expenses. This may eventually lead to a serious disruption in essential medical care systems and national financial burdens. To address these issues, extensive efforts are therefore underway to increase the healthy life expectancy, prevent lifestyle-related diseases, and make progress in medical treatments. Consequently, the consistent and appropriate intake of dietary supplements and the efficient development of clinical drugs are the most promising and effective ways to achieve these goals. Dietary supplements and drug compounds are largely derived from specialized metabolites, previously called secondary metabolites of plants, including alkaloids, flavonoids, isoflavonoids, and lignans. As depicted in Figure 1A, lignans are naturally occurring phenylpropanoid dimers (C6-C3 unit; e.g., coniferyl alcohol), in which the phenylpropane units are linked by the central carbons of the side chains. These specialized metabolites are classified into eight groups based on their structural patterns, including their carbon skeletons; the way in which oxygen is incorporated into the skeletons; and the cyclization pattern: furofuran, furan, dibenzylbutane, dibenzylbutyrolactone, aryltetralin, arylnaphthalene, dibenzocyclooctadiene, and dibenzylbutyrolactol [1,2]. Unfortunately, the amounts of lignans and their precursor molecules in model plants such as Arabidopsis thaliana and Nicotiana tabacum are quite low. Moreover, plant sources of lignans are frequently limited because of the high cost of plant hunting and collection, poor cultivation systems, long growth phase, and the low lignan content [1][2][3][4][5][6][7][8][9][10][11][12]. For instance, sesamin, a multifunctional sesame seed lignan, is extracted from sesame seed oil, the most abundant source of this compound. Nevertheless, sesamin at most constitutes 0.4%-0.6% (w/w) of sesame seed oil. Moreover, sesame seeds are cultivated only once per year, limiting the ability to obtain large amounts of this compound. 
Likewise, podophyllotoxin (PTOX), a lignan that is a precursor of semi-synthetic antitumor drugs, is isolated from the roots and rhizomes of Podophyllum hexandrum, which is distributed in very limited regions and is now endangered due to overharvesting and environmental disruption [13]. In addition, the complicated chemical structures of PTOX and the related compounds (Figure 1A) make stereoselective organic synthesis impractical and costly for producing large supplies of these compounds [1][2][3][4][5][6][7][8][9][10][11][12]. These drawbacks indicate the requirement for efficient, stable and sustainable production systems for lignans. There has been a growing body of reports on the molecular characterization of the enzymes involved in the biosynthesis of lignans, on lignan production using lignan-rich plants or cultured plant cells, including Linum, Forsythia, and Podophyllum species [14][15][16][17][18], and on the physiological analyses of their biological activities in mammals. These findings have allowed us to attempt the metabolic engineering of lignan biosynthesis in Linum, Forsythia, and Podophyllum species. This review article will provide current knowledge of lignan production via metabolic engineering and perspectives on the development of metabolic engineering-based lignan production.
Lignan Biological Activity on Mammals
Although lignans exhibit a wide variety of bioactivities in plants, insects, and mammals [12,[19][20][21][22][23][24], they are of special interest due to their unique antitumor-associated activities and their reduction of lifestyle-related disease risks. Lignans and their glycosides, including pinoresinol, sesamin, lariciresinol, secoisolariciresinol, and matairesinol, are metabolized by intestinal microflora to yield enterodiol and enterolactone, which are well known as enterolignans or mammalian lignans [25][26][27]. These metabolized lignans elicit modest estrogen-like activity in mammals. For example, enterolignans bind to the mammalian estrogen receptors, ERα or ERβ, which are key regulatory factors in the sexual maturation of genital organs [28,29]. Consequently, enterolignans, combined with other intestinal flora-generated metabolites of isoflavones and coumestans, are also called phytoestrogens. It should also be noted that low concentrations of intact lignans have been detected in the sera of mammals fed lignan-rich diets, suggesting that non-metabolized lignans are taken up by the mammalian digestive system and manifest ER-independent activities in vivo and in vitro, including tumor growth suppression, angiogenesis inhibition, and reduction of diabetes [6,[30][31][32][33][34][35]. Lignans have also been shown to exhibit positive effects on other lifestyle-related diseases. The anti-oxidative propensity of sesamin is also likely to be involved in protecting the liver from oxidation by alcohols, lipids, and oxygen radicals [39,[42][43][44]. In human intestinal Caco-2 cells, pinoresinol decreased the production of inflammatory factors, such as interleukin-6 and prostaglandin E2, following the down-regulation of Cox-2, an inducible prostaglandin synthase that is responsible for the synthesis of prostaglandin H, a precursor of the other prostaglandins [30]. In contrast, matairesinol increased levels of prostaglandin E2 [30]. These findings proved that pinoresinol and matairesinol have opposite effects in these cells [30].
Of particular epidemiological significance is the finding that intake of lignan-rich foods, such as flaxseeds and sesame seeds, reduces breast cancer risk and improves the breast cancer-specific survival of postmenopausal women [34,[45][46][47][48][49][50]. Moreover, serum enterolactone levels were positively and significantly correlated with improved prognosis in postmenopausal women with breast cancer [51]. These epidemiological findings suggest a unique suppressive activity of lignans against breast cancer risk in elderly women. Oral lariciresinol was found to suppress tumor growth and angiogenesis in nude mice implanted with human MCF-7 breast cancer via the induction of apoptosis and the up-regulation of ER expression [35]. SDG potently inhibited cell proliferation and induced the apoptosis of breast cancer cells via the down-regulation of ER- and growth factor-mediated gene expression in athymic mice [52]. Sesamin reduced signaling downstream of mitogen-activated protein kinase [53], and is likely to reduce breast tumor growth more potently than SDG [53]. Consistent with the abundance of various lignans in several foods, including flax or sesame seeds and oils, these pharmacological effects suggest that lignans are promising dietary compounds for the prevention of breast cancer. PTOX and its structurally related natural compounds suppress mitotic spindle assembly by binding to tubulin, resulting in cell cycle arrest at metaphase [18]. The PTOX semi-synthetic derivatives etoposide, teniposide, and etopophos (Figure 1B) are clinically utilized to treat certain types of cancers, including testicular/small-cell lung cancer, acute leukemia, and Hodgkin's and non-Hodgkin's lymphoma [53,54]. These PTOX-derived anti-tumor drugs induce apoptosis of tumor cells by binding to topoisomerase II, a key enzyme for cell division [53,54]. In addition, other new PTOX derivatives, including GP-11, NK-611, TOP-53, GL-331, and NPF, are undergoing phase I or II clinical trials as novel cancer drugs [54]. Combined with the difficulty of efficient chemical synthesis of PTOX due to its complicated structure, these findings highlight the importance of PTOX as a natural seed material for the production of various anti-cancer drugs. In combination, these epidemiological and physiological studies demonstrate that lignans exert diverse, but specific, beneficial effects as dietary compounds or medicinal agents for the prevention of lifestyle-related diseases, such as cancer and diabetes. Of note, respective lignans exhibit both similar and differential bioactivities in mammals, leading to requirements for the efficient and specific production of these compounds. To date, two major lignan biosynthesis pathways have been identified. Both of the pathways originate from the coupling of achiral E-coniferyl alcohol, leading to the generation of pinoresinol, a basal lignan (Figure 2). Although a pinoresinol synthase has yet to be identified, a dirigent protein (DIR) was shown to participate in the stereo-specific dimerization of E-coniferyl alcohol [55]. In diverse plant species including Forsythia, Linum, and Podophyllum, pinoresinol is reduced stepwise to lariciresinol and then secoisolariciresinol by pinoresinol-lariciresinol reductase (PLR), a member of the pinoresinol-lariciresinol/isoflavone/phenylcoumaran benzylic ether reductase (PIP) family [55][56][57][58][59][60][61]. PLR converts pinoresinol to secoisolariciresinol via lariciresinol (Figure 2).
Pinoresinol also undergoes glucosylation by UGT71A18, a UDP-glucose-dependent glucosyltransferase [62]. Such glycosylation is highly likely to suppress the chemical reactivity of the phenolic hydroxyl group of pinoresinol and to confer higher water solubility than the pinoresinol aglycone, resulting in large and stable amounts of pinoresinol [1,2,11,12]. Indeed, approximately 90% of pinoresinol accumulates in its glucosylated form in Forsythia spp. [63,64]. PLR-catalyzed metabolism and UGT71A18-directed glucosylation are reciprocally competitive pathways (Figure 2), given that both of them share pinoresinol as a substrate. Intriguingly, PLR shows a seasonal pattern of gene expression opposite to that of UGT71A18: in Forsythia leaves in Japan, the PLR gene is intensely expressed from April to August but only weakly from September to November, whereas UGT71A18 is expressed at high levels from September to November but at faint or undetectable levels from April to August [64]. These findings indicate that PLR and UGT71A18 participate in the competitive regulation of lignan biosynthesis via pinoresinol metabolism. In A. thaliana, AtPrR1 and AtPrR2 are solely responsible for the reduction of pinoresinol to lariciresinol [60], and lariciresinol and pinoresinol are glucosylated by another novel UDP-glucose-dependent glucosyltransferase, UGT71C1 [65]. A number of key lignan biosynthetic enzymes remain to be identified. Over the past few years, however, the genomes or transcriptomes of lignan-rich plants including Linum [77][78][79], Sesamum [72][73][74], and Podophyllum [71,80,81] have been documented, followed by in silico detection of functional genes. In particular, next-generation sequencing (NGS) is a promising approach for the molecular characterization of lignan biosynthetic enzymes; indeed, CYP719A23 and its homolog were identified by NGS-based transcriptome analysis [71]. These findings are expected to remarkably enhance the molecular and functional characterization of lignan biosynthetic enzymes. In addition, it has been suggested that a Podophyllum endophyte may produce PTOX [82]. NGS analyses of the genome, metagenome, and transcriptome of Podophyllum and its endophytes are expected to provide crucial clues to understanding the PTOX biosynthesis pathways.
Metabolic Engineering of Lignan Biosynthesis
To date, cell and organ cultures have been employed for the metabolic engineering of lignan biosynthesis. Furthermore, a growing body of studies has revealed that lignan biosynthesis is altered by genetic modification, light, and elicitors. This section presents an overview and discussion of recent progress in typical lignan metabolic engineering using plants, plant cells and organ cultures.
Gene Transfection or Silencing
Stable or transient transfection or gene silencing of a lignan biosynthetic enzyme gene is expected to directly alter the lignan production cascades in host plants, organs, and cells, following the development of methods for gene transfection into hosts of interest. Forsythia is a perennial plant commonly known as the golden bell flower, and is used for a variety of Chinese medicines and health diets [1,2,5,7,12,55]. As shown in Figure 2, Forsythia biosynthesizes pinoresinol, phillygenin, secoisolariciresinol, matairesinol, and arctigenin, with >90% of pinoresinol, >80% of matairesinol, and 40%-80% of arctigenin accumulated in glucosylated forms [1,2,63,64,83].
Identification of these lignans and the relevant biosynthetic enzymes suggests the potential of Forsythia as a platform for lignan production. Although efficient methods for the generation of transgenic Forsythia species have not yet been established [84], the metabolic engineering of Forsythia culture cells was originally reported. Forsythia suspension cells stably transfected with a PLR-RNA interference (RNAi) sequence (PLR-RNAi) showed complete loss of matairesinol and an approximately 20-fold increase in total pinoresinol (pinoresinol aglycone and glucoside), compared with the wildtype cells [63]. Furthermore, Forsythia transgenic cells, CPi-Fk, which are stably double-transfected with PLR-RNAi and the sesaminproducing enzyme, CYP81Q1 (Figure 2), produced sesamin (0.01 mg/g dry weight of the cell [DW]) (Figure 3), although sesamin is not biosynthesized in native Forsythia [63]. This is the first success in lignan metabolic engineering leading to an exogenous lignan using transgenic plant cells. In addition, the RNAi-based suppression of UGT71A18 (encoding a pinoresinol-glucosylating enzyme) may lead to the dramatic improvement of sesamin production in CPi-Fk cells, given that pinoresinol glucoside cannot be utilized by CYP81Q1 as a substrate [75], and 90% of pinoresinol is glucosylated in Forsythia cells [1,2,63,64,83]. Thus, the Forsythia cell culture system is an efficient and promising platform for producing both endogenous and exogenous lignans by transgenic metabolic engineering. RNAi-based metabolic engineering was attempted in various Linum species to produce endogenous lignans. Hairy roots of L. perenne transiently transfected with PLR-RNAi reduced the production of the major endogenous lignan, justicidin B, to 25%, compared with the untreated hairy roots [58]. Likewise, transient transfection of L. corymbulosum hairy roots with PLR-RNAi resulted in a marked reduction of hinokinin [59]. Combined with the justicidin B and hinokinin biosynthetic pathways, in which PLR converts pinoresinol into secoisolariciresinol (Figure 2), these findings indicate that PLR-directed conversion of pinoresinol into secoisolariciresinol is a rate-limiting step in justicidin B and hinokinin biosynthesis, at least in the hairy roots of L. perenne and L. corymbulosum, respectively. Identification and genetic manipulation of justicidin B and hinokinin synthase will contribute a great deal to the establishment of procedures for the direct metabolic engineering of these lignans. Seed coats of PLR-RNAi-transgenic plants of L. usitatissimum showed the high accumulation of pinoresinol diglucoside and loss of SDG [84]. Intriguingly, these PLR-RNAi-transgenic plants produced the 8-5' linked neolignans, dehydrodiconifnyl alcohol and dihydro-dehydrodiconifnyl alcohol, neither of which was detected in the wildtype plants [84]. Taken together, these findings reinforce the potential of Forsythia and Linum transgenic or transiently gene-transfected cells and plants as the metabolic engineering-based platforms for on-demand production of both endogenous and exogenous lignans. The draft genome and transcriptome of Linum usitatissimum [77][78][79] will accelerate the identification of the enzymes involved in the biosynthesis of Linum lignans, leading to the efficient lignan production using gene-modified plant sources. Two factors should be considered in constructing gene-modified plant platforms for lignan production, the type of host and the use of transgenic or transiently transfected hosts. 
Host types can include plants, organs, and cell cultures. For example, although the amount of sesamin produced by CPi-Fk cells is 100to 200-fold lower than that by native sesame seeds, CPi-Fk-based lignan metabolic engineering has several advantages. CPi-Fk cells proliferate 10-fold in two weeks in standard culture medium [63], and can be cultivated at all times and locations, whereas sesame seeds are cultivated in limited regions only once a year. Moreover, the conditions used in the culturing CPi-Fk cells, including temperature, light wavelength and intensity, and medium components, can be altered to optimize sesamin production. Forsythia plants have much greater biomass, with higher amount of lignans, than suspension cell cultures, and these plants can grow and propagate from small explants without flowering or seed formation. However, efficient generation of transgenic Forsythia plants still requires further basic research due to the markedly low transformation efficiency by any known gene transfection methods and deviation among Forsythia species [85][86][87]. In contrast, the generation of both stable (namely transgenic) and transient transfectants of Linum species are well established, and thus, the amounts of precursors or intermediates of targeted lignans are major determinants for the employment of cell cultures, organ cultures, or plants as host platforms. Additionally, gene-modified host plants may fail to normally grow or to produce lignans of interest due to pytotoxicity of lignans, although the underlying molecular mechanisms have not fully been elucidated [3,11,12,[18][19][20]22]. Therefore, generation of lignan-producing plants using multiple plant species is occasionally required. The second factor involves construction of either transgenic or transiently transfected hosts. Transgenic plants and cell cultures, once generated, are sustainably used for lignan production and readily up-scaled, whereas generation of transgenic plants, in particular non-model plants, may consume time and costs. Moreover, cultivation of transgenic plants in general requires a closed facility for gene-stably modified plants. Transiently transfected plants require repeated transfections, and transient transfection of multiple genes is likely to reduce the transfection efficiency. In addition, massive transient transfection remains to be fully developed [88]. Further research on lignan metabolic engineering, using transgenic or transiently gene-transfected plants, organ cultures, and cell cultures, is expected to lead to the establishment of both universal and molecular species-specific strategies for gene-regulated metabolic engineering of lignan biosynthesis pathways. Light Irradiation Biosynthesis of several secondary metabolites, including anthocyanin, carotenoid, and shikonin, is affected by wavelength [89][90][91][92]. Light irradiation has also been shown to improve the production of both endogenous and exogenous lignans by CPi-Fk cells. Irradiation of CPi-Fk cells for two weeks with white fluorescent, blue LED, and red LED light increased sesamin production 2.3-, 2.7-, and 1.6-fold, respectively, compared with cells cultured in the dark [93]. Likewise, irradiation of CPi-Fk cells increased pinoresinol (aglycone and glucoside) production 1.5 to 3.0-fold [93]. 
Intriguingly, expression of the pinoresinol-glucosylating enzyme UGT71A18 was also downregulated in CPi-Fk cells under blue LED or red LED light, leading to increased sesamin production [93], given that pinoresinol glucoside is not metabolized into sesamin by CYP81Q1 [12,75]. In Linum species, suspension cultures of L. album produced two-fold more PTOX under red light than in the dark [94]. Compared with white fluorescent light, irradiation of S. indicum leaves 3-5 weeks after sowing with blue LED light increased sesamin content 2.0-fold, whereas irradiation with red LED light reduced sesamin content two-fold [9,95]. Although the underlying molecular mechanisms have yet to be clarified, light irradiation can also improve lignan productivity in both cell cultures and plants.
Elicitation
Plant defense systems are triggered upon injury or infection via signaling by the phytohormones methyl jasmonate (MeJA) and salicylic acid (SA); treatment with elicitors, including fungi, their extracts and glycan components, as well as MeJA and SA themselves, also mimics such activation. Moreover, lignans, at least in part, are believed to be involved in host defense systems [12,18,96]. In combination, elicitors are expected to enhance lignan biosynthesis [18,97]. As summarized in Table 1, the effects of various elicitors on lignan production have been examined in a wide variety of cell cultures and hairy roots of Forsythia, Juniperus, and Podophyllum (Table 1). MeJA and SA were found to increase the production of PTOX and structurally related lignans, or the expression of genes encoding the enzymes responsible for coniferyl alcohol synthesis, phenylalanine ammonia-lyase (PAL), cinnamoyl-CoA reductase (CCR), and cinnamyl alcohol dehydrogenase (CAD), in cell suspension cultures of L. album [98,99] and L. nodiflorum [98], in Podophyllum hexandrum [100], and in L. austriacum callus culture [101]. These phytohormones also increased PTOX production or the relevant gene expression in hairy roots of L. tauricum [102]. Additionally, an increase in the production of pinoresinol and matairesinol by MeJA was observed in Forsythia intermedia cell suspension culture [103]. Chitosan, chitin oligomers, and other glycans also enhanced PTOX production or the gene expression of lignan biosynthetic enzymes in Juniperus chinensis callus culture [104], L. austriacum callus culture [101], and L. album cell suspension culture and hairy roots [105][106][107]. In particular, comparisons of chitin tetramer, pentamer, and hexamer and chitosan tetramer and pentamer showed that treatment of L. album hairy roots with chitosan hexamer for five days most potently enhanced PTOX and lariciresinol production, as well as upregulating the expression of the PAL, CCR, CAD, and PLR genes [107]. Overall, treatment with these elicitors resulted in 2- to 7-fold increases in PTOX synthesis and in the expression of genes encoding enzymes involved in the early steps of lignan biosynthesis in various plant cells and hairy roots. Fungal co-culturing, extracts, and filtrates exhibited unique effects on the metabolic engineering of lignan production (Table 1). Botrytis cinerea, Phoma exigua and Fusarium oxysporum extracts triggered the accumulation of monolignols, and enhanced PAL activity and the gene expression of PAL, CCR and CAD in L. usitatissimum cell suspension cultures [108]. Treatment of L.
album cell cultures with Fusarium graminearum extract for five days increased PTOX 7.0-fold and PAL, CCR, and CAD mRNAs >10-fold, compared with untreated cells, indicating that this extract is a more potent elicitor of PTOX production and PAL, CCR, and CAD expression than treatment with chitosan, chitin, or MeJA treatment for three days [105,106,109]. In contrast, Rhizopus stolonifer and Rhizoctonia solani extract stimulated 8.8-fold and 6.7-fold greater accumulation of lariciresinol, instead of PTOX, in L. album cell cultures after five-day treatment, as compared with untreated cells, and the highest (6.5-fold) PLR gene induction was observed in L. album cell cultures treated with Rhizopus stolonifer extract for two days [109]. Similar data were obtained in L. album hairy roots with the same fungal extracts [106] or L. album cell suspension culture with Fusarium graminearum culture filtrate [110], but the latter exhibited less lignan production. These studies revealed that fungal extract exhibited species-specific effects on the lignan biosynthesis pathways, although investigation of the molecular basis awaits further study. Examination of the regulation of gene expression has thus far been restricted to enzymes responsible for the upstream of lignan biosynthesis pathways, indicating that the effects of elicitors on the gene expression of the enzymes involved in the downstream of lignan biosynthesis such as SIRD and CYP719A23, and lignan glucosyltransferases ( Figure 2) will lead to the identification of more effective elicitors for lignan production. Conclusions There have been many recent advances in metabolic engineering for lignan production by plants, including: (i) the molecular characterization of novel genes encoding enzymes for biosynthesis pathways of dietary and medicinal lignans; (ii) the production of both endogenous and exogenous lignans by transient or stable transfection of lignan biosynthetic genes into cultured cells, tissues and plants; and (iii) the identification of exogenous stimuli such as light and elicitors that increase the production of lignans by cultured cells and plants. Taken together, combination of gene transfection, light, and elicitors is a promising strategy for further improvement of the lignan productivity; e.g., elicitation of CPi-Fk under blue or red LED light is expected to increase the amounts of sesamin. Moreover, bioinformatic analysis based on the aforementioned experimental data will lead to the systematic prediction of optimal lignan production strategy: hosts (cells, organ cultures, plants), light conditions, elicitor types, and transfection types, given that different optimal production of respective lignans is highly likely to occur under different conditions. For example, three Forsythia varieties, F. koreana, F. intermedia, and F. suspensa displayed differential growth and regeneration in a medium componentor selection marker antibiotics-dependent fashions [84], and Linum spp. showed genus-specific sensitivities to different elicitors (Table 1). Public acceptance of dietary products derived from transgenic organisms is limited. Nevertheless, lignans produced by transgenic hosts are chemically identical to natural ones, and free from any recombinant genes or proteins. Thus, their public acceptance is expected to be more easily garnered than that of transgenic foods. Accordingly, more attention should be paid to the establishment of scaling-up and following industrialization of the lignan production systems [111][112][113]. 
Large-scale lignan production by transgenic plants requires a closed cultivation system to prevent contamination of the environment by the transgenic plants. Recently, various closed plant factories have been emerging; these completely shut off gene flow into the outer environment and enable transgenic plant-based molecular breeding for genes or compounds of interest under optimal and sterile conditions [111][112][113]. Such advances in the metabolic engineering of lignan biosynthesis, combined with the aforementioned outcomes of a wide range of basic research, will surely pave the way for the conversion of conventional agricultural lignan production into innovative industrial lignan production.
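The engineering rationale that runs through the Forsythia work above, namely that PLR and UGT71A18 compete for the same pinoresinol pool, so silencing one branch redirects flux toward the other (and toward the transfected sesamin synthase CYP81Q1), can be illustrated with a toy steady-state competition model. Everything below is hypothetical: the rate constants and the assumption of simple first-order branching are chosen only to show the qualitative effect of the knockdowns, not to reproduce measured Forsythia kinetics.

```python
def branch_fractions(**k):
    """Fractions of the pinoresinol pool routed to competing first-order branches."""
    total = sum(k.values())
    return {name: rate / total for name, rate in k.items()}

# Hypothetical rate constants (arbitrary units), chosen only so that roughly 90% of
# pinoresinol ends up glucosylated in the unmodified background. CYP81Q1 is the
# transfected sesamin-forming branch, not native to Forsythia.
scenarios = {
    "CYP81Q1 only":                dict(PLR=1.0,  UGT71A18=9.0, CYP81Q1=0.5),
    "PLR-RNAi + CYP81Q1 (CPi-Fk)": dict(PLR=0.05, UGT71A18=9.0, CYP81Q1=0.5),
    "PLR- and UGT71A18-RNAi":      dict(PLR=0.05, UGT71A18=0.5, CYP81Q1=0.5),
}

for label, rates in scenarios.items():
    shares = branch_fractions(**rates)
    print(label, {name: f"{frac:.1%}" for name, frac in shares.items()})
```

Under these assumptions the sesamin-forming branch rises from a few percent of the pinoresinol flux to roughly half once both competing reactions are silenced, which is the qualitative argument made in the text for combining PLR- and UGT71A18-RNAi in CPi-Fk cells.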
Progress Towards Bioelectrochemical Remediation of Hexavalent Chromium : Chromium is one of the most frequently used metal contaminants. Its hexavalent form Cr(VI), which is exploited in many industrial activities, is highly toxic, is water-soluble in the full pH range, and is a major threat to groundwater resources. Alongside traditional approaches to Cr(VI) treatment based on physical-chemical methods, technologies exploiting the ability of several microorganisms to reduce toxic and mobile Cr(VI) to the less toxic and stable Cr(III) form have been developed to improve the cost-e ff ectiveness and sustainability of remediating hexavalent chromium-contaminated groundwater. Bioelectrochemical systems (BESs), principally investigated for wastewater treatment, may represent an innovative option for groundwater remediation. By using electrodes as virtually inexhaustible electron donors and acceptors to promote microbial oxidation-reduction reactions, in in situ remediation, BESs may o ff er the advantage of limited energy and chemicals requirements in comparison to other bioremediation technologies, which rely on external supplies of limiting inorganic nutrients and electron acceptors or donors to ensure proper conditions for microbial activity. Electron transfer is continuously promoted / controlled in terms of current or voltage application between the electrodes, close to which electrochemically active microorganisms are located. Therefore, this enhances the options of process real-time monitoring and control, which are often limited in in situ treatment schemes. This paper reviews research with BESs for treating chromium-contaminated wastewater, by focusing on the perspectives for Cr(VI) bioelectrochemical remediation and open research issues. Introduction Hexavalent Cr(VI) and trivalent Cr(III) chromium are the most common forms of this element in the environment. Cr(VI) is water-soluble in the full pH range and extremely toxic to human health and all living organisms because of its mutagenic and carcinogenic properties [1]. The U.S. EPA has classified Cr(VI) as one of the 17 most dangerous elements for human health [2,3]. In aqueous systems, Cr(VI) can be present in different species: primarily as chromic acid [H 2 CrO 4 ] and its salts, the hydrogen chromate ion [(HCrO 4 ) − ], and the chromate ion [(CrO 4 ) 2− ]. The chemical equilibrium of the different chromium species depends on Cr(VI) concentration, oxidation-reduction potential (ORP), of the Cr(VI) remediation strategy. Cr could be re-mobilized when the donor addition stops and the site returns to natural oxidizing conditions [28]. A promising strategy to be explored for in situ Cr(VI) bioremediation is the application of bioelectrochemical systems (BESs), which is an emerging platform technology combining microbial processes with electrochemical systems. In BESs, the ability of electrochemically active microorganisms (EAM) to use electrodes as inexhaustible electron acceptors/donors are exploited, via a process typically referred to as extracellular electron transfer [30][31][32]. BES reactors essentially consist of electrodes, an anode and a cathode, immersed in an electrolytic medium/solution and an optional ion-exchange membrane to separate the compartments. At the anode, oxidation of reduced species generates a flow of electrons to the cathode, where reduction reactions take place. At least one or both reactions are microbially mediated. 
Membranes provide a separation structure to isolate different bulk liquids in the anode and cathode chambers, to optimize the operating condition without affecting the microbial community, to prevent undesired substrate transport, and to facilitate transfer of ionic species from one chamber to another for charge balance, increasing, however, the internal resistance of the system [33]. In case of thermodynamically favorable redox reactions, BESs can result in direct electricity production (microbial fuel cells, MFCs) [30,34,35] or, by external energy supply, in enhancement of thermodynamically unfavorable processes [36], with production of less toxic or value-added chemicals (such as hydrogen, H 2 O 2 , methane, or even organic molecules) (in microbial electrolysis cells, MECs) [37]. BESs have been extensively studied and intensively developed, especially during the last 10 years, for wastewater treatment, valorization, and reuse [38,39]. In environmental remediation, BESs, through biologically-mediated oxidation (at the anode) and reduction (at the cathode), potentially provide a flexible platform for treating many pollutants frequently found at contaminated sites, in co-contamination cases [40,41]. Solid electrodes can serve, in fact, either as an electron sink, for the oxidation of petroleum hydrocarbons [42][43][44] or As(III) [45], or as electron donor, for reduction of chlorinated hydrocarbons [46,47], nitrate [48], or oxidized metals, including Cr(VI) [49]. In in situ treatments, it would be possible to directly introduce the electrodes in the aquifer and stimulate biologic activity with no external chemicals or a minimal external chemical supply [50], which creates an advantage in terms of cost-effectiveness and sustainability in comparison to current approaches [51]. The electrical signal generated in BESs provides opportunities for real-time monitoring of Cr(VI) concentration [52][53][54] and in situ microbial activity [55]. Cr(III) deposition next to the electrode theoretically offers the chance of recovering the metal itself through the electrode [39]. Several experimental works investigated Cr(VI) reduction in MFCs with bioanodes and either abiotic cathodes, relying on Cr(VI) electrochemical reduction, or biocathodes, while taking advantage of biological activity. However, no research, to our knowledge, has yet specifically addressed Cr(VI) contaminated groundwater remediation. This review provides a comprehensive analysis of the current knowledge and experiences in bio-electrochemical treatment of Cr(VI) contaminated water streams, in order to explore BESs opportunities for in situ groundwater bioremediation. Principles of Cr(VI) Reduction in Bio-Electrochemical Systems Bio-electrochemical Cr(VI) reduction essentially relies on cathodic reduction (Figure 1), with most research focused on wastewater treatment coupled with energy recovery in MFCs, with biotic anodes and either abiotic or biotic cathodes [56]. Only a single study evaluated Cr(VI) detoxification at the bioanode, via bacterial protection mechanisms [57]. Figure 1. Schematic overview of a BES for Cr(VI) reduction as MFC with energy harvesting or MEC with external supply ( [58], modified). In MFCs, oxidation of the electron donor at the anode is coupled with a reduction of species with comparable or higher redox potential at the cathode. The net potential of the MFC, as the sum of anodic and cathodic potentials, is positive. Therefore, spontaneous electron flow from the anode to the cathode occurs. 
Conversely, in MECs, thanks to an external power input that forces electron flow, the oxidation of an electron donor at the anode can be coupled with the reduction of lower redox potential species at the cathode. CEM/PEM: Cation/Proton Exchange Membrane. EAB: Electrochemically Active Bacteria. Electrochemical Reduction of Cr(VI) Due to its high standard reduction potential, which is comparable to, or, in certain conditions, even higher than those of other commonly used electron acceptors in BESs, Cr(VI) was initially investigated as a theoretically favorable electron acceptor to be reduced at abiotic cathodes in a typical MFC configuration for power production. This concept was demonstrated for the first time in a dual-chamber MFC (2CMFC) by Wang et al. [59] who, using acetate as electron donor and a Cr(VI) solution at a pH of 2 as acceptor, observed higher power densities than for O2 and hexacyanoferrate. The half-cell Cr(VI) reduction potential and the stoichiometry of the reaction are, however, strongly dependent on chromium species, concentration, and pH conditions (Table 1). In water solutions, the dichromate (Cr2O7^2−) form prevails for total chromium concentrations above approximately 1 g/L [3,17]. At lower concentrations, which typically occur in groundwater plumes or natural surface water, the dominant species is HCrO4^− at a pH between 1 and about 6 to 6.5, and CrO4^2− at neutral or alkaline conditions [60]. High positive standard reduction potentials (Eh0 vs. SHE) for both Cr2O7^2− and HCrO4^− indicate a thermodynamically favorable reaction, conducive to high power density generation in BESs, only in acidic environments [39,59,61]. On the contrary, the lower potential of CrO4^2− limits chromium electrochemical reduction in the neutral pH range, which often makes an external energy supply necessary. 
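To make this pH dependence concrete, the sketch below applies the Nernst equation to the dichromate couple (Cr2O7^2− + 14 H+ + 6 e− → 2 Cr3+ + 7 H2O, E0 ≈ +1.33 V vs. SHE). The standard potential, the example concentrations, and the use of concentrations in place of activities are illustrative assumptions for this sketch, not values taken from the studies cited above.

```python
import math

R, T, F = 8.314, 298.15, 96485.0  # J/(mol K), K, C/mol e-

def dichromate_cathode_potential(pH, c_cr6=1e-4, c_cr3=1e-6, e0=1.33):
    """Nernst potential (V vs. SHE) for Cr2O7^2- + 14 H+ + 6 e- -> 2 Cr^3+ + 7 H2O.

    Concentrations (mol/L) stand in for activities -- an illustrative
    simplification, not a rigorous speciation calculation.
    """
    n = 6
    h = 10.0 ** (-pH)
    # Reaction quotient Q = [Cr3+]^2 / ([Cr2O7^2-] * [H+]^14)
    log_q = 2 * math.log10(c_cr3) - math.log10(c_cr6) - 14 * math.log10(h)
    return e0 - (2.303 * R * T / (n * F)) * log_q

for pH in (2.0, 7.0):
    print(f"pH {pH}: E = {dichromate_cathode_potential(pH):.2f} V vs. SHE")
# pH 2.0: E = 1.13 V; pH 7.0: E = 0.44 V -- acidic conditions make the
# cathodic reduction far more favorable, consistent with the text above.
```

Even with these rough assumptions, the roughly 0.7 V drop between pH 2 and pH 7 illustrates why abiotic electrochemical reduction is most effective in strongly acidic catholytes.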
Furthermore, at pH < 4, the predominant form of chromium reduction is dissolved Cr 3+ , whereas in the 5-8 pH range, soluble Cr(OH) 2 + and Cr(OH) 2+ , coexist with Cr(OH) 3 or Cr 2 O 3 precipitates, which are responsible for the progressive deterioration of the reduction rates as Cr(III) deposits onto the cathode surface, especially at a pH above 6.5 [62,63]. Effective abiotic electrochemical reduction requires strongly acidic conditions (optimally at a pH of 2) that greatly limit its applicability to environmental remediation. Biologically mediated Cr(VI) reduction, relying on several microbiological mechanisms, may overcome the current issues of abiotic electrochemical reduction, by offering opportunities to hexavalent chromium treatment in the environmentally compatible neutral pH range. Moreover, the biofilm on the cathode may somewhat protect and improve the long-term efficiency of the electrode by preventing or delaying Cr(III) deposition [64,65]. Microbiological Mechanisms of Cr(VI) Reduction Several mechanisms of bacterial Cr(VI) reduction have been described both under aerobic and anaerobic conditions [66,67]. In the presence of oxygen, the reduction of Cr(VI) is commonly associated with soluble chromate reductases and requires reduced nicotinamide-adenine dinucleotide phosphate (NAD(P)H) as electron donor [66,68]. The mechanisms associated with Cr(VI) reduction can involve a direct, one-step or two-step electron transfer. Escherichia coli YieF Cr(VI) reductase transfers three electrons to Cr(VI) in one step to produce Cr(III), and one to molecular oxygen generating reactive oxygen species (ROS) [69]. The Cr(VI) reductase ChrR from Pseudomonas putida involves a one/two steps mechanism in which one/two electrons are donated from NAD(P)H to generate the intermediate Cr(V)/Cr(IV) that is further reduced to Cr(III) by one/two additional electrons [66,70]. Under aerobic conditions, most Cr(VI)-resistant microorganisms tolerate up to 1500 mg Cr(VI)/L [71]. However, the rate of chromium reduction is directly related to the concentration of the contaminant and physical parameters, such as pH and temperature [71,72]. One of the first studies with Cr(VI)-reducing bacteria, achieved almost 100% of chromate reduction in 2.0 mg/L Cr(VI) solution within 90 h by P. putida PRS2000 and P. fluorescens LB303 [73]. Similar results were obtained by the soil-isolated strains Bacillus sp. E29 and Arthrobacter crystallopoietes strain ES32 that achieved reductions of 82% and 90% of Cr(VI) in less than 6 h and 12 h, respectively [71]. Much higher Cr(VI) concentrations were removed by Serratia proteamaculans. Within 48 h, 100 mg Cr(VI)/L were reduced (corresponding to 100% of dichromate added) under aerobic conditions [72]. In the same study, the authors demonstrated that S. proteamaculans was also able to reduce chromate anaerobically, but the process was more efficient in the presence of oxygen. Under anaerobic conditions, Cr(VI) can serve as the final electron acceptor in a process that usually involves membrane-bound reductases [66], but also soluble enzymes (e.g., soluble cytochrome c 3 from Desulfovibrio vulgaris) were observed to reduce Cr(VI) [75]. The overall reaction is provided in Equation (1), with glucose as electron donor [76]. Cr(VI) reduction in anaerobic conditions was reported in several microorganisms. Both P. dechromaticans and in Enterobacter cloacae are capable to use Cr(VI) as terminal electron acceptor [66,77]. 
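The overall anaerobic reaction with glucose as electron donor is cited above as Equation (1) but is not reproduced in this excerpt. As an illustration only (assuming chromate as the Cr(VI) species and complete oxidation of glucose to bicarbonate, and not necessarily the exact form given in [76]), an electron-balanced stoichiometry is:

C6H12O6 + 8 CrO4^2− + 14 H2O → 8 Cr(OH)3 + 6 HCO3^− + 10 OH^−

Each glucose supplies 24 electrons and each Cr(VI) accepts 3, so one mole of glucose can, at most, reduce eight moles of Cr(VI).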
Gene expression of Shewanella oneidensis MR-1 during Cr(VI) reduction was studied [78]. Under Cr(VI) reducing conditions, 83 genes were upregulated. Among the others, genes involved in the reduction of Fe(III) and Mn(IV) were also upregulated. Further studies with mutant strains confirmed the involvement of mtrA, mtrB, mtrC, and omcA in the reduction of Cr(VI) [78,79]. In anaerobic environments, iron(II) and sulphide can also play a role in Cr(VI) reduction [66,80]. Iron-reducing bacteria (IRB) reduce Fe(III) to Fe(II), and biologically produced Fe(II) can be re-oxidized by reducing Cr(VI) to Cr(III) [76]. Sulphate is an electron acceptor widely used by several bacterial groups for the degradation of the organic matter in anaerobic environments [81]. In sulphate-rich environments, Cr(VI) can react with sulphide produced by sulphate-reducing bacteria (SRB) to produce Cr(III) that precipitates [76]. In addition to laboratory studies focused on the elucidation of possible mechanisms used by bacteria for Cr(VI) reduction, the ability to reduce Cr(VI) in soil-aquifer systems has been reported but needs to be further investigated. Clostridium chromiireducens sp. a Cr(VI)-resistant, Gram-positive, spore-forming, obligate anaerobe, was identified for its ability to reduce Cr(VI) at a contaminated site [82]. Many previous reports also confirmed autotrophic reduction of chromium, by mostly using hydrogen as electron donor [83,84]. A Gram-negative bacterium, capable of reducing hexavalent chromium, was also isolated from a contaminated site. 16S rRNA analysis revealed that it belonged to the Pseudomonas genus, with high similarity to P. synxantha [85,86]. Marsh and McInerney [2] demonstrated reduction of Cr(VI) with hydrogen and carbon dioxide/NaHCO 3 as an electron donor and carbon source, respectively, carried out by an anaerobic mixed culture developed from aquifer sediment. As previously reported, the reduction of the Cr(VI) can occur through the action of soluble cytochromes [66], membrane reductase mtrCAB, and omcA [78,79]. Kracke and colleagues [87] report that the mtrCAB terminal reductase complex and the omcA cytochrome are able to interact with the electrode directly or via mediators (flavins). This scientific evidence combined with the well documented ability of IRB (i.e., Shewanella oneidensis MR-1) [78] and SRB (i.e., Desulfovibrio desulfuricans 27774) [88] to reduce Cr(VI) and to exchange electrons with solid materials are good reasons to consider the feasibility of a bio-electrochemical system for reducing Cr(VI). All the previously described biological mechanisms can take place in BESs. The inoculated cathode may act as an electron donor for electrochemical or bio-electrochemical Cr(VI) reduction. Electroactive Cr-reducing microorganisms in the cathodic biofilm or the production of hydrogen at the cathode of a BES could also favor the autotrophic reduction of Cr(VI) by hydrogenotrophic bacteria [2,83,84], or IRB or SRB involved in bio-electrochemical processes may facilitate indirect reduction of Cr(VI). Table 2 summarizes the available research experiences of Cr(VI) reduction with biocathodes, by focusing on the electrode materials and inoculum, cathode potential, pH, Cr(VI) concentrations, the observed Cr(VI) removal rates, and efficiencies. These experiments evaluated Cr(VI) reduction under anaerobic conditions. So far, no literature report a reduction of Cr(VI) in open-air biocathodes. Oxygen is unquestionably the preferred final electron acceptor for microorganisms. 
However, possible advantages to Cr(VI)-reducing bacteria with an aerobic cathode may exist. Cr(VI)-reducing bacteria may be favored over other species in an environment with specific toxicity, even though tolerance to high Cr(VI) concentrations, up to 1 g Cr(VI)/L or even above, have often been documented [103]. Cr(VI) Biocathodic Reduction The first test with Cr(VI) reducing biocathode was performed by Tandukar et al. [89], who inoculated the cathodic compartment of a 2CMFC (PEM membrane) with a mix of a denitrifying and methanogenic mixed culture, dosing bicarbonate as sole carbon source. Anaerobic mixed culture fed with acetate served as anode inoculum. With graphite plate electrodes and an external 1000 Ω resistor, the authors reported power densities of 7.0 mW/m 2 and 55.5 mW/m 2 depending on initial Cr(VI) concentration (22 and 63 mg/L, respectively). The maximum specific Cr(VI) reduction rate, about 0.46 mgCrVI/g VSS /h, was registered at a 63 mgCr(VI)/L initial concentration. Analysis of the Cr(VI) reduction community by 16S rRNA gene sequences showed a predominance of phylotypes related to Trichococcus pasteurii and P. aeruginosa. Even considering the small amount of substrate that can leak from the anode, even when an ion-exchange membrane is used [104], and organic carbon released in cell lysis, most Cr(VI) reduction was obtained with autotrophic conditions. In a batch-fed 2CMFC, the cathode is inoculated with a mixed microbial consortium from a Cr(VI) contaminated site and 39.2 mg Cr(VI)/L. Huang et al. [90] observed a specific reduction rate of about 2.4 mgCrVI/g VSS /h, and 3.9 W/m 2 maximum power production at a current density of 11.1 mA/m 2 . Anaerobic pure cultures were also tested [91,92,102]. Hsu et al. [91] compared Cr(VI) reduction by six Shewanella strains at the cathode of MFCs in repeated cycles, observing initially the use of the electrode as the sole electron source in all tested strains. The variability in Cr(VI) reduction was associated with different mechanisms of chromium reduction, not identified, for each Shewanella strain evaluated, and other factors such as biofilm attachment to the electrode. Repeated Cr(VI) injections resulted in a general decrease in the MFCs performances and high residual Cr(VI) concentrations, which were explained with microorganisms' finite tolerance limit to Cr(VI) exposure and gradual fouling of the system by biological or reduced chromium species, which limit the active surface area of the cathode. Xafenias et al. [92] inoculated the cathode of an MFC and a MEC with S. oneidensis MR-1 fed with lactate. The combined use of the electrode and lactate as electron donors allowed bio-electrochemical and non-bio-electrochemical Cr(VI) reduction at the same time, even the contribution of the two different mechanisms to the overall process was not recognized. In Wu et al. [102], Bacillus sp. showed efficient Cr(VI)-reducing ability in both heterotrophic and autotrophic environments. The Cr(VI) removal rate reached 2.56 mg/L/h, which was 1.75 times higher than that of the MFC with the sterile control cathode. Effects of pH and Cr(VI) Concentration Extreme pH values (indicatively pH <5 or >8) and/or high chromium concentrations, typically 10-100 mg/L, can inhibit microbial activity. Tandukar et al. [89] reported that initial Cr(VI) concentrations above 80 mg/L inhibited the reduction rates in a denitrifying community. Li et al. 
[93] observed 10 mg Cr(VI)/L to irreversibly inhibit microbial activity in a single-chamber MFC inoculated with municipal wastewater. Below toxic levels, and in line with thermodynamics, an increased initial Cr(VI) concentration was associated with an improved specific chromium reduction rate and MFC power production [90,93]. The pH, with its effects on the surface properties of the cells, including cell surface hydrophobicity, net surface electrostatic charge, and biofilm structure, may also heavily affect the complex biological and electrochemical reactions at the biocathode. Variation in pH may also affect enzymatic activity, as well as the precipitation or bio-adsorption of the produced Cr(III) [105]. In Huang et al. [64,65], a 50 mg/L initial Cr(VI) concentration inhibited the catalytic activity of electrochemical bacteria in the biocathode, whereas, at a 20 mg/L Cr(VI) concentration, chromium reduction efficiencies increased (+27.3%) in acidic catholyte (pH = 5) and decreased (−21%) in alkaline catholyte (pH = 8), with respect to neutral pH. A 0.22 V net cell potential increase, from 0.54 V at a pH of 8.0 to 0.76 V at a pH of 5.0, beyond the theoretical value of 0.177 V derived from Nernst's law, was associated with a pH decrease in the cathodic compartment, which indicates a positive response of the microorganisms' activity to the pH decline [64]. Similar effects have been reported for denitrifying biocathodes [106]. Clearly, pH also affected Cr(III) precipitation, with 9.3 mg/L dissolved Cr(III) at the end of the test at pH 5.0, in comparison to 0.3 mg/L at a pH of 8.0. Effects of Cathode Potential Tests with potentiostatically controlled cathodes pointed out that an optimal potential range typically exists for enhancing Cr(VI) reduction performance in biocathodes [95,107]. Theoretically, from Nernst's law, in an MFC with a chromium-reducing cathode and an acetate-oxidizing bioanode, the open-circuit voltage at pH 7.0 and 25 °C is about 0.68 V, which results in a theoretical cathode potential of about 0.4 V [90]. Set cathode potentials lower than this value would promote the Cr(VI) reduction process. Huang et al. [64] compared the behavior of a potentiostatically controlled BES (with the cathode operated at +200, −150, −300, and −450 mV vs. SHE) to an MFC operating with a 200 Ω external load. Cathodes at −150/−300 mV set potential promoted fast start-up (19 days, compared to 26 days in the uncontrolled MFC or 28 days in the +200 mV set cathode system) and Cr(VI) reduction, with almost complete removal of 20 mg/L in 24 h, compared with 43-70% in the other systems. Furthermore, +200 mV and −450 mV poised cathodes limited bacterial growth, whereas −150 and −300 mV had beneficial effects. In all the tests, the reduction of Cr(VI) was attributed to microorganisms directly accepting electrons from the electrode surface and transferring them to Cr(VI), as no hydrogen gas production was observed even in the test at the most negative potential. An optimal set potential can provide an appropriate selective pressure for adaptation of the microbial community in the system, which leads to enhanced microbial electrochemical interaction with the cathode. The difference between the Cr(VI) reduction potential and the cathode set potential represents the maximum energy to be gained by the cathodic microorganisms. Thus, the lower the set cathode potential is, the more energy the microorganisms will potentially obtain. 
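As a rough, back-of-the-envelope illustration of that last point (the sketch assumes the ~+0.4 V theoretical Cr(VI)/Cr(III) cathode potential at pH 7 quoted above, 3 electrons per chromium, and ideal conditions; the numbers are not measurements from the cited studies), the thermodynamic ceiling on the energy available to cathodic microorganisms can be estimated as ΔG = −nFΔE:

```python
F = 96485.0          # C per mol e-
E_ACCEPTOR = 0.40    # V vs. SHE, approximate Cr(VI)/Cr(III) potential at pH 7 (see text)
N_ELECTRONS = 3      # e- transferred per Cr(VI) -> Cr(III)

def max_energy_gain_kj_per_mol_cr(e_cathode_v):
    """Thermodynamic ceiling on energy gain (kJ per mol Cr reduced) when
    microbes take electrons from a cathode poised at e_cathode_v and pass
    them to Cr(VI); actual microbial energy conservation is far lower."""
    delta_e = E_ACCEPTOR - e_cathode_v          # V
    return N_ELECTRONS * F * delta_e / 1000.0   # kJ/mol Cr

for e_set in (0.200, -0.150, -0.300, -0.450):   # set potentials discussed above [64]
    print(f"cathode at {e_set:+.3f} V: up to ~{max_energy_gain_kj_per_mol_cr(e_set):.0f} kJ/mol Cr")
# Lower (more negative) set potentials leave a larger potential difference,
# and hence more energy, available -- but only if the biofilm can exploit it.
```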
However, if the cathode potential is set too low, beyond the self-regulation capability of the microbial consortia, the energy gain by the cathodic microorganisms is lost. The −150 and −300 mV set potentials likely allowed the biomass to gain more energy than the +200 mV set potential. Although theoretically the most favorable, −450 mV may have exceeded the self-regulation capability of the microbial consortia, with no positive effect on power generation and Cr(VI) reduction [64]. Xafenias et al. [92] demonstrated the positive impact on Cr(VI) reduction of riboflavin, a naturally produced mediator, in potentiostatically controlled Shewanella oneidensis MR-1 biocathodes. Different configurations, with lactate supplied as electron donor in inoculated and abiotic systems, with or without riboflavin addition, were tested. At a 20 mg/L initial Cr(VI) concentration, in a −300 mV poised biocathode fed with lactate (30 mM, or equivalently 2700 mg/L), up to 45% Cr(VI) reduction was observed in 4 h, in comparison with 5% Cr(VI) reduction in a biotic system with no lactate and 15% reduction in abiotic systems with lactate. In a 2CMFC with S. oneidensis MR-1 fed with lactate in both the anodic and cathodic compartments, Cr(VI) reduction at the cathode (10 mg Cr(VI)/L initial concentration) was coupled with a maximum current density production of 32.5 mA/m2 [92]. Effects of Materials, Reactor Design, and Other Operational Parameters Huang et al. [90] identified, together with Cr(VI) concentration, high conductivity of the electrolyte (i.e., improved ion transport between the biofilm and the bulk phase) as a key factor for efficient Cr(VI) reduction and power production. Increased conductivity of the solution, from 1.5 mS/cm to 10.6 mS/cm, increased the specific Cr(VI) reduction rate by about 25%, from 2.4 to about 3.0 mg/(L gVSS h). As for electrode materials, most studies tested graphite or carbon-based electrodes. Huang et al. [90] addressed the graphite electrodes' specific surface area by covering the cathode with graphite granules, to promote bacterial attachment and the electrical connection between bacteria and the electrode surface. In a tubular 2CMFC, with a cathode to anode surface ratio (C/A) of 3, at a pH of 7, and 22 °C, the graphite fiber biocathode showed a higher specific Cr(VI) reduction rate and power generation than either graphite felt or granular graphite ones [65]. Specific Cr(VI) reduction rates on the graphite fiber cathodes, 12.4-20.6 mg/gVSS/h, were about 10-100-fold higher than the values reported for biocatalyzed carbon plate or graphite granule cathodes in H-type MFCs with about the same Cr(VI) concentrations [89,90]. These results underline the coordinated role of the cathode surface area and reactor architecture in the biocathode performance. In Wu et al. [108], NaX zeolite-modified graphite felts were used as electrodes (anode and cathode) in 2CMFCs. NaX zeolite proved to enhance the hydrophilicity of the graphite felt, facilitating bacterial adhesion and electrochemical reaction and decreasing mass transport resistances. Two different fabrication methods for the NaX zeolite-modified graphite felts were tested: the first without any pre-treatment of the felt, and the second with HNO3 pre-treatment. Both methods, especially the latter, resulted in excellent performance, with significant improvement in both electricity generation and Cr(VI) reduction rates, in comparison with the plain graphite felt MFC. 
The HNO3 pre-treatment remarkably enhanced the NaX loading mass on the graphite felt by decreasing the organic residues on the graphite surface. The NaX zeolite-modified graphite felt MFC, at an initial Cr(VI) concentration of 20 mg/L, resulted in a maximum voltage of more than 410 mV, a power density of 29 mW/m2, and complete removal of Cr(VI) in 3 h, at a rate 8.2 times faster than the plain graphite felt MFC. Nanostructured graphene also reduced Cr(VI) [99]. The maximum power density in an MFC with a graphene biocathode was 5.7 times higher than that produced with a graphite felt biocathode. Electricity production, in fact, increased from 28.6 to 164 mW/m2. Furthermore, improved efficiency in Cr(VI) reduction was obtained, with 100% reduction of a 40 mg Cr(VI)/L solution within 48 h, in comparison to only 58% reduction with graphite felt. The most widely tested configuration with biocathodes is the 2CMFC. A study with a single-chamber reactor (SCMFC) also exists [93]. Organic substrate removal at the anode and cathodic chromium reduction were reflected in the open circuit potential of the system and in Cr(III) deposition on the cathode, as revealed by scanning electron microscopy and energy-dispersive X-ray spectroscopy [93]. Cr(VI) conversion efficiencies ranged from 89% to 99% depending on the initial Cr(VI) concentration (89% at 1.1 mg/L, 95% at 3 mg/L, and 99% at 10 mg/L). In the open-circuit control, the Cr(VI) conversion efficiency was lower and decreased with Cr(VI) concentration (88% at 1.1 mg/L, 63% at 3 mg/L, and 28% at 10 mg/L). This means that the SCMFC took advantage of electroactive bacteria using Cr(VI) as electron acceptor, since the other Cr(VI) reduction mechanisms, including bio-adsorption or bio-reduction by non-electroactive bacteria, were inhibited at high Cr(VI) concentrations, as the open-circuit control pointed out [93]. To increase microbial concentration and prevent premature cathode passivation due to Cr(III) precipitates during system set-up, Wu et al. [96] proposed an ex situ acclimatization method for Cr(VI)-reducing biocathodes. The electrode was initially enriched with an exoelectrogenic biofilm as an MFC anode, and the system was subsequently established using the anode as the biocathode. This method allowed the development of a mature biofilm in a shorter period of acclimatization (<19 days in the authors' experience) compared to traditional in situ methods, with Cr(VI) removal reaching 79% in 24 h, about four times higher than that observed in the MFC with an in situ acclimated cathode. The improved performance was attributed not only to the avoidance of premature formation of Cr(III) precipitates on the electrode during biofilm acclimatization, but also to the enhanced bacterial growth rates in the heterotrophic anodic environment, which lead to high microbial density and bacterial coverage of the electrode. This may limit the effects of Cr(VI) toxicity on the microorganisms at the anode/cathode inversion. Simultaneous Reduction of Cr(VI) and Other Metals Like abiotic cathodes, biocathodes are also being tested for the simultaneous reduction of multiple metals, usually present in a variety of metal-processing wastewaters. Huang et al. [94] demonstrated that bacterial communities in biocathodes could adaptively evolve to utilize solutions containing mixtures of metals. Reduction rates of Cr(VI), Cu(II), and Cd(II) in BESs with biocathodes individually acclimated to the three different metals or acclimated to increasing concentrations of a mixture of metals were compared. 
In a Cr(VI) acclimated biocathode, the Cr(VI) reduction rate decreased from about 1.21 mg/L/h to 0.49 mg/L/h in the presence of 5 mg/L Cu(II) and 5 mg/L Cd(II). Acclimatization, by gradually increasing concentrations of mixed metals, allowed complete removal of Cr(VI) at a rate similar to that originally obtained with the reactor acclimated to Cr(VI) only. Analysis of bacterial communities showed different communities on the biocathodes of the reactors acclimated to the mixed metal solutions, compared to reactors acclimated only to a single metal. The decrease in diversity of the microbial communities was likely due to the greater toxicity of the mixed metals compared to only single metals. At the phylum level, compared to Cr(VI) acclimated biofilm, the relative abundance of Proteobacteria, Actinobacteria, Firmicutes, and Tenericutes increased in biofilms acclimated to mixed metals. Synergistetes, to the contrary, were absent following acclimation to the mixed metals. Huang et al. [100] examined Cr(VI) and Cd(II) reductions on biocathode in pure culture MFC experiments with known electrochemically active bacteria, Stenotrophomonas sp. YS1, Stenotrophomonas maltophilia YS2, Serratia marcescens YS3, and Achromobacter xylosoxidans YS8. Cr(VI) reduction in the MFCs decreased in the presence of Cd(II) for all the pure cultures, with removals in a 5-h period ranging from 63% to 71%, depending on the species, compared to a range of 73-82% when Cd(II) was absent. Cr(VI) removal in biocathodes was higher than in the abiotic cathode, limited to about 39%. Cd(II) removal, on the contrary, was not impacted by the presence of Cr(VI). Cr(VI) Reduction at Bioanode Yeon et al. [57] enriched electroactive Cr(VI)-reducing bacteria in the anode compartment of MFC with the air cathode, using Cr(VI)-containing sludge from a leather tanning wastewater treatment plant fed with synthetic wastewater. At the end of the enrichment procedure, Cr(VI) removal capability of such biofilm was observed with a 93% reduction of 5 mg/L Cr(VI) and 61% of 25 mg/L Cr(VI). MFC-mediated Cr(VI) removal was attributed to either physical adsorption on the carbon felt anode (about 20%) and biological reactions including biosorption or reduction to Cr(III). The bacterial community analysis by polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) of 16S rDNA, after enrichment, pointed out the microbial consortium is composed of both Cr(VI) reducers with either electrochemical activity (such as Clostridium sp.) or not (like Acinetobacter sp.), and non-Cr(VI) reducers with/without electrochemical activity (as Actinobacteria sp). Electroactive bacteria were responsible for electricity production in the MFC. However, reducers with electrochemical activity used Cr(VI) as an electron acceptor instead of interacting with the electrode, which causes a decrease in the current. Cr(VI) reduction was likely performed also by Cr(VI) reducers without electrochemical activity, with the required protons supplied by the metabolism of fermentative bacteria. Cr(VI) Bio-Electrochemical Remediation This review of published research, targeting treatment of Cr(VI) contaminated wastewater/industrial effluents, offers the first proof of concept for the chance of bio-electrochemical Cr(VI) remediation. 
Experiences with conventional bioreduction processes, under both aerobic and anaerobic conditions, by either pure cultures or mixed consortia, report Cr(VI) bioreduction rates in the 0.1-13.5 mg/L/h range [22,62,109-111], which are fully comparable to the values (0.1-6.6 mg/L/h) observed in Cr(VI)-reducing biocathodes. Therefore, at comparable performance, bioelectrochemical Cr(VI) remediation presents the following advantages over conventional bioremediation: (i) the chance of excluding any external chemical supply to support the microorganisms' action, (ii) ease of monitoring and control of the process, and (iii) the potential recovery of reduced chromium deposited on the electrode. It should be noted that all the reviewed studies were batch laboratory tests, under conditions quite different from natural Cr(VI) contaminated water/groundwater. It would, therefore, be useful to perform evaluations under dynamic water flow conditions, as in contaminated aquifers, with real groundwater, as reported by Gregory and Lovley [49] for uranium-contaminated aquifers. For groundwater remediation, it is important to take into consideration specific properties that potentially affect BES operation [20]. Although Cr(VI) contamination likely increases the specific conductivity, the typically low specific conductivity of groundwater (well below 2 mS/cm) can negatively affect BESs by causing higher ohmic and transport losses [30]. Moreover, pH shifts due to electrochemical Cr(VI) reduction in low buffering capacity systems may directly harm the electroactive bacteria and their removal performance [59]. Another challenge for bio-electrochemical treatment of contaminated groundwater is the presence, in addition to the contaminants, of a mixture of various naturally occurring inorganic (calcium, magnesium, carbonate, nitrates and sulphates, metals) and organic chemicals (e.g., humic acids) [112]. Magnesium and calcium can produce precipitates that could passivate the cathode, with the consequent reduction of the active surface exchange area [113]. Bio-electrochemical reduction has recently been reported for nitrate [97,101,114] and sulphate [115]. So far, the study of co-contaminants with BESs is limited but, theoretically, since the reduction potentials of nitrate and sulphate are similar to the reduction potentials of several pollutants, they can act as electron competitors in the remediation process and affect the microbial community at the biocathode [116]. Wang et al. [101] evaluated the simultaneous autotrophic denitrification and reduction of Cr(VI) under different pH conditions (6, 7, and 8). The highest removal efficiencies for nitrates (97%) and Cr(VI) (73%) were obtained at a pH of 7. The stable combined reduction was mainly ascribed to Pseudomonas, Halomonas, and Thauera species. Chen and colleagues [97] used a 3 L cylindrical single-chamber reactor with a graphite felt cathode and a central carbon rod anode. The reactor was filled with sulphur granules and inoculated with anaerobic sludge. The reactor was continuously fed with 100 mg/L Cr(VI) synthetic wastewater with no organic C source (16 h hydraulic retention time) and run in galvanostatic mode (current 10-60 mA). Cr(VI) reduction in the effluent ranged between 43% and 97%, in proportion to the externally supplied current. 
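To put this current-proportionality in perspective, a simple electron balance (a rough sketch assuming the 3 L working volume, 16 h HRT, and 100 mg/L feed stated above, 3 electrons per Cr(VI), and no competing electron sinks such as H2 evolution or sulphur cycling) compares the supplied current with the stoichiometric demand of the Cr(VI) load:

```python
F = 96485.0            # C per mol e-
M_CR = 52.0            # g/mol chromium
N_E = 3                # e- per Cr(VI) -> Cr(III)

volume_l, hrt_h, feed_mg_l = 3.0, 16.0, 100.0   # reactor volume, HRT, Cr(VI) feed
flow_l_h = volume_l / hrt_h                      # ~0.19 L/h
cr_load_mol_h = feed_mg_l * flow_l_h / 1000.0 / M_CR
required_current_a = cr_load_mol_h * N_E * F / 3600.0   # A needed for 100% reduction

print(f"Stoichiometric current demand = {required_current_a * 1000:.0f} mA")
for supplied_ma in (10, 60):
    ratio = supplied_ma / (required_current_a * 1000)
    print(f"{supplied_ma} mA supplies ~{ratio:.1f}x the electrons needed for full reduction")
# Roughly 29 mA would be needed for complete reduction, so 10 mA is
# electron-limited (~0.3x) while 60 mA provides an excess (~2x), consistent
# with removal rising with the applied current.
```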
This observation, together with SO 4 2− in the effluent, highlighted both sulphur and hydrogen autotrophic bacteria were responsible for Cr(VI) reduction by using the S granules in the reactor and H 2 produced by the cathode as electron donors. A similar system, which is a single chamber cylindrical reactor operated in galvanostatic (200 mA) continuous flow mode (20 h HRT), was adopted by Wang et al. [101] for removing Cr(VI) and nitrates from synthetic wastewater. A continuous-flow BES was proposed for the simultaneous removal of p-fluoronitrobenzene (p-FNB), nitrates, and hexavalent chromium from synthetic wastewater as well [97]. In this co-contaminated system, the competition for electrons, for the carbon source and metabolism of microorganisms negatively influenced the degradation rates in comparison with the single pollutant control tests. The biodegradation of p-FNB in the co-contaminated system produced an additional organic carbon source to the microorganisms that promoted Cr(VI) and nitrates removal (nitrate and Cr(VI) removal through degradation of p-FNB). Instead, the p-FNB removal rate was controlled by electron availability (p-FNB degradation increased at currents above 40 mA) [97]. It is also interesting to consider the full cycle of sulphur in BESs. The role of the sulphur cycle during the electro-bioremediation of oil spills has been recently reviewed [43]. Sulphide produced by sulphate reducers can be oxidized to elemental sulphur on the anode surface [117]. Elemental sulphur can be back oxidized to sulphate [118] or can be reduced again to sulphide [119]. The sulphur cycle in BESs can, thus, be effective in enhancing current production (i.e., via sulphide recycling) or in supplying electron acceptors for biodegradation of reduced pollutants, such as hydrocarbons (i.e., via back oxidation of sulphur to sulphate). In this context, it is crucial to understand the possible role of Cr(VI) in environments in which co-contamination occurs. Reduction of Cr(VI) to Cr(III) can represent a sink for sulphide oxidation, which affects the performance of the process. Whether the effect is positive or negative on the anodic oxidation is still an open question. During oxidation of BTEX (benzene, toluene, ethylbenzene, xylenes) mixtures in BESs where sulphate was present in the medium (250 mg/L), the bacterial communities enriched on the anodes were dominated by microorganisms linked to the sulphur cycle. However, bacteria able to oxidize hydrocarbons and to perform direct electron transfer to the electrode (i.e., Geobacter spp.) were also detected [120]. In similar conditions, the competition between the anode and chromium for the scavenging of sulphide could facilitate the enrichment on the electrode of microorganisms not directly linked to the sulphur cycle. No study of Cr(VI) bioelectrochemical remediation in the presence of nitrate and sulphate is available. However, interferences with chromium reduction are likely to occur. Therefore, further understanding of chemical species that coexist with the target pollutant in groundwater is required. In view of in situ applications, the effects of soil particles on pollutant partitioning and bioavailability, as well as system conductance need to be assessed. Soil type and external resistance significantly affected the current and Cr(VI) removal efficiency in soil MFCs tests operated at external resistances of 100 and 1000 Ω for 16 days [121]. The current production and Cr(VI) reduction in red soil and fluvo-aquic soil MFCs were compared. 
Red soil MFC performed better in the current production, but showed a lower Cr(VI) removal than fluvo-aquic soil MFC, which implies red soil may contain more electron acceptors that competed with the Cr(VI) reduction reaction [121]. About 60% to 90% of Cr(VI) was removed in 16 days of operation of a soil MFCs, while only 32-46% was removed in the open circuit control. Experiences integrating plants, microbes, and electrochemistry revealed promising applications of BESs to shallow contaminations [122,123], since plants can rely on atmospheric CO 2 for photosynthesis and secrete root exudates that can serve as carbon sources and electron donors for microbes in the rhizosphere to promote biodegradation/biostabilization. Conclusions Ever since the discovery of microbial remediation methods for Cr(VI), many technological approaches have been developed, and some are already used in full-scale treatments. As compared to other energy-intensive technologies, bioremediation is considered a promising cost-efficient and sustainable option. Microbial electrochemical systems have been recently proposed as an alternative platform for bioremediation of Cr(VI) and other toxic chemicals. BESs, in comparison with other bioremediation techniques, are particularly appealing for in situ applications, since they do not require relevant chemical addition in the subsurface and may entail a low energy supply. Reducing energy costs and chemical soil amendments implies lower operating costs for the BESs, which is particularly valuable considering the typical timeframe required for groundwater remediation. Moreover, in the Cr(VI) contamination treatment, the deposition of Cr(III) onto the electrode may enable extraction and recovery of Cr(III). In lab-scale studies, BESs were competent in reducing at the cathode Cr(VI) in contaminated water streams with initial concentrations as low as 1 mg/L up to about 1 g/L. Even though positive results have been reported with abiotic cathode systems, biocathodes offer several advantages in Cr(VI) reduction from the perspective of groundwater remediation, such as the effectiveness in the natural waters' pH range and the exploitation of microbial catalysis, which limits cathode passivation due to Cr(III) precipitation. Nonetheless, much work is still needed to improve the Cr(VI) reduction rate at the biocathode to maximize the advantages of biocathode BESs over conventional biological processes. A lack in the long-term pilot and scale-up research suggests that more focus should be given to key factors that need to be considered for fully-practiced feasibility studies and full-scale applications. For example, the stability of processes and equipment and the area of influence that each BES module can effectively cover have to be addressed. In addition, flexible configurations to adapt to different site-specific characteristics (for example, water table depth, soil types, etc.) are required for in situ
Clerodane Diterpene Ameliorates Inflammatory Bowel Disease and Potentiates Cell Apoptosis of Colorectal Cancer Inflammatory bowel disease (IBD) is a general term for ulcerative colitis and Crohn's disease, which are chronic forms of intestinal and colorectal inflammation caused by microbial infiltration or immunocyte attack. IBD is not curable and is highly prone to progress into colorectal cancer. Finding agents to alleviate these symptoms, as well as the progression of IBD, is a critical effort. This study evaluates the anti-inflammation and anti-tumor activity of 16-hydroxycleroda-3,13-dien-15,16-olide (HCD) in in vivo and in vitro assays. The results of an IBD mouse model induced by chemical azoxymethane (AOM)/dextran sodium sulfate (DSS) treatment showed that intraperitoneal HCD administration could ameliorate the inflammatory symptoms of IBD mice. In the in vitro assay, the cytotoxic characteristics of HCD treatment and the signaling pathways involved were analyzed by MTT assay, cell cycle analysis, and Western blotting. From the cell viability determination, the IC50 of HCD in Caco-2 cells at 48 h (2.30 μM) was significantly lower than that of 5-fluorouracil (5-FU) (66.79 μM). By cell cycle and Western blotting analysis, the cell death characteristics of HCD treatment in Caco-2 showed the involvement of both extrinsic and intrinsic pathways, with intrinsic apoptosis predominantly activated via the reduction in growth factor signaling. These results demonstrate that HCD could provide a promising adjuvant and alternative medicine in combating colorectal cancer and IBD. Introduction Colorectal cancer (CRC) has a significant health impact worldwide, and is a common cancer type in the United States. In addition, CRC is the third leading cause of cancer deaths for new cases, and of the Bax/Bcl-2 ratio, which is directly expressed in cells undergoing apoptosis [19]. Recently, our studies have demonstrated the therapeutic potential of HCD against various types of cancers [19]. Nevertheless, the therapeutic potency of HCD in treating GI inflammation, e.g., IBD and colorectal cancer, has not been clarified. The aim of this study was to determine the dampening effect of HCD on IBD and its anticancer activity. In this work, two platforms, an azoxymethane (AOM)/dextran sodium sulfate (DSS)-induced colitis IBD model (in vivo) and the colorectal carcinoma cell line Caco-2 (in vitro), were employed to evaluate the treatment efficacy of HCD. Furthermore, the detailed anti-colorectal cancer mechanisms of HCD were also investigated. Study Design The main focus of this study was to evaluate the potential of HCD against IBD and colon cancer. The experimental design was divided into two parts: IBD induction and colon cancer with HCD treatment. In the IBD model, mice were induced by azoxymethane (AOM)/dextran sodium sulfate (DSS). HCD was intraperitoneally (i.p.) injected, and GI inflammation was subsequently observed. In the colon cancer part, Caco-2 cells were used as the main platform for the determination of cytotoxicity and cell death characteristics, and the underlying mechanisms of colon cancer cytotoxicity in HCD-treated cells, including cell cycle signaling, growth factor signaling, and inflammatory signaling, were investigated. Chemicals HCD was kindly provided by Professor Yi-Chen Chia (Department of Food Science and Technology, Tajen University, Taiwan). 
The isolation and identification of HCD have been described in the previous literature [20]. The reagents and media for cell culture were purchased from Thermo-Fisher (Waltham, MA, United States). General chemicals were obtained from Sigma-Aldrich (Merck KGaA, Darmstadt, Germany). The antibodies used in this study are listed in Table 1; these were purchased from Genetex International (Hsinchu, Taiwan), Cell Signaling Technology (Danvers, MA, United States), and Merck Millipore (Burlington, MA, United States). Eight- to ten-week-old C57BL/6 male mice were purchased from BioLASCO (Taipei, Taiwan) and kept in controlled environmental conditions (22 ± 2 °C, 55% ± 10% humidity, 12/12 h light/dark cycle). The animals were fed a commercial diet and water ad libitum. Mouse experimental protocols followed the "Guide for the Care and Use of Laboratory Animals" of National Dong-Hwa University and were approved by the National Dong-Hwa University Animal Ethics Committee (approval number 001/2016). Inflammatory Bowel Disease Model Induction and HCD Treatment The in vivo IBD model was induced by AOM injection and DSS administration, following a previous study with slight modification [21]. At day 0, mice were weighed and i.p. injected with 10 mg/kg B. wt. of AOM. The next day, the mice were freely supplied with 2% DSS solution for an additional seven days, followed by regular water for a further seven days. This induction cycle was repeated once. The induction of IBD was verified by checking the histological change of the colon after mice were sacrificed at day 35. Once IBD induction was confirmed, 15 mg/kg B. wt. of 5-FU, as well as 1.6 and 6.4 mg/kg B. wt. of HCD, were i.p. injected once every three days until day 65. Mice were sacrificed by CO2 anesthesia, and their colons were collected for sectioning, as described in the literature [16]. Cell Culture The human colorectal carcinoma cell lines Caco-2 and HT-29 were obtained from the American Type Culture Collection (ATCC, Manassas, VA, United States). Caco-2 and HT-29 cells were cultured with Dulbecco's modified Eagle medium (DMEM) or RPMI-1640, respectively, with 20% fetal bovine serum (FBS) and 1% penicillin/streptomycin (PS) supplementation. The environmental conditions were 37 °C and 5% CO2, and the culture medium was renewed once every two days. Once cells reached 80% confluence, they were detached with 0.25% trypsin/EDTA for further experiments. All experiments were carried out within 20 passages to ensure uniformity and reproducibility. Cell Cycle Analysis The protocol for cell cycle analysis followed a previous study, with slight modification [16]. In brief, 7 × 10^4 Caco-2 cells per well were seeded into 12-well plates. Cells were incubated with 0.5, 2.0, and 5.0 µM of HCD for 36 and 48 h, respectively. Treated cells were fixed with cold 70% ethanol and stained with propidium iodide (PI) at 37 °C for 1 h. The fluorescent intensity of PI within cells was detected by a Cytomics FC 500 flow cytometer (Beckman-Coulter, Brea, CA, United States). Data from 10^4 cells per sample were collected, and the cell cycle distribution was analyzed. Western Blotting A total of 2.5 × 10^5 Caco-2 cells/well were seeded into a six-well plate and incubated until 80% confluence. Cells were treated with 0.5, 1.0, and 2.0 µM of HCD for 24 and 36 h, respectively. After incubation, cells were homogenized, and the desired protein levels were analyzed according to the protocol described in a previous study [16]. 
The chemiluminometric intensity of each protein was normalized with GAPDH's chemiluminometric intensity. The protein level change was represented by the ratio of normalized chemiluminometric intensity between treated and untreated groups. Statistical Analysis Data were expressed as mean ± SD from at least three independent experiments. The results were analyzed by one-way analysis of variance (ANOVA) with the Dunnett test. The significant difference (p < 0.05) was labelled "*" on the histogram produced by GraphPad Prism Ver 7.0 (GraphPad Software, La Jolla, CA, United States). The IC 50 of the dose-dependent cytotoxicity was determined using non-linear regression embedded in GraphPad Prism, and the chosen model was the highest R 2 value. Histological Change of Intestine Tissue after AOM/DSS Induction and HCD Treatment To generate the IBD mouse model, mice were chemically induced by AOM/DSS. After AOM/DSS induction, enlarged lymph nodes, lymphocyte infiltration, irregular and shorter villi, and thicker muscle mucous and muscle layers were observed in the intestines of mice, which consequently confirmed that mice were successfully induced with IBD after the AOM/DSS given ( Figures 1B and 2B), compared to the control without induction ( Figures 1A and 2A). In the next experiment, IBD-induced mice were employed to evaluate the amelioration efficacy of 5-FU and HCD on the histopathological signs of IBD. The tissue section showed that the lymphocytes were less or not infiltrated into the lamina propria layer after treatment with 5-FU and HCD ( Figure 1C-E). The arranged villi in neat rows were found in an induced group as a positive control (AOM/DSS alone), and this feature was not observed in the 5-FU and HCD-treated groups. Additionally, the lymph nodes were reduced after treating with 5-FU and HCD ( Figure 2C-E). These histological changes elicited that HCD could reduce IBD symptoms. The next experiments were performed to evaluate the efficacy of HCD on colorectal cancer cells. Cytotoxicity Effects of HCD and 5-Fluorouracil on Colorectal Cancer Cells To check cytotoxicity of HCD against colorectal cancer cells, cells were treated with various concentrations of HCD and 5-FU (conventional chemotherapeutic agent, as a positive control), respectively. When compared to the untreated control (0 µM), cell viability in HCD-treated groups was significantly decreased ( Figure 3A). The IC 50 values of HCD in Caco-2 cells were 4.10 µM (36 h) and 2.32 µM (48 h), which were lower in 5-FU (100 µM for 36 h; 66.79 µM for 48 h) ( Figure 3B). To further validate the potential of HCD for colon cancer treatment, another colon cancer cell, HT-29, was treated with various concentrations of HCD. The results showed that a dose-dependent decrease of cell viability was also observed in HCD-treated HT-29 cells. The IC 50 values of HCD against HT-29 were 10.18 µM (36 h) and 1.39 µM (48 h), and were higher than those of Caco-2 ( Figure 3C). According to these results, we confirmed that the cytotoxicity of HCD in colorectal cancer cells (Caco-2 and HT-29) had a higher potential than 5-FU. Therefore, the subsequent experiments were focused on the investigation of underlying mechanisms in HCD against Caco-2 cells. Characteristics of HCD-Induced Cell Death To identify features of HCD-induced cell death, Caco-2 cells were treated with various concentrations of HCD, and intracellular DNA content was checked using flow cytometry coupling with PI-staining. 
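To illustrate how such DNA-content histograms are typically turned into cell-cycle fractions, the sketch below gates per-cell PI intensities with simple thresholds. The channel boundaries, the synthetic data, and the threshold-based gating itself are illustrative assumptions; the actual analyses in this study rely on the cytometer's dedicated software and model-based fitting.

```python
import numpy as np

def cell_cycle_fractions(pi_intensity, g1_peak, g2_peak):
    """Rough threshold gating of PI (DNA content) intensities into
    sub-G1 / G0-G1 / S / G2-M fractions.  g1_peak and g2_peak are the
    modal intensities of the 2N and 4N populations (assumed known)."""
    pi = np.asarray(pi_intensity, dtype=float)
    sub_g1 = pi < 0.80 * g1_peak                 # hypodiploid (apoptotic) events
    g0_g1 = (pi >= 0.80 * g1_peak) & (pi <= 1.15 * g1_peak)
    g2_m = pi >= 0.85 * g2_peak
    s = ~(sub_g1 | g0_g1 | g2_m)                 # everything between the peaks
    total = pi.size
    return {name: mask.sum() / total
            for name, mask in [("sub-G1", sub_g1), ("G0/G1", g0_g1),
                               ("S", s), ("G2/M", g2_m)]}

# Synthetic example: 10,000 events with a 2N peak at 200 and a 4N peak at 400.
rng = np.random.default_rng(0)
events = np.concatenate([
    rng.normal(200, 12, 6000),    # G0/G1
    rng.uniform(235, 335, 1500),  # S
    rng.normal(400, 20, 1800),    # G2/M
    rng.normal(120, 25, 700),     # sub-G1 (apoptotic)
])
print(cell_cycle_fractions(events, g1_peak=200, g2_peak=400))
```

An increase of the sub-G1 fraction with dose, as reported below, is the feature read out as an apoptotic population.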
After 36 h and 48 h of HCD treatment, the sub-G 1 and G 0 /G 1 cell cycle exhibited significant change in a dose-dependent fashion ( Figure 4A), which could be caused by increasing the sub-G 1 ratio, referred to as the apoptotic population. Interestingly, the significant decrease of the G 2 /M phase was found only at 5 µM HCD ( Figure 4B). Concurrently, analyzing apoptotic markers, the increase of cleaved caspase-3, -8, -9, and PARP specified that apoptotic cell death appeared at 24 h and 36 h of treatments, respectively ( Figure 5). These results clearly illustrate that HCD-caused cell death in Caco-2 cells was dominate in apoptosis. Moreover, by observation of the dynamics of apoptotic markers, intrinsic apoptotic inducer caspase-3 expression was enhanced at the first 24 h, and then cleaved at the following 12 h ( Figure 5A). Conversely, Bcl-2, the apoptotic inhibitor, had no significant change until 36 h of treatment ( Figure 5B). Additionally, the extrinsic apoptotic markers, cleaved caspase-8 and -9, also had significantly higher expression at either 24 h or 36 h of treatment ( Figure 5). The data confirmed that both intrinsic and extrinsic apoptotic signaling pathways were involved in HCD-induced apoptosis, and intrinsic apoptosis might be prior to the extrinsic event. The following experiments would be carried out to determine whether the dynamic change of growth signaling pathway is affected by HCD. The protein levels were represented as the ratio of GAPDH-normalized chemiluminometric intensity between the untreated control (0 µM) and treatment. Data (n = 3) were represented as mean ± SD. * p < 0.05 as compared with the untreated control (0 µM). Growth Signal Reduced by HCD Triggered Caco-2 Cell Apoptosis Notably, the intrinsic apoptotic pathway is regulated by the balance of growth signal and anti-growth signal. The subsequent experiments were applied to measure the signals of PI3K/Akt for growth, p53/p21 for anti-growth, and cyclin D1/PCNA for cell division after HCD treatments. Western blot data indicated that the protein levels of Akt and cyclin D1 were significantly decreased in 2 µM of HCD treatment after the first 24 h, and a significant decrease of the cyclin D1 level at 1 µM of HCD treatment was observed ( Figure 6A). Subsequently, the protein levels of p53 and p21 were up-regulated and PCNA was down-regulated at 36 h ( Figure 6B). This result hinted that HCD-induced apoptosis could be the reducing result of growth signaling via Akt mediation. The downregulation of growth signaling caused the elevation of p53/p21 protein expression, and consequently turned down cyclin D1 expression, which was the key for overcoming the G1/S checkpoint. However, upstream of Akt, the protein levels of PI3K were not associated with the mediation of Akt (Figure 6), which meant that Akt might be reduced by other signaling pathways. The next section was performed to test the characteristics of the inflammatory pathway within HCD-induced apoptosis. Inflammation-Suppressing Effect of HCD in Caco-2 Cells From the literature, inflammation has been proven as a promoting cause of colorectal cancer [22], and inflammatory signaling could crosstalk with Wnt/β-catenin, as well as being involved in colorectal cancer growth [23]. Thus, the subsequent experiment examined the regulation of inflammatory-related proteins, including COX-2, NF-κB, and β-catenin. At 24 h of HCD treatment, the p50 subunit of NF-κB showed a significantly decreasing manner ( Figure 7A). 
The decrease of p50 was diminished, whereas β-catenin was up-regulated at 36 h ( Figure 7B). This result refutes that NF-κB and β-catenin are involved in HCD-mediated apoptosis; however, the underlying mechanisms were not fully interpreted. Of note, inflammatory signaling was proven by the significant characteristics of IBD pathological progression, and then the subsequent experiment will be employed to evaluate whether oral gavage HCD may ameliorate IBD symptom in an AOM/DSS-induced mouse model by the mediation of inflammatory signals. Discussion In chemical-induced IBD and colitis-associated cancer research, this study broadened the application of HCD in medical use, and could provide a new approach for IBD treatments. These experimental results conferred the anticancer effect of HCD against colon cancer, which led to intrinsic and extrinsic regulation for apoptotic cell death by down-regulating Akt-mediated growth signaling. Moreover, the anti-inflammation fashion of HCD in colon cancer might be one reason for the down-regulation of Akt, and is postulated for the IBD curing effect in vivo. Usually, the conventional therapies for IBD can be grouped into the two following approaches: anti-inflammation, e.g., corticosteroids, mesalazine, and cyclosporine; and anti-microbial, including ornidazole and rifaximin [24]. Ordinarily, these drugs are useful for treating mild to moderate IBD. However, side effects, such as drug resistance to antibiotics and opportunistic infection, are still of concern [25]. Likewise, about one-third of chronic IBD patients have failed responses to corticosteroid medication, which are valid for acute IBD [25]. Hence, natural components are potentially considered to be the new strategies or approaches for curing IBD. Five types of phenylpropanoids and four types of lignan glycosides-phytochemicals originated from a warm-season perennial legume, Lespedeza cuneata-were synthesized, and these compounds could ameliorate UC [26]. Curcumin, the primary active compound of turmeric, alleviates CD and UC by inhibiting NF-κB activity [27]. Macrophage infiltration into the intestines could also be impaired by α-eleostearic acid isolated from Momordica charantia [28]. After AOM/DSS induction, polypus and enlargement lymph nodes were found in the colon by histopathological examinations in our study (data not shown). Additionally, the observations of irregular villi arrangement and lymphocyte infiltration into the lamina propia layer in this study are typical characteristics of IBD. The IBD mice treated with HCD had neat rows of villi and lymph nodes that were not enlarged or infiltrated into lamina propia layer, which indicates that HCD could relieve the symptoms of IBD. In our previous study, HCD was demonstrated to be a non-toxic agent to normal cells [16]. When compared to 5-FU, the cytotoxicity of HCD against colorectal cancer was higher than 5-FU (4.10 vs. 100 µM in IC 50 at 36 h treatment), suggesting that HCD had higher efficacy and more potent when applied to colon cancer treatments. In the cell cycle analysis of Caco-2 cells treated with HCD, the ratio of the sub-G 1 phase was significantly increased, and this increase was associated with an increase of HCD concentrations. This result was accompanied by the analysis of pro-apoptotic markers, such as PARP; caspase-3, -8, and -9; and Bcl-2. 
During observation from 24 h to 36 h of HCD treatment, the protein levels of pro-apoptotic molecules changed, and the underlying signal transduction could be traced through caspase-8, caspase-3, Bcl-2, and PARP. Again, caspase-3 expression increased within the first 24 h, indicating that the intrinsic apoptotic pathway was already activated at this time. These results imply that the intrinsic and extrinsic apoptotic signaling pathways were activated almost simultaneously in Caco-2 cells. This is the first evidence that HCD can induce apoptosis in colorectal cancer cells. Extrinsic and intrinsic apoptotic signaling pathways operate through different signaling molecules [29]. In general, the extrinsic apoptotic pathway starts with the activation of a death receptor (TRAIL receptor or TNF receptor) and ends with pro-caspase-3 cleavage via caspase-8 activation [29,30]. In contrast, intrinsic apoptotic signaling is mediated by Bcl-2, which controls loss of mitochondrial membrane potential, or by depletion of growth signaling; this leads to cytochrome c release, pro-caspase-9 cleavage, and consequently caspase-3 activation [29,31,32]. Therefore, by observing the altered levels of cleaved caspase-8, caspase-9, and Bcl-2, the type of chemically induced apoptosis can be putatively assigned. Previous studies of HCD-induced cancer cell death focused on autophagic cell death and intrinsic apoptosis [15,16,33,34]. To the best of our knowledge, this study is the first to show the involvement of HCD-induced extrinsic apoptosis in colorectal cancer. The underlying targets of HCD in extrinsic apoptosis still need to be explored further, because this work focused only on the alteration of caspase-8. HCD-mediated intrinsic apoptosis was associated with down-regulation of β-catenin/NF-κB/Akt and activation of p53/p21 expression. Previously, HCD was shown to potentiate apoptosis by blocking the PI3K/Akt signaling pathway, promoting Aurora B degradation, and modifying histone-modifying enzymes in leukemia cells [33,34]. Remarkably, cytotoxicity of HCD toward oral squamous cell carcinoma (OSCC) and glioma cells was also demonstrated, occurring in an autophagic manner by Western blot analysis and without an increase in sub-G1 [15,16]. Moreover, HCD activated autophagy in lung cancer cells (A549) by reducing the protein levels of mTOR, PI3K/p85, Akt, and Beclin 1, and suppressed apoptosis by lessening cleaved-PARP formation (Chiu et al., 2019, unpublished data). One possibility is therefore that the choice between apoptosis and autophagy is independent of cancer cell type (adenoma, carcinoma, glioma, or neuroblastoma) and p53 status (wild-type or mutant). The cell death features induced by HCD in Caco-2 cells and other cancer cells imply that the critical point of the apoptosis/autophagy switch remains to be clarified. The known roles of p53 include cell cycle regulation, apoptosis induction, DNA repair activation, and improvement of aerobic respiration [35]. In the analysis of apoptosis-related proteins, the p53 level increased at 36 h after HCD treatment. In intestinal and colon tissues affected by IBD, p53 is overexpressed upon TNF-α induction and subsequently triggers cell apoptosis [36]. Interestingly, the induction of TNF-α was observed only with mutant p53, not with wild-type p53 [37]. The TP53 mutation is found in over 50% of colitis-induced colorectal cancers and colon neoplasias.
This mutation is believed to be the first step of colitis-associated carcinogenesis, in contrast to sporadic CRC [38]. Of note, p53 in Caco-2 cells is an aberrant type [39]. HCD altered p53 protein expression in CRC cells, which suggests that HCD might also affect p53 protein levels in IBD tissues. Therefore, the effects of HCD on intestinal epithelial cells carrying wild-type p53 might differ from those observed in Caco-2 cells. In addition to its apoptosis-inducing activity, HCD also inhibits Wnt/β-catenin in an anti-inflammatory manner. Wnt/β-catenin dysregulation has been reported as a key factor in CRC initiation [40]. The Wnt/β-catenin signaling pathway acts as a central regulator of intestinal homeostasis and epithelial stem cell proliferation. The Wnt ligand binds to the Frizzled/LRP receptor and activates a signaling cascade that results in the stabilization of β-catenin. Stabilized β-catenin translocates into the nucleus and initiates expression of MYC and its downstream target, CCND1 [41]. Dysregulated activation of the Wnt/β-catenin signaling pathway can cause uncontrolled cell proliferation and, ultimately, colon cell carcinogenesis. The Wnt/β-catenin signaling pathway has therefore become a target for CRC prevention, prognosis, and diagnosis. Numerous studies have sought new compounds or herbal medicines for treating CRC. Fermented culture broth of Antrodia camphorata, hydnocarpin (a natural lignan), and bark extract of Mesua ferrea have been shown to inhibit Wnt/β-catenin activity in colon cancer cells [42][43][44]. In this study, the reduction of β-catenin levels after HCD treatment indicated inhibition of the Wnt/β-catenin signaling pathway, which suggests a possibility of reducing colon cancer initiation.

Conclusions

This study demonstrated anti-colorectal cancer activity of HCD through modification of intrinsic growth signaling and inflammatory modulators, which subsequently triggered both intrinsic and extrinsic signals to induce cell apoptosis. Furthermore, the inflammatory symptoms of AOM/DSS-induced enteritis in the in vivo mouse model were also ameliorated by HCD treatment. This is the first evidence of the medicinal efficacy of HCD in IBD mice and against colon cancer, suggesting that HCD could provide an alternative and complementary regimen for colon cancer and IBD treatment.

Funding: This study was supported by the Ministry of Science and Technology (Grant number 104-2320-B-259-001-MY3).

Conflicts of Interest: The authors declare no conflict of interest.
Susceptibility of autoimmune diseases in three polymorphisms of infection-associated gene IRAK1

1 Nanfang Hospital, Southern Medical University, Guangzhou, China; 2 School of Traditional Chinese Medicine, Southern Medical University, Guangzhou, China; 3 The First College of Clinical Medicine, Shandong University of Traditional Chinese Medicine, Jinan, China; 4 Department of Medicine, Virginia Commonwealth University and Massey Cancer Center, Richmond, Virginia, USA; 5 Department of Intensive Care Unit, Chinese PLA General Hospital, Beijing, China; 6 School of Biotechnology, Southern Medical University, Guangzhou, China

Introduction

Autoimmune diseases (ADs) are a group of complex disorders initiated by the loss of tolerance to self-antigen, which results in immune-mediated tissue destruction and chronic disabilities [1]. ADs comprise more than 100 diseases and syndromes, and the annual estimated treatment costs for ADs are more than US$100 billion [2]. As a group of complex diseases, the precise molecular mechanism of ADs is still not clear. However, the interaction of genes and environment is widely recognized as one of the main causes of ADs [3,4]. The sustained pathology of ADs is directly caused by a specific self-reactive immune response, comprising innate and adaptive immune responses, which can be triggered by infection with certain pathogenic microorganisms [5][6][7]. Interleukin-1 (IL-1) receptor-associated kinases (IRAKs) are key mediators in the signaling pathways of the innate immune response, especially in the Toll-like receptor (TLR)/IL-1 receptor (IL-1R) pathway. There are four IRAKs: IRAK1, IRAK2, IRAK3, and IRAK4 [8]. IRAK1 was the first member identified in the IRAK family. It can be phosphorylated and, after TLR/IL-1R stimulation, induce a series of downstream signaling events [8,9]. The phosphorylation of IRAK1 is associated with the activation of NF-κB in inflammatory disease, and NF-κB activity can be inhibited using an IRAK1 inhibitor, resulting in the suppression of inflammatory conditions [10,11]. IRAK1 has been found to play an important role both in patients with ADs and in an autoimmune animal model [11][12][13][14][15]. Therefore, IRAK1 is recognized as a risk gene for ADs. Single nucleotide polymorphisms (SNPs), or mutations, may alter gene expression and influence susceptibility to some diseases [16][17][18][19]. Several groups have studied the relationship between ADs risk and three polymorphisms of IRAK1: rs3027898 C>A, rs1059702 T>C, and rs1059703 T>C. Most of these studies were conducted in developing countries, so it is particularly important for these countries to clarify the role of IRAK1 in ADs [14,15,[20][21][22][23][24][25][26][27]. However, the results of these studies remain conflicting. Therefore, we conducted this meta-analysis, according to the procedure published by the MOOSE group [28], to better define the association between these three SNPs and ADs risk.
Publication search

A systematic search was performed in the PubMed, OvidSP, and Chinese National Knowledge Infrastructure (CNKI) databases, covering all papers published prior to August 2014. The search strategy was as follows: (autoimmune OR autoimmune disease OR autoimmunity) AND (polymorphism OR polymorphisms OR variation OR variations OR mutation OR mutations OR variant OR variants) AND (IRAK1 OR rs3027898 OR rs1059702 OR rs1059703). The references in the retrieved studies were also screened to find additional publications on this topic. Articles had to meet the following inclusion criteria: case-control design; evaluation of IRAK1 polymorphisms (rs3027898, rs1059702, or rs1059703) and risk of ADs; and available and usable genotype frequency data.

Data extraction

Two authors independently extracted the data from eligible studies. Discrepancies in the extracted data were checked, and remaining disagreements were discussed and adjudicated. The following information was extracted: first author, publication year, diseases, country, ethnicity, genotyping methods, number of cases and controls, gender distribution of cases and controls, number of genotypes and alleles, Hardy-Weinberg equilibrium (HWE) in control subjects, and frequency of the major allele in controls. Ethnicities were categorized as Caucasian, Asian, African, and Latin-American. Study quality was judged according to criteria modified from previous publications [29][30][31] (see Supplementary "Table S1 Scale for methodological quality assessment").

Statistical analysis

Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated as a measure of the association between the three SNPs (rs3027898, rs1059702, and rs1059703) and ADs risk. An allele model and other genetic models (heterozygote, homozygote, dominant, and recessive models) were used. In addition to comparisons among all subjects, stratified comparisons were performed according to ethnicity and disease. Between-study heterogeneity was measured by Cochran's (Q) and Higgins's (I²) tests. If heterogeneity was significant (p < 0.05), the random-effects model was used to estimate the pooled OR; otherwise, the fixed-effects model was used. Logistic meta-regression analysis was carried out, if there was significant heterogeneity, to explore potential sources of heterogeneity. The examined characteristics included publication year, country, genotyping method, number of alleles and genotypes, number of females and males in cases, and the frequency of the major allele of each SNP in controls. HWE was examined using the chi-square test with significance set at p < 0.05. Sensitivity analysis was performed to evaluate the effect of each study on the combined ORs by omitting each study in turn. Potential publication bias was assessed using funnel plots and Begg's test; an asymmetric plot or a p value of less than 0.05 was taken to indicate significant bias. All statistical analyses were performed using STATA.

Results

Study characteristics

There were 483 articles matching the search strategy, and an additional article [20] was found by scanning the references of the original papers. After a step-by-step screening of the titles, abstracts, and full texts of the articles, as shown in Figure 1, 10 articles were appropriate for this meta-analysis, comprising 11 studies of rs3027898, 9 studies of rs1059702, and 7 studies of rs1059703.
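To make the pooling strategy described in the Statistical analysis section concrete, the following Python sketch computes a fixed-effect (inverse-variance) and a DerSimonian-Laird random-effects pooled OR, together with Cochran's Q and Higgins's I², from per-study 2x2 allele counts. The counts are hypothetical and the code only illustrates the workflow; it is not a reproduction of the STATA analysis the authors ran.

```python
import math

def pooled_or(tables):
    """tables: per-study 2x2 counts (a, b, c, d) =
    (risk-allele cases, other-allele cases, risk-allele controls, other-allele controls)."""
    log_or, var = [], []
    for a, b, c, d in tables:
        log_or.append(math.log((a * d) / (b * c)))          # per-study log odds ratio
        var.append(1.0/a + 1.0/b + 1.0/c + 1.0/d)           # Woolf variance of the log OR
    w = [1.0 / v for v in var]                              # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_or))   # Cochran's Q
    df = len(tables) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0            # Higgins's I^2 (%)
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi * wi for wi in w) / sum(w)))  # DerSimonian-Laird
    wr = [1.0 / (v + tau2) for v in var]                    # random-effects weights
    rand = sum(wi * yi for wi, yi in zip(wr, log_or)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    ci = (math.exp(rand - 1.96 * se), math.exp(rand + 1.96 * se))
    return math.exp(fixed), math.exp(rand), ci, i2

# Three hypothetical studies of a risk allele (all counts invented for illustration)
print(pooled_or([(120, 80, 100, 110), (90, 60, 85, 95), (200, 150, 180, 210)]))
```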
Across the 10 included articles, six genotyping methods were used. Four ethnic groups were represented: Caucasian, Asian, African, and Latin-American. Four studies were not in HWE in the control groups [14,15,23,24]. Of the 484 articles screened, 34 were found to be unrelated to ADs and 145 unrelated to IRAK1 on the basis of the titles. After that, 245 articles were recognized as reviews, 32 were found not to involve human subjects, and 14 were duplicate papers on review of the abstracts. The full texts of the remaining 14 articles were carefully reviewed; 1 article did not include usable data, and 3 articles did not concern rs3027898, rs1059702, or rs1059703. Ultimately, 10 articles remained for this meta-analysis, comprising 11 case-control studies for rs3027898, 9 studies for rs1059702, and 7 studies for rs1059703. One article [25] did not provide enough data to calculate HWE for four studies, but the p value for HWE was not less than 0.001 according to the authors' explanation. The detailed characteristics are shown in Table 1.

Association between IRAK1 rs1059703 T>C polymorphism and ADs risk

There was no significantly increased risk in the overall comparison for any genetic model of the association between the rs1059703 T>C polymorphism and ADs risk. However, increased risk was found in subgroup analyses based on ethnicity or disease. In Caucasians, the C allele conferred a significantly increased risk of ADs (C versus T: OR = 1.35, 95% CI = 1.24-1.47, p = 0.000) (Table 4 and Figure S3A). In Asians, the TT genotype was associated with increased ADs risk in the heterozygote model (TC versus TT: OR = 0.77, 95% CI = 0.64-0.92, p = 0.005) and the dominant model (TC+CC versus TT: OR = 0.77, 95% CI = 0.64-0.91, p = 0.003) (Table 4 and Figures S3C, S3D). In the SLE subgroup, the C allele showed increased disease risk in the allele model (C versus T: OR = 1.47, 95% CI = 1.33-1.61, p = 0.000) (Table 4 and Figure S3E).

Evaluation of heterogeneity

Heterogeneity among studies was evident in the overall comparisons (rs3027898: I² = 83.6%, Tau² = 0.033, p = 0.000; rs1059702: I² = 81.4%, Tau² = 0.020, p = 0.000; rs1059703: I² = 89.6%, Tau² = 0.059, p = 0.000). Meta-regression analyses were conducted to further explore sources of heterogeneity. Several factors were tested as potential sources, including publication year, country, genotyping method, number of genotypes and alleles, number of females and males in cases, and the frequency of the major allele of each SNP in controls. The heterogeneity could not be explained by any of these potential sources in the meta-regression analysis.

Sensitivity and publication bias analysis

Sensitivity analysis, testing the influence of each single study on the overall meta-analysis by deleting one study at a time, showed no significant change in the pooled estimates, indicating that the results were statistically reliable. No evidence of publication bias was found in the current meta-analysis, as assessed by funnel plots, Egger's test (p = 0.986 for rs3027898; p = 0.875 for rs1059702; p = 0.596 for rs1059703), and Begg's test (p = 0.533 for rs3027898; p = 0.917 for rs1059702; p = 0.230 for rs1059703) (Figure 3).
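One of the study-level checks reported above is deviation from Hardy-Weinberg equilibrium in controls. A minimal sketch of that check for a biallelic SNP is given below (chi-square test with one degree of freedom); the genotype counts are hypothetical and the snippet is for illustration only, not the tool used by the authors.

```python
import math

def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Hardy-Weinberg chi-square test (1 df) from control genotype counts."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)            # major-allele frequency
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    chi2 = sum((o - e) ** 2 / e for o, e in zip((n_AA, n_Aa, n_aa), expected))
    p_value = math.erfc(math.sqrt(chi2 / 2))   # survival function of chi-square with 1 df
    return chi2, p_value

# Hypothetical control genotype counts for one SNP
print(hwe_chi_square(420, 380, 95))            # small chi2, p ~ 0.5 -> consistent with HWE
```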
Discussion

IRAK1 is a protein kinase involved in the Toll/IL-1 receptor (TIR) pathway [32], which plays an important role in the activation of NF-κB. By enhancing the communication of TLRs with TNF receptor-associated factor (TRAF) 6, or by engaging the MyD88 signaling complex, IRAK1 can trigger NF-κB and subsequently increase the expression of several inflammatory cytokines, such as TNF-α and IL-8 [33][34][35]. Several animal experiments showed that the expression level of IL-17 was decreased and inflammatory responses were dampened by depletion of IRAK1 [36], and IRAK1-deficient (IRAK1-/-) mice were protected from experimental autoimmune encephalomyelitis (EAE) [13]. Moreover, IRAK1 was found to be correlated with ADs risk in several studies of patient cohorts [12,20,22]. Three SNPs of IRAK1 have been found to be related to ADs risk: rs3027898 for RA [15,22], rs1059702 for SSc [23,24], and rs1059703 for SLE [14,25]. However, the results remain in conflict. Therefore, we conducted this meta-analysis to better understand whether these three SNPs contribute to susceptibility to ADs. In this meta-analysis, we screened 10 manuscripts and pooled the corresponding data, including 10,705 cases (9,865 controls) for rs3027898, 15,005 cases (14,997 controls) for rs1059702, and 8,115 cases (6,815 controls) for rs1059703. We found that all three SNPs were related to ADs risk. For rs3027898, the C allele or CC genotype was correlated with increased disease risk in most of the genetic models, including the allele, heterozygote, and dominant models, both in the pooled comparison and in the Caucasian subgroup. Moreover, increased disease risk for the CC genotype was also found in the homozygote model in the Caucasian subgroup. However, in the Asian subgroup, increased ADs risk for the CC genotype was found only in the heterozygote model. In the stratified analyses based on different types of ADs, increased susceptibility for the CC genotype was found in the heterozygote model in the RA subgroup. Because of data limitations in the SLE subgroup, we could only compare the association in the allele model, and we indeed found an increased risk associated with the C allele. For rs1059702, in both pooled and stratified analyses, increased disease risk for the T allele or TT genotype was found in the allele, homozygote, and dominant models. In the heterozygote model, increased disease risk for the TT genotype was found both in the pooled analysis and in the Asian subgroup. In the recessive model, compared with the CC genotype, the TT+TC genotype was associated with increased disease risk in the Asian and SSc subgroups. For rs1059703, there were fewer associations than for rs3027898 or rs1059702. No significant relationship between ADs risk and rs1059703 was found in the pooled analyses in any genetic model. However, some associations emerged in the stratified analyses. In the allele model, increased disease risk with the C allele was found in the Caucasian and SLE subgroups. In contrast, increased ADs risk was associated with the TT genotype in the Asian subgroup, in both the heterozygote and dominant models. For the RA subgroup, no association was found in any genetic model.
There are some limitations to this study. First, although 10 articles were included, the number of studies for some stratified analyses was limited. For example, there were only two studies in the Asian subgroup and two in the SSc subgroup in the analyses for rs1059702, except in the allele model. Second, there was obvious heterogeneity between groups for some genetic models. Although meta-regression and sensitivity analyses were conducted and no potential source of heterogeneity was found, the results must still be treated with caution. Third, only three SNPs in IRAK1 were included in this study. However, there are more SNPs in IRAK1 and more genes in the TIR signaling pathway that could also contribute to susceptibility to ADs. The effects of these SNPs and genes, and the interactions or networks among these genetic loci, should be studied in the future. Furthermore, studies investigating gene-environment interactions will also help to clarify the role of these SNPs in the pathogenesis of ADs [37][38][39][40]. Finally, since ADs comprise diverse diseases, the relationship of these SNPs with other types of ADs, such as inflammatory bowel disease and seronegative spondyloarthropathies, should be investigated in the future.

Conclusions

The present study demonstrated that three SNPs (rs3027898, rs1059702, and rs1059703) in IRAK1 confer risk of ADs. Moreover, the associations were found only within specific genetic models, ethnicities, or disease types, not across all cohorts or ADs.

Figure 1. Flowchart for identification of studies included in the meta-analysis.
Figure 2. Forest plots of the overall analysis of ADs risk associated with IRAK1.
Figure 3. Publication bias for the IRAK1 polymorphisms and ADs risk.
Table 1. Characteristics of published studies of rs3027898, rs1059702.
Table 2. Stratified analysis of the association between ADs risk and rs3027898.
Table 3. Stratified analysis of the association between ADs risk and rs1059702.
Table 4. Stratified analysis of the association between ADs risk and rs1059703.
Evolution of an Agriculture-Associated Disease Causing Campylobacter coli Clade: Evidence from National Surveillance Data in Scotland

The common zoonotic pathogen Campylobacter coli is an important cause of bacterial gastroenteritis worldwide, but its evolution is incompletely understood. Using multilocus sequence type (MLST) data for 7 housekeeping genes from a national survey of Campylobacter in Scotland (2005/6), and a combined population genetic-phylogenetic approach, we investigated the evolutionary history of C. coli. Genealogical reconstruction of isolates from clinical infection, farm animals and the environment revealed a three-clade genetic structure. The majority of farm animal, and all disease-causing, genotypes belonged to a single clade (clade 1), which had comparatively low synonymous sequence diversity, little deep-branching genetic structure, and a higher number of shared alleles, providing evidence of recent clonal descent. Calibration of the rate of molecular evolution, based on within-species genetic variation, estimated a more rapid rate of evolution than traditional estimates. This placed the divergence of the clades at less than 2500 years ago, consistent with the introduction of an agricultural niche having had an effect upon the evolution of the C. coli clades. Attribution of clinical isolate genotypes to source, using an asymmetric island model, confirmed that strains from chicken and ruminants, and not pigs or turkeys, are the principal source of human C. coli infection. Taken together, these analyses are consistent with an evolutionary scenario describing the emergence of an agriculture-associated C. coli lineage that is an important human pathogen.

Introduction

Members of the genus Campylobacter are among the most common causes of bacterial gastroenteritis worldwide. The species principally responsible for human disease, Campylobacter jejuni and Campylobacter coli [1], cause sporadic infection [2] and occasional outbreaks [3], usually associated with consumption of contaminated meat, poultry and water and with contact with animals [4,5]. Although C. coli is responsible for fewer food-borne illnesses than C. jejuni, its impact is still substantial: of approximately 340,000 annual cases of campylobacteriosis in the UK and 2.5 million in the US [6,7,8], approximately 10% are caused by C. coli [9]. Therefore, based upon recent estimates, C. coli infection has an annual cost of £50 million in the UK [10] and $800 million in the USA [11], but despite the economic importance of this pathogen, most Campylobacter research focuses upon C. jejuni. Multi-locus sequence typing (MLST) is a molecular typing technique that has enhanced studies of the population structure and epidemiology of Campylobacter [12]. An MLST scheme has been developed for C. coli [13,14] which characterizes allelic orthologues of the same seven housekeeping gene loci as the original C. jejuni MLST system [12]. The inherent reproducibility of this nucleotide-sequence-based method and the ability to compare data from different laboratories have enabled the assembly of large archives of isolate genotype data [15], which provide a valuable resource for analysing the epidemiology and evolution of these organisms. To date, studies of the genetic structure of C. coli populations have suggested that there is less genetic diversity than in C. jejuni isolate collections from a comparable sample frame [13,14,16].
However, sufficient variation in MLST alleles exists to identify correlation with the animal host from which the isolate was sampled [17,18] and recent analyses have exploited this host-genotype relationship to investigate potential disease reservoirs. By describing spatiotemporal patterns and lineage associations in animal hosts [19,20] and using computer models for the attribution of clinical isolates to source [21,22,23,24] these studies consistently link genotypes from clinical isolates with the greatest probability to ruminant and especially chicken (55-80% of isolates) sources. In contrast to C. jejuni, where there is little evidence of deep branching phylogenetic structure, C. coli is divided into three clades [25]. There is evidence that the clades are associated with sample source, with clades 2 and 3 more common in environmental waters [26], but little is known about the clades, for example when they arose, what is their ecological significance and, in particular, how they relate to clinical infection caused by C. coli. Using isolate genotype data from a national survey, the Campylobacter MLST project in Scotland -CaMPS (2005/6), we aimed to investigate the disease causing C. coli by analysing the genetic structure of isolates from a variety of sources including clinical infection, farm animals and the environment and using population genetic techniques to characterise the lineages associated with human disease. C. coli was studied by investigating (i) the phylogenetic relationships among isolates (ii) the intrinsic genetic differences between C. coli and C. jejuni lineages, (iii) the quantitative attribution of clinical isolates to different sources, and (iv) the rate of molecular evolution and the time of species and clade divergence. Taken together, these analyses provide evidence for an evolutionary scenario that describes the emergence of clinically important C. coli. Clinical and potential source isolates There were 5,674 isolates from confirmed clinical cases of campylobacteriosis, received from 15 health board regions in Scotland between July 2005 and September 2006. Of these, 427 isolates were excluded because they comprised mixed cultures, incomplete typing or other Campylobacter species such as C. lari and C. upsaliensis. Of the remaining clinical isolates (5247), MLST confirmed that 4747 were C. jejuni and 500 were C. coli. In addition to isolates from human disease, a total of 200 C. coli isolates from samples from potential source populations were typed at 7 loci. These were augmented with archive data (1023 isolates) from published sources [13,17,22,26,27,28,29] to give a total of 1223 isolates from potential sources. Isolates were grouped by source/ host animal to give isolate datasets for phylogenetic and attribution analysis from 98 cattle, 54 sheep, 514 chicken, 380 swine, 110 turkey and 67 riparian (water fowl and environmental waters) sources (Table S1). Genetic diversity There were a total of 451 STs, 103 from clinical isolates and 393 from other sources. The 10 most common STs in clinical (ST-827, ST-825, ST-1774, ST-855, ST-829, ST-1614, ST-872, ST-962, ST-828, ST-1773) and non-clinical (ST-827, ST-825, ST-1068, ST-829, ST-855, ST-854, ST-1101, ST-1614, ST-1017, ST-962) datasets accounted for 72% and 37% of genotypes respectively. Two clonal complexes were present, defined as in the standard definition as groups of STs that share 4 or more alleles in common with the central genotype. 
Eighty-one percent and 1% of clinical isolates belonged to the ST-828 complex and the ST-1150 complexes respectively and 52% and 4% of non-clinical isolates belonged to these complexes. The remainder of isolates did not belong to a known clonal complex. The clonal complex structure within C. jejuni is greater compared to data sets of a similar magnitude from comparable sources [13], and this allows the identification of clonal complexes with different levels of host association [20,30]. Similar association analysis was not possible for C. coli at the clonal complex level but STs belonging to the ST-828 complex have previously been recovered from clinical disease isolates and from agricultural sources [13,17]. There was some variation in allelic diversity by locus (Table S2) but it was generally low, with the total number of STs (451) approximately equal to the number of alleles (410) suggesting that the variation in genotypes results more from re-assortment of existing alleles than generation of new ones by point mutation which would give more alleles per locus. The clinical isolate population had different genetic properties (lower diversity) with the mean number of alleles per locus (13) lower than in the non-clinical data (55). Clonal frame genealogy The genealogy determined using CLONALFRAME showed a high degree of genetic structuring in isolates sampled from clinical infection animal sources and the riparian environment ( Figure 1). The 3-clade structure that has previously been described [25] was evident. Comparison of genotypes from clinical infection with this genealogy demonstrated that all of the cases of human C. coli infection were caused by lineages belonging to clade 1 ( Table 1). Eighty-four percent of STs from clade 1 belonged to the ST-828 clonal complex. Phylogenetic congruence ML trees were determined for each of the 7 MLST loci for C. jejuni and C. coli (clades 1-3), and C. jejuni and C. coli combined (data not shown). The topology of the trees for C. jejuni sequences showed no evidence of congruence, with alleles frequently changing position amongst trees. For the combined C. jejuni/C. coli trees there was congruence, partitioning alleles in accordance with species. Similarly, with C. coli there was congruence within the three clades. Visual representation of congruence indicated that within C. coli STs, alleles are more likely to be associated with those from the same clade. Quantitative analysis of congruence was performed using the SH test on the ML trees for combined C. coli data, C. jejuni/C. coli, C. jejuni, and the three C. coli clades separately ( Figure 2). Within C. jejuni there was no evidence of congruence with the likelihood values (-ln L) for all of the single locus trees within the range of -ln L values generated for random trees ( Figure 2B). This suggests extensive recombination. The -ln L for combined C. jejuni/C. coli trees provided evidence of tree congruence for aspA, gltA, glyA and tkt ( Figure 2C). This is expected as the likelihood of one single locus tree predicting another is high when they share a distinct two-species distribution. Within C. coli the congruence between single locus trees, suggested by ST restriction within clades, was confirmed with likelihood values for congruence between competing ML trees outside of the -ln L values for random trees for all 7 MLST loci. This suggests relatively low levels of recombination between clades. Analysis within individual C. 
coli clades showed no evidence of congruence, indicative of recombination within clades.

Molecular clock estimates of clade divergence

The C. jejuni population from the 3-year longitudinal study [21] that was used to calibrate the tree contained sufficient levels of mutation, recombination or coalescence events to estimate the timescale of the genealogy (Ne·g), where Ne is the effective population size and g is the generation length. The rate of molecular change was analysed using the importance sampler [31], and the results of the three alternative datasets were merged to produce a model average over the datasets. There was negligible uncertainty in the tree topology (Figure 3), and the topology was as expected from separate analyses [32]. The uncertainty in the scale bar, which represents uncertainty in the calibration of the molecular clock, was 2719-9194 years for a scale bar of length 5000 years. The point estimates for the divergence of the different Campylobacter lineages were consistent with previous estimates [32] and placed the divergence of C. coli and C. jejuni at 6429 (95% CI, 6280-6579) years ago, with C. coli clade 3 diverging approximately 1684 (95% CI, 1659-1709) years ago and clades 1 and 2 diverging approximately 1023 (95% CI, 1005-1041) years ago (Table 2). The traditional method for dating recent bacterial evolution [33,34] is to calibrate the rate of sequence divergence relative to the split of E. coli and Salmonella typhimurium, which Ochman and Wilson estimated at 120-160 million years ago (Ma), based on 1% divergence in the 16S rRNA gene per 50 million years [35]. By this estimate the C. jejuni-C. coli split would have occurred approximately 10 Ma [32], very different from the estimate (6479 years) using our method, based on intra-specific variation. By our estimate, speciation is occurring over thousands rather than millions of years. The root of the tree was dated at 38,269 years ago (95% CI 20198-73764), and the speciation rate (λ) was 0.054 (95% CI 0.019-0.14) per lineage per 1000 years [36]. Estimates of the mutation rate, μ = 0.029 (95% CI 0.016-0.053), the transition-transversion ratio, κ = 2.86 (95% CI 2.47-3.31), and the dN/dS ratio, ω = 0.011 (95% CI 0.009-0.014), were in good agreement with previous estimates for this genus [32]. To enable comparison of our molecular clock calibration based on intraspecific variation with other estimates, the time represented by the scale bar in Figure 3 was determined using other calibration methods. With an empirical estimate, based on published generation times [37,38] and genomic mutation rates [39] for C. jejuni, the scale bar would represent a period of 42,200 years (95% CI 2,690-661,000), and with the Ochman and Wilson method [35] the scale bar would represent 7.6 My [32].

Attribution to putative source

Isolates from known sources were used to test the limitations of the attribution model as previously described [22]. Random subsets of the comparison datasets for all putative source populations were used for self-assignment. Test sets of 50% of the swine, ruminant, chicken, turkey and riparian isolates were assigned to host source based on a reduced training set, and the AI model assigned them with 94%, 89%, 95%, 92% and 88% probability to the correct source: swine, ruminant, chicken, turkey and riparian sources, respectively.
For each clinical isolate, the probability of assignment to each potential source was calculated, and the sum of these probabilities was used to determine the percentage of all clinical isolates attributed to each source. The clinical C. coli isolates were attributed to source in the following percentages: 57% to poultry, 41% to ruminant, 1% to swine, 0.5% to turkey and 0.5% to riparian sources.

Discussion

The genealogical reconstruction of genotypes from diverse sources demonstrated the clade structure and revealed that all of the human C. coli disease cases were caused by STs belonging to clade 1. The 3-clade structure, and its relationship to disease, is poorly understood, but for it to arise and be maintained requires differentiated gene pools. The simplest explanation for this is a general reduction in the overall level of recombination, but there is evidence of frequent recombination within each clade (Figure 2), and therefore it is likely that recombinational barriers are involved in clade divergence. Three broad classes of barrier can be described: (i) mechanistic barriers, imposed by the homology dependence of recombination [40] or other factors promoting DNA specificity such as restriction/modification systems [41]; (ii) ecological barriers, a consequence of physical separation of bacterial populations in distinct niches; and (iii) adaptive barriers, implying selection against hybrid genotypes [42]. The relative importance of these three different classes of barrier is unclear, but evidence from interspecies recombination between C. coli and C. jejuni [25] is consistent with the involvement of factors such as physical proximity, a shared vector (bacteriophage) infecting both lineages, or niche adaptation playing a role.

Figure 3. Phylogeny of estimated divergence times in the genus Campylobacter using BEAST. Error bars, in grey, associated with each node indicate relative uncertainty in node height. Calibration of the scale bar was based on intraspecific variation in Campylobacter jejuni, and uncertainty associated with the molecular clock calibration is represented by a 95% CI below the scale bar. Empirical and indirect [35] estimates of the rate of molecular evolution would calibrate the same scale bar at 42,200 (95% CI 2,690-661,000) and 7,600,000 years respectively. (doi:10.1371/journal.pone.0015708.g003)

C. coli clade 1 has several genetic characteristics that distinguish it from other lineages within C. coli. First, this clade has relatively low synonymous sequence diversity in MLST genes (mean dS of 0.006 per nucleotide) compared to C. jejuni (0.016) and clades 2 (0.008) and 3 (0.013). The sequence diversity is higher than that of organisms commonly described as genetically monomorphic (dS of <0.0002), for example enteric Salmonella Typhi or Yersinia pestis [43], but is still comparatively low, indicating a relatively recent genetic bottleneck. Second, on a tree based on concatenating multiple genetic regions from each strain, in this case MLST data, there is little evidence of deep genetic structure that would indicate long periods of independent evolution of clade 1 (or clade 2). Third, there is more evidence of genetic exchange within C. coli clade 1, as STs often share the same allele at individual MLST loci; as in other bacterial species, this sharing of the majority of alleles is strong evidence of recent clonal descent. The importance of recombination in generating variation also varies among the clades.
The relative importance of the reassortment of existing alleles (recombination) and the generation of new ones (mutation), in producing variation can be provided by comparison of the number of alleles at each locus, ranging in this study from 120 for aspA to 221 for pgm, with the number expected for the number of STs (1441) if point mutation generating novel alleles were responsible for the diversity (1441/7 = 206). If the observed number of alleles is lower than this expected value then recombination is involved in generating diversity. Using this method the ratio of observed to expected alleles was 0.5 for C. coli clade 1 and 0.7 for C. jejuni compared to 2.6 and 4.9 for clades 2 and 3 respectively. This indicates that recombination is 5-10 times more important in generating diversity in C. coli clade 1 than in the other clades. Recent work that uses combined population-genetic microevolutionary models demonstrates that recombination generates twice as much diversity as de novo mutation in Campylobacter and has a fundamental role in the evolution of this genus [32]. Our findings for C. coli clade 1 are consistent with these results but demonstrate that the importance of recombination can vary within subgroups of the same species (clades 2 and 3). The numerical dominance of clade 1 C. coli genotypes from clinical disease can be explained, without consideration of intrinsic differences in the ability to infect humans, as a reflection of the dominance of isolates from this clade in disease reservoirs and food chain sources. Like C. jejuni, there are limitations on how well C. coli lineages can be attributed to source because, while there are host-associated alleles [17,30], there is also considerable niche overlap with the same alleles and STs found in isolates from different farm hosts [20,22]. However, alleles from STs clustered into clade 1 are distinct from those in clades 2 and 3 and this level of genetic differentiation allows source attribution. This shows that poultry and ruminants and not pigs and turkeys (clade 1) or the riparian environment (the source of most clade 2 and 3 isolates), are the most likely source of human C. coli infection. This is consistent with evidence of the source of human C. jejuni infection [21,22,44]. In evolutionary terms, agriculture is a new niche and the co-infection of animals including chicken and ruminants by similar strains, despite the differences in the biology of their digestive tracts, suggests that this niche has acquired specifically adapted lineages of bacteria, rather than sharing a common gene pool with a pre-existing natural reservoir. Agricultural environments are highly unusual in terms of host diet, genetic and age structure, density and many other details of habitation. It appears that this novel niche has been recently colonised by clade 1 C. coli as indicated not only by evidence of a smaller historical population size but also by the nature of recently recombined C. jejuni alleles found in C. coli clade 1. These alleles are typical of those found in C. jejuni from farm sources and are almost all identical to those found in the donor species, indicating that the imports occurred recently enough to have not accumulated mutations [25]. Determining the timescale of the evolutionary divergence of the three C. coli clades provides a context for understanding the link between ecological factors, such as the domestication of the host niche, and the population genetic structure. 
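A short worked version of the observed-versus-expected allele comparison described at the start of this passage is given below in Python. Only the 1441-ST expectation is taken from the text; the mean number of alleles per locus passed to the function is a hypothetical value chosen to reproduce a ratio of about 0.5.

```python
def obs_exp_allele_ratio(mean_alleles_per_locus, n_sts, n_loci=7):
    """Observed/expected alleles per locus; values well below 1 suggest that
    recombination (reshuffling of existing alleles), rather than new mutation,
    generates most of the sequence-type diversity."""
    expected = n_sts / n_loci                     # e.g. 1441 / 7 ~ 206, as in the text
    return mean_alleles_per_locus / expected

# A hypothetical lineage averaging ~103 alleles per locus across 1441 STs
print(round(obs_exp_allele_ratio(103, 1441), 2))  # ~0.5
```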
However, there is some debate about the rate of bacterial evolution because unlike multicellular eukaryotes they do not leave a morphologically informative fossil record and, unlike viruses, they do not mutate fast enough for evolution of natural populations to be easily measured in real time. Indirect methods calibrate the molecular clock in bacteria by cross-referencing events that can be dated. For example, the common ancestor of mitochondria and their closest living bacterial relatives [35] or cospeciation (of bacterial endosymbionts) with invertebrate hosts for which a fossil record is available [45]. Estimates based on such methods have been widely used [33,34,46,47] but they can conflict with empirical approaches that are based on laboratory measurements of generation lengths and mutation rates [48,49]. Dating Campylobacter evolution with reference to indirect methods for estimating the split of E. coli and S. typhimurium [35], would place the divergence of the three C. coli clades at approximately 2.5 million years ago. This is incompatible with the hypothesis that agriculture played a part in the divergence of the lineages because modern farming did not begin until around ten thousand years ago (in the Middle East), and was not common throughout Europe until much later (5,000-3,000 BC) [50,51,52]. However, the more recent estimate for the divergence of the clades, based on intra-specific variation in longitudinally sampled C. jejuni isolates, places the divergence of the C. coli clades at less than 2,500 years ago. This is consistent with the introduction of agriculture having had an effect upon the evolution of the genus Camplylobacter, for example by introducing novel opportunities for pathogens to expand into new host species and to spread rapidly to new territories. There are several potential causes for the conflict between short and long term estimates of bacterial evolution and the debate continues but an increasing number of studies support the theory of a far more rapid rate of bacterial evolution [32,53,54,55]. Furthermore, the impact of agriculture upon bacterial evolution has been demonstrated in Staphylococcus aureus where the majority of isolates from chickens are the descendants of a single human-to-poultry host jump that occurred approximately 38 years ago (range, 30 to 63 years ago) in Poland [56]. This demonstrates the strong global effect that agriculture can have on bacterial evolution and may inform understanding of the population genetic structure of Campylobacter in the context of relatively rapid evolution. Taken together the population genetic structure, source attribution of isolates and molecular clock calibration for clade divergence are consistent with an evolutionary scenario where a lineage within the C. coli has diverged into an agricultural niche. This may be related to intrinsic genetic differences associated with adaptation and the observation that some C. jejuni strains are more similar to particular strains of C. coli in gene content than they are to other strains of C. jejuni [57] supports this, but genome-wide studies of patterns of exchange are necessary to investigate the nature of barriers to gene flow and the consequences of recombination for ecological adaptation. Subtypes belonging to the agricultural C. coli lineage (clade 1) are responsible for the majority of human disease. 
It remains possible that there are genomic differences associated with pathogenicity, but these are not required to explain the over-representation of this clade among human disease cases. The most likely reason for the dominance of this clade in human disease is that it is an agricultural lineage and human disease is directly linked to food animals in the food chain.

Ethics statement

Ethical approval (reference: 05/S0802/151) for the collection of the samples and information used in this project was obtained from Grampian Local Research Ethics Committee (Summerfield House, Aberdeen, UK). This was in accordance with government agreements for research ethics committees (July 2001) and in compliance with the standard operating procedures in the UK. Specimens were collected from all 28 NHS clinical diagnostic laboratories in Scotland that agreed to participate. Archived isolate information included submitting laboratory, specimen number and date of collection. In accordance with Grampian Local Research Ethics Committee approval, patients were informed of the survey and had the option to decline. As no information that would allow identification of the patient was collected, individual patient consent was not required from patients who did not decline.

Multilocus sequence typing (MLST)

DNA was extracted from isolates recultured microaerobically at 37 °C (for 48 h) with a Chelex resin method (Bio-Rad, USA) as previously described [22]. A high-throughput 7-locus MLST protocol was used, based upon a 2-phase robotic system for PCR of template DNA arrays and amplification products using published primers, reagent concentrations, template purification protocols and cycle parameters [12,14,58]. This process is described in more detail elsewhere [20,22]. In brief, following electrophoresis (200 V, 10 min) on agarose gel in 1x TAE buffer (1 mM EDTA, 40 mM Tris-acetate) and UV visualization, 5 µl of the original PCR products were precipitated with 20% polyethylene glycol-2.5 M NaCl [60], and nucleotide sequencing PCRs (2 µl of DNA, 6.98 µl water, 1.0 µl 5x buffer, 0.02 µl BigDye Terminator v3.1 mix [Applied Biosystems, UK] and 0.1 µM of primer) were performed in both directions with cycling parameters as follows: 30 cycles of 96 °C for 10 s, 50 °C for 5 s, and 60 °C for 2 min. Sequencing PCR products were precipitated, cleaned with 95% ethanol, and analyzed with an ABI Prism 3730 automated DNA sequencer (Applied Biosystems, UK). Forward and reverse sequences were assembled using the Staden suite of computer programs [61], and allelic orthologs were assigned numbers giving a 7-locus sequence type (ST). Contemporaneous survey data were augmented with data from published studies [13,17,22,26,27,28,29] obtained from the publicly accessible MLST database (http://pubmlst.org), hosted by the University of Oxford [62].

Overview of population genetic analysis

A variety of analytical approaches were used to describe the evolutionary history of C. coli. Ancestral relatedness of genotypes was assessed using CLONALFRAME [63] to construct genealogies for inference of the C. coli phylogeny. Lineage clonality was inferred by investigating congruence of maximum likelihood trees [64]. The timescale of C. coli evolution was calibrated using the phylogenetic inference package BEAST [65] and previous estimates of the evolutionary rate in the Campylobacter genus based on longitudinal sequences sampled within C. jejuni [32].
Source attribution of clinical genotypes was determined using the Asymmetric Island (AI) probabilistic genetic attribution model [21].

Ancestral relatedness

The genealogy of the C. coli STs was estimated using a model-based approach for determining bacterial microevolution: CLONALFRAME [63]. Using this model, clonal relationships are calculated with improved accuracy compared with standard phylogenetic inference techniques for recombining bacteria, because the two major sources of allelic polymorphism (point mutation and recombination) are distinguished. This model has been used successfully to distinguish clades within C. coli [25]. Analysis was carried out on all the STs from clinical and non-clinical sources. In each case, 7-locus STs were concatenated and the program run with a burn-in of 50,000 iterations followed by 50,000 iterations of sampling. The consensus tree represents combined data from three independent runs, with 75% consensus required for inference of relatedness.

Quantifying clonality

The phylogenetic history of bacteria evolving according to a clonal model will be the same for all loci within a genome, as the genome represents a single linkage group irrespective of location. The degree of clonality, therefore, can be estimated by measuring the degree of congruence between phylogenetic trees constructed for different loci from a single genome. This approach has been employed to compare maximum likelihood (ML) trees describing multiple loci from the genomes of, for example, Borrelia burgdorferi [64], Neisseria meningitidis [66] and Streptococcus uberis [67]. Twenty-six genotypes were selected to produce ML trees as in previous studies [66]. Single-locus ML trees were constructed for each locus of sample groups containing C. jejuni, C. jejuni and C. coli, C. coli (12 STs from each clade), C. coli clade 1, C. coli clade 2 and C. coli clade 3, and the congruence between trees was determined using the Shimodaira-Hasegawa (SH) test [66,68]. The difference in congruence log-likelihood (Δ-ln L) of the tree topologies was determined and compared for the 7 ML trees. If evolution is entirely clonal, then there should be no significant difference in phylogenetic congruence. The extent of congruence was tested further using the randomized test [66,68] by comparing the log likelihood for the 7 individual-locus ML tree topologies with equivalent values for 200 randomly generated trees of the same size for each gene. If there is more congruence among the ML trees than expected by chance alone, then the log likelihood values will fall outside the range calculated for random trees. These analyses were performed using PAUP* version 4 [69].

The timescale of C. coli evolution

The phylogenetic history of C. coli was reconstructed in the context of other species within the genus Campylobacter for which similar MLST schemes have been developed [12,14,70]. As described previously [32], 4 of the loci (glnA, glyA, tkt and uncA) used in 7-locus STs are common to the MLST schemes for all species. A set of STs (…, ST-2681) was tested to confirm the absence of interspecies recombination between selected STs, using a permutation test based on the correlation between physical distance and linkage disequilibrium (LD) [71]. These STs were analysed using the Bayesian phylogenetic package BEAST [65], a codon substitution model [72] and the Yule model of speciation rates [36].
On the timescale of Campylobacter evolution, all of the STs from isolates in this study were effectively sampled at the same time, and therefore there was no information within the sample itself for estimating the rate of evolutionary change. To account for this, we utilized informative prior distributions on the evolutionary parameters, comprising the transition-transversion ratio, the dN/dS ratio and the synonymous mutation rate. The priors were taken from the parameters inferred from an analysis of a longitudinal sample of C. jejuni collected over a 3-year period [32], assuming a constant rate of evolution within the genus Campylobacter.

Source attribution of clinical genotypes

The Asymmetric Island (AI) probabilistic genetic attribution model [21] was used to characterize the population structure from the genetic data and to assign individual isolates in the test set of human isolates independently to source using the training data set. This technique has been used previously [21,22], and the limitations on the attribution accuracy achievable from a 7-locus profile have been validated by calculating the probability of correct 'self-assignment' of a randomly selected subset of each host species to the correct origin population [22]. The AI program was run with 1,000 iterations of burn-in followed by 10,000 iterations of sampling, for probabilistic assignment. The putative source of 7-locus genotypes from clinical C. coli isolates (500) was assigned by comparison to datasets comprising genotype data from contemporaneous host and environmental/food isolates and genotype data from published sources [13,17,22,26,27,28,29].
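For intuition about genotype-based source attribution, the sketch below scores a query ST against per-locus allele frequencies estimated from training isolates of each candidate source and normalizes the scores to probabilities. This is a deliberately naive stand-in on toy data (simple smoothed allele frequencies), not an implementation of the Asymmetric Island model, which additionally models mutation and migration between source populations.

```python
from collections import defaultdict

def attribute(st, training):
    """st: tuple of allele numbers (one per locus);
    training: {source_name: list of STs with the same number of loci}."""
    scores = {}
    for source, sts in training.items():
        score = 1.0
        for locus, allele in enumerate(st):
            counts = defaultdict(int)
            for t in sts:
                counts[t[locus]] += 1
            # ad hoc add-one smoothing so unseen alleles do not zero the score
            score *= (counts[allele] + 1) / (len(sts) + len(counts))
        scores[source] = score
    total = sum(scores.values())
    return {source: s / total for source, s in scores.items()}

# Toy example with made-up 3-locus "STs" for two sources
training = {"chicken":  [(1, 2, 3), (1, 2, 4), (1, 5, 3)],
            "ruminant": [(7, 2, 3), (7, 8, 9), (7, 8, 3)]}
print(attribute((1, 2, 3), training))   # leans strongly toward "chicken"
```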
A novel and simple formula to predict liver mass in porcine experimental models

A primary limitation in hepatic surgery is leaving a remnant liver of adequate size and function. Experimental models have been designed to study processes of liver injury and regeneration in this context, yet a formula to accurately calculate liver mass in an animal model is lacking. This study aims to create a novel and simple formula to estimate the mass of the native liver in a species of pigs commonly used in experimental liver surgery protocols. Using data from 200 male weanling Landrace-Large White hybrid pigs, multiple linear regression analysis is used to generate the formula. The clinical features used as variables for the predictive model are body mass and length. The final formula for pig liver mass is as follows: Liver mass (g) = 26.34232 * Body mass (kg) - 1.270629 * Length (cm) + 163.0076; R² = 0.7307. This formula for porcine liver mass is simple to use and may be helpful in studies using animals of similar characteristics to evaluate restoration of liver mass following major hepatectomy.

Statistical analysis

Data from 200 pigs were used in this study. Using a randomized split-sample technique12,13, 142 pigs (70% of the overall sample) were included in the derivation group and 58 (30%) in the validation group. The following variables were evaluated in the derivation group: body mass (kg), snout-to-rump length (cm), and body surface area (m²). Variables associated with total liver mass on univariate analysis (P < 0.2) were selected for the initial models. Using the allsets tool, seven initial models were generated14,15. The final model was selected based on Mallows' Cp and adjusted R² values, as a high adjusted R² is essential for good predictive-model performance16. The selected model was then validated in the validation group. Differential loss (shrinkage value) <10% was deemed necessary to consider the model valid and reliable. Results are presented as frequencies and percentages for categorical variables and as median and interquartile range for continuous variables. For univariate analyses, the Chi-square test was used for categorical variables, Student's t test or ANOVA for normally distributed continuous variables, and Mann-Whitney or Kruskal-Wallis tests for non-normally distributed continuous variables. In all statistical analyses, significance was set at P < 0.05. All data analysis was performed using STATA.

Results

The characteristics and features of both sample cohorts are described in Table 1. No significant differences between the two groups were detected. On univariate analysis, body mass, snout-to-rump length, and BSA were all significantly associated with total liver mass. These three variables were introduced into the allsets tool to determine all possible equations for predicting porcine liver mass, and seven different models were identified. Ultimately, body mass and snout-to-rump length were selected as the final predictors. After performing multiple linear regression analysis, the final formula for pig liver mass (PLM, g) was generated: 26.34232 * Body mass (kg) - 1.270629 * Length (cm) + 163.0076; R² = 0.7307 and adjusted R² = 0.7268, Mallows' Cp 10.28, variance inflation factor (VIF) 6.2 (Fig. 1). For the variables used, Fig. 2 depicts the individual correlations between each of the predictors and total liver mass. The formula was validated in a split group consisting of 30% of the subjects in the original sample. Differential loss of prediction (shrinkage) was 5.56% (R² - r² = 0.05645025).
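Applied directly, the reported regression is trivial to compute. The Python sketch below implements it with a made-up animal measurement as an example; the body mass and snout-to-rump length used in the call are illustrative values only, not data from the study.

```python
def pig_liver_mass(body_mass_kg, length_cm):
    """Predicted native liver mass (g): 26.34232*BM - 1.270629*L + 163.0076."""
    return 26.34232 * body_mass_kg - 1.270629 * length_cm + 163.0076

# Hypothetical animal: 25 kg body mass, 80 cm snout-to-rump length
print(round(pig_liver_mass(25.0, 80.0), 1))   # ~719.9 g
```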
To further validate our model, we calculated the variance of the residuals of the multiple regression analysis and did not find any significant variation of residuals (Fig. 3). Finally, using the new pig liver mass formula, we used the entire sample to compare true versus calculated liver mass (Table 2 & Fig. 4).

Discussion. In liver surgery, accurate calculation of remnant liver size is critically important to reducing postoperative morbidity and mortality, in particular due to PHLF/SFSS 17. To this end, several formulae have been developed to estimate liver mass or volume in humans preoperatively 18,19. Such formulae typically include clinical features, such as body mass, height, and/or body surface area (BSA) 20, or morphological features, namely thoracic and abdominal width 21. All of these formulae have been created by comparing estimated size with either volumetric calculation obtained through computed tomography (CT) scanning or real liver mass obtained in the context of whole or partial liver transplantation. The overall accuracy and applicability of the aforementioned formulae may be limited to a certain extent by patient gender, age, and race. Also, BSA may be calculated in different manners, and formulae that include BSA as an input variable may vary accordingly. While several formulae are available for liver size prediction in humans, little is available for studies in animals. Experimental models are of critical importance for surgical investigation, as they offer the opportunity to test novel techniques and therapies under relatively stable conditions prior to their application in humans. Animal studies on liver regeneration are particularly relevant in that they can help elucidate mechanisms of liver regeneration in vivo and can be used in the extreme to simulate and attempt to prevent and/or treat pathological regenerative processes 22. Until now, assessment of total and remnant liver size in porcine liver resection studies has generally required the use of advanced imaging techniques, such as CT or magnetic resonance imaging 23. Bekheit et al. recently published an assessment of porcine liver anatomy in which volumetric characteristics as well as features of the vascular and biliary trees were described using CT performed on 37 female pigs 24. While cross-sectional imaging is relatively reliable, especially in clinical practice, it is costly and not always easily accessible for use in animals. We wanted to create a formula for predicting porcine liver mass that was simple to use and that incorporated morphological variables that were easy to obtain. We chose not to include BSA, given that there is no universally accepted formula for calculating BSA nor, for that matter, one that has been validated for pigs. The formula that we have created, based on body mass and length, appears to estimate total liver mass well, with R² = 0.73 and differential loss of prediction <10% (5.56%) on external validation. Nonetheless, the study does have limitations. Animals included in the study were all males of common European pig breeds (hybrid Landrace-Large White) and aged between two and four months. This formula needs to be tested in a wider variety of pigs of different ages, breeds, and sexes to determine whether it remains accurate. The pig is the most commonly selected large animal for performing pre-clinical studies on the liver.
Pigs are robust and readily available; their livers present a similar size to those of humans and can also be divided into eight segments based on vascular supply and biliary drainage 25. An important anatomical aspect of the porcine liver that distinguishes it from that of other species is its intimate relationship with the inferior vena cava (IVC). The porcine IVC is contained within and cannot be separated from the tissue of the caudate and right lateral lobes. As such, the minimum remnant volume after major hepatic resection is greater in pigs than in other species. Different authors have evaluated the percentage of overall liver mass that each liver segment represents, and the caudate and complete right lateral lobes appear to constitute roughly 30% of the entire porcine liver [25][26][27]. Using our formula, however, our hope is that future estimates of the remnant liver in major hepatectomy studies in pigs will be more accurate than those based on rough percentages. In summary, we have created a novel formula to predict pig liver mass in a simple and reproducible manner. This formula should be a useful tool for future liver surgery studies performed in the porcine model.

Data Availability. The data generated and analyzed during this study are available from the corresponding author on reasonable request.
Recurrent Herpes Zoster Ophthalmicus Preceded by Anabolic Steroids and High-Dose L-Arginine Purpose To report a case of a 34-year-old male with recurrent herpes zoster ophthalmicus (HZO) preceded by a 6-week cycle of anabolic steroids and high-dose amino acid supplementation. Case Presentation. A 34-year-old man presented to our institution for left eye pain for one week associated with a vesicular rash in the V1 dermatome, respecting the midline. The patient had no significant past medical or past ocular history, including systemic immunosuppressive agents or HIV. However, prior to the onset of his symptoms the patient had completed a 6-week course of anabolic steroids including trenbolone, deca-durabolin, and testosterone as well as high-dose arginine supplementation averaging more than 40 grams a day. The best-corrected vision was 20/25 OS with slit-lamp examination remarkable for punctate staining and pseudodendrites at 6 o'clock, outside the visual axis. The patient was treated with oral acyclovir 800 mg five times a day for seven days along with prednisolone QID and moxifloxacin QID which was tapered over a month. Four months after resolution, the patient developed a recurrent HZO keratitis preceded by another cycle of anabolic steroids and amino acid supplementation. Conclusion In vitro L-arginine supplementation has been associated with the proliferation and virulence of a variety of herpes viruses. Anabolic steroids have also been demonstrated by various studies to negatively affect cell-mediated immunity necessary to prevent viral infection. Thus, it is possible that anabolic steroids in conjunction with increased L-arginine intake may have precipitated a recurrent HZO in a previously healthy, immunocompetent individual. Introduction Anabolic-androgenic steroids (AAS), or performanceenhancing drugs, have been used by elite professional athletes and general fitness enthusiasts in order to increase strength and muscle growth. However, the prevalence of anabolic abuse within the USA is not known. Several factors may contribute to the underestimation of anabolic steroid users including insufficient drug tests, on and off cycling of AAS, and taboo or illegal connotation associated with AAS. AAS comprises a group of synthetic and natural hormones which are structurally and functionally similar to testosterone. Despite the benefits associated with AAS, there are numerous well-known side effects including psychosis, organ damage (e.g., heart, liver, and kidney), hair loss, severe acne, increased risk of tendinitis or tendon rupture, and liver tumors [1]. Also, certain types of AAS have been documented to have immunosuppressive features, impairing B and T cell functions necessary for viral infections [2,3]. Herpes zoster or shingles is the product of the reactivation of latent varicella zoster virus (VZV). Reactivation of the virus is common in the elderly and immunocompromised, with the rate of incidence increasing in each decade of life and peaking around 80 years old. Approximately 30% of all individuals will develop at least one shingles outbreak in their lifetime [4]. VZV can also affect the eye during the involvement of the ophthalmic division of the fifth cranial nerve (V1), known as herpes zoster ophthalmicus (HZO). Nevertheless, we noticed an unusual presentation of recurrent HZO in a 34-year-old male, preceded by a 6-week cycle of anabolic steroids and high-dose amino acid supplementation prior to each episode. 
Case Presentation. A 34-year-old man with no past medical history presented to our clinic complaining of left eye pain and pressure for one week. He was referred for suspected HZO by his primary care doctor, who had started the patient on acyclovir 800 mg 5 times a day prior to his visit to our clinic. The patient denied any relevant past medical history including diabetes and HIV. He denied smoking, alcohol, or illicit drug use. The patient also denied taking any immunosuppressant agents including prednisone. However, the patient was an active weightlifter who reported having completed a 6-week course of AAS two weeks prior to the onset of his symptoms. His regimen consisted of trenbolone 200 mg, deca-durabolin 200 mg, and testosterone 500 mg a week (Table 1) along with a diet high in protein and supplemental amino acids including arginine. A diet recall averaged above 40 grams of L-arginine a day (Table 2). On external exam, vesicles were noted over the left eyebrow and upper lid. Anterior segment exam was notable for decreased corneal sensation OS. Slit-lamp examination was remarkable for punctate staining and pseudodendrites at 6 o'clock, outside the visual axis OS. No anterior segment inflammation was noted, and intraocular pressure was within normal limits. The fundus exam was unremarkable for both eyes. The patient was continued on oral acyclovir as well as moxifloxacin 0.5% QID and prednisolone 1% QID OS, which was tapered over a month. The patient was strongly encouraged to stop his AAS regimen as well as his high-dose arginine diet/supplementation. The HZO keratitis resolved over the course of a month. Four months later, the patient presented to the clinic with new eye pain OS after completing another cycle of the above AAS and high-arginine diet. Uncorrected visual acuity was 20/40. Slit-lamp examination revealed deep stromal vessels with a 2 × 2 mm infiltrate just inferior to the visual axis at 5 o'clock. Intraocular pressure and fundus exam were unremarkable. The patient was started on acyclovir 800 mg 5 times a day for the next seven days with prednisolone 1% QID. The patient was counseled on stopping AAS as well as the need for a lifelong prophylactic dose of acyclovir. However, after interval improvement, the patient was lost to follow-up.

Discussion. HZO can occur with or without ocular involvement in 10-20% of patients with a vesicular rash in the V1 dermatome [5]. Ocular findings most commonly include conjunctivitis, episcleritis/scleritis, keratitis, uveitis, and, less commonly, retinal necrosis, which is often seen in immunocompromised patients [6]. VZV is considered a disease of late adulthood, with the rate of incidence increasing in each decade of life, especially in immunocompromised individuals [7]. Although the majority of individuals affected by herpes zoster are over the age of 60 [8], recent trends show a decreasing mean age of onset, which is likely due to widespread childhood varicella vaccination [9]. However, other risk factors have been documented for earlier age of onset, including immunosuppression from either pharmacological or pathological etiologies (e.g., leukemia/lymphoma, HIV, transplant recipients, and corticosteroids or chemotherapy agents), autoimmune disorders, female gender, smoking, and diabetes [9]. In particular, smoking was the strongest risk factor for earlier age of onset, presenting on average 11.5 years earlier than in nonsmokers [9].
Although approximately 30% of all individuals will develop at least one shingles outbreak in their lifetime, recurrent zoster infections are uncommon, ranging from 1.3% to 6.2% [10]. Our patient was a young, healthy 34-year-old individual who developed a recurrent HZO keratitis. He was not an active smoker, diabetic, or immunocompromised via the numerous etiologies listed above. Thus, it is possible that other factors may have contributed to his clinical presentation. As an active weightlifter, our patient supplemented his diet with high-dose amino acids as well as anabolic steroids. We previously documented a case of recurrent HZO in a young, immunocompetent individual, preceded by highdose L-arginine at 46.5 g/day [11]. L-arginine is a semiessential amino acid which has been found to have beneficial effects in wound healing, immune function, and metabolism at 4.2 to 20 g/day [12]. Nevertheless, the adverse effects of long-term supplementation above these recommended values have not been well studied [12]. L-Arginine also serves as an essential role in viral replication. Herpes simplex viruses (HSV-1 and HSV-2), varicellazoster virus (VZV), cytomegalovirus (CMV), and adenovirus require arginine to replicate [13][14][15][16]. In vitro studies have demonstrated a decrease in HSV 1-2 replication, cell-to-cell transmission, and virus-mediated cytopathic effects in mediums deficient in L-arginine [13,17]. Arginase, an enzyme that degrades L-arginine to L-ornithine and urea, was also found to be increased tenfold in a murine model [17], potentially linking arginine concentrations with symptom resolution. Other researchers also found an accumulation of arginine in the corneal epithelium during herpetic infections, a finding which was attributed to increased viral virulence and replication [17]. Additionally, the use of topical arginase led to a resolution of the herpetic episode, supporting arginine's role in infection and disease progression [18]. Despite these instances where arginine was associated with viral proliferation, other studies have shown that arginine and its derivatives can be viricidal. Naito et al. showed that arginine was effective against inhibiting HSV-1 replication in moderate concentrations while Yamasaki et al. reported that the arginine derivative Nα-Cocoyl-L-arginine ethyl ester was effective at inhibiting HSV extracellular virus particles and replication [19,20]. The contrasting results of these studies may be due to the variances of arginine being utilized. Isomers of arginine versus arginine complexes are likely to affect the structure and thus function of the amino acid. In most studies, supplementation of arginine did not exceed 30 g/day. Thus, there may be a critical threshold in which arginine dosage can lead to adverse effects which has not been clearly documented in the literature. The other variable we believe may have influenced the recurrent episodes of HZO in our patient is his use of AAS. Anabolic steroids are not a commonly associated risk factor in HZV outbreaks. Nevertheless, these hormones have been documented to cause immunosuppressive effects. Grossman and Roselle found that one of the biological actions of androgen steroid hormones is the modulation of the immune system through the regulation of T lymphocyte function [2]. In particular, AAS with intact steroid nucleus exhibited a persistent immunosuppressive effect while those with nuclear alterations elicited a delayed immunostimulatory effect [2]. 
This distinction is important in this case, as our patient reported only taking AAS that had intact steroid nuclei. The cells most adversely influenced by AAS with an intact steroid nucleus are lymphocytes (T cells) whose differentiation and proliferation are curtailed by reduced natural killer cytotoxic activity and the diminished production of certain cytokines such as IL-2, interferon-gamma (IFN-γ), and corticotropin [21]. These findings are supported in a sep-arate study examining the impact of oxymetholone, another AAS with an intact steroid nucleus. Researchers found a 15% decrease in cytotoxic T cell activity measured in mice taking 300 mg/kg of oxymetholone for fourteen days, indicating that cell-mediated immunity was impaired following exposure [22]. Another study analyzed the role of testosterone in response to the influenza vaccine in animals. Furman et al. found that testosterone was immunosuppressive in vivo, downregulating transcription factors (such as FOS and JUN) implicated in immune activation [3]. One group of experiments demonstrated that orchiectomies of mice led to increased protection against viral, fungal, bacterial, and parasitic infestations [23]. Orchiectomized animals also rejected allografts more rapidly and had accelerated graftversus-host reactions [23]. Lastly, the role of VZV reactivation within immunosuppressed patients has been analyzed. Patients with depressed cell-mediated immune function from hematopoietic stem cell transplantation were assessed for VZV reactivation while receiving intravenous γ-globulin antibodies. It was observed that the incidence of shingles markedly increased in the study patients, highlighting the importance of cell-mediated immune function in preventing viral reactivation [24]. Therefore, we hypothesize that AAS used by the patient may have compromised his cell-mediated immunity necessary to prevent VZV reactivation. Also, high-dose Larginine supplementation may have also facilitated increased viral replication and virulence within a potentially immunocompromised individual. Nevertheless, we have only documented two similar cases and require further studies to deem if a real association exists. Conclusion In summary, we present a 34-year-old, healthy individual who developed recurrent HZO preceded by anabolic steroid abuse and high-dose L-arginine supplementation. In vitro experiments have demonstrated conflicting results between stimulatory versus inhibitory effects on herpes virus replication and virulence with L-arginine. Although data on arginine supplementation is limited, no long-term studies have documented the effects of high-dose arginine supplementation, greater than 20 grams a day, on immune function. Additionally, the immunosuppressive effects of AAS with intact steroid nucleus on the cell-mediated immune response have been documented in the literature. Increased rates of zoster reactivation have been witnessed in individuals who lack cell-mediated immune systems. Nevertheless, larger, prospective studies are needed to deem if a true association exists between recurrent HZO, L-arginine, and AAS. Consent The patient consented to publication of the case in writing. This report does not contain any personal information that could lead to the identification of the patient. Disclosure All authors attest that they meet the current ICMJE criteria for Authorship. Conflicts of Interest The following authors have no financial disclosures or conflicts of interest: SAL, AG, RAG, SMC, MB, SA, TDL. 
The abstract was presented as a digital poster at the 2020 American Society of Cataract and Refractive Surgeons meeting.
THE METHOD OF RANDOM BALANCE FOR STUDYING THE INFLUENCE OF EXCIPIENTS QUANTITIES ON TECHNOLOGICAL PARAMETERS OF TABLETS BASED ON ORIGANUM VULGARE L. DRY EXTRACT

The aim. The aim of the research was to study the influence of excipient amounts on the technological parameters of the compression mixture and tablets based on dry extract of Origanum vulgare L. herb using the method of random balance. Materials and methods. Objects of the study: Origanum vulgare L. herb dry extract and 8 excipients that have been studied at two quantitative levels. The tablets were prepared by the direct compression method. The formulations were designed according to the method of random balance. The technological parameters of the compression mixture and tablets based on Origanum vulgare L. herb dry extract have been studied as a function of the quantitative factors: silicon dioxide, magnesium carbonate basic, magnesium aluminometasilicate (Neusilin S1®), isomalt (GalenIQ™720), F-melt® Type C (co-spray dried excipients), sucralose, berry flavor, and citric acid. Results and discussion. The increase in the amount of Neusilin S1®, GalenIQ™720, and F-melt®, and the decrease in the amount of magnesium carbonate basic and silicon dioxide, improved the flowability expressed by the Hausner ratio. The bulk density and tapped density of the compression mixture depended on the quantities of GalenIQ™720 and F-melt®. All formulations of the prepared tablets disintegrated rapidly, within 6 to 15 minutes. The tablets' resistance to crushing and friability were improved with a decrease in the amount of silicon dioxide and an increase in the amount of Neusilin S1®, F-melt®, and sucralose. Higher resistance to moisture of the tablets based on Origanum vulgare L. dry extract was obtained by using Neusilin S1®, F-melt®, and sucralose at the upper levels. Conclusions. The tablets based on Origanum vulgare L. herb dry extract were successfully manufactured by the direct compression method. The random balance method enabled us to identify the most significant quantitative factors in order to optimize the composition of the tablets based on the dry extract of Origanum vulgare L. herb.

Introduction. Seasonal influenza, which is characterized by fever, headache, dry cough, muscle and joint pain, feeling unwell, a runny nose, and sore throat, can cause severe illness or death, especially in people at high risk. According to the WHO worldwide statistics, seasonal influenza annual epidemics result in about 3 to 5 million cases of severe illness and about 290 000 to 650 000 respiratory deaths [1]. An acute sore throat may be caused by an inflammatory process in the pharynx, tonsils, or nasopharynx during acute respiratory disease due to a bacterial agent [2]. The local use of tablets with antimicrobial substances has a direct influence on the viral and inflammatory agents; this can provide local pain relief, rapid and prolonged pharmacological action, and effectiveness in different types of sore throat, which can also help with the emotional problems associated with a sore throat [3]. Origanum vulgare L. contains flavonoids, ascorbic acid, tannins, and essential oil and has anti-inflammatory, wound-healing, hemostatic, and antimicrobial properties [4][5][6][7][8]. We have developed a technology for the dry extract of this herb and have proved its antimicrobial activity and anti-inflammatory effect [9].
According to the results of an analysis of the Ukrainian pharmaceutical market of medicines for the treatment of throat diseases, solid dosage forms are the most commonly used dosage forms [10]. Therefore, tablets based on Origanum vulgare L. herb dry extract are promising for use in the treatment of the mucous membrane of the upper respiratory tract [11]. The aim of the research. The aim of our study was to analyze the influence of the amount of excipients on the technological parameters of the compression mixture and tablets based on dry extract of Origanum vulgare L. herb using the method of random balance. In our study, we used the method of random balance, which is based on the fact that the significance of certain factor effects depends on their contribution to the response variance [12]. First, the design-of-experiments matrix was defined, the experiment was carried out, and scatter diagrams were constructed based on its results. Significant factors are identified from the scatter diagrams. The construction of the design-of-experiments matrix is preceded by coding the factors, selecting the variation levels, and determining the experiment center [13].

Materials and methods. This research was performed in 2019-2020. The materials used for this study comprise a dry extract of Origanum vulgare L. herb and excipients: silicon dioxide, magnesium carbonate basic, magnesium aluminometasilicate (Neusilin S2®), isomalt (GalenIQ™720), F-melt® Type C (co-spray dried excipients), sucralose, berry flavor, citric acid, and microcrystalline cellulose (MCC). Excipients were kindly provided by Witec Industrial. The formulations were designed according to the method of random balance. In this design, the technological parameters of tablets based on Origanum vulgare L. dry extract have been studied as a function of 8 quantitative factors. The names of the factors, with their variation levels, are shown in Table 1 (Quantitative factors and their levels). The bulk density (y1), tapped density (y2), Hausner ratio (y3), friability of the tablets (y4), resistance of the tablets to crushing (y5), and disintegration (y6) were evaluated as responses. Tablets based on Origanum vulgare L. dry extract have been obtained by the direct compression method according to the matrix given in Table 2; MCC was added to the formulation to adjust the tablet weight to 550 mg if factors were investigated at the lower levels. The compression mixture from each formula was evaluated by several parameters, such as bulk density, tapped density, and Hausner ratio. The mixtures were directly compressed using 12 mm semi-spherical round punches into tablets of 550 mg on a single-tooling tablet machine. A batch of 50 tablets was prepared for each of the designed formulations. The obtained tablets based on Origanum vulgare L. dry extract were evaluated for the following parameters: friability, resistance to crushing, and disintegration of the tablets. Table 2: Design matrix of the formulations of tablets based on Origanum vulgare L. dry extract and technological parameters of the compression mixtures and obtained tablets (bulk density, g/cm³; resistance to crushing, N; disintegration, min). Bulk density of the compression mixture was determined by pouring the mixture into a graduated cylinder. The bulk volume and weight of the mixture were also determined. The bulk density is the ratio of the total mass of the compression mixture to the bulk volume [14].
Tapped density is the ratio of the total mass of the compression mixture and its tapped volume. The volume was measured by tapping the compression mixture 500 times [14]. Tapped volume was noted if the volume did not show a difference between two tapping intervals. The compression mixture had not ability to flow that is why we could not determine flowability using the funnel method. To express flowability we have used Hausner ratio. It expressed as the tapped density of the compression mixture divided by the bulk density [14]. The resistance of tablet's for crushing is used to test the hardness of the tablets. The hardness of each batch of tablets was measured in Newton, where five tablets from each formula were tested through Tablet's hardness tester (Electrolab Company), and then the average value was documented [14]. The friability test was conducted by placing pre-weighed tablets in the Friability Tester (Electrolab Company); the latter was operated at 25 rpm for 4 min. The dust was removed Pharmacology, Toxicology and Pharmaceutical Science from the tablets surface and the tablets weight loss caused by fracture or abrasion was recorded as the percentage weight loss. Tablets should lose not more than 1 % of their weight to be acceptable [14]. The disintegration test for all formulations was carried out using Disintegration Testers (Electrolab Company). Six tablets were placed individually in each tube of the disintegration test apparatus and discs were placed. The water was maintained at the temperature of 37 ± 2 °C and the time taken for the tablet disintegration was noted [14]. Results According to the obtained results we have constructed scatter diagrams for each response in order to determinate the significant factors form these diagrams. The difference between the average values of the factor for the upper and lower levels determines the influence of the factor on the technological parameter of the compression mixture and obtained tablets. The difference between the average values of the parameter is shown through the median on scatter diagram. The value of the median indicates the significance of the factor. The influence of quantitative factors on the bulk density of the compression mixture (y 1 ) is depicted in Fig. 1. Fig. 1. Scatter diagram of bulk density results Based on the analysis of the scatter diagram of bulk density results we have defined the statistically significant effects of factors х 1 and х 4 . The decrease in the amount of silicon dioxide causes the increase in the value of the bulk density and vise versa, the bigger content of GalenIQ™720 is introduced in the compression mixtures, the larger the value of bulk density becomes. According direction of the medians, higher amount of magnesium carbonate basic (factor x 2 ) causes a reduction of the bulk density. Factors х 3 , х 5 , х 6 , x 7 and х 8 have a slight positive effect on the studied parameters. The dependence of tapped density on quantitative factors is shown in the Fig. 2. Factors x 1 , x 2 x 3 are statistical have been identified as statistically significant for studied parameter. The introduction of the silicon dioxide (factor x 1 ) and magnesium carbonate basic (factor x 2 ) at the lower level into the composition of the compression mixtures increase the value of the tapped density. According to the scatter diagram in the Fig. 3, only factor x 5 has no influence on the Hausner ratio. Fig. 3. 
Scatter diagram of Hausner ratio results The increase in the amount of silicon dioxide (х 1 ) and citric acid (х 8 ) cause the increase in the value of the Hauster ratio. This means that the flowability of the compression mixture becomes worse. At the upper level the factors x 2 , x 3 , x 4 , х 6 , and х 7 value of the Hausner ratio decreases and flowability gets better. The influence of the investigated factors on the friability of the tablets based on the Oregano vulgare L. dry extract is shown in Fig. 4. We have discovered the statistical significance of the factors х 1 , x 3 , х 5 , x 6 , and x 7 for studied parameter of the tablets. Introduction of Neusilin S1, F-melt ® C, sucralose, and berry flavor on the upper level in the tablets makes better the friability of the tablets. The decrease in the amount of silicon dioxide in the tablets enables to improve the studied parameter. The dependence of resistance for crushing obtained tablets on quantitative factors is shown in the Fig. 5. Scatter diagram of tablets resistance for crushing results Based on the analysis of the scatter diagram we have defined the statistically significant effects of factors х 1 , and х 3 . The better results of resistance for crushing tablets based on the Oregano vulgare L. dry extract were obtained at the lower levels of silicon dioxide and berry flavor. Increase in the amount of Neusilin S1, F-melt ® C, and sucralose in the tablets have improved the hardness of the tablets. The dependence of disintegration tablets on quantitative factors is shown in the Fig. 6. The influence of the quantitative factors on the disintegration tablets based on the Oregano vulgare L. dry extract allowed us to reveal the statistical significance of the factors х 3 and х 6 . The decrease in the amount of Neusilin S1 and sucralose in the tablets composition enables to reduce their disintegration time. Discussion We used excipients that are widely used in the development of the composition and technology of tablets [15][16][17][18], as well as new excipients. Neusilin US 2 was used at the development of technology of tablets based on the extracts of goat's rue herb, extracts of bilberry leaves and taurine by direct compression method. It improved the homogeneity of the mass, increased the resistance to crushing, reduced friability and slightly reduced the disintegration time of the tablets [15]. The influence of the amounts of Neusilin US 2 on the technological indicators of the quality of compression mixture and the tablets with round-leaved wintergreen extract was determined. According to results increase in the amount of Neusilin US 2 improves Carr index of compression mixture, resistance to crushing, and friability of the tablets [16]. Neusilin of two brands, US 2 and UFL-2, have been studied as moisture regulators at the development of composition and technology of the tablets based on the Malva sylvestris L. and Plantago lanceolata L. dry extracts [17]. Neusilin US 2 has provided the best results. Neusilin ® UFL2 has been studied to increase solubility of poorly soluble drugs in the composition of an amorphous drug composites preparation [19]. The comparative evaluation of GalenIQ 721, against known excipients such as Pharmatose M200 and Alfacel type 102 has been performed. The influence of these excipients on compatibility properties, disintegration time and flowability has been studied. GalenIQ 721 gave better results than Pharmatose M200 but it inferior of Alfacel 102. 
Scientists predicted that the GalenIQ 721 characteristics, could be more stable to changes in composition and process conditions than those of Alfacel 102 [21]. Usage of F-melt have helped in solving formulation problems at the development of orally disintegrating dosage forms and improved flowability, compressibility, palatability, dissolution, disintegration, and dust generation [22]. In our research the increase of Neusilin S1 and sucralose amount makes better almost all investigated parameters but increase disintegration time of tablets. Hence, on the next stage of our research, we have decided to investigate more detailed the impact of these excipients on the main technological parameters of tablets based on the Oregano vulgare L. dry extract. To define the optimal quantity of GalenIQ™720 and F-melt® C in the composition of the investigated tablets we have include them in the next stage of the experiment too. These excipients improved most of Pharmacology, Toxicology and Pharmaceutical Science the studied parameters, but we have made decision to define the influence of their amount on the tablets quality more details in the narrower range because these substances are novel and very little information we have about their usage in the pharmaceutical technology. The amount of berry flavor has been decided to stabilize at the upper level of 0.005 g in the one tablet. The increase of silicon dioxide and magnesium carbonate basic amount had negative influence on most investigated parameters; citric acid have been defined as statistically insignificant factor. Hence, on the next stage of our research, we have decided to not use these excipients at development composition of the tablets based on the Oregano vulgare L. dry extract. Therefore, excipients such as Neusilin S1, GalenIQ™720, F-melt ® C, and sucralose will be used to optimize the composition of the tablets based on the Oregano vulgare L. dry extract with anti-inflammatory and antimicrobial properties for use in the treatment of a sore throat. Study limitations. The study did not investigate the influence of excipientsʼ amount on the dissolution of the tablets and wetting time. Prospects for further research. Further research is aimed at optimization of the composition of excipients for the development tablets based on dry extract of Origanum Vulgare L. herb. Conclusion Tablets based on the Oregano vulgare L. dry extract were successfully obtained by direct compression method. 1. Based on the random balance method it is established the most significant quantitative factors influenced on the technological parameters of compression mixture and tablets based on the Oregano vulgare L. dry extract. 2. Taking into account the results of technological research we made decision to decrease amount of excipients in the composition of the tablets. 3. Considering that excipients such as Neusilin S1, GalenIQ™720, F-melt ® C, and sucralose improved most of the technological parameters of the compression mixture and tablets with dry extract of Origanum Vulgare L. herb, they were included to the next experimental stage to develop the composition and technology of the tablets based on the Oregano vulgare L. dry extract with anti-inflammatory and antimicrobial properties for use in the treatment of a sore throat.
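As a rough, purely illustrative sketch of the random balance screening applied in this study, the following Python snippet codes eight factors at two levels in a random design matrix and ranks them by the difference between the response medians at the upper and lower level of each factor, which is the scatter-diagram criterion described above; the run count, factor effects, and response values are hypothetical and do not reproduce the study data.

import numpy as np

rng = np.random.default_rng(seed=1)

n_runs, n_factors = 16, 8
factor_names = [f"x{i + 1}" for i in range(n_factors)]

# Random balance design: each factor is independently assigned +1 or -1 in every run.
design = rng.choice([-1, 1], size=(n_runs, n_factors))

# Hypothetical response, e.g. the Hausner ratio of each compression mixture.
response = 1.20 - 0.04 * design[:, 2] + 0.03 * design[:, 0] + rng.normal(0, 0.01, n_runs)

def median_effect(levels: np.ndarray, y: np.ndarray) -> float:
    """Difference of response medians between the upper (+1) and lower (-1) factor level."""
    return float(np.median(y[levels == 1]) - np.median(y[levels == -1]))

effects = {name: median_effect(design[:, j], response) for j, name in enumerate(factor_names)}
for name, eff in sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name}: median effect = {eff:+.3f}")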
Metro Train Stopping Scheme Decision Based on Multisource Data in Express-Local Train Mode

Introduction. The conflict between the substantial travel demand of urban residents and the limited road traffic resources is intensifying and is exacerbated by people's growing time consciousness. Congested road traffic conditions significantly undermine residents' travel experience. The subway has become the preferred mode of transportation for most residents due to its safety, efficiency, comfort, and speed. As a high-capacity means of transport that does not occupy surface transportation resources, the subway has also become an integral component of transportation infrastructure development in major cities and is widely employed to alleviate pressure on ground transportation systems. Simultaneously, China's rapid urbanization necessitates continuous expansion of city boundaries and prompts the transition from a "single center" to a "multicenter" model. These developments and changes inevitably impose new requirements on public transportation systems and continue to steadily raise long-haul passengers' travel demands. The characteristics of passenger travel demand and flow distribution on long-distance rail transit lines differ significantly from those observed on existing rail transit lines. Compared with traditional urban rail transit operations involving frequent stops at every station, implementing fast and slow train stopping schemes can better cater to the travel needs of long-haul passengers. To solve this problem, scholars at home and abroad have produced the following typical studies on train operation schemes. Cury et al. [1] were the first to apply train operation schemes to the field of urban rail transit. Before that, research on train operation schemes was limited to the field of railway transportation, and its content was the optimization of train numbers and departure intervals. Smrek [2] summarized the analytical techniques, operational models, and planning objectives applicable to the operation and organization of urban rail transit. Assis and Milani [3] proposed a method for calculating the optimal train schedule of subway lines based on a linear programming model, considering the time variation of passenger demand and line operating conditions. Wang et al. [4] considered various constraints in actual train operation, established a multiobjective mixed integer nonlinear programming model, and obtained the dynamic train schedule and train turnover plan. For the study of operation schemes in the single-interchange and cross-station parking mode, Niu et al. [5] proposed a quadratic integer programming model with linear constraints to meet dynamic OD demand based on the cross-station stopping mode and minimize passenger waiting time, and obtained the train schedule. Zhang et al. [6] analyzed time-varying passenger demand, proposed a flexible transit scheme that reduced the average travel time of passengers, and found the optimal scheme based on a genetic algorithm. Cao et al. [7] proposed a comprehensive evaluation model suitable for the express train mode, aiming at shortening train running time and reducing passenger waiting and travel time; a 0-1 integer programming model was established and solved with a Tabu search algorithm to obtain the optimal train running scheme. Lee et al.
[8] classifed passengers according to starting and ending points and transfer choices, set up a coordination model of fast and local train modes with the goal of the shortest passenger travel time, designed a genetic algorithm, and obtained the optimal combination scheme of the fast and local train.Zhang et al. [9] proposed a timetable optimization model aiming at minimizing the total waiting time of passengers and solved it by genetic algorithm.Tang and Xu [10] established a double-layer programming model for the train operation scheme of suburban rail transit lines and designed a hybrid algorithm combining G-SA and MSA.Niu et al. [11] analyzed the problem of matching between the time-varying passenger fow demand and passenger waiting time and between trains and running routes in terms of the formulation and algorithm application of the train operation plan with the single route and all-station stopping mode, established a model with the goal of minimizing the passenger waiting time, and adopted the genetic algorithm.Deng et al. [12] constructed the elastic demand function of passenger travel, established a multiobjective double-layer programming model, and designed a simulated annealing algorithm.Based on IC card data, Zheng and Jin [13] proposed a new subway operation plan formulation method and obtained a departure plan that was closer to the passenger fow demand.Zhao et al. [14] established an integer linear programming model based on time-varying section requirements and predetermined service levels and used the two-stage method to solve the fnal train running plan.Based on AFC data, Yang et al. [15] proposed comprehensive optimization to formulate route plans and schedules.A mixed integer nonlinear programming model was established and solved by the improved NSGA-II. In terms of the parallel operation of fast and local trains, Liu et al. [16] analyzed a line of rapid rail transit in Shanghai and discussed the reasonable crossing station location and siding setting.Sun et al. [17] established a double-layer programming model for fast and local train operation schemes.Te upper layer model took the minimization of the passenger travel time and train turnover time as the target, and the lower layer model was the passenger fow allocation model, and it was solved by the particle swarm optimization algorithm.Zheng et al. [18] established a 0-1 integer programming model with the goal of maximizing the overall travel time of passengers under the cross-station parking mode and solved it by the Tabu search algorithm. 
In summary, most studies primarily focus on designing subway operation schemes by designing operational routing, stopping schemes, optimization models, and model-solving algorithms.Most studies ignore the importance of input data, and the input data are often only single OD survey data or data obtained from the automated fare collection system (AFC) or prediction data derived from the aforementioned sources without considering the infuence of land use and other factors on passenger fow trends.As a result, the designed optimization scheme is only suitable for the passenger fow in the input period of the model but not for the actual passenger fow generated by the line.Part of the reason is that data from single source cannot fully refect the characteristics of the passenger fow.Tere is also a part of the factor that the passenger fow has a certain randomness and volatility, and the passenger fow data input to the model cannot capture the overall trend of passenger fows over a long period of time, and the smaller the amount of data, the greater the deviation.As urban centers continue to develop and grow in size, it has become increasingly evident that land use planning along subway stations signifcantly infuences the passenger fow distribution, resulting in tidal passenger fows becoming a common occurrence.To address this issue caused by the uneven distribution of residential and employment lands, this study focuses on a specifc subway line and proposes a train stopping scheme based on fast and local train modes.Multiple sources of data including AFC data, point of interest (POI) data, and real estate website data are collected for analysis purposes.Based on these datasets, an optimization model for the train stopping scheme is established with objectives of minimizing both the passenger travel time and system-generalized cost functions under various operational constraints.Te NAGA-II algorithm with the elite strategy is employed.Te proposed method is validated using Changchun Metro Line 1 as an illustrative example.Te main contributions of this paper are as follows: (1) Utilizing multisource data including the POI data, real estate website data, AFC data, and passenger survey questionnaire; a spatial weight function is introduced to obtain the land use index refecting the distribution characteristics of the passenger fow. Based on these data, the express-local train stopping plan can be designed that adapts to the overall trend of the passenger fow without requiring extensive analysis or prediction.(2) An optimization model for train stopping schemes is established by analyzing characteristics of the passenger fow distribution as well as passengers' choice behavior.Tis model determines the optimal proportion of fast and slow trains and maximizes limited resources while improving quality of service and travel efciency. Te remainder of this paper is organized as follows.Section 2 provides detailed descriptions of issues studied, Section 3 establishes an optimization model for the train stopping scheme and designs the NSGA-II algorithm to solve it, Section 4 presents empirical research results regarding optimal ratios between fast and slow trains followed by discussion, and fnally, Section 5 summarizes thesis fndings while proposing future research directions. Train Stopping Strategy. 
Te train stopping plan specifes the stopping mode and stopping station.General urban rail transit train stops are mainly divided into two categories: one is all-station stopping mode and the other is the expresslocal train cross-station operation scheme.Local trains stop at every station to cater to the short-haul passenger fow and enhance operational accessibility.Express trains only stop at stations with high intensity of passenger fow, signifcantly reducing the travel time for long-haul passengers while also alleviating congestion on local trains as quickly as possible. Classifcation of Passengers. From the point of passengers, they do not need to choose what kind of trains to take when running the all-station stopping mode.Passengers with diferent origins and destinations will choose the frst train they encounter.When the express and local train modes are operated, passengers need to make a decision on whether to board the express train or the local train, and diferent decisions result in diferent travel times for passengers with the same origins and destinations.So, according to the origin and destination, passengers are divided into fve categories: P1, P2, P3, P4, and P5.A is defned as the express station, and B is defned as the local station.Te routes of the fve categories of passengers are shown in Table 1. For P3 and P4 passengers, if their origins and destinations are close (<5 km) or there are no transfer opportunities available, most of them will choose to take the local direct train.However, P3 long-haul passengers will board the express train when it arrives and then transfer if necessary.Tey may also opt to directly board a local train when one is available.Nevertheless, passengers often have low desire or demand for transfers.Tus, those with diferent types of departure and destination stations tend to choose the local direct train more frequently.So, P3, P4, and P5 can be classifed into one category according to their choice behavior.As for P1, whether they take the express train or the local train, the on-board time is the same.However, for P2, diferent choice behaviors correspond to diferent on-board times, and the on-board time of choosing to take the express train will be signifcantly less than that of taking the local train.Terefore, P1 and P2 belong to two categories.With the previous one, there are three categories fnally, L1, L2, and L3.L1: Passengers whose departure and destination stations are both express stations, and both the express stations are adjacent.L2: Passengers whose departure and destination stations are both express stations, and both the express stations are not adjacent.L3: Local direct passengers. Analysis of Passenger Travel Time. 
Te travel time consists of the on-board time and the waiting time.Te travel time is analyzed separately according to the classifcation of passengers, and the ratio of express to local trains is set as 1: n; the arrival of passengers is evenly distributed, so the average waiting time is half of the departure interval, the duration of a cycle is T, the pairs of the express train is D f , and the pairs of the local train is D s .For L1 and L2 passengers, their waiting time t wf is expressed as follows: For L3 passengers who choose a local train, the waiting time t ws is Te on-board time includes three parts: the running time, the stopping time, and the start-stop additional time consumed during starting and braking.Te on-board time t in is given as follows: Journal of Advanced Transportation where t od is the pure running time between the origin and destination; t stop is the stopping time; t qt is the start-stop additional time; a s is 0-1 decision variable, if the train stops at s station, a s � 1, otherwise, a s � 0. L1 passengers on-board time is given as follows: L2 passenger have two types of routes: express direct and local direct.Express direct t L21 in is shown in formula ( 5) and local direct t L22 in is shown in formula ( 6): L3 passengers on-board time t L3 in is given as follows: Model Construction and Solution 3.1.Processing of Multisource Data.Te stopping scheme for the express-local train is designed based on the multisource data.Te data used in this paper include AFC data, POI data, real estate website data, and questionnaire data. Preprocessing of AFC Data. Te AFC system stores nine items of valid information on the passenger rail transit travel, including the date, ticket card number, station name, transaction type, credit card time, etc.By processing the relevant information, the spatial and temporal distribution characteristics of the passenger fow and the OD table of passengers joining the transfer can be obtained, shown in Table 2. Amap POI Data. Each POI contains four aspects of information: name, category, coordinates, and land use nature.Considering the scope impact of the TOD mode, it takes the subway station as the center and the bufer range within European distance of 800 m of the surrounding as the research scope.Te POI data are collected and sorted through Python to obtain the land use information near the station. 3.1.3.Real Estate Website Data.Te data of "Fangtianxia" and "Lianjia201d" real estate websites are crawled through Python to obtain the distance, number of households, and foor area information of residential land and ofce land and to estimate the number of employees or residents near the site.Match the real estate website data with the POI data, and input the information into the Excel if the information is successfully matched.Due to the limited information uploaded by the real estate website, there is a part of the property information missing.For POI with this situation, the foor area, number of buildings, and foor height of POI are recorded, and then, the number of households is estimated according to the height of each foor of 3 meters and each household of 100 square meters, and fnally, the data are entered into the Excel. For the estimation of the number of employees in ofce buildings, the building area can be used for calculation.Te used area is calculated according to 60% of the building area, and the per capita ofce area is calculated according to 10 square meters to estimate the number of employees.which are given in Table 3. 
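A minimal Python sketch of the occupancy estimates described above is given below. The rules of thumb (a floor height of 3 m, 100 m² per household, 60% of building area as usable area, and 10 m² of office area per employee) are taken from the text; the example buildings, the function names, and the interpretation that the recorded building height is divided by 3 m to obtain the number of floors are our assumptions.

def estimate_households(floor_area_m2: float, n_buildings: int, building_height_m: float) -> int:
    """Estimate households when listing data are missing: floors of 3 m each, 100 m^2 per household."""
    floors = building_height_m / 3.0  # assumed interpretation of the 3 m floor-height rule
    return round(n_buildings * floor_area_m2 * floors / 100.0)

def estimate_employees(building_area_m2: float) -> int:
    """Estimate employees in an office POI: 60% of the building area is usable, 10 m^2 per employee."""
    return round(0.6 * building_area_m2 / 10.0)

# Hypothetical examples: a 4-building residential POI and a 12,000 m^2 office building.
print(estimate_households(floor_area_m2=800, n_buildings=4, building_height_m=54))
print(estimate_employees(building_area_m2=12000))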
Passenger Travel Characteristics Survey Questionnaire Data.Trough a survey questionnaire, the basic travel characteristics data of subway passengers, such as the travel purpose, travel time, travel cost, and frequency of rail transit travel, are obtained. Te processing of multisource data involves data layer fusion based on the POI data and real estate information data, so as to obtain ofce and residence indices refecting land use characteristics.Cluster analysis is then conducted on stations using the station passenger collector-distributor volume abstracted from the AFC data and land use index to determine the stopping mode, which belongs to the feature layer fusion.Finally, the train operation scheme model is established by inputting the stopping scheme and OD matrix, obtaining feasible schemes.Before determining the fnal train stopping plan, the survey questionnaire data and the potential stopping plan are integrated at the decisionmaking level.Te data fusion process is shown in Figure 1. Multisource Data Fusion Based on Spatial Weight Function.Te spatial weight function ω in the geographical weighted regression model is introduced in the analysis of land use around the station, and the data from the real estate website are integrated.Te principle is that the closer the things are, the closer they are to each other (Tobler's frst law of geography), and the closer the distance, the greater the where p ijlive is the number of households in the residential interest point ij, d ij is the distance between i and j, and h is the window width Te ofce index ω iwork of site i is given as follows: where p ijwork is the number of employees in the ofce interest point ij. Te standardized residence index, ofce index, and passenger fow data are given in Table 4. Stations 121, 128, and 135 are the terminal stations and transfer stations, which must be the stopping stations.To eliminate the interference of abnormal data, the data of stations 121, 128, and 135 are cleared. Model Establishment. Te research focuses on a bidirectional single rail transit line without siding tracks during peak hours, and turn-back of trains is not considered.A multiobjective optimization model is established to minimize the total travel time cost of passengers and the operating cost of enterprises.Considering a series of driving conditions, the optimal pairs of express trains and local trains, and the stopping sequence of express trains are obtained.Te parameters and variables involved in the model are given in Tables 5 and 6. Te assumptions of the model are as follows: (1) All the passengers between express stations take express trains when available; otherwise, they will take local trains. Te passenger travel time includes the passenger waiting time and on-board time According to the passenger classifcation method, the number of diferent types of passengers is determined as follows. Te number of L1 passengers Te number of L2 passengers Te number of L3 passengers Waiting Time. L1 passengers waiting time L2 passengers waiting time L3 passengers waiting time Total passenger waiting time T w � q 1 T w1 + q 2 T w2 + q 3 T w3 . (18) On-Board Time. L1 passengers L2 passengers L3 passengers 6 Journal of Advanced Transportation Passengers' total on-board time Objective 2. 
Minimum train operating costs Te cost changes brought about by this model only involve the additional cost of stops, so the second objective is expressed as follows: Te driving condition constraints are as follows: Formula (26) represents the constraint on the stopping sequence of express trains; it stipulates that express trains must stop at the start and end stations and across one stop at the whole line at least.Formulas (27), (28), and (29) represent the constraint on the range of departure frequency; the departure frequency of the express train is greater than that of the local train.Formula (30) is the constraint of the section load rate; formula (31) is the constraint of the number of stops of the express train, and formulas (32), (33), and (34) are the constraints of the departure interval. Model Solving. Te multiple objectives of the optimization model are contradictory.Generally, one goal enhancement is accompanied by the performance degradation of another one, so no solution can make each goal reach the optimal solution, but a set of multiple optimal solutions, that is, the Pareto optimal solution set.In essence, the traditional solving methods convert the multiple objective functions by linear weighting into a single objective problem.Te defect of this method is that the dimensions between the optimization objectives are difcult to be unifed, and the value of the weight of each objective function remains to be discussed.Terefore, from the perspective that there are multiple satisfactory solutions for the subobjective solution of the multiobjective optimization model, it solves the Pareto solution set of the stopping scheme by using the binary coded express stopping scheme and the nondominated sorting NAGA-II algorithm with the elite strategy and analyzes each solution from the perspective of the potential stopping scheme and beneft.Finally, the optimal solution of the train stopping scheme and running pairs is obtained.Te algorithm steps are as follows: Step 1: Initialize the population and set the evolutionary algebra Gen � 1. Step 2: Determine whether the frst generation of subpopulation is generated; if it is generated, the evolutionary algebra Gen � 2; otherwise, the initial population is generated by nondominated ordering and selection, crossover, and mutation to generate the frst generation of subpopulation, and the evolutionary algebra Gen � 2. Step 3: Combine the parent population with the child population to form a new population. Step 4: Determine whether a new parent population has been generated; if not, calculate the objective function of the individuals in the new population, and perform operations such as fast nondominated sorting, calculation of crowding degree, and elite strategy to generate a new parent population.Otherwise, proceed to step 5. Step 5: Select, cross, and mutate the parent population to generate the ofspring population. Step 6: Determine whether Gen is equal to the largest evolutionary algebra; if not, then evolutionary algebra Gen � Gen + 1 and return to step 3. Otherwise, the run ends. Time Distribution of Passenger Flow. Te maximum section passenger fow distribution of Changchun Rail Transit Line 1 is shown in Figure 3. As can be seen from Figure 3, the maximum section passenger fow of this line presents a bimodal distribution, showing an obvious morning and evening peak at 7:00-9:00 and 17:00-19:00. Section Distribution of Passenger Flow. 
4.1. Background. The total length of Changchun Metro Line 1 is 18.1 km, with 15 stations numbered 121 to 135. Stations 121 and 128 are transfer stations: Station 121 can transfer to Line 8 and Station 128 to Line 2. Station 132 can transfer to Line 3, with an external transportation hub nearby, the Changchun Expressway passenger station. Station 125 can transfer to Line 3, and Station 124 can transfer to Line 4 (transfer outside the station); these two stations are very close to each other and are connected to a large external transportation hub, Changchun Station. The geographical location of the route is shown in Figure 2.

Time Distribution of Passenger Flow. The maximum section passenger flow distribution of Changchun Rail Transit Line 1 is shown in Figure 3. As can be seen from Figure 3, the maximum section passenger flow of this line presents a bimodal distribution, with obvious morning and evening peaks at 7:00-9:00 and 17:00-19:00.

Section Distribution of Passenger Flow. The section distribution of passenger flow for Changchun Rail Transit Line 1 in the morning peak is shown in Figure 4.
(1) Direction unbalance coefficient. It can be seen from Figure 4 that in the morning peak hours, the section passenger flow in both directions is "convex" shaped: the section flow at the two ends of the line is small, and the section flow in the middle of the line is large. Overall, the passenger flow in the upward direction is slightly larger than that in the downward direction. The passenger flow direction unbalance coefficient of this line is 1.07; the closer it is to 1, the more balanced the distribution of upward and downward passenger flow, so the passenger flow of this line is fairly balanced between the two directions.
(2) Section unbalance coefficient. The distribution of cross-section passenger flow along the line can be described by the section unbalance coefficient, as shown in Figure 5. The section unbalance coefficient exceeds 1.5 in both the upward and downward directions, so the passenger flow fluctuates markedly in space.
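Both unbalance indicators can be computed directly from the section flows. The exact formulas are not reproduced in this excerpt, so the sketch below assumes the usual definitions - the direction coefficient as the larger directional flow over the two-direction average, and the section coefficient as the maximum section flow over the mean section flow - and uses invented flow values for illustration.

```python
import numpy as np

# Hypothetical maximum-section flows (passengers/hour) for the two directions,
# and hypothetical section flows along one direction; all values are illustrative.
up_max, down_max = 10700.0, 9300.0
section_flows_up = np.array([2100, 4800, 7900, 10700, 9600, 6200, 3100], dtype=float)

# Assumed standard definitions (the paper's own formulas are not shown here):
direction_unbalance = max(up_max, down_max) / ((up_max + down_max) / 2)
section_unbalance_up = section_flows_up.max() / section_flows_up.mean()

print(round(direction_unbalance, 2))   # close to 1 means the two directions are balanced
print(round(section_unbalance_up, 2))  # above 1.5 indicates strong spatial fluctuation
```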
Distribution of Passenger Collector-Distributor Volume.
(1) Passenger collector-distributor volume of each station: the passenger flow of each station in the morning peak is shown in Figure 6. As can be seen from Figure 6, at Stations 121, 122, 123, 124, and 134 the number of passengers getting on is much higher than the number getting off, while at Stations 126, 127, and 129 the number getting off is much higher than the number getting on; the difference between boarding and alighting passengers at Station 127 is particularly pronounced. The numbers of passengers getting on and off at Station 128 are much higher than at the other stations. Combined with the analysis of land use near the metro line, it can be preliminarily judged that Stations 127 and 128 are located in the city center; Station 127 is the destination of most commuter passengers, and Station 128 is also a transfer station, so both commuters and transfer passengers get off there. Stations 121, 122, 123, and 134 are located in the suburbs, where the nearby areas are mostly residential land, so the number of passengers getting on is much higher than the number getting off.
(2) Passenger collector-distributor volume in different directions: to further analyze the distribution of boarding and alighting passengers, the numbers of passengers in the upward and downward directions are counted separately, as shown in Figure 7. As can be seen from Figure 7, in the upward direction a large number of passengers get on at Stations 121, 122, 123, and 124, while few get off; the situation is just the opposite at Stations 127, 132, and 133. Many passengers get on and off at Station 128, and there are almost no passengers getting on at the stations south of Station 128. In the downward direction, a large number of passengers get on at Stations 132, 133, and 134, and a large number of passengers get off at Station 127. The most significant difference between the two directions is that in the upward direction there are still many passengers getting off at the stations south of Station 128, whereas in the downward direction very few passengers get off at the stations north of Station 128 (except Station 127). This is because the area around the southern end of the metro line contains both residential and office land, while the area around the northern end of the line is mostly residential land with few office buildings.

Distance Distribution of Passenger Flow. The average station spacing of Changchun Rail Transit Line 1 is about 1.2 km; the average passenger travel distance is 5.64 km, 5.88 km in the upward direction and 5.56 km in the downward direction. The proportion of long-haul passengers is shown in Table 7, and all the above passenger flow indicators are summarized in Table 8. Clearly, the passenger flow at the stations of Changchun Rail Transit Line 1 is unbalanced and the share of long-haul passengers is high, so the line is suitable for running express and local trains.
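Given an OD matrix, the distance indicators reported above can be reproduced with a few lines of code. The sketch below is illustrative only: the OD counts are synthetic, the station spacing is taken as a uniform 1.2 km, and the long-haul threshold (trips of six stations or more) is an assumption, since the paper's own cut-off is not stated in this excerpt.

```python
import numpy as np

n = 15                                    # 15 stations, numbered 121 to 135 in the paper
rng = np.random.default_rng(1)
od = rng.integers(0, 300, size=(n, n)).astype(float)   # synthetic OD counts
np.fill_diagonal(od, 0.0)

spacing_km = 1.2                          # uniform spacing assumed for illustration
dist_km = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) * spacing_km

total_passengers = od.sum()
avg_distance = (od * dist_km).sum() / total_passengers
long_haul_share = od[dist_km >= 6 * spacing_km].sum() / total_passengers   # assumed threshold

print(f"average travel distance: {avg_distance:.2f} km")
print(f"share of long-haul passengers: {long_haul_share:.1%}")
```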
Optimization Results and Analysis. SPSS was used to perform cluster analysis on the standardized passenger flow data, residence index, and office index, and the cluster dendrogram of the stations was obtained, as shown in Figure 8. According to the dendrogram, the stations can be divided into three categories and five subcategories, as given in Table 9.

The first type (Stations 122 and 123): these two stations have the highest residence index and the lowest office index among all stations, indicating that the land around them is mostly residential and that the passenger flow is relatively large. It can be inferred that the passenger flow at these stations during the morning peak consists mostly of residents living nearby, so stopping is necessary at these two stations.

The second type: stations of the second type have low passenger flow, and their residence and office indices are at a middle level. Class 2.1 contains Stations 124, 132, and 133; the passenger flow of these stations is relatively large and the office index is greater than the residence index, so some passengers travel from origin stations such as Stations 122 and 123 to Class 2.1 stations as destinations, and Stations 124 and 132 are also transfer stations. Therefore, these stations must be stopping stations. The common feature of Class 2.2 stations is that their passenger flow is small, so the express train does not need to stop. The residence and office indices around Class 2.2.1 stations (Stations 125 and 131) are high, but the passenger flow is small because these areas are still under development, so in the current train operation plan such stations are designated as overtaking (passing) stations. The passenger flow of Class 2.2.2 stations is small and their residence and office indices are not high, so the express train does not have to stop there either. It is worth mentioning that the distance between Stations 125 and 124 is only 500 m; consecutive stops over such a short distance greatly affect the efficiency of metro operation, and Line 1 and Line 3 cannot yet offer an in-station transfer at Station 125, which results in a small number of passengers at that station. Until an in-station transfer between Line 1 and Line 3 is realized, it therefore makes little sense for express trains to stop there.

The third type (Station 127): Station 127 forms a separate category because its passenger flow is the largest and its office index is the highest among all stations, while its residence index is low, indicating that most of the land around the station is office land. There is a large commuter flow as well as a certain amount of other passenger flow, so this station must be a stopping station.

Based on the analysis of the passenger flow and the clustering results, the potential stop stations and the preliminary stopping plan are determined as follows: express trains are run in the upward direction and local trains in the downward direction. To sum up, the stopping scheme of the express and local trains is determined as shown in Figure 9.
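The station grouping was produced in SPSS, but the same kind of analysis can be sketched with standard scientific-Python tools. In the sketch below the three features, the Ward linkage method, and the station data are all assumptions made for illustration; only the overall workflow (standardize, build the dendrogram, cut it into three categories) follows the description above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative stand-in for Table 4: passenger collector-distributor volume,
# residence index and office index per station (synthetic numbers, not the paper's data).
stations = [122, 123, 124, 125, 126, 127, 129, 130, 131, 132, 133, 134]
rng = np.random.default_rng(7)
X = rng.normal(size=(len(stations), 3))

# Standardize the three features, then cluster; Ward linkage is assumed here,
# since the paper only reports that a dendrogram ("cluster lineage diagram") was produced.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
Z = linkage(Xz, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into three main categories

for s, lab in sorted(zip(stations, labels), key=lambda p: p[1]):
    print(f"station {s}: category {lab}")
```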
In the NSGA-II algorithm, the population size is 100, the crossover probability is 0.8, the mutation probability is 0.1, and the maximum number of iterations is 300. The model parameter values are given in Table 10. A schematic diagram of the Pareto solution set is shown in Figure 10. Four representative solutions are selected from the Pareto solution set and compared from the point of view of total travel time and operating cost, as shown in Table 11; a value of 0 represents running local trains only, and the values in parentheses are the objective function values obtained under the single data source (the OD matrix alone). The timetable is given in Table 12, and the stopping scheme and the train diagram are shown in Figures 11 and 12. The optimization results are analyzed as follows:

(1) Pareto solution 1. This scheme has the fewest stops, the lowest operating cost for the enterprise, and the highest total passenger travel time, indicating that among the four schemes it offers the lowest level of passenger service. If decision-makers give more importance to saving the operating cost of the enterprise, this scheme would be chosen.

(2) Pareto solution 2. In this scheme, the total travel time of passengers is the lowest, indicating that this scheme is the most conducive to improving the service quality for passengers. However, because of the large number of express-train stops and the resulting high operating cost, this scheme would be chosen when decision-makers give more importance to the travel experience of passengers.

(3) Pareto solution 3. Compared with solutions 1 and 2, this scheme has the highest degree of overlap between its stopping sequence and the potential stopping scheme; its total passenger travel time is greater than that of solution 2 and less than that of solution 1, and its operating cost also lies between the two. Therefore, when decision-makers weigh the passenger service level, the enterprise cost, and the passenger travel experience together, this scheme is more appropriate.

(4) Pareto solution 4. With a stopping scheme similar to that of solution 3, this scheme is more appropriate when decision-makers value both the passenger service level and the enterprise cost, but place more weight on the enterprise cost.

Conclusion. In this paper, based on multisource rail transit data, we propose an optimized design of the train stopping scheme and solve it with the NSGA-II algorithm, which improves the accuracy and real-time performance of the stopping scheme relative to the existing research in this area. The model results are analyzed and compared from two aspects: passenger travel time and enterprise cost. A Pareto solution set is obtained for the best numbers of express and local trains and the corresponding stopping schemes. (1) Potential stop stations are determined based on the nature of land use, and the spatial weight function is introduced to obtain a land use index reflecting the characteristics of passenger flow. The stop plan could be adjusted in the future according to land use planning and the land development mode along the metro line. (2) The proposed model and algorithm provide effective ideas for the preparation of rail transit train operation plans and can guide passengers to travel reasonably; they have practical significance for rail transit research.

This paper studies the stopping scheme of express and local trains during peak hours under the assumption that passenger arrivals are uniformly distributed; the peak coefficient of the passenger flow can be considered in further research. Besides, the integration of connecting bus lines and their passenger flow data, as well as the update frequency of the data sources, can also be considered.

Table 2: Passenger flow OD matrix in peak hours.
Table 3: POI data and real estate website data. After fusion, the residence index and office index of each station are output. The Gaussian kernel function is used to calculate the indices of the various types of land use around each station, the residence index ω_i^live of station i being computed from the residential points of interest as described above.
Table 6: Notation for decision variables.
Table 7: The proportion of long-haul passengers.
Table 11: Pareto solution set comparison table.
Figure 6: Morning peak passenger flow at each station.
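As a rough illustration of the index construction summarized in the Table 3 caption, the sketch below weights POI counts around a station with a Gaussian kernel of distance. The kernel form exp(-(d/h)^2) and all numeric values are assumptions; the paper's exact expressions for ω_i^live and ω_i^work are not reproduced in this excerpt.

```python
import numpy as np

def land_use_index(counts, dists, h):
    """Gaussian-kernel weighted sum of POI counts around one station.

    counts: households (residence) or employees (office) at each point of interest j
    dists:  distance d_ij from station i to point of interest j (same units as h)
    h:      kernel bandwidth ("window width")
    The exp(-(d/h)^2) kernel is an assumption made for this illustration.
    """
    counts = np.asarray(counts, dtype=float)
    dists = np.asarray(dists, dtype=float)
    return float(np.sum(counts * np.exp(-(dists / h) ** 2)))

# Hypothetical POIs around one station: counts and distances in metres.
residence_index = land_use_index([320, 150, 80], [200, 450, 900], h=500)
office_index = land_use_index([40, 600, 220], [150, 700, 1200], h=500)
print(round(residence_index, 1), round(office_index, 1))
```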
Toulouse, the glorious Athens of the South? Computational analysis of the Salons des Artistes Méridionaux (1907-1939)

Created in 1905, the Société des Artistes Méridionaux organized annual exhibitions in Toulouse, in order to promote regional arts and (re)create a "Latin" and modern style. Based on a corpus of 11,486 artworks exhibited at the SAM between 1907 and 1939, this paper seeks to measure the contours and particularities of the Salons des Artistes Méridionaux, using a computational approach.

From November 23 to December 11, 2021, the 91st Salon des Artistes Méridionaux will be held in Toulouse. This annual exhibition is remarkable for its longevity. 1 Since the beginning of the 20th century, it has brought together artists from different backgrounds - painters, sculptors, draughtsmen, decorators, cabinetmakers, etc. The founders of the Société des Artistes Méridionaux, created in Toulouse in 1905, had the ambition to (re)create and promote a "southern style", which would be specific to the South West of France. This paper seeks to measure the contours and particularities of the Salons des Artistes Méridionaux, using a computational approach. It relies on a database of all the exhibition catalogs of the Salons des Artistes Méridionaux (which will be abbreviated to SAM), from the date of the first accessible catalog - 1907 - to 1939. 2 A total of 11,486 numbers exhibited at the SAM was transcribed (a single number may include several artworks), with all the information regarding the exhibitor - first name, name, birth place, awards, status in the SAM, names of professors, address, other biographical information - and all the information about the works - number in the catalog, title, medium, date of creation, sale price / name of the owner, reproduction in the catalog. This database thus allows us to understand the characteristics of the SAM between 1907 and 1939, while the current bibliography only considers the artistic life in Toulouse in the broadest sense [2] or the SAM after 1939 [11, 10, 14, 13].

∗ Author of the dataset, teacher at the PSL Master Course (https://data-psl.github.io/intensive-week-dhai/) and responsible for this project, which gathered together the students/co-authors.
1 https://www.artistes-meridionaux.fr/historique.php, accessed March 29, 2021.
2 Most of the catalogues are available online, on the website of the digital and patrimonial library of Toulouse: https://rosalis.bibliotheque.toulouse.fr/, accessed March 29, 2021.

After explaining the objectives of the Artistes Méridionaux, at the time of the creation of their Society and their Salon (section 1), we will see that their initiative corresponded to an economic, artistic and symbolic revenge against the Parisian capital (section 2). Nevertheless, this "southern" affirmation was far from being inclusive: the SAMs were riddled with gender inequalities as well as inequalities between members (section 3).

1 "For the glory of our region"

1.1 An exhibition at the core of Toulouse

The Société des Artistes Méridionaux was created in 1905, by artists and craftsmen living in Toulouse and coming from different backgrounds: painting (Augustin-Antonin Balon, Georges-François Castex, Paul Costes, Jean Diffre, G.-F.
Lineau, André-Pierre Lupiac), sculpture (Auguste Guénot, Louis Oury, Georges Vivent), architecture (Raoul Castan), glassware (Louis-Victor Gesta), but also and above all decorative arts and furniture making (Edmond and Maurice Alet, Paul Balard, René Deflandre, Jean-Germain and Jules Rigal). After a first exhibition in 1906, in a disused chapel located rue de Languedoc, in Toulouse, they chose the Capitole in 1907 in order to display their works (Fig 1) and they started publishing exhibition catalogues.

Figure 1: Postcard of the Salon des Artistes Méridionaux in 1908. Coll. L. Saint-Raymond

The choice of this venue is significant. Indeed, the Capitole was - and is still - the emblematic monument of Toulouse. Formerly the home of the "Capitouls" (the consuls of Toulouse), it underwent several phases of construction: its current façade was built in the 18th century and, between the years 1880 and 1898, the municipality launched the creation of four grandiose rooms, including the Galerie des Illustres.
In 1907, the Capitole was then a jewel for the arts: in addition to hosting the city hall of Toulouse, the Capitole included a museum with its four reception halls, but also a theater in which the municipal opera performed and which made the reputation of Toulouse as the capital of Bel Canto [4]. By choosing the Capitole, the Artistes Méridionaux sought to position themselves at the heart of the arts in Toulouse. Tourism and regionalism The Société des artistes méridionaux was created at a pivotal moment in the history of Toulouse. At the beginning of the 20th century, this city sought to assert its identity and position itself as the capital of the South-West of France. A strong regionalist movement was born, based on the promotion of the local dialect -Occitan -and on tourism [7]. The association Les Toulousains de Toulouse, created in 1904, was the spearhead of this local patriotism, through its newsletter and forum of expression, L'Auta, launched in April 1906 and still active today. This bulletin regularly published the announcements and reports of the SAM. Furthermore, the Les Toulousains de Toulouse founded the Musée du Vieux-Toulouse in 1907, in the Hôtel Dumay, in order to promote the local heritage. 3 Beyond the actions of the Les Toulousains de Toulouse, the municipality of Toulouse sought to promote local tourism. In 1906, Auguste Guénot, professor of geography and president of the tourist office, coined the term "Ville Rose" to designate Toulouse: this catchy and commercial expression, officially adopted by the city, was used as a slogan on postcards and in all tourist businesses [3]. At the 1907 SAM, Maurice Alet exhibited a landscape of Toulouse and entitled it "Un coin de la Ville Rose" (A Corner of the Ville Rose). Another sign of the proximity between the SAM and the regionalist promotion was the medal of the Pyrenees Cup, created by Louis Oury and exhibited on the same occasion. The Pyrenees Cup was a great car race, created in 1905: it started from Toulouse and linked, in a loop, all the cities of the Pyrenees, from Perpignan to Biarritz, passing by Foix, Luchon or Pau [7]. The Artistes Méridionaux participated indirectly in the promotion of local tourism, by putting in the center of their works regional places: between 1907 and 1939, the exhibited landscapes privileged the Southwest (Fig 2). Of the 2,984 works exhibited at the SAM, whose title explained the place represented, 59% concerned places of the Toulouse region (that is, to use an anachronism, places of the Midi-Pyrénées region which existed administratively between the years 1980 and 2016) and 18%, places of the South-West in the broad sense, that is to say of the Aquitaine region and the Languedoc-Roussillon region. In total, nearly 8 out of 10 identified landscapes were of places in the Southwest. In this corpus, the four most represented places -often reproduced in the illustrated catalogs (Fig 3) -were, in decreasing order, Toulouse (with 468 works exhibited at the SAMs between 1907 and 1939), Saint-Bertrand-de-Comminges (73), Albi (72) and Collioure (60). This geographical anchorage is highlighted in the prefaces of the first catalogs, in which the Artistes Méridionaux claim an "effort towards the beautiful, impregnated with the scents of the soil, [by exhibiting] works which voluntarily reflect the familiar landscapes." 
4 By promoting local landscapes, they sought to contribute to "the glory of our region" 5 , but, more than that, they sought to revitalize their territory by (re)creating an artistic style that was properly southern. Promoting "Latin" and modern arts In the reasoning of the first Artistes Méridionaux, the territory and the local traditions were the basis of a properly modern and "Southern" art. Thus, the SAM exhibitions would contribute to the affirmation of this same artistic style: We will affirm the will to be of our soil, of our country, to be southerners whatever the cost, to remain in the traditions, the tastes, the customs of our environment, not to look 4 SAM Exhibition Catalogue, 1909, p.5: "effort vers le beau, imprégné des senteurs du terroir, oeuvres qui volontairement reflètent les paysages familiers" 5 SAM Exhibition Catalogue, 1912, p.14: "la gloire de notre région" elsewhere, to create, in these conditions, a style. 6 By facilitating the "awakening to modern art of the southern soul", 7 the SAM exhibitions would thus ensure the artistic reputation of Toulouse, and therefore its wealth: help us to create, to improve, to perfect this Languedoc style that our best hope is to see inscribed on our homes, our furniture, our interiors ... Our beautiful country will gain a little glory, a lot of material wealth, the implementation of energies that are lost, of raw materials helping elsewhere to accomplish what we should do here. 8 This promotion of a clean style carries, however, some racist undertones. Also, in the preface of the 1909 catalog, one can read: The Southern craftsmen and artists are aware that they arrive in their time to fulfill the destinies assigned to our race by translating [...] what the old Latin soul carries in it of taste for the beautiful rhythmic orders and their adaptation to our life. 9 From this problematic point of view, promoting the "Latin" artworks "grown in southern soil" would prove "to everyone that Toulouse has not degenerated". 10 The definition of what would be a properly southern style was as vague as it was contradictory. The SAM catalogs spoke of a "Latin soul", or of a "Languedoc style" or of a Greek heritage -"Toulouse is the city of arts, the one that proudly claims the glory of the title and the function of Athens of the South". 11 Between Greek and Roman heritage, "Latin", "southern" or "Languedoc" claims, there is enough to be geographically lost. The colophon of the first catalogs also testifies to these artistic contradictions (Fig 4) In the center, this colophon draws the silhouette of the Saint-Sernin basilica, jewel of the Southern French Gothic or gothique méridional, also called gothique toulousain and gothique languedocien. Behind the cathedral, the sun -typical of the Toulouse heat! -rises over the city and the intertwined letters S, A and M. Stylized violets surround this landscape -the emblematic flowers of Toulouse. The style of this colophon is openly inspired by the sinuous lines and typography of Art Nouveau, whose centers in France were Paris and Nancy. In order to be "modern", the "southerner" style paradoxically drew its inspiration from the North of France. The Artistes Méridionaux assumed this comparison, wanting to compete with Nancy on the French artistic scene : "As there is the Nancy school, there will be the Languedoc school". 12 The profession of faith of the Artistes Méridionaux thus resembled a declaration of independence, a revanchist emancipation in the art world. 
Nostalgia of an artistic golden age This spirit of revenge was rooted in an idealized past, in this case the 17th and 18th centuries during which the city of Toulouse had its own drawing school, which became the Académie royale de peinture, sculpture et architecture de Toulouse between 1750 and 1793 ( [9]). These artistic structures and this environment had no equivalent in France -outside of Paris -and gave a true identity to the école de Toulouse during this period, which can be described as the golden age of Toulouse painting ( [16]). Between 1751 and its disappearance in 1793, the Académie royale de peinture, sculpture et architecture de Toulouse organized painting and sculpture exhibitions, called Salons (like the Parisian ones), in order to promote local painters and sculptors. The Salons took place in the Galerie des peintures of the Capitole: not surprisingly, the Artistes méridionaux chose this same prestigious venue to organize their own Salons and to recover the artistic greatness of the city. In the 19th century, the Académie royale de peinture, sculpture et architecture de Toulouse found an equivalent in the Ecole nationale et spéciale des Beaux-Arts, founded in 1827 in Toulouse, and then from 1883, in the Ecole nationale et spéciale des Beaux-Arts, when the latter came under the supervision of the ministère de l'Instruction publique et des Beaux-Arts. Nevertheless, according to the Artistes méridionaux, this School of Fine Arts did not have the weight to impose a true local artistic identity: The public is satisfied with local production, and goes to Paris for what is art furniture. The population seems to have a pride in its School of Fine Arts, imposed by a local tradition rather than inspired by the recognition of services rendered; the proof is everywhere. 13 Further on, these same artists deplored the flight of their comrades to Paris and the disaffection of the demand for the local production: Artists live mainly in Paris, and it cannot be denied that those who, for love of their native soil or any other reason, cannot attempt an exodus, have a hard life here. One does not want to know them, one does not trust their work, in their works, one addresses oneself outside, believing that what one asks of them cannot be carried out on the spot. 14 The exhibitions of the SAM therefore sought to fight against this Parisian domination, by reintegrating the artists at the local level, by valuing the creations and by giving them a strong artistic identity. Bitterness over Parisian domination The bitterness over Parisian domination seemed to implicitly target the Union artistique de Toulouse (which will be shortened to UAT). The first UAT exhibitions took place in Toulouse between 1861 and 1864, then between 1885 and 1909. Founded on the same principle as the "friends of the arts" exhibitions ( [12], [6], [5]), the UAT relied on an association of collectors, who jointly purchased the exhibited works -these works being redistributed among the members of the association by drawing lots. According to its statutes, the purpose of the UAT Society was "to encourage, in Toulouse, the progress of the Arts and to propagate the taste for them through public Exhibitions of painting, sculpture, drawing and engraving". However, this exhibition attached great importance to the Parisian art world. Indeed, the exhibitors had to mention their performances and their awards obtained at the official Parisian Salon. 
For instance, at the 1907 UAT, Henri Biva mentioned that he had received an "honorable mention" at the 1892 Salon, a 3rd class medal at the 1895 Salon, a 2nd class medal at the 1896 Salon, that he had received a bronze medal at the 1900 Universal Exhibition in Paris and that he was now "hors concours", i.e. exempted from passing before the jury of the Parisian Salon (Fig 5). During three years, the two exhibitions overlapped, in March-April for the UAT, and in May-June for the SAM. It seems that the SAM were held in response to, or even against, the UATs: to test this hypothesis, the last three catalogs of the UAT (1907, 1908 and 1909) were thus transcribed and added to the dataset. 15

Taking the opposite view of the UAT

Between 1907 and 1909, the SAMs took the opposite view of the UATs and asserted themselves as a much more local and southern group. The addresses of residence that the artists gave in the catalogs show two very different profiles. During these three years, half of the 655 exhibitors of the UAT came from the Paris region, while the 79 exhibitors of the SAM lived in Toulouse or its surroundings (Fig 6 and Fig 7). The places represented in the exhibited landscapes were equally contrasted: of the 227 works whose title mentioned a precise place at the UAT between 1907 and 1909, only a quarter concerned the South-West - Paris, the Côte d'Azur, the Atlantic coast, Brittany and Normandy being widely preferred. On the contrary, the 127 geolocatable works at the SAM clearly favored the South-West (Fig 8 and Fig 9). From the point of view of the exhibitors and the exhibited works, the SAM thus affirmed a much more southern identity, as opposed to the UAT, which was much more turned towards Paris.

Beyond the geographical and symbolic revenge, the SAMs marked the end of an economic domination by the Parisians, who had been stepping on the toes of local artists at the UAT. On the one hand, the SAM gave more visibility to the artists: the exhibitors were fewer in number - about 30 annual exhibitors at the SAM, compared to about 200 for the UAT - and they showed more works at each exhibition - 2 per exhibitor, on average, for the UAT, compared to 7 to 11, on average, for the SAM. Table 1, in the appendix, details the econometric results of hedonic regressions conducted on the selling price of artworks exhibited between 1907 and 1909 at the SAM or the UAT, and, in the right-hand column, for those exhibited at the SAM only. Significantly, artists offered much higher selling prices - by 210 francs, on average - when they came from the Paris region. On the other hand, when restricting ourselves to the SAM, this margin ceased to be significant: exhibitors living in Haute-Garonne - i.e. Toulouse and its surroundings - gave significantly higher prices than their comrades, by 11 francs on average. More anecdotally, the selling price of the catalog was two to four times lower for the SAM: it cost 25 centimes of a franc, even illustrated, whereas the catalogs - not illustrated - of the UATs cost 50 centimes without the prices of the works, and 1 franc with the prices. The SAM thus seemed more open to a local audience, and in a sense more democratic. However, this claimed openness masked internal inequalities and fine-grained processes of exclusion, which the computational analysis of the data helps to shed light on.

A man's world?

In spite of the opening towards southern artists, without distinction, the SAM were at the beginning almost exclusively male (Fig 10).
In 1907, of the 38 exhibitors at the SAM, there was only one woman; the following year, no woman exhibited, and in 1909, one woman exhibited and 28 men. On the contrary, the UAT was much more feminized, with about a quarter of the exhibitors being women. Year Only after the war, from 1921-1922, did the number of women increase, both in absolute numbers and in percentage terms, to just under 30% of exhibitors (Fig 11). Despite this catch-up, women experienced a higher turnover than men: 40% of women who exhibited at the ASM participated only once, compared to 35% of men. Similarly, 55% of women exhibited once or twice, compared to 50% of male exhibitors. Similarly, women were in the minority among the "regulars": of the 20 exhibitors who participated in 20 or more Salons, there were only three women, Jeanne Boyé, Pauline Rivière and Hélène Rivière (Fig 12). Women thus stayed in SAM exhibits for less time than men. Perhaps they felt uncomfortable in this predominantly male environment. Perhaps they did not find it economically beneficial. Indeed, Figure 12: "Regulars" of the SAM who participated in more than 20 exhibitions between 1907 and 1939 between 1907 and 1939, all other things being equal, the selling price of a work created by a woman was significantly lower than that of a work created by a man, even after 1920, a period of greater openness to women (Table 2). This effect of gender remained negative, even when controlling the regression by being a "regular" exhibitor (Table 3) -the fact of being a "regular" had no impact on the selling price. A place for everyone but everyone in his / her medium? Gender inequalities hide structural effects. Men and women did not exhibit the same type of works and the mediums were distributed differently (Fig 13). While painting and watercolor were the most exhibited mediums for both of men and women, men exhibit more sculptures and drawings and women exhibited more pastels. Among decorative arts, the quantitative analysis of titles, through TreeCloud [8] (Fig 14). Despite having a lot in common, for example the passion for "table", "chambre", "broderie", etc., the male and female decorative artists do show differences in interests. Firstly, it is inferred that male artists prefer to master a more diversified field of craftsmanship. "Verni", "ciré", "reliure", "mosaïque", "forgé", "marqueterie", even "broderie", considered feminine in the stereotype, all occupy important places in the title of the male artists. While for their female counterparts, the vocabulary for craftsmanship is much more limited to "ciré", "marqueterie" and "broderie". Besides, in terms of materials, female artists prefer a wide range of ornamental trees like "cerisier", "chêne", etc., whilst male artists frequently includes materials like "fer" and "cuir". It is also interesting to note that female artists really favor the words "stylisé" while the male shows no such tendency at all. They prefer words like "décoratif", "décor", which are more neutral. Finally, men produce more decorative furnitures (like Maurice Alet or Jean-Germain and Jules Rigal) and women exhibit more "embroidered cushions" and ceramics. Indeed, Pauline Rivière, the first woman who exhibited at the SAM, in 1907, showed cushions, collar and screen with embroidered patterns. 
Similarly, Madeleine Bijon-Cathary exhibited many paintings from 1928 through 1939 but the only works that were reproduced in the catalogue were the Art Deco ceramics she displayed during the first three years of her participation (Fig 15). The SAM thus maintained a gendered division of artworks, while advocating a unique, "southern" and modern art. A closed environment? It is difficult to assess whether the SAM constituted a closed environment: to do so, one would have to measure the participation of all exhibitors in all existing exhibitions during the same period, from 1907 to 1939. Nevertheless, it is possible to understand this openness/closure from the database. Exhibitors' moves are a first way of understanding this aspect. The cartography of successive moves (Fig 16) shows the importance of the South-West: very few exhibitors at the SAM "went up to Paris". On the contrary, most of them stayed in a perimeter close to Toulouse, indicating a very strong attachment to this region. This local anchorage is all the more visible in the choice of the new exhibition place: from 1920 onwards, all the SAM took place in the "Palais des Arts", i.e. in a 300m 2 room located in the west wing of the Ecole des Beaux-Arts, built between 1892 and 1895. 16 Unlike the Capitole, which, because of its political dimension, was open and accessible to all, this new venue seemed reserved for a small world of art insiders, or at least more confidential. Another indicator allows us to understand the routine -and therefore rather closed -aspect of an exhibition: the presence of a reproduction in the catalog, which was a kind of privilege granted to the creator of the work in question. Indeed, an artwork reproduced by photo-engraving has more visual impact on the visitors of the SAM, also leaving a trace in libraries and memories, even after the end of the exhibition. The logistic regression in Table 4 highlights the determinants that, in probability, increased the chances of an artwork being reproduced in the catalog. The results are striking: only the fact of being a "regular" in the strict sense -more than 20 participations -or in the broader sense -more than 17 participations -significantly increased the probability of this privilege. The emblematic example is Maurice-Gaston Alet, who exhibited the most (Fig 12) and whose furniture was almost systematically reproduced in the catalogs. There seemed to be two speeds between the one-shot exhibitors and the "regulars". Finally, the structure of the art market in Toulouse allows us to measure the economic openness of the SAM: was there a national or international pool of collectors who lent works at these events? Did the SAM mobilize local, national or international art dealers? On this last point, it is striking to note the absence of dealers as intermediaries. The SAM catalogs made no mention of works lent or put up for sale by dealers, nor did they mention any artist who would give his or her address in a gallery. Nevertheless, a few Artistes méridionaux represented themselves, in their own art galleries, like Maurice Alet ( [18]), Hélène Gasset-Ousset, Juliette Gustos and Henri Martin. They did not hesitate to advertise their business in the pages of catalogs (Fig 17). This is an important feature of the Toulouse art world. 
Indeed, when comparing the SAM with another exhibition in another southern city -the AIB, Artistes Indépendants Bordelais, active from 1928 in Bordeaux -the exhibitors of the AIB were either not represented by any local art gallery (there were only restorers of paintings) or they were represented by a Parisian dealer [15]. On the supply side, the art market in Toulouse was thus characterized by the adage "help yourself and Heaven will help you" (aide-toi et le Ciel t'aidera). On the demand side, were there enough collectors to support local production? The catalogs of the SAM give a first answer, as they mention the name of the collector, when the latter lent a work which belonged to him or her: the following network (Fig 18) represents the links between the collectors and the collected artists-exhibitors. Most collectors hid behind an anonymous name, which makes identification impossible and therefore doubtful -perhaps the artist was hiding behind this mysterious name. Three collectors -only! -stood out in this network, by their willingness to reveal their names and therefore their patronage, and by the -quite relative -frequency of their loans: Louis Lacroix (president of the Académie de Arts de Toulouse), Dr. Camille Soula and Dr. Joseph Ducuing. Nevertheless, their choices were very selective. Louis Lacroix loaned works by well-known and deceased Toulouse artists -Jean-Paul Laurens and Henri de Toulouse-Lautrec -and two drawings by Paul Bernardot. Dr. Soula lent works by André Arbus, Auguste Guénot and Paul Bernardot and Dr. Ducuing, a work by Paul Bernardot. These three patrons officially supported a very small number of southern artists, or even a single artist (Paul Bernardot). This clique effect is made visible in two frescoes of the Parnasse occitan (Fig 19), created by Marc Saint-Saëns in 1934-1935 to decorate the new library in Toulouse 17 In this work, the two friends Joseph Ducuing and Camille Soula are represented jointly -the former was the latter's student at the Toulouse medical school [1] -with the artists of their collection, André Arbus and Paul Bernardot. Another regular of the SAM, the sculptor Henry Parayre, was also painted by Marc Saint-Saëns, who was none other than Joseph Ducuing's son-in-law. This fresco thus testifies to the selectivity of this network of collector-patrons, and to a very important clique effect. Conclusion A computational analysis thus made it possible to identify some characteristics of the SAM: a very strong anchoring in the local space, a detachment from the UATs which were too dominated by Paris, but also a certain form of machismo and, in fine, a milieu which was rather closed to a few privileged regulars and a few equally selective collectors-patrons. These temporary conclusions must be subjected to further research, in the archives and reserves of museums -Musée des Augustins, Musée du Vieux-Toulouse. They will also benefit from being compared to other exhibitions that took place at the same time as the SAM, in order to test the degree of openness of the exhibitors and their commercial strategies, but also to the UAT (whose corpus has been constituted with Hadrien Viraben). Last but not least, this first step will be followed by the meticulous research of the works exhibited at the SAM. The analysis of the Toulouse art world is part of a larger research project -DatART -led by Léa Saint-Raymond and hosted at the ENS-PSL Observatoire des humanités numériques. 
By making available data from the SAM, the UAT and the AIB of Bordeaux, it will allow us to understand the regional logics at work in the art world and to reconnect this history with the local artistic and cultural heritage.
Health Facility Graduation from Donor-Supported Intensive Technical Assistance and Associated Factors in Zambia Introduction The FHI360-led Zambia Prevention Care and Treatment partnership II (ZPCT II) with funding from United States Agency for International Development, supports the Zambian Ministry of Health in scaling up HIV/AIDS services. To improve the quality of HIV/AIDS services, ZPCT II provides technical assistance until desired standards are met and districts are weaned-off intensive technical support, a process referred to as district graduation. This study describes the graduation process and determines performance domains associated with district graduation. Methods Data were collected from 275 health facilities in 39 districts in 5 provinces of Zambia between 2008 and 2012. Performance in technical capacity, commodity management, data management and human resources domains were assessed in the following services areas: HIV counselling and testing and prevention of mother to child transmission, antiretroviral therapy/clinical care, pharmacy and laboratory. The overall mean percentage score was calculated by obtaining the mean of mean percentage scores for the four domains. Logistic regression models were used to obtain odds ratios (OR) and 95% confidence intervals (CI) for the domain mean percentage scores in graduated versus non-graduated districts; according to rural-urban, and province strata. Results 24 districts out of 39 graduated from intensive donor supported technical assistance while 15 districts did not graduate. The overall mean percentage score for all four domains was statistically significantly higher in graduated than non-graduated districts (93.2% versus 91.2%, OR = 1.34, 95%CI:1.20–1.49); including rural settings (92.4% versus 89.4%, OR = 1.43,95%CI:1.24–1.65). The mean percentage score in human resource domain was statistically significantly higher in graduated than non-graduated districts (93.6% versus 71.6%, OR = 5.81, 95%CI: 4.29–7.86) and in both rural and urban settings. Conclusions QA/QI tools can be used to assess performance at health facilities and determine readiness for district graduation. Human resources management domain was found to be an important factor associated with district graduation. Introduction The global agenda for HIV includes universal access to HIV prevention and treatment [1]. To achieve this, Zambia, like other resource-limited countries, relies on donor support. Zambia has benefited from the US President's Emergency Plan for AIDS Relief (PEPFAR) and the Global Fund to Fight AIDS, Tuberculosis and Malaria (GFATM) in scaling-up HIV care and treatment services nationwide. However, it is recognized that a major challenge facing efforts to scale up HIV/AIDS services in Zambia and elsewhere in sub-Saharan Africa is the sustainability of services should donor support be removed or reduced. The Zambia Prevention, Care and Treatment Partnership II (ZPCT II), a PEPFAR funded project through the United States Agency for International Development (USAID), supports the Government of Zambia in scaling up HIV and AIDS services in 5 provinces of Zambia. At the time of this study in 2012, a total of 350 health facilities (341 public and 9 private) were supported through this partnership, in implementing HIV/AIDS services. In 2005, the Zambian Ministry of Health (MOH) adopted a performance improvement approach (PIA) as a critical component in driving the improvement of quality of health care. 
"[PIA] is a step-by-step methodology for finding out what is needed to ensure good performance and delivering it; it uses quality tools in a guided and logical manner to attain performance targets and improve quality" [2]. To ensure the sustainability of HIV/AIDS services in these ZPCT II-supported health facilities, the ZPCT I (later on called ZPCT II in 2009) project introduced a quality assurance and quality improvement (QA/QI) system in 2007 [3]. The QA/QI data collected provided feedback to strengthen service provision, develop local capacity and improve overall monitoring and evaluation of different technical areas of HIV/AIDS services. The QA/QI system supported the provision of high quality HIV services, consistent with the National HIV Guidelines and Standards in all ZPCT II-supported health facilities. Importantly, the approach guided a tailored reduction in intensive technical support from ZPCT II, and transferred the bulk of the technical assistance responsibilities to the MOH district and provincial offices once the standards were reached and maintained: a process referred to as district graduation. After graduation, the MOH and the Ministry of Community Development, Mother and Child Health (MCDMCH) through the Provincial Medical Office (PMO) and the District Medical Office (DMO) respectively, took responsibility of ensuring that there was provision of quality HIV/AIDS services through continued mentorship and monitoring of quality through the PIA approach in all government health facilities. This allowed ZPCT II to shift its focus to non-graduated districts and only conducted quarterly monitoring of the quality of services in the graduated districts. Support to HIV services in Africa has been essential and lifesaving but countries including Zambia are now preparing to have less or flat lined donor support even as demand for services increases [4]. Over time, more of the financial burden for sustaining these services will be shifted to host countries. Strategies should be built into any project design and implementation to facilitate continuation of services beyond the period of external donor support [4]. Sustainability can be defined as the "capacity to maintain program services at a level that will provide ongoing prevention and treatment of a health problem" beyond external donor financial, managerial and technical support [5]. Elements of sustainability include technical, programmatic, social and financial elements. All four elements need to be addressed to achieve operational sustainability [6]. Other elements of sustainability include factors such as availability of drugs and equipment, staff establishment and continuing education and the overall working conditions in a facility. When these elements are missing the quality of services are impacted [5,7]. The concept of sustainability can be operationalized by grouping determining factors into three clusters: context (environmental factors), activity profile (type of services offered or activities carried out) and organizational capacity (capability to carry out set tasks faced by the organization) [4].These factors interact in different ways; the identification of gaps in these factors illuminates barriers to sustainability of services [4,7]. Torpey et al, in their case study concluded that achieving operational sustainability was possible in a resource limited setting through institutionalization of a QA/QI system. 
The authors illustrated how the use of a QA/QI system can make it possible to identify performance strengths and weaknesses in service delivery that are integral to operationalizing sustainability. They demonstrated how feasible and practical it is to prepare districts for graduation from project support by starting with improving the quality of services at individual health facilities and building up the districts' ability to manage the HIV services [4,7]. In this study we use the same concept of determining operational sustainability using a QA/ QI system. We assess the QA/QI system that was used in the ZPCT II project to determine district graduation from intensive donor support, describe the graduation process and determine performance domains associated with district graduation. Study setting and design This was a retrospective review of data collected between July 2008 and December 2012 for QA/QI activities conducted in 275 of 350 (79%) ZPCT II-supported health facilities located in 39 of 42 targeted districts for graduation in five provinces. In this context, a district is a geographical and administrative area defined by government characterised by a cluster of health facilities. The health facilities within the districts were classified according to residence as either rural or urban facilities. The five provinces were Central, Copperbelt, Luapula, Northern and North Western Provinces. The process of district graduation assessment was rolled out in phases. At the time of this study, the rolling out of the assessment had not yet reached all the 42 districts; that is why only 39 districts were assessed. The 275 of the 350 health facilities belonged to the 39 assessed districts while the 75 remaining health facilities belonged to the three districts that were not yet assessed for graduation. The QA/QI tool was administered on a quarterly basis to health facility staff who were working in the following four technical areas: HIV counselling and testing and prevention of mother to child transmission (CT-PMTCT), antiretroviral therapy/clinical care (ART-CC), Pharmacy and Laboratory. The QA/QI tool was designed to assess four quality performance domains: technical capacity, commodity management, data management and human resources management. Sample size and sampling strategy There were 23 health facilities from 7 districts included that were supported during the ZPCT I period (2008)(2009)) and an additional 252 facilities from 32 districts included that were supported by ZPCT II (2009)(2010)(2011)(2012). The process of assessing graduation started toward the end of ZPCT I, and facilities that were assessed in ZPCT I were not re-assessed in ZPCT II, the results discussed in this manuscript are therefore based on combined data for 275 facilities from 39 districts assessed for graduation in both ZPCT I and ZPCT II. District graduation process A district with health facilities achieving and maintaining good quality services in line with existing MOH and Health Professions Council of Zambia national standards across all technical areas is targeted for graduation. The technical areas considered in the QA/QI tool were adherent to government guidelines for: (i) HIV counselling and testing and Prevention of Mother to Child Transmission of HIV services (CT-PMTCT), (ii) antiretroviral therapy and clinical care services (ART-CC), (iii) Laboratory services, and (iv) Pharmacy services. 
District graduation occurs when the health facilities within a district score 90% and above in all the technical areas and maintain this good quality of service for more than two consecutive quarters. The criterion used for districts with five or more ZPCT II-supported facilities was that a minimum of 80% of the health facilities in the district should score 90% and above. For districts with fewer than five ZPCT II-supported facilities, 100% of the health facilities had to meet the criteria for the district to be graduated. Graduation of a district does not mean elimination of financial support but rather reduced technical support. Facilities and districts that graduated received less technical support from ZPCT II, as this responsibility for technical support was passed on to the MOH at district and provincial level as part of their ongoing monitoring system. The transfer of responsibility was done following planning and involvement from both the MOH and ZPCT II. However, a process of less intensive monitoring of service quality by ZPCT II continued after a district graduated in order to facilitate and assure sustained good quality of HIV services in line with national standards. During the post-graduation period, a comprehensive post-graduation management plan for each graduated district was developed that enabled quality services to be sustained within the local district health systems. This occurred through establishing a comprehensive system for continually monitoring the quality of care in each technical area of a facility in a graduated district and incorporating this into the standard MOH national QI system, which was implemented by the district health office team.

QA/QI Tool and Graduation domains

The QA/QI tool contained four domains: technical capacity, commodity management, data management and human resources management (Table 1). In each domain, there are specific questions tailored toward the four technical service areas offered at the health facility (CT-PMTCT, ART-CC, Laboratory and Pharmacy). The responses to each question in the QA/QI tool were dichotomous variables.

Technical capacity. Facilities eligible for graduation must have incorporated technical strategies at all levels of service delivery and be able to consistently provide quality services as defined by the minimum national MOH standards.

Commodity management. Commodity management measured the continuous availability of HIV test kits and chemical reagents for laboratory tests. Moreover, facilities must have been able to effectively forecast, quantify, order, procure and store ARVs, OI drugs, and other drugs, including laboratory supplies, under optimal conditions in order to eliminate stock-outs and reduce overstocking of commodities.

Data management. This domain measured appropriate management of the health information system, which is critical for monitoring patient outcomes, planning and tracking the progress and performance of programs. The quality of data collection, entry, generation and submission of reports across all technical areas was also assessed under this domain. Furthermore, correct completion of CT-PMTCT registers, ART clinic information and the timeliness of report submission were measured as part of the key criteria required to graduate. In addition, this domain assessed the general management and use of the electronic patient record management system.

Human resources management. The human resources management domain assessed the availability of trained staff.
In order to attain graduation status, facilities must have had at least two trained health care workers providing services in the various technical areas. This would ensure that staff transfers, staff changes and retirements would not disrupt service provision in the short to medium term. Ethical approval Ethical approval was obtained from the FHI360 Protection of Human Subjects Committee, North Carolina, USA and the ERES Converge Ethics Committee, Lusaka, Zambia. Data collection and operationalization Data were extracted from a paper-based QA/QI tool and entered into a Microsoft Access (Microsoft Corp., Redmond, USA) database for data management, then exported to Microsoft Excel, Statistical Package for Social Sciences software version 21 (SPSS Inc., Chicago, USA) and STATA version 12 (Stata Corp., College Station, Texas, USA) for analysis. Data analysis We scored the type of technical service available in the health facilities (CT-PMTCT, ART-CC, Pharmacy, and Laboratory services) in each of the four domains. The total score for each technical area was obtained by summing the individual responses to the questions in that technical area. This total score was then divided by the maximum score for that technical area and multiplied by 100 to obtain a percentage performance score. The mean percentage score for each domain was then calculated as the mean of the percentage scores for the four technical areas in that domain. The overall mean percentage score for all four domains was then calculated by adding these mean percentage scores for the four domains and dividing the sum by four to obtain a mean of mean percentage scores. The outcome of interest was whether or not a district graduated following assessment using the QA/QI tool. We used logistic regression models to estimate the odds ratios (OR) and 95% confidence intervals (CI) of factors associated with district graduation. In addition, the mean percentage scores, OR and 95% CI of factors associated with district graduation were stratified by rural and urban facilities and by province using logistic regression models. Not all health facilities had all four technical service areas (CT-PMTCT, ART-CC, Laboratory and Pharmacy) in the same locality. Out of the 275 health facilities, 94 (34%) had all four service areas in the same locality (Fig 1). The assessment for graduation in each health facility was therefore done according to the service areas available. Of the 39 districts (with 275 health facilities) assessed for graduation, 24 districts (62%) graduated from intensive donor-supported technical assistance and 15 districts (38%) did not. The 24 districts that graduated had 145 health facilities, while the 15 districts that did not graduate had 130 health facilities. In the districts that graduated, 77 (53%) of the 145 health facilities were rural-based, while the remaining 68 (47%) were urban. In districts that did not graduate, 88 (68%) of the 130 health facilities were rural, while the remaining 42 (32%) were urban. The proportion of rural facilities was lower in districts that graduated than in those that did not graduate: 53% versus 68%, respectively (Fig 2). The mean percentage score in the human resources domain was statistically significantly higher in graduated than non-graduated districts: 93.6% versus 71.6% (OR = 5.81, 95% CI: 4.29-7.86, P < 0.001) (Table 3).
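For readers who want to see the scoring and graduation arithmetic from the data analysis above in executable form, a minimal sketch is given below. It is purely illustrative: the data structures and function names are assumptions, not part of the ZPCT II system; only the arithmetic and the 90% area threshold and 80%/100% facility-proportion rules are taken from the text.

```python
# Illustrative sketch of the QA/QI scoring arithmetic and graduation rule described above.
# Data structures and names are assumptions; only the arithmetic and thresholds follow the text.

DOMAINS = ["technical_capacity", "commodity_management", "data_management", "human_resources"]

def area_percentage(responses_sum, max_score):
    """Percentage score for one technical area: (sum of responses / maximum score) x 100."""
    return responses_sum / max_score * 100.0

def facility_overall_score(scores):
    """scores[domain][technical_area] -> percentage score.
    Domain mean = mean over its technical areas; overall score = mean of the four domain means."""
    domain_means = [sum(scores[d].values()) / len(scores[d]) for d in DOMAINS]
    return sum(domain_means) / len(domain_means)

def facility_qualifies(area_scores, threshold=90.0):
    """A facility qualifies when every technical area scores 90% or above
    (and, per the text, maintains this for more than two consecutive quarters)."""
    return all(score >= threshold for score in area_scores.values())

def district_graduates(facility_flags):
    """facility_flags: one boolean per ZPCT II-supported facility in the district.
    Districts with >= 5 facilities graduate when >= 80% of facilities qualify;
    districts with < 5 facilities require 100%."""
    share = sum(facility_flags) / len(facility_flags)
    return share >= 0.8 if len(facility_flags) >= 5 else share == 1.0

# Hypothetical example: a district with six facilities, five of which qualify
print(district_graduates([True, True, True, True, True, False]))   # True (5/6 ~ 83% >= 80%)
```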
The human resources domain had the lowest mean percentage score, 71.6%, in non-graduated districts compared to all other domains within non-graduated districts. Within the human resources domain in the non-graduated districts, the ART-CC service had the highest score (92.8%), followed by CT-PMTCT and Laboratory with 81.7% each, and the lowest score was in Pharmacy with 53.4%. In graduated districts, the lowest human resources domain score was in Laboratory, with 87.6% (Table 3). In the technical capacity domain, CT-PMTCT within graduated districts had the highest mean percentage score (93.8%), followed by the ART-CC service (91.6%), while Pharmacy (89.5%) and Laboratory (87.6%) had mean percentage scores below the graduation threshold of 90%. The scores in the districts that did not graduate followed a similar pattern to those that graduated, with the CT-PMTCT and ART-CC service areas scoring above the graduation threshold (93.8% and 93.0%, respectively), while Pharmacy and Laboratory had mean percentage scores below the graduation threshold (86.4% and 86.5%, respectively). In the commodity management domain there was little variation in mean percentage scores between graduated and non-graduated districts across CT-PMTCT, Pharmacy and Laboratory. The mean percentage scores in the commodity management domain were above the graduation threshold in both graduated and non-graduated districts. In the data management domain, the mean percentage scores for ART-CC, CT-PMTCT, Laboratory and Pharmacy were high in both graduated and non-graduated districts. The domain mean percentage scores for data management in graduated and non-graduated districts were 97% and 97.4%, respectively, both above the graduation threshold. The overall mean percentage score for all four domains was statistically significantly higher in graduated than non-graduated districts in rural settings: 92.4% versus 89.4% (OR = 1.43, 95% CI: 1.24-1.65, P < 0.001). However, this finding was not statistically significant in urban settings (Table 4). Except in the commodity management domain, urban facilities in graduated districts had higher mean percentage scores than rural facilities in graduated districts in the data management, human resources management and technical capacity domains (Table 4). Performance stratified by province The overall mean percentage score stratified by province was 90% or above in both graduated and non-graduated districts in four of the five provinces; the overall mean percentage score in Central province for districts that did not graduate was 88% (Fig 3). The overall percentage score was statistically significantly higher in graduated than non-graduated districts in Central province (OR = 1.6, 95% CI: 1.31-1.97, P < 0.001) and North Western province (OR = 1.45, 95% CI: 1.07-1.96, P = 0.015). There was no statistically significant difference between graduated and non-graduated districts in the other provinces. In both Central and North Western provinces, weak human resources management contributed to this overall difference between graduated and non-graduated districts; in Central province the mean percentage score for the human resources management domain was 86.4% in graduated districts and 38.7% in non-graduated districts (OR = 10.03, 95% CI: 5.29-16.98, P < 0.001), while in North Western province it was 91.8% in graduated districts and 61.2% in non-graduated districts (OR = 7.12, 95% CI: 3.17-16.00, P < 0.001).
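Odds ratios and confidence intervals of the kind reported above can be estimated with an ordinary logistic regression, as described in the data analysis. The sketch below is illustrative only: it uses an invented miniature dataset rather than the ZPCT II data and relies on statsmodels purely as an example, not the SPSS/STATA workflow actually used.

```python
# Illustrative only: invented miniature dataset, not the ZPCT II data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "graduated": [1, 1, 0, 1, 0, 0, 1, 0],          # graduation status of the facility's district
    "hr_score":  [95, 72, 70, 94, 65, 88, 91, 68],  # human resources domain percentage score
})

X = sm.add_constant(df[["hr_score"]])
model = sm.Logit(df["graduated"], X).fit(disp=0)

odds_ratio = np.exp(model.params["hr_score"])               # OR per 1% increase in the score
ci_low, ci_high = np.exp(model.conf_int().loc["hr_score"])  # 95% confidence interval
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), P = {model.pvalues['hr_score']:.3f}")
```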
Discussion This study found that the overall mean percentage score for all four domains was statistically significantly higher in districts that graduated than in districts that did not, indicating that it is feasible to use the QA/QI tool to measure the service performance of health facilities and determine district graduation. The QA/QI tool can be used to establish the process required to support health facilities in attaining high levels of technical and management competency and progressing toward independence from technical and programmatic assistance in low-resource settings. The QA/QI process can be used to identify performance gaps in health facilities that could be strengthened through collaboration between government and donors, whereby the donor progressively addresses the gaps while preparing government to take over full support for HIV/AIDS activities. This study found that human resources management is an important factor in determining district graduation. Like most low-income countries, Zambia has a shortage of skilled health professionals [8]. This has a negative effect on the provision of quality health services, including HIV/AIDS care. This was demonstrated in our study, where we found that the contributing factor to low performance in the human resources domain in non-graduated districts was not having at least two staff trained in pharmaceutical management of commodities for ART and opportunistic infections, use of the antiretroviral drugs logistics system, and the general logistics management information system, as measured by the QA/QI tool. Even when rural and urban settings were analysed separately, the human resources management domain remained an important factor in determining district graduation. This was also observed in Central and North Western provinces, where weak human resources management contributed to the overall mean percentage score being statistically significantly higher in graduated than in non-graduated districts. We found that in the rural setting, the overall mean percentage score for all four domains was statistically significantly higher in graduated than non-graduated districts. This suggests that the more rural health facilities there are in a district, the less likely that district is to graduate. The lack of a statistically significant difference in the overall mean percentage score between graduated and non-graduated districts in urban settings could be because urban-based health facilities have more and better medical equipment and human resources, which could result in most facilities meeting the graduation criteria. Rural settings have fewer health workers than urban areas [9]. Therefore, more effort is needed in districts with a greater proportion of rural rather than urban health facilities so that they are able to meet the graduation standards. However, in our study there was an indication that the human resources management domain for ART-CC services performed better in rural than in urban settings in both graduated and non-graduated districts. This could be due to the strong community involvement in ART-CC services that exists in rural areas through volunteers who work as adherence support workers and lay counsellors.
Regardless of whether a health facility is in a rural or urban setting, ZPCT II offered small monetary support to the adherence support workers and lay counsellors in these facilities. However, because the cost of living is lower in rural areas, this support is a greater incentive there, which translates into volunteers having a profound effect on HIV/AIDS service delivery in rural areas. Another finding was the high performance, above the graduation threshold, in the commodity management domain in both graduated and non-graduated districts. This could be due to the Zambian government's efforts, with support from USAID through John Snow, Inc., which improved the distribution of essential pharmaceuticals in these districts from 2009. Furthermore, a "bottom-up" quantification method for pharmaceuticals was introduced by the Zambian government, which provided the information necessary for forecasting and avoiding drug stock-outs [8]. We also found high performance, above the graduation threshold, in data management in both graduated and non-graduated districts. This could be attributed to the support that ZPCT II offered to the MOH in paying salaries for data entry clerks (DECs). The DECs collected and managed routine HIV service data for the MOH in health facilities that offered ART. ZPCT II also provided mentorship to health workers in the health facilities on data management and reporting. Since the introduction of DECs in 2006, the timeliness and quality of data have improved greatly [10]. The government is now considering including DECs in its district health establishments. To further improve data quality, the DECs also visited health facilities in the district that did not offer ART, spending one day a month there to collect data and compile reports that were sent to the MOH and ZPCT II. We found low performance in Laboratory and Pharmacy services in the technical capacity domain in both graduated and non-graduated districts. The low performance in Laboratory services was due to a lack of adequate storage space for laboratory reagents, the lack of a timely external referral system for laboratory specimens and non-participation in the external quality assessment scheme for CD4, as measured by the QA/QI tool. The low performance in Pharmacy was due to inappropriate maintenance of storage conditions in the dispensing area, lack of adherence to good dispensing practices when dispensing drugs to patients, and the lack and non-use of appropriate standard operating procedures/guidelines, as also measured by the QA/QI tool. These weaknesses in Laboratory and Pharmacy services were also observed in the Zambia National Health Strategic Plan for 2011-2015 [8]. The low performance in the Laboratory and Pharmacy technical areas is also linked to the human resources management domain, as observed in our study. This could be due to the nature of the work in these two technical areas, which requires specific skills that are not easily transferable, compared with ART-CC and CT-PMTCT, where task shifting is common. Task shifting is better implemented within ART-CC and CT-PMTCT services than in Laboratory and Pharmacy services. Nurses are now undertaking a range of tasks that were formerly the responsibility of doctors, while certain tasks that were previously performed by nurses have been shifted to adherence support workers and lay counsellors [10]. This study had the following limitations: the data used were from routine QA/QI assessments, which measured only four performance domains.
However, other domains that we did not measure, such as management capacity (leadership and governance) and the health financing systems of the districts, could also be relevant [11]. The study gave equal weight to all four performance domains, when in reality some domains could have been more important than others [12]. A strength of this study is that most of the ZPCT II-supported health facilities and their respective districts were assessed for graduation, and all those that were assessed were included in this study. Also, the QA/QI tool that was used to measure district service performance and graduation was based on MOH national guidelines and standard operating procedures. Conclusions This study has shown that it is feasible to use a QA/QI tool to assess the performance of health facilities and determine district graduation. The human resources management domain was found to be an important factor associated with district graduation. This association was also shown to be important when rural and urban settings were analysed separately. Furthermore, the human resources domain also contributed to the performance seen in all technical service areas. In both graduated and non-graduated districts, the performance in the other three domains (commodity management, data management and technical capacity) was above the graduation threshold, which could have resulted from the additional support that donors provided to government: payment of DECs for data management, improved distribution of essential pharmaceuticals in the commodity management domain, and support for task-shifting through payment of adherence support workers and lay counsellors under the technical capacity domain. Health system strengthening interventions should therefore take into account human resources challenges in resource-limited settings and should be based on empirical evidence on what donors can do in preparing government to take over management of health systems.
v3-fos-license
2023-01-19T22:24:09.523Z
2021-09-08T00:00:00.000
255977034
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1186/s13071-021-04973-2", "pdf_hash": "d79c8f0237da3ee78081cac2c98be0b5fc774d36", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42613", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "d79c8f0237da3ee78081cac2c98be0b5fc774d36", "year": 2021 }
pes2o/s2orc
Detection of Leishmania tarentolae in lizards, sand flies and dogs in southern Italy, where Leishmania infantum is endemic: hindrances and opportunities Leishmania tarentolae is a protozoan isolated from geckoes (Tarentola annularis, Tarentola mauritanica), which is considered non-pathogenic and is transmitted by herpetophilic Sergentomyia spp. sand flies. This species occurs in sympatry with Leishmania infantum in areas where canine leishmaniasis is endemic. In the present study, we investigated the circulation of L. tarentolae and L. infantum in sand flies, dogs and lizards in a dog shelter in southern Italy, where canine leishmaniasis by L. infantum is endemic. Sheltered dogs (n = 100) negative for Leishmania spp. (March 2020) were screened by immunofluorescence antibody test (IFAT) using promastigotes of both species at two time points (June 2020 and March 2021). Whole blood from dogs, tissues of Podarcis siculus lizards (n = 28) and sand flies (n = 2306) were also sampled and tested by a duplex real-time PCR (dqPCR). Host blood meal was assessed in sand flies by PCR. Overall, 16 dogs became positive for L. infantum and/or L. tarentolae by IFAT at one or both sampling periods. One canine blood sample was positive for L. infantum, whilst two for L. tarentolae by dqPCR. At the cytology of lizard blood, Leishmania spp. amastigote-like forms were detected in erythrocytes. Twenty-two tissue samples, mostly lung (21.4%), scored molecularly positive for L. tarentolae, corresponding to 10 lizards (i.e., 35.7%). Of the female Sergentomyia minuta sampled (n = 1252), 158 scored positive for L. tarentolae, four for L. infantum, and one co-infected. Two Phlebotomus perniciosus (out of 29 females) were positive for L. tarentolae. Engorged S. minuta (n = 10) fed on humans, and one P. perniciosus, positive for L. tarentolae, on lagomorphs. Dogs and lacertid lizards (Podarcis siculus) were herein found for the first time infected by L. tarentolae. The detection of both L. tarentolae and L. infantum in S. minuta and P. perniciosus suggests their sympatric circulation, with a potential overlap in vertebrate hosts. The interactions between L. tarentolae and L. infantum should be further investigated in both vectors and vertebrate hosts to understand the potential implications for the diagnosis and control of canine leishmaniasis in endemic areas. Background Zoonotic visceral leishmaniasis, caused by Leishmania infantum (Kinetoplastida, Trypanosomatidae), is a neglected disease of medical and veterinary importance, which impacts health, society and the economy in many tropical, subtropical and temperate regions of the globe [1]. Indeed, this disease affects mainly poor people [2] and may be fatal if not treated timely and properly. Infected dogs are the main reservoirs of L. infantum in the domestic and peri-domestic environments [3], with mainly subclinical presentation and only a small proportion manifesting overt clinical disease [4]. The causative agent is transmitted by bites of phlebotomine sand flies of the genera Phlebotomus in the Old World [5,6] and Lutzomyia in the New World [7,8]. Meanwhile, phlebotomine sand flies of the genus Sergentomyia are known to feed primarily on cold-blooded animals [9] and are associated to Leishmania spp. in lizards [9][10][11][12]. Nonetheless, DNA of L. infantum has been detected in Sergentomyia minuta [13,14], suggesting it can feed also on available endothermic tetrapod animals. This picture has also been corroborated by other reports of L. 
infantum DNA in several Sergentomyia spp., such as Sergentomyia dubia, Sergentomyia magna and Sergentomyia schewtzi in Africa [15], and S. minuta from endemic areas of canine leishmaniasis (CanL) in Europe [13,14,[16][17][18][19][20][21]. Meanwhile, the DNA and/or amastigote forms of pathogenic Leishmania spp. (i.e., Leishmania donovani, Leishmania tropica and Leishmania turanica) have been detected in reptiles [22][23][24][25], therefore suggesting their potential role as reservoirs of mammalian pathogenic Leishmania spp. [24,26,27]. On the other hand, Leishmania tarentolae (subgenus Sauroleishmania) is a less regarded trypanosomatid infecting geckoes (e.g., Tarentola mauritanica), possibly transmitted by herpetophilic Sergentomyia spp. in Europe, North Africa and the Middle East [28,29]. Incidentally, L. tarentolae is widely considered non-pathogenic. Nonetheless, some strains of this species (e.g., RTAR/FR/78/LEM125) may cause transient infection in mammals under laboratory conditions, as this species can differentiate into amastigote-like forms [30][31][32][33]. However, the molecular findings of L. tarentolae in a human mummy in Brazil [33], as well as in human blood [14] in central Italy, suggest its capacity to infect mammals. Nevertheless, the pathogenicity, virulence and overall deleterious effects of L. tarentolae in mammals are still unknown. In addition, given the high similarity in gene composition with L. infantum (i.e., 90%), L. tarentolae is considered a model for recombinant protein production and vaccine candidate [34][35][36], which could mean that natural infection with L. tarentolae may have a protective effect against L. infantum [37]. Given the variations in dog antibody levels between seasons of sand fly activity and the sympatric occurrence of both Leishmania species, we investigated the circulation of L. tarentolae and L. infantum in sand flies, dogs and lizards in a dog shelter in southern Italy where CanL by L. infantum is endemic. Study area and sample collection One hundred dogs which scored negative to Leishmania spp. on molecular and serological tests in March 2020 were re-sampled in June 2020 and March 2021 in a shelter located in a CanL-endemic area in Apulia region, southern Italy (40.419326N, 18.165582E, Lecce) [38]. The shelter is built in a dry and windy area 8.0 km from the nearest seaside (Fig. 1a). The environment around the shelter is characterized by few olive trees, withered grass, no water sources and surrounded by the typical muretti a secco (stone walls) where reptiles (i.e., Podarcis siculus lizards, Hierophis viridiflavus carbonarius snakes, and T. mauritanica geckoes) and rodents thrive. Dogs with a minimum age of 7 months were included in the study; signalment (i.e., age, sex, breed) and anamnestic data (i.e., previous protozoan and bacterial infection, and treatment) were recorded at time of enrollment. Moreover, a complete physical examination was performed by a veterinary clinician to assess the health status of the enrolled dogs. From each dog, whole blood was collected in vacuum containers with EDTA (2.5 ml) and serum collection tubes with clot activator (5 ml). From May to November 2020, sand flies were collected biweekly using 64 sticky papers (21.0 cm × 29.7 cm, covering up to 4 m 2 ) (Fig. 1b) and two CDC light traps were set from 5:00 p.m. to 8:00 a.m. Collections were carried out during the sand fly activity season [16] until the total disappearance/absence of sand flies (i.e., three consecutive negative captures). 
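If the "up to 4 m2" figure quoted above refers to the combined catching surface of the 64 sticky papers, the arithmetic can be checked directly; the short calculation below is illustrative only and makes that assumption explicit.

```python
# Illustrative check of the trapping-surface figure, assuming it refers to the combined
# area of the 64 sticky papers (21.0 cm x 29.7 cm each).
n_papers = 64
width_m, height_m = 0.210, 0.297
total_area_m2 = n_papers * width_m * height_m
print(f"{total_area_m2:.2f} m^2")   # ~3.99 m^2, consistent with "up to 4 m2" stated in the text
```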
All specimens were stored in labeled glass vials containing 70% ethanol then morphologically identified using taxonomic keys and descriptions [39,40]. Reptiles were captured in the area of the shelter, on the same walls where the sticky traps were placed (Fig. 1c), by lassoing or by hand. Species of reptiles were identified using reference keys [41], and then physically examined to assess their health status. Anamnestic data (e.g., species, biological stage, sex, physical abnormalities such as tail loss or predator-induced wounds) were recorded in each animal's file. A small amount of blood was obtained via lizard tail fracture or by cardiocentesis when animals were adults and non-gravid females. Blood samples were stored at −20 °C and tail tissue in 70% ethanol. For each animal, blood smears were performed and then assessed for the presence of Leishmania parasites [42] using Diff-Quik stain [43]. Smears were rinsed in tap water to remove excess stain, and later evaluated using an optical microscope (LEICA DM LB2, Germany). Fecal samples were also collected from each animal. Captured lizards were humanely euthanized according to protocols [44] and dissected. Intestine, heart, kidneys, liver, lungs, spleen and skeletal muscle were individually collected and frozen at −20 °C. Serological testing Serum samples from all enrolled dogs were tested to assess the exposure to L. infantum and L. tarentolae. An IFAT for the detection of IgG anti-L. infantum was performed as previously described (Fig. 2a) [45], whereas for antibodies against L. tarentolae, the IFAT was performed using promastigotes of L. tarentolae (strain RTAR/IT/81/ ISS21-G.6c) as antigen (Fig. 2b) following the same procedure as for L. infantum IFAT. Serum samples from a dog positive for L. infantum by cytological and molecular analyses, and a healthy dog negative for L. infantum, were used as positive and negative controls, respectively, for both IFAT. Samples were scored as positive when they produced a clear cytoplasmic and membrane fluorescence of promastigotes from a cut-off dilution of 1:80 [46]. Positive sera were titrated by serial dilutions until negative results were obtained. Molecular procedures Genomic DNA (gDNA) was extracted from the thorax and abdomen (heads and last segments were removed for morphological identification) of each female sand fly (n = 1281) using an in-house method as previously described [47]. Lizard tissues (i.e., intestine, heart, kidneys, liver, lungs, spleen, skeletal muscle and tail), as well as blood samples from lizards and dogs, were extracted using two commercial kits, GenUP gDNA and GenUP Blood DNA kits (Biotechrabbit GmbH, Hennigsdorf, Germany), respectively, according to the manufacturer's instructions. DNA from lizard fecal samples was extracted using a specific commercial kit (DNeasy Pow-erSoil Kit, QIAGEN, Hilden Germany) following the manufacturer's instructions. All samples were tested by duplex real-time PCR (dqPCR) for detection of L. infantum and L. tarentolae (samples were considered positive with quantitation cycle (Cq) values up to 38.0 and 38.6, respectively), as previously described [48]. Approximately 100 ng of gDNA (with the exception of the no-template control) was added to each dqPCR. gDNA from cultured promastigotes of L. infantum, originally retrieved from a dog living in Italy (zymodeme MON-1) (Fig. 2c), and L. tarentolae (strain RTAR/IT/81/ISS21-G.6c) (Fig. 2d) was used as positive controls. 
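The positivity rules just described (an IFAT cut-off dilution of 1:80 and dqPCR Cq limits of 38.0 for L. infantum and 38.6 for L. tarentolae) can be summarised in a short sketch. The code below is illustrative only; the function and variable names are assumptions and are not part of the study's actual workflow.

```python
# Illustrative classification rules taken from the serology and dqPCR descriptions above.
CQ_LIMIT = {"L_infantum": 38.0, "L_tarentolae": 38.6}   # dqPCR positivity thresholds (Cq)
IFAT_CUTOFF = 80                                         # reciprocal titre, i.e. a 1:80 dilution

def ifat_positive(reciprocal_titre):
    """Seropositive when clear promastigote fluorescence persists at a dilution of 1:80 or higher."""
    return reciprocal_titre >= IFAT_CUTOFF

def dqpcr_positive(cq, species):
    """Positive when a Cq value is obtained and does not exceed the species-specific limit."""
    return cq is not None and cq <= CQ_LIMIT[species]

# Example: the dog blood sample with Cq 37.2 for L. infantum reported in the Results is positive
print(dqpcr_positive(37.2, "L_infantum"))   # True
print(ifat_positive(40))                    # False: a 1:40 titre is below the 1:80 cut-off
```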
For sequence analyses, Leishmania dqPCR-positive samples were amplified by conventional PCR (cPCR) using primers L5.8S/LITSR targeting the partial region of the internal transcribed spacer 1 (ITS1, ~300 bp), and the PCR protocol was run as described elsewhere [49]. Engorged sand flies (n = 22) and all specimens that scored positive for Leishmania spp. were tested for blood-meal determination by cPCR using primers targeting the vertebrate host mitochondrial cytochrome b (350 bp), and a PCR protocol was run as previously described [19]. All PCR reactions consisted of 4 μl of gDNA and 46 μl of PCR mix containing 3 mM MgCl2, 10 mM Tris-HCl (pH 8.3) and 50 mM KCl, 125 μM of each dNTP, 1 pmol/μl of each primer and 2 U of AmpliTaq Gold (Applied Biosystems, Foster City, CA, USA). Amplified products were examined on 2% agarose gels stained with GelRed (VWR International PBI, Milan, Italy) and visualized on a Gel Logic 100 gel documentation system (Kodak, NY, USA). Amplicons were purified and sequenced in both directions using the same primers as for PCR, employing the Big Dye Terminator v.3.1 chemistry in an automated sequencer (3130 Genetic Analyzer, Applied Biosystems, Foster City, CA, USA). All sequences were aligned using the ClustalW program [50] and compared with those available in GenBank using the BLASTn tool (http://blast.ncbi.nlm.nih.gov/Blast.cgi). To determine the genetic clustering of L. tarentolae, the representative ITS1 sequences obtained from lizard, sand fly and dog samples and from reference strains of L. tarentolae and L. infantum were phylogenetically analyzed along with those of other Leishmania spp. available in the GenBank database. Phylogenetic relationships were inferred using the maximum likelihood (ML) method based on the Kimura 2-parameter model [51], and a discrete gamma distribution was used to model evolutionary rate differences among sites, selected by best-fit model analysis based on the lowest Bayesian information criterion (BIC) score using MEGA6 software [51]. Evolutionary analyses were conducted with 5000 bootstrap replications using MEGA6 software [52]. The corresponding ITS1 sequence of Trypanosoma brucei (GenBank: KU552356.1) was used as the outgroup. Results Of 100 dogs serologically examined, 16 scored positive against promastigotes of L. infantum and/or L. tarentolae by IFAT at one or both sampling periods (June 2020 and March 2021; Table 1). In particular, three dogs scored positive only against promastigotes of L. infantum (titer of 1:80) and five only against those of L. tarentolae (titer up to 1:160). Of the eight animals positive for both species, four were positive at both time points, while the remainder showed different combinations (Table 1). Of the dog blood samples tested by dqPCR, one collected in March 2021 scored positive for L. infantum (Cq = 37.2), whilst two were positive for L. tarentolae (one in June 2020, Cq = 36.2; one in March 2021, Cq = 36.9). Podarcis siculus lizards (n = 28) were captured in the same study area (Fig 1d), including 14 males and 14 females, whereas no snakes or geckoes were collected. Cytological blood smear examination revealed Leishmania spp. amastigote-like forms inside erythrocytes (Fig 3a) and promastigote-like forms (Fig 3b) in one lizard. Out of 224 lizard tissue samples examined by dqPCR, 22 samples (i.e., intestine, heart, kidneys, liver, lungs, spleen and skeletal muscle) scored positive for L. tarentolae, corresponding to 10 positive lizards (35.7%).
Lungs had the highest number of positive samples (six, 21.4%), whereas the lowest Cq value (24.7) was recorded from liver (Table 3). Lizard blood, tails and fecal samples were all negative by dqPCR. BLASTn analysis of ITS1 sequences confirmed the L. tarentolae species identification showing a nucleotide identity of 98.7% with the reference sequence (GenBank: KU680858) available in the GenBank database and with L. tarentolae strain RTAR/IT/81/ISS21-G.6c. The phylogram of ITS1 showed a close phylogenetic relationship by clustering all L. tarentolae sequences herein obtained in a species-specific clade (Sauroleishmania), with the exclusion of the other Leishmania species (bootstrap value of 95%) (Fig. 4). Sequences obtained for L. tarentolae from lizards, dogs and sand flies were deposited in GenBank (MW832546, MW832547, MW832548). Discussion Data herein presented suggests that dogs may be exposed to L. tarentolae, a species largely disregarded by the scientific community since it is merely considered a saurianassociated trypanosomatid, yet it occurs in sympatry with L. infantum. In addition, dogs after initial exposure against promastigotes of L. tarentolae may then seroconvert, remaining seropositive even during the nontransmission sand fly season, suggesting a persistent rather than transient presence of L. tarentolae in a nonpermissive host. This event may happen in endemic areas where reptiles, herpetophilic sand flies and dogs share the same environment, and both Leishmania spp. occur in sympatry. While L. tarentolae has been previously reported exclusively infecting geckoes (i.e., Tarentola annularis and T. mauritanica) [53][54][55], the detection in lacertid lizards, P. siculus, is a new finding, which could be of major importance to better understand the epidemiology and host preference of this protozoan. The occurrence of L. tarentolae in lizards was confirmed both by the detection of Leishmania amastigote-like forms in erythrocytes (Fig. 3a) and by cPCR and dqPCR. At cytology, the Leishmania forms differ from those of L. infantum in that amastigote-like forms infected erythrocytes rather than leucocytes, with possible promastigote-like forms circulating freely in blood (Fig. 3b). The life cycle of L. tarentolae is yet to be fully unraveled, though promastigotes and amastigote-like forms have been previously recorded in blood and intestinal lumen from geckoes [10,22]. Unexpectedly, rather than blood and feces, results of the dqPCR indicated that organs, such as lungs and liver of lizards, are the preferential samples for diagnosing the infection by L. tarentolae, probably due to a low parasitic load of promastigotes/amastigotes-like forms in blood. Indeed, parenchymatous organs showed higher amount of L. tarentolae DNA, which agrees with higher parasitic loads detected. Blood is not considered the ideal sample for the molecular detection of Leishmania spp. due to the low circulation of the parasite [4]. This could represent a hindrance for the molecular identification in mammals, despite the detection of two samples positive with high Cq values. Podarcis lacertid lizards (commonly known as wall lizards) are synanthropic reptiles, which may play a role as reservoirs of other pathogens causing zoonotic diseases such as Lyme disease and rickettsiosis [56,57]. These reptiles live in wall cracks, under stones and anywhere they find shelter and food, with a wide distribution throughout the Mediterranean basin [58]. 
The microhabitats where lizards live are similar to that of breeding and resting sites of sand flies [16]. The high prevalence of infection in lizards overlaps the abundance of herpetophilic S. minuta and of P. perniciosus, which is the main vector of L. infantum [59]. Nonetheless, the finding of S. minuta as the most abundant species (92.7%) compared to P. perniciosus (7.3%) was already observed in other dog shelters from southern Italy where L. infantum is prevalent, such as in Apulia [16], Sicily [60][61][62][63], as well as Morocco [64], Portugal [65] and Spain [21]. In addition, the low number of P. perniciosus collected may be correlated to the species phenology and environmental preferences. Indeed, P. perniciosus is more abundant in domestic or peri-urban settings, and S. minuta in rural or wild areas, similar to the characteristics of the studied shelter [66]. As for many other species of phlebotomine sand flies, S. minuta displays a rather catholic feeding behavior [5] depending on host availability. The detection of human blood in S. minuta suggests the opportunistic attitude of this species, as already demonstrated in Sicily where 64% of engorged sand flies scored positive for human blood [63]. The ectoparasiticide treatment of dogs could have affected the sand fly species composition, similarly to a previous study from a dog shelter where a group of animals were treated with a combination of 10% imidacloprid/4.5% flumethrin collar, and the remaining were left untreated [67]. In that study, S. minuta was the most common sand fly species identified (66.6%) throughout a collection period of 2 years, followed by P. perniciosus (15.1%), Phlebotomus neglectus (8.8%) and Phlebotomus papatasi (0.23%). Although S. minuta has been found molecularly positive for L. tarentolae, the vector capacity has never been demonstrated. However, transmission of this Leishmania sp. most likely occurs as described for mammalian Leishmania, through a pool feeding mechanism [36]. Also, the direct ingestion of the sand fly by lizards cannot be ruled out [36]. Given that the dog population was under an ectoparasiticide treatment and considering the high abundance of S. minuta, dogs could have ingested infected L. tarentolae sand flies. Another peculiar result of this study is the lack of reptile blood in the engorged S. minuta analyzed. This can agree with the hypothesis of a reduced density of preferred reptile hosts in the shelter area, as a consequence of the high predatory pressure exerted by dogs. Hence, further studies are advocated to better elucidate the reptilian and mammalian interactions in the life cycle of L. tarentolae. The molecular detection of L. tarentolae in the blood of two dogs is unprecedented, and the exposure to this parasite was confirmed by the seropositivity in 16 dogs, of which eight scored positive for both Leishmania species and five against promastigotes of L. tarentolae only. This result is new to science, since IFAT using promastigotes of L. tarentolae was herein described for the first time. Although the IFAT method reported should be further validated using serum samples of animals purposely infected with both Leishmania spp., L. tarentolae exposure has been previously demonstrated to be associated with transient infections in mammals [30][31][32][33]. Positive dqPCR blood samples for both species of Leishmania were from seronegative dogs at both time points, suggesting a recent or transient presence of the parasite for which the animal had not yet seroconverted. 
In addition, the exposure of animals to this protozoon is also supported by the detection of L. tarentolae in two P. perniciosus, which usually feed on dogs. The association of this Leishmania species to sand flies of the genus Phlebotomus was already described in 6.6% of Phlebotomus perfiliewi examined in Central Italy [14]. Though the seropositivity of dogs against promastigotes of L. tarentolae does not imply the reservoir competence of canids, these data are of medical and veterinary relevance. Indeed, the detection of a significant reduction in anti-L. infantum antibody titers in 55.4% of L. infantum-seropositive and clinically healthy dogs from the same shelter was recently demonstrated after sampling one year apart [68]. A large proportion of these animals (44.4%) became seronegative (i.e., below the cut-off value of 1:80), further suggesting a possible L. tarentolae transient exposure. Indeed, although the IFAT is considered the gold standard for the diagnosis of L. infantum, as it is based on the visualization of the immunofluorescence on the whole promastigotes, cross-reactions with highly similar species of Leishmania may occur. This event was observed in eight dogs which had titers for both species. However, co-infections could also have caused crossreactivity, given the discrepancies in titers for both species (e.g., dog positive for L. infantum with titers 1:1280 and to L. tarentolae with 1:160). Given the relevance of serology in epidemiological studies and in the management of diseased patients, the variations in antibody titers requires careful examination. Under the above circumstances, considering that the IFAT for the detection of antibodies against Leishmania promastigotes represents the reference serological method for CanL diagnosis and screening, as well as for clinical staging and therapeutic purposes [69,70], the cross-reactivity between the two species of Leishmania might directly impact the interpretation of CanL-related clinical signs, prognosis and treatment. Finally, the sympatric occurrence of L. infantum and L. tarentolae in sand flies (e.g., co-infection in S. minuta) in the specific epidemiological context herein studied could result in hybridization events between these two species. This event has been previously experimentally confirmed for L. infantum and Leishmania major in Lutzomyia longipalpis [71]. The possibility of genetic exchange and hybridization events could have implications for the pathogenicity and visceralization capacity of an otherwise innocuous species such as L. tarentolae. However, these hypotheses need further research. Conclusions Under specific epidemiological contexts where canids, reptiles, herpetophilic sand flies, L. infantum and L. tarentolae occur in sympatry, dogs may be exposed to L. tarentolae. Results of this study further suggest the low host specificity of L. tarentolae in the ability to infect other reptiles (i.e., lacertid lizards) and likely mammals on which S. minuta may feed. Serological findings indicate that a cross-reactivity for both species of Leishmania may occur, having diagnostic and clinical implications for seropositive healthy dogs. Future studies should focus on determining the prevalence of L. tarentolae infection in dogs and its possible interactions with L. infantum in areas where they are sympatric.
v3-fos-license
2023-11-22T16:08:54.120Z
2023-11-20T00:00:00.000
265316014
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2023.1279830/pdf?isPublishedV2=False", "pdf_hash": "2aa3a6d8e10b9788311cfd363559a30fe9e275a1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42614", "s2fieldsofstudy": [ "Medicine" ], "sha1": "d9031735cb952865884ee5f773ef2564636c772d", "year": 2023 }
pes2o/s2orc
Intramural hematoma in the proximal sealing zone of the thoracic endovascular aneurysm repair: frequency and safety in acute and subacute type B dissections Introduction To assess the outcomes after thoracic endovascular aneurysm repair (TEVAR) in the presence of intramural hematoma (IMH) in the proximal sealing zone. Material and methods Patient data were retrospectively extracted from the hospital records of patients treated with TEVAR for acute and chronic aortic dissection type B in one single center. The initial, preoperative, first postoperative, and last follow-up CT scans were evaluated in the aortic 3D multiplanar reformats and the centerline regarding IMH presence in the proximal sealing zone, anatomical preconditions, and the morphological TEVAR complications including migration and bird-beak. Groups with IMH (IMH) and without IMH (no-IMH) were compared. Results Overall, 84 patients (IMH: 42; no-IMH: 42) were treated at the median age of 63 (55; 72) years, of whom 23/84 (27%), 34/84 (40%), and 27/84 (32%) were in the hyperacute, acute and subacute dissection phases, respectively. The bovine arch was found in 10/84 (12%) and the type III arch was most common (43/84; 51%). IMH maximum extent was found in zones 0, 1, 2, and 3 in 14/84 (17%), 17/84 (20%), 18/84 (21%), and 6/84 (7%), respectively. Sealing was achieved in zone II in 71/84 (85%) and the LSA was revascularized in 66/84 (79%) of the overall cohort. Early mortality and paraplegia were 2/84 (2%) each; the stroke rate was 3/84 (4%). During the median follow-up of 22 (4; 43) months, no RTAD was observed. Migration ≥10 mm (IMH: 11/82; no-IMH: 10/82; P = 1.0) and bird-beaks (IMH: 10/82; no-IMH: 12/82; P = 0.8036) were comparable in both groups and accompanied by a low aorta-related mortality (1/82) in both groups. Conclusion The presence of the IMH in the proximal TEVAR sealing zone is frequent and may not be relevant for the occurrence of RTAD, stent-graft migration, or bird-beak formation. Introduction The endovascular treatment of type B dissections (TBD) has evolved into the invasive therapy of choice in the current guidelines (1). Thus, an increasing number of patients are treated in the hyperacute and acute phases due to complicated TBD with acute life-threatening states including organ malperfusion, rupture, and conservatively unmanageable recurrent pain or uncontrollable hypertension. Furthermore, in the presence of risk factors for aortic growth during the follow-up, TEVAR should be considered in the subacute phase to ensure aortic remodeling and prevent rupture and mortality at a later stage (2,3). However, TEVAR may also be associated with a higher perioperative risk, particularly when performed in the acute and hyperacute phases (4). Retrograde type A dissection (RTAD) is one of the most serious complications, associated with high mortality (4). Responsible for the occurrence of RTAD may be the vulnerability of the aortic wall in and proximal to the TEVAR sealing zone in the aortic arch in combination with excessive oversizing (4,5). Therefore, reduced proximal oversizing has been recommended and broadly applied (6). However, reduced oversizing may create new challenges after the absorption of the IMH in the proximal sealing zone, and result in proximal stent-graft malapposition, proximal sealing zone dilatation and stent-graft migration (7).
The aim of our study was to evaluate the rate of IMH in the proximal TEVAR sealing zone and to examine the impact of its presence on the occurrence of RTAD, the remodeling of the descending aorta, and the development of migration and the bird-beak configuration. Materials and methods The study was approved by the local ethical committee of the University Medical Center Tübingen (322/2022BO2). Patient consent was waived due to the retrospective character of the study. TBD treatment protocol and cohort specifications The center's protocol provides for TEVAR treatment of all complicated TBD with rupture or malperfusion in the hyperacute phase. In cases of complicated TBD with uncontrolled hypertension, recurrent pain, or early aortic diameter progression of 5 mm on the CT scan 48 h after diagnosis, invasive treatment is performed in the acute phase. Uncontrolled hypertension is defined as the inability to control blood pressure with intravenous antihypertensive therapy (targeted blood pressure <120/80 mmHg) or to substitute intravenous antihypertensive therapy with oral medication by 1 week after admission. Patients with uncomplicated TBD and aortic growth risk factors, including a proximal entry tear diameter >10 mm or an aortic diameter >40 mm, are discharged from the hospital after oral blood pressure control has been established. Those patients are subjected to elective TEVAR in the subacute phase. All patients with acute TBD are followed up with contrast-enhanced CT scans at 48 h and 7 days on a regular basis. Further follow-ups are performed at 3, 6, and 12 months and yearly thereafter. This retrospective cohort study included patients with hyperacute (<24 h), acute (day 1-14), and subacute (day 15-90) TBD treated between 2016 and 2023 with TEVAR in a tertiary referral hospital. The procedures were identified through a search of the center's database (SAP, Walldorf, Germany). The patients were assigned to two study groups: the no-IMH group without IMH in the proximal TEVAR sealing zone, and the IMH group, in which sealing was performed in the presence of the IMH (Figure 1). Furthermore, at least 10 mm of IMH-free sealing zone was required to define the no-IMH group (8). The study design and the manuscript were organized according to the STROBE guidelines for observational studies (9). Population demographics, co-morbidities, and procedural specifications Demographic data included age and sex. The co-morbidities were obtained from the hospital records: hypertension, current nicotine abuse, orally or insulin-treated diabetes, dyslipoproteinemia, COPD, previous percutaneous coronary intervention (PCI) or coronary artery bypass graft (CABG), and previous ascending aorta replacement.
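A minimal sketch of the timing definitions and study-group assignment just described (hyperacute <24 h, acute day 1-14, subacute day 15-90; no-IMH group requiring at least 10 mm of IMH-free proximal sealing zone) is given below; it is illustrative only, and the function names and example values are assumptions.

```python
# Illustrative sketch of the phase definitions and study-group assignment described above.

def dissection_phase(days_from_onset):
    """Hyperacute < 24 h, acute day 1-14, subacute day 15-90, as defined in the Methods."""
    if days_from_onset < 1:
        return "hyperacute"
    if days_from_onset <= 14:
        return "acute"
    if days_from_onset <= 90:
        return "subacute"
    return "chronic"   # chronic dissections were not part of this cohort

def study_group(imh_free_sealing_mm):
    """At least 10 mm of IMH-free proximal sealing zone defines the no-IMH group."""
    return "no-IMH" if imh_free_sealing_mm >= 10 else "IMH"

print(dissection_phase(0.5), study_group(12))   # hyperacute no-IMH
print(dissection_phase(30), study_group(0))     # subacute IMH
```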
Procedural details including the indication for treatment, the Ishimaru landing zone (zone 0-III), stent-graft type and diameter, the revascularization of the left subclavian artery, the presence of a bovine arch, the arch type (type I-III) (10), and the technical success were drawn from the operation protocols, intraoperative imaging, and the pre- and postoperative CT scans (11). Our procedural protocol includes proximal oversizing of 10% in dissections and the standard use of Relay NBS (Terumo Aortic, Inchinnan, UK) and Gore C-TAG conformable (Gore Medical, Flagstaff, AZ, USA) stent-grafts without a long proximal bare stent. In the case of Relay NBS, tapered stent-grafts are used in dissections by default to reduce distal oversizing. Our preference for the Relay NBS graft can be explained by the proximal deployment mechanisms of this endograft, which include proximal-to-distal deployment, stabilization wires, and the V-patch in the inner aortic curvature, which stabilize the endograft and prevent the bird-beak during deployment, respectively (12). Thanks to the V-patch, the inner portion of the endograft can be securely apposed to the inner aortic curvature. In comparison to the Relay NBS, the C-TAG conformable device with the active control system allows inner curvature apposition through the active endograft orientation feature in the proximal landing zone (13). C-TAG conformable is predominantly used in our center for patients with high true lumen tapering to address the risk of distal stent-induced new entry, because this endograft may have a reduced risk of this complication during the follow-up in comparison to the ring-stent design (14). According to our protocol, the left subclavian artery (LSA) is revascularized in all hemodynamically stable patients during the same procedure. All patients treated electively in the subacute phase receive a cerebrospinal fluid drain on the day before the operation. The endograft deployment was performed in all patients under left ventricular output reduction with rapid pacing. CT analysis CT scan analysis of the admission/preoperative, first postoperative, and last follow-up CT scans was performed with dedicated software (Therenva, Rennes, France). All patients were subjected to a contrast-enhanced CT scan with a slice thickness of 1 mm. The post-processing of the DICOM data set included the centerline measurement of the diameter of the proximal landing zone in the preoperative CT scan and the outer-to-outer total aortic diameter and TL diameters at the level of the pulmonary artery bifurcation. Proximal oversizing was calculated according to the formula: [(proximal stent-graft diameter, mm / outer-to-outer proximal sealing zone diameter, mm) − 1] × 100%. Stent-graft migration was measured by the increase in the distance between the distal left common carotid artery (LCCA) origin and the proximal stent-graft end at the outer curvature of the aorta. The bird-beak was described by the angle between the innermost proximal stent-graft plane and the inner aortic curvature plane, as described in our previous works (15, 16). The CT morphological absence of the contrast agent in the venous phase defined total false lumen thrombosis, whereas partial thrombosis included patent and thrombosed areas of the FL.
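To make the oversizing formula and the migration measurement above concrete, a short illustrative calculation is given below; the example diameters and distances are assumptions, and only the formula itself and the ≥10 mm migration cut-off come from the text.

```python
# Oversizing and migration arithmetic as described in the CT analysis above; example values are invented.

def proximal_oversizing_pct(stent_graft_mm, sealing_zone_mm):
    """[(proximal stent-graft diameter / outer-to-outer proximal sealing zone diameter) - 1] x 100%."""
    return (stent_graft_mm / sealing_zone_mm - 1.0) * 100.0

def migration_mm(lcca_to_stent_baseline_mm, lcca_to_stent_followup_mm):
    """Migration = increase in the distance between the distal LCCA origin and the proximal
    stent-graft end at the outer curvature; >= 10 mm was counted as substantial migration."""
    return lcca_to_stent_followup_mm - lcca_to_stent_baseline_mm

print(round(proximal_oversizing_pct(34, 31), 1))   # 9.7 -> roughly the 10% targeted by the protocol
print(migration_mm(5, 17) >= 10)                   # True: a 12 mm increase counts as migration
```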
Outcome parameters Early follow-up outcomes included the postoperative results within 30 days after the operation, while mid-term follow-up outcomes described the findings of the last follow-up, which consisted of a contrast-enhanced CT scan and a patient interview with physical examination. Primary technical success was reported according to the SVS reporting standards for TEVAR (11). A stroke was defined as a new neurological event that persisted for >24 h and affected the National Institutes of Health Stroke Scale (NIHSS) score. Statistical analysis The statistical analysis was performed with JMP 14 software (SAS, NC, USA). Categorical variables are presented as patient count (percentage), and continuous variables are reported as median (1st quartile; 3rd quartile). Fisher's exact test or the χ²-test was employed for categorical variables. Continuous data were tested for normality and equality of variance by the Kolmogorov-Smirnov and Levene tests, respectively. The t-test was used for normally distributed data, and the Mann-Whitney U test was applied for non-normal continuous variables. Multivariate logistic regression analysis with the Wald test and likelihood ratio test was performed to assess the risk factors for migration. P < 0.05 was considered significant. Patient cohort and procedural parameters The median age of the cohort was 63 (55; 72; Table 1) years and 21/84 were female. Between the study groups there was a trend towards a higher rate of hypertension (37/42 vs. 42/42; P = 0.0551) in the IMH group, and hypercholesterolemia with statin treatment was more common in the no-IMH group (9/42 vs. 2/42; P = 0.0485). Other comorbidity parameters were comparable between the study groups, including current nicotine abuse, diabetes, COPD, previous PCI/CABG, and previous ascending aorta replacement (Table 1). The aortic arch types I, II, and III were found in 20/84, 21/84, and 43/84 of the cohort, respectively, and the distribution was comparable between the study groups (P = 0.7219). The bovine arch was present in 10/84 patients (Table 2). The individuals were treated in the hyperacute, acute, and subacute dissection phases in 23/84, 34/84, and 27/84 of the cases, respectively. The most important TEVAR treatment indications in the hyperacute and acute phases were malperfusion (29/84), early diameter progress (19/84), rupture (18/84), recurrent pain (16/84), and uncontrolled hypertension (6/84). In the subacute phase, patients were treated due to the presence of aortic growth risk factors. Primary entry tear localization was zone II, III, and IV in 30/84, 39/84, and 15/84, respectively. The proximal sealing zone was zone I in 3/84, zone II in 71/84, and zone III in 10/84 of the patients. Patients acutely treated in zone I received a carotid-carotid bypass through the ante-tracheal approach (1/84), whereas for elective treatment in the subacute phase, a proximal scallop TEVAR for the LCCA was customized (2/84). LSA revascularization with a carotid-axillary bypass was performed in 66/84 patients prior to the TEVAR but during the same intervention. The proximal oversizing of the stent-graft was 11% (8; 13), and the most common stent-graft in this study was Relay NBS (Terumo Aortic, Inchinnan, UK), which was used in 76/84 of the cohort. In 8/84, a Gore C-TAG conformable stent-graft was implanted. The technical success of the procedure was achieved in 84/84 (100%).
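The test-selection logic in the statistical analysis above (normality and variance checks, then t-test or Mann-Whitney U) can be sketched as follows; the snippet uses SciPy purely as an illustration, not the JMP workflow actually employed, and the example values are invented.

```python
# Illustrative test-selection logic mirroring the statistical analysis described above (SciPy, not JMP).
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """t-test when both groups pass a Kolmogorov-Smirnov normality check and Levene's test
    for equal variances; otherwise fall back to the Mann-Whitney U test."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    normal = all(
        stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue > alpha for x in (a, b)
    )
    equal_var = stats.levene(a, b).pvalue > alpha
    if normal and equal_var:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

# Hypothetical example: true-lumen diameter increase (mm) in two groups
print(compare_groups([14, 16, 12, 19, 15, 13], [7, 9, 5, 12, 8, 6]))
```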
IMH extent and dynamics The presence of the IMH in the aortic arch zones 0-III affected 55/84 individuals (Figure 2). The most proximal IMH extent is shown in Table 2. In 14/84 patients the IMH reached zone 0, of whom 5/17 had a hematoma of the ascending aorta. Zone I was affected in 31/84 cases, whereas an IMH was present in zone II in 49/84. Naturally, the extent of the IMH was more proximal in the patients from the IMH group (P < 0.0001) but was also found in zones II (8/42) and III (5/42) of the no-IMH group. Those patients were assigned to the no-IMH group due to an IMH-free proximal sealing zone length ≥10 mm. The patients who were treated electively in the subacute phase (median 31 days; 21; 138) had an IMH in 23/27 cases on the diagnosis CT scan. The IMH remained stable until the treatment in 22 (96%) patients and showed an extent increase in only one case. Early follow-up outcomes At 30 days, mortality occurred in 2/84 patients (Table 3). One patient with ruptured acute TBD died on the day of the operation, due to the continuous distal perfusion of the false lumen rupture site in the descending aorta. The second patient developed an infrarenal abdominal rupture of the false lumen on the second postoperative day. Stroke occurred in 3/84 (4%), all of which were non-disabling. One patient with vertebral artery transposition had a minor cerebellar stroke with transient vertigo. A second patient with a covered left vertebral artery arising from the aortic arch experienced postoperative delirium, which gradually disappeared, and a third patient, without LSA revascularization prior to the TEVAR, had a smaller posterior stroke and recovered completely during the hospital stay. Furthermore, 2/84 (2%) patients showed postoperative paraplegia. In both cases, cerebrospinal fluid drainage was not implemented before TEVAR, due to the aortic rupture. Both had aortic coverage of over 25 cm and the LSA was covered without revascularization. One patient recovered during the hospital stay after the implementation of cerebrospinal fluid drainage, whereas the other remained permanently paraplegic. During the early follow-up, RTAD and type I endoleaks were not observed. The median ICU stay was 0 (0; 1) days, and the incidence of bird-beaks was low (2/84; 2%; Table 3). Mid-term follow-up outcomes During the follow-up of 22 (4; 43) months, two more patients died (2/82; 2%): one patient at 3 months with prosthesis infection, while the second had lethal bleeding. Complete false lumen thrombosis was present in 80/82 descending thoracic aortas, with only two patients with partial thrombosis in the IMH group. The remodeling of the true lumen was more prominent in the no-IMH group, with a diameter increase from the preoperative to the follow-up measurement of 14 (4; 19) mm. In comparison, the diameter increase in the IMH group was 7 (2; 15) mm (P = 0.0225). The aortic diameter remodeling (reduction) was less pronounced in the overall cohort, at 2 (−1; 5) mm, with significantly better remodeling in the IMH group (no-IMH: 0 (−2; 3); IMH: 3 (0; 5); P = 0.0436).
Risk of stent-graft migration

The multivariate logistic regression was performed to evaluate the risk of stent-graft migration, and the results are shown in Table 5. The "whole model test" was statistically significant (P = 0.001). Bird-beak formation was the main risk factor for stent-graft migration (OR 13.3; CI 2.9-59.3.5; P = 0.0007; Table 5), followed by dSINE occurrence (OR 11.1; CI 2.1-58.4; P = 0.0045). Stent-graft type (P = 0.5410), treatment zone (P = 0.2738), arch type (P = 0.7859), the timing (dissection phase) of TEVAR (P = 0.1333), true lumen (P = 0.9227) and aortic diameter (P = 0.9892) remodeling, and the presence of IMH in the proximal sealing zone (P = 0.4822) were not significant in the multivariate logistic regression. Interestingly, proximal oversizing ≤5% was not significant in the Wald test (P = 0.0614) but was significant in the likelihood ratio test (P = 0.0450), and stood out with the highest odds ratio (OR 21.5; CI 0.9-493.8).

Discussion

Theoretically, the anticipated proximal TEVAR landing zone in TBD may consist of a completely healthy or, in extremo, a totally dissected aortic wall. The latter is not regarded as an adequate and sustainable proximal landing zone even though the primary entry tear may be initially covered and false lumen thrombosis induced. A dissected proximal landing zone may lead to proximal SINE, stent-graft migration, and type IA endoleak in the short or long term (18). Therefore, a proximal landing zone free of dissection is generally recommended (19,20,21). Other authors reported the occurrence of RTAD after TEVAR treatment of type B IMH with a diseased proximal sealing zone (affected by the IMH) (22). The current study reports the high incidence of IMH in the proximal TEVAR landing zones in TBD, compares the outcomes of patients with and without IMH at the proximal stent-graft end, and reports comparable results regarding RTAD, bird-beak formation, and stent-graft migration.

Our report is in line with the previous study by Kuo et al., with considerable rates of IMH in the arch landing zones 0-III (8). The authors showed that 37% of their cohort needed zone 0 or zone I debranching to achieve at least a 10 mm IMH-free proximal TEVAR landing zone (8). Furthermore, they suspected that the occurrence of 3 RTADs may have been associated with the IMH at the proximal stent-graft end, without proving this in the multivariate analysis due to the low patient and event numbers (8). The colleagues therefore recommended further evaluation of these findings in greater cohorts. We included 84 patients, and unlike Kuo et al.,
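To illustrate how the odds ratios and the Wald vs. likelihood-ratio discrepancy for the oversizing term could be reproduced, a minimal sketch follows; the input file and variable names are hypothetical, and the study itself used JMP rather than Python.

```python
# Hypothetical sketch: odds ratios with CIs (Wald) and a likelihood-ratio test
# for one predictor (oversizing <=5%) in a logistic regression on migration.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("tevar_cohort.csv")  # hypothetical per-patient data
y = df["migration_ge_10mm"]
X_full = sm.add_constant(df[["bird_beak", "dsine", "oversizing_le_5pct"]])
X_reduced = sm.add_constant(df[["bird_beak", "dsine"]])  # model without the tested term

full = sm.Logit(y, X_full).fit(disp=0)
reduced = sm.Logit(y, X_reduced).fit(disp=0)

odds_ratios = np.exp(full.params)       # exponentiated coefficients
conf_int = np.exp(full.conf_int())      # 95% confidence intervals on the OR scale
wald_p = full.pvalues                   # per-term Wald p-values

# Likelihood-ratio test for the dropped predictor: 2*(llf_full - llf_reduced) ~ chi2(1)
lr_stat = 2 * (full.llf - reduced.llf)
lr_p = stats.chi2.sf(lr_stat, df=1)
print(odds_ratios, conf_int, wald_p["oversizing_le_5pct"], lr_p)
```

The Wald test relies on the estimated coefficient and its standard error, which can be unstable when events are few, whereas the likelihood-ratio test compares the full and reduced model fits directly; that difference is one plausible reason the two tests can disagree for a sparse predictor such as oversizing ≤5%.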
we excluded chronic TBD from the analysis.Furthermore, this study reports the results of a cohort with a relatively high proportion of patients treated in the hyperacute phase (27%), who may be at a higher risk of RTAD due to the fragility of the aortic wall as reported previously (4,8).RTAD did not occur in this cohort, which may imply the safety of the proximal landing in the IMH with a moderate oversizing of approximately 10% as used in this study.Previously, oversizing of 0%-10% was recommended for the treatment of TBD to reduce the risk of RTAD (5).However, the uncertainty of the IMH fate during the follow-up (7), the result of the hematoma absorption, and its unclear effect on the dilatation of the proximal landing zone may suggest that a targeted oversizing of 10%, as applied in our study, may be reasonable in IMHaffected proximal landing zones.This is even more important due to the result of our multivariate regression analysis, which isolated an oversizing of 0%-5% as a risk factor for future stent-graft migration (OR 20.2).Furthermore, in the short term, the extent of the IMH was stable in the patients treated in the subacute phase, thus the delay of the TEVAR to the subacute phase for the purpose of IMH absorption may not be advisable.As shown by Evangelista et al. for type B IMH the absorption of the hematoma may be expected at 6 months (7) after diagnosis. A significant increase in bird-beaks was observed in our study during the follow-up.Bird-beaks have been reported to increase the risk of type I endoleaks after TEVAR and they may lead to the instability of the stent-graft in the proximal landing zone with migration (23,24).However, no type IA endoleaks were observed in our study and substantial migration ≥10 mm was found in 26% of the overall cohort, however, without any difference between the groups (P = 1.0).Furthermore, the increase of bird-beaks in the IMH and the no-IMH groups during the follow-up was equal (+10 cases).These observations may suggest that the presence of the IMH at the proximal stentgraft end does not affect the stent-graft stability in the proximal sealing zone and that migration and bird-beaks may be somewhat associated with other effects.These observations were confirmed by the multivariate analysis to identify the risk factors for migration ≥10 mm.Proximal landing in the IMH was not relevant nor were other factors including stent-graft type, treatment zone, arch type, treatment phase, and pronounced remodeling.The major risk factors for migration were the birdbeak occurrence during the follow-up (P = 0.0007), the occurrence of dSINE (P = 0.0045), and the proximal oversizing ≤5% (P = 0.0614). The aortic remodeling of the true lumen was superior in the no-IMH group, which, however, may be explained by the substantially higher rate of subacute dissections with stiffer dissection membranes in the IMH group (25).Regarding the aortic diameter regression after TEVAR, the IMH group showed significantly better remodeling.Our previous works described a better diameter remodeling of subacute/chronic dissections than of those treated in the acute phase, which may be an explanation due to the higher rate of subacute dissections in the IMH group (26). 
This study has several limitations. The findings of this retrospective observational study need to be confirmed by studies with a robust prospective design. Although the measurements and the study outcomes were standardized and well-defined, our study may be susceptible to bias due to its retrospective and single-center design. Furthermore, the study included a limited patient number and thus may be underpowered to determine the risk of events with a low incidence, such as RTAD. Nevertheless, we consider that this study may be helpful for further evaluations, e.g., in a meta-analysis, due to its well-defined outcome parameters and reporting standards.

In conclusion, this study implies that the treatment of type B aortic dissections with TEVAR in the early dissection phases may be safe, with a low risk of RTAD and considerable aortic remodeling in the thoracic aorta. The presence of IMH at the proximal stent-graft end may not affect TEVAR performance in the proximal landing zone in terms of bird-beak formation and migration. dSINE and bird-beak occurrence, as well as proximal stent-graft oversizing ≤5%, were identified as major risk factors for stent-graft migration.

FIGURE 1 (A) Shows the intramural hematoma with proximal dissection extent in the aortic centerline at the site of the left subclavian artery. (B) Demonstrates the intramural hematoma extent in the proximal landing zone II between the left carotid and left subclavian artery.

FIGURE 2 (A) Shows the preoperative sagittal CT scan at the time of the type B dissection diagnosis in a patient with intramural hematoma (at the site of the left subclavian artery). (B) Demonstrates the postoperative sagittal CT scan after TEVAR in the landing zone II. (C) Indicates the sagittal CT scan of the same patient: the stability of the proximal endograft position is well visible. The remodeling of the downstream aorta is shown in (D).

TABLE 1 Demographic characteristics and comorbidity of the cohort. TABLE 2 Planning and procedural parameters of the cohort. TABLE 3 30-day outcome of the cohort. TABLE 4 Mid-term outcome of the cohort. TABLE 5 Multivariate logistic regression to evaluate the risk factors of stent-graft migration.
v3-fos-license
2021-11-04T13:52:02.008Z
2021-05-17T00:00:00.000
236545745
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jbm4.10514", "pdf_hash": "40acca3fa002a1525721944cfb523d016481f329", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42615", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3f9ec3ef1bc0364162b1ee39d9f3e77598a34290", "year": 2021 }
pes2o/s2orc
Health‐related quality of life outcomes after surgical treatment of atypical femur fractures: a multicenter retrospective cohort study ABSTRACT The objective of this study was to examine the health‐related quality of life (HRQOL) outcomes for surgically‐treated atypical femur fractures (AFFs) compared to typical femoral diaphyseal fractures. Two large trauma center databases were retrospectively queried for surgically‐treated femur fractures. Fractures were grouped into AFFs and compared to a control cohort. Controls for the AFF group included women with diaphyseal fractures without additional AFF characteristics. Patients were contacted for administration of the Short Form‐36v2 Health Survey. Surveys were completed an average of 30.3 months (range, 6–138 months) and 25.5 months (range, 5–77 months) postoperatively for the AFF and non‐AFF groups, respectively. All patients were female, with 46 patients in the AFF and 26 patients in the non‐AFF group. The average age of the AFF group was 70.1 years compared with an average age of 67.4 years in the non‐AFF group (p = 0.287). Over 90% (91.3%) of patients in the AFF group had a history of bisphosphonate use while 26.9% of patients in the non‐AFF group had used bisphosphonates (p < 0.0001). Patients with AFF reported their postoperative physical and mental health to be no different than similarly aged patients with femoral diaphyseal fractures, as measured by the Short Form 36, version 2 (SF‐36v2) Health Survey. These data suggest that mid‐term patient‐reported quality of life outcomes are similar among women who sustain an AFF compared to a cohort of more typical femoral diaphyseal fractures. © 2021 The Authors. JBMR Plus published by Wiley Periodicals LLC. on behalf of American Society for Bone and Mineral Research. Introduction B isphosphonates are currently one of the most commonly prescribed medications to prevent osteoporotic fractures. (1)(2)(3) Bisphosphonates are frequently used as first-line agents for postmenopausal osteoporosis. (4) Alendronate was first synthesized in the 1970s, and by 2006 in the Unites States about 30 million prescriptions were written annually, accounting for nearly 15% of postmenopausal women. (5) Because the incidence of osteoporotic fractures is expected to increase with the aging population, the prescriptions of bisphosphonates had also been predicted to increase. (6) Yet with the advent of newer medications to treat osteoporosis, and with concerns about rare side effects from bisphosphonates such as osteonecrosis of the jaw and atypical femoral fractures (AFFs), a relative decrease in projected bisphosphonate utilization in the past decade has been realized. (7)(8)(9) Not limited to management of osteoporosis, bisphosphonates have been used to treat a variety of pathologies ranging from Paget's disease of bone to hypercalcemia of malignancy. Safety profiles for bisphosphonates show they are generally well-tolerated, but more recently there have been concerns about the long-term use of bisphosphonates. (1,10,11) In 2005, Odvina et al. (12) first reported on a small number of patients who experienced primarily non-spinal fractures of the lower extremity and femur while on bisphosphonate therapy, with histological analysis showing markedly suppressed bone formation after prolonged bisphosphonate use. Subsequently, several authors have reported similar findings, all associated with chronic bisphosphonate use. 
(6,13,14) In each case, the fractures were found to be secondary to low-energy mechanisms, presented with prodromal thigh pain, and demonstrating a prominent femoral cortex medial spike radiographically ( Figure 1). (15)(16)(17) Bilateral AFFs have been reported in up to 48% of cases. (18)(19)(20)(21)(22) Cortical thickening near the fracture site and delayed union after surgical repair have also been reported. (17,(23)(24)(25) A taskforce committee report commissioned by the American Society for Bone and Mineral Research (ASBMR) has established definitions of the emerging phenomenon to standardize investigations into the pathophysiology, epidemiology, and orthopedic clinical and medical management of AFFs. (26) Numerous authors have reported on patients who have sustained AFFs, including presentations, treatments, duration and type of bisphosphonate use, and final clinical outcomes. (27,28) Although the radiographic characteristics and potential risk factors for AFFs have been welldescribed, less has been reported about the health-related quality of life (HRQOL) outcomes after patients have undergone surgical treatment for their AFF. The objective of this study was to collect HRQOL outcomes from patients who underwent surgical treatment for their AFFs and compare them to a similar cohort who underwent surgical treatment for diaphyseal femoral fractures. In this way, we aimed to determine the relative effect of repaired AFFs compared with an otherwise healthy cohort sustaining a similar isolated fracture on quality of life measures. Patients and Methods After Institutional Review Board (IRB) approval, two large trauma center institution databases were queried for all surgically treated fractures of the femur using International Classification of Diseases, Ninth Revision (ICD-9) codes from January 2004 to December 2014 at the Massachusetts General Hospital (MGH) and the University of Louisville. Only female patients were enrolled ( Figure 2). Fractures of the hip and supracondylar regions were excluded from the study, leaving diaphyseal femoral fractures for review. All potential patient radiographs were reviewed by two orthopedic surgeons who were blinded to clinical details, and only fractures without previous implants or injuries were included. Based on imaging characteristics and mechanisms of injury, fractures were classified as either AFFs or non-AFFs according to the 2014 ASBMR guidelines. (26) Bisphosphonate use and duration was not used to determine fracture type. Medical records were reviewed for use of any anti-osteoporosis medication, with the focus on bisphosphonates, in the time preceding the fracture. Patients from both groups were then contacted via telephone and/or direct mailings where the Short Form 36, version-2 (SF-36v2) Health Survey and a brief survey of injury mechanism, medication use, smoking, employment, and previous fracture history was administered. Mean Physical Component Score (PCS) and Mental Component Score (MCS) were calculated according to the methods described by Taft et al. (29) The average duration of bisphosphonate use was calculated. If a range of bisphosphonate use duration was given for a patient, the average number was used and those who had used bisphosphonates only once, were assigned a duration of 1 year because the majority of bisphosphonate medications have a relatively long half-life. Additionally, mechanisms of injury were categorized as either high energy or low energy. 
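The duration-coding rule described above (the average of a reported range, and 1 year assigned for a single reported use) is simple enough to express directly. The snippet below is an illustrative sketch with made-up column names and example values, not the authors' actual data handling.

```python
# Hypothetical sketch of the bisphosphonate-duration coding rule described above.
import pandas as pd

def code_duration(raw: str) -> float:
    """Convert a free-text duration report into years."""
    raw = raw.strip().lower()
    if raw in {"once", "single use"}:
        return 1.0                       # single use counted as 1 year (long half-life)
    if "-" in raw:                       # e.g., "3-5" years -> average of the range
        low, high = (float(x) for x in raw.split("-"))
        return (low + high) / 2
    return float(raw)                    # plain number of years

patients = pd.DataFrame({"bp_duration_raw": ["once", "3-5", "10"]})
patients["bp_duration_years"] = patients["bp_duration_raw"].map(code_duration)
print(patients)
```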
Statistical analysis was performed using Microsoft Excel (Microsoft Corp., Redmond, WA, USA) for Student t tests and the MedCalc online statistical calculator (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org/) for proportion tests.

Results

A total of 119 patients were identified with diaphyseal femoral fractures, and 72 patients (61%) were available for follow-up; all patients completed the SF-36v2 and brief health survey. Of these, 46 AFFs and 26 controls with isolated diaphyseal femur fractures were identified using the major and minor criteria outlined by the ASBMR (Table 1). (26) The AFF and non-AFF groups had similar ages: 70.1 ± 8.8 years (mean ± standard deviation) for the AFF group and 67.4 ± 10.8 years for the control group (p = 0.287) (Table 2). Over 90% (91.3%) of patients in the AFF group had used bisphosphonates, whereas 26.9% of patients in the non-AFF group had used bisphosphonates (p < 0.0001). Average follow-up between the two groups was similar at 30.3 ± 29.1 and 25.5 ± 18.7 months for the AFF and control groups, respectively (p = 0.397). The average duration of bisphosphonate use was 9.4 ± 5.6 years in the AFF group and 5.6 ± 4.0 years in the non-AFF group (p = 0.060). The mechanisms of injury differed significantly between the two groups, with the AFF group generally experiencing lower-energy mechanisms (ground-level falls and fractures while simply walking) as compared to the higher-energy mechanisms (motor vehicle accidents) in the control group. Lower-energy mechanisms accounted for 95.7% of the fracture mechanisms in the AFF group versus 69.2% in the non-AFF group (p = 0.002). Patients with AFFs rated mid-term postoperative physical and mental health similar to that of non-AFFs, as measured by the PCS at 38.5 ± 10.5 for AFFs versus 35.9 ± 10.4 for non-AFFs (p = 0.323) and the MCS at 52.7 ± 11.5 for AFFs versus 51.4 ± 11.7 for non-AFFs (p = 0.636) calculated from the SF-36v2.

Discussion

In this study, we aimed to assess self-reported health-related quality of life (HRQOL) outcomes in women who sustained atypical femur fractures compared to those with isolated femoral diaphyseal fractures, both of whom underwent surgical repair of their fractures. The SF-36v2 questionnaire is used to assess an individual's health status, which can be used to compare outcomes between groups of patients by type of intervention or disease. The questionnaire consists of eight scales yielding two summary measures: PCS and MCS. With an average follow-up time of 2.4 years, both groups reported similar postoperative PCS and MCS. Our finding of similar self-reported HRQOL between AFF and non-AFF patients was unexpected, given that the AFF group generally experienced lower-energy trauma (ground-level fall, break while walking) compared to the non-AFF group. Other authors have shown that high-energy trauma can adversely affect HRQOL outcomes in orthopedic patients. (30) However, Ko and Chang (31) also reported on long-bone fractures with similar outcomes for overall SF-36v2 scores between higher-energy femoral shaft fractures and lower-energy isolated tibial shaft fractures after intramedullary nailing and subsequent implant removal and healing. Yet when divided into the survey's domains, the PCS had a higher score for femoral shaft fracture patients (p = 0.002). (31) Our study suggested that although generally lower-energy mechanisms were involved in the AFF group, the health effect upon the patient was still similar to the higher-energy traumas in the non-AFF group.
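Because only summary statistics are reported, the group comparisons above can be approximated directly from them. The sketch below uses the published means, standard deviations, and percentages; the counts for the proportion test are reconstructed from the reported percentages and are therefore approximate, and the original analysis was run in Excel and MedCalc rather than Python.

```python
# Approximate re-creation of two reported comparisons from summary statistics only.
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

# PCS: 38.5 +/- 10.5 (n = 46, AFF) vs. 35.9 +/- 10.4 (n = 26, non-AFF)
t_stat, p_pcs = stats.ttest_ind_from_stats(
    mean1=38.5, std1=10.5, nobs1=46,
    mean2=35.9, std2=10.4, nobs2=26,
)
print(f"PCS comparison: t = {t_stat:.2f}, p = {p_pcs:.3f}")  # close to the reported p = 0.323

# Bisphosphonate use: 91.3% of 46 (~42 patients) vs. 26.9% of 26 (~7 patients)
z_stat, p_bp = proportions_ztest(count=[42, 7], nobs=[46, 26])
print(f"Bisphosphonate use: z = {z_stat:.2f}, p = {p_bp:.4f}")  # consistent with p < 0.0001
```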
The clinical significance of our study is that patients with AFF may expect similar midterm patient-reported physical and mental outcomes to their higher-energy non-AFF counterparts. Although our study suggests similar mid-term outcomes between the cohorts after healing, other authors have reported higher rates of early postoperative complications among AFF patients with bisphosphonate use. A study by Edwards et al. () in 2013 reviewed data from the United States Food and Drug Administration Adverse Event Reporting System (FAERS), which revealed that 26% of cases of AFFs exhibited delayed healing or non-healing. Additionally, Bogdan et al. () in 2015, reported a 12% failure rate as well as delayed average time to union after surgical repair of AFFs. Although we realize that bisphosphonates remain a choice of therapy for individuals at high risk for fracture, the potential concern of AFFs has come to the attention of practitioners and patients, and prescriptions for bisphosphonates have declined in the past decade. However, the fracture prevention benefits of bisphosphonates remain, and it should be noted that the overall risk-benefit profile of bisphosphonates should be carefully considered in at-risk patient populations. The current study indicates that mid-term health-related outcomes following surgical repair of AFF are similar to those of surgically-repaired diaphyseal fractures of the femur. Our study had several limitations, including the retrospective nature of data collection in this cohort study with limited sample size. Only those subjects who survived after fracture repair were available to enroll in the study, thereby possibly introducing a survival bias. A large number of patients were also lost to follow-up in this cohort, suggesting that our results may not be generalizable to the population at large. Still, to our knowledge, this represents one of the largest cohorts of AFFs studied, and further provides HRQOLs measures for these AFF patients. In spite of the small number of patients, to our knowledge, this is the first study to include standardized HRQOL outcomes, SF-36v2, for patients treated surgically for AFFs. The information presented will be useful to the practicing orthopedic surgeon and the medical community, particularly with regard to counseling patients on mid-term postoperative expectations after AFFs. Disclosures The authors have no conflicts of interest to declare. The manuscript, including all data, figures, tables, and supplementary materials, has not been previously reported or published and will not be submitted to another journal while under review by JBMR Plus. The data included has not been the subject of previous publications.
v3-fos-license
2019-05-16T13:05:55.036Z
2014-06-16T00:00:00.000
155002885
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1155/2014/787261", "pdf_hash": "ec52b96cddf5495cdb72b63f4f234c3feb285719", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42616", "s2fieldsofstudy": [ "History", "Economics" ], "sha1": "0ebc1217252cca2e403789616783cd7fd2f03d73", "year": 2014 }
pes2o/s2orc
From the City to the Suburbs: Characteristics of Suburban Neighborhoods Where Chicago Housing Choice Voucher Households Relocated

The Housing Choice Voucher program (HCV) is a federally supported demand-side housing subsidy. Under HCV, eligible households are encouraged to secure affordable housing in favorable neighborhoods, including suburban neighborhoods. To what extent, however, is the supply of affordable rental housing located in suburban communities that offer favorable amenities meeting the increased demand? Using the Geography of Opportunity as a framework, this study examines the mobility results of traditional HCV households who moved from the city of Chicago to surrounding suburban neighborhoods to reveal characteristics of destination communities. Findings indicate that HCV households tend to move into suburban renter neighborhoods that have high poor, African American, and female-headed household populations. Policy makers are encouraged to consider findings to improve life outcomes of suburban HCV program participants.

Introduction

HCV is a federally funded demand-side housing subsidy. Income-eligible families who receive vouchers can use the subsidy to secure housing in neighborhoods across the country. Many HCV households choose to lease in place, consequently remaining in the same or similar urban neighborhoods [1]. However, public housing and community revitalization efforts that decrease housing units can restrict lease-in-place opportunities. HCV households therefore may consider suburban communities as a source for affordable rental housing.

This study examines HCV mobility within a Geography of Opportunity framework to assess the result of choices made by HCV households in selecting a suburban community in which to live. The Geography of Opportunity framework contends that housing location influences one's life outcomes. For instance, communities that have low poverty and crime rates and are racially and ethnically diverse are more likely to offer residents access to high-performing schools and better employment opportunities.

Examining HCV relocation from a local context, Chicago is the focus of this study. Its experience managing various relocation programs using HCV has resulted in numerous studies investigating programmatic success or failure related to intracity relocation. However, research on Chicago HCV relocation from the urban center to suburban spaces is limited. Therefore, the focus of this research is to determine whether the affordable rental housing to which Chicago HCV households move is located in suburban communities with characteristics that support positive life achievements. To answer associated research questions through the Geography of Opportunity framework, this cross-sectional study statistically and descriptively compares census tracts within six counties located in the Chicago metropolitan area where Chicago Housing Authority (CHA) HCV households relocated between 2000 and 2007 (the sample in this study does not include households under tenant protection vouchers or households participating in special mobility programs). The study involves an assessment of the racial and ethnic composition, household composition, income, and proximity to employment centers of destination communities of HCV households.
Housing Choice Voucher Program With the passage of the US Housing Act in 1949, the federal government assumed the responsibility of ensuring that affordable rental housing was made available to low-income persons throughout the US.Subsequently, several federal programs were developed towards this end, including the public housing development program and the programs that gave rise to the HCV program. In 1974, President Gerald R. Ford signed into law the Housing and Community Development Act.Among other initiatives, the Act created the Section 8 certificate program (Certificate).The Certificate program, managed by the US Department of Housing and Urban Development (HUD), provided rental-housing subsidies awarded by local, state, and regional housing agencies to eligible low-income households that earned between 30% and 80% of area median income.These households, in turn, used the vouchers to secure rental housing in the private market (CBPP, 2009).Certificate holders were required to commit 30% of their income towards the monthly rental cost of a unit as well as assume responsibility for payment of all utilities.The Public Housing Authority (PHA) that administered the voucher paid the balance of the rent, typically 70% to 110% of fair market rent (CBPP, 2009).By design, this program relied upon the private sector to provide affordable and adequate housing.The appeal of the Certificate program is that the subsidy is not place-based; instead, it travels with the client, thereby increasing the likelihood of certificate holders being able to secure housing in mixed income neighborhoods. In 1984, congress authorized the Rental Voucher demonstration program.Though the Rental Voucher program was designed similarly to the certificate program, it included one fundamental change.In the Rental Voucher program, rental subsidies were fixed and did not adjust as rents increased, as was previous practice.The Quality Housing and Work Responsibility Act (QHWRA), which was passed in 1998, merged both programs (Certificate and Rental Voucher), creating the Housing Choice Voucher Program (HCV).Over 2.5 million HCVs are in circulation and managed by over 2,500 PHAs [2].Each PHA is subject to an annual cap on the number of federally allocated vouchers it can administer.Although congress approves the creation of new vouchers annually, it typically only renews existing vouchers. Since the later 1990, however, federal housing policy shifted from developing and managing affordable housing units to attempting to deconcentrate poverty in targeted communities by increasing its focus on issuing housing HCV to qualified households [3].As a relocation source that capitalizes on available units in the private housing market, a unique feature of the HCV is portability.It enables households to move within urban neighborhoods and from urban neighborhoods to suburban and rural communities with the intention of meeting one of the HCV program objectives of moving from high poverty to low poverty neighborhoods (CBPP, 2009).Such moves should facilitate households' access to communities that provide high quality resources.Low income and poor households often are without adequate means in which to exercise free reign in moving to highly resourced communities with social capital building opportunities.The HCV program attempts to provide such access. 
Several studies have examined mobility among households that have received HCVs [4][5][6][7].However, results of HCV facilitated mobility remain mixed.Program critics contend that HCV holders do not relocate to high opportunity neighborhoods.On the contrary, households were found to move to neighborhoods that are either as poor or slightly less poor than their origination neighborhood [8][9][10][11] or are racially segregated [12].On the contrary, several studies suggest that moves accomplished through HCV result in neighborhood satisfaction [13], moves to low poverty, low crime areas, and to neighborhoods with high employment rates [1,14] and result in children of HCV families to be more likely to graduate from high school and attend college [15]. Three well-known federally funded mobility programs that involved the use of HCV include the Gautreaux Assisted Housing Program (Gautreaux One), the Moving to Opportunity for Fair Housing Demonstration Program (MTO), and the Gautreaux Assisted Housing Program Two (Gautreaux Two).Gautreaux One and Two and the MTO program serve as an important backdrop to the goals of this study because they all used HCVs to relocate households to high opportunity neighborhoods throughout the Chicago metropolitan area. The Gautreaux program began in 1966 as a series of lawsuits against the CHA purporting that the agency purposefully implemented policies that segregated African American families to select Chicago neighborhoods.The courts ruled against the CHA and subsequently instructed it to implement Gautreaux in 1976.One component of the program required CHA to provide African American residents with Section 8 certificates (now HCVs) and mandated them to secure housing in low poverty desegregated areas of the metropolitan region.Participants were screened, provided with counseling, and given referrals to housing in low poverty communities over the course of the program.Over 7,500 families participated in the program where over half used the Section 8 certificates to relocate to housing in suburban communities.The program was completed in 1998. Studies that evaluated the effects of Gautreaux program produced varying results from determining that moves availed clientele access to greater opportunities than had existed in their previous neighborhoods [16][17][18] to a critique of evaluation methodologies used by scholars that resulted in skewed positive results [19]. The Gautreaux experience led to other housing authorities implementing similar housing desegregation models and to the development of a national demonstration program-Moving to Opportunity for Fair Housing Program (MTO).MTO was a HUD developed mobility demonstration social experiment that operated from 1994 to 1998 in Chicago, Los Angeles, Boston, Baltimore, and New York City.It was designed to reveal whether high-risk families realized positive outcomes if they resided in low poverty areas.The structure of the program required the 4,600 participating households to be randomly assigned to three groups: the experimental group offered families a housing voucher and housing counseling and mandated them to locate housing in neighborhoods that had no more than 10% of its population living below the poverty level.The comparison group provided families with only a housing voucher; and the control group consisted of families who remained in public housing [20,21].Families were tracked over the life of the program in an effort to determine the effects of the intervention. 
MTO studies also produced conflicting results.Early studies suggested that it was successful in moving experimental group families to improved neighborhoods that provided strong schools and good health outcomes [13,22].Subsequent studies, however, found that the quality of the neighborhoods to which experimental families moved to was not as strong as those to which Gautreaux families moved [23] or had declined considerably over time [24].Also, it was found that strong neighborhoods did not afford experimental families any better access to employment opportunities [24].Ultimately, scholars contend that without ongoing counseling, experimental families were not successful in benefiting from the whole host of opportunities available in new neighborhoods. In 2001, the CHA launched Gautreaux Two, a special mobility program for public housing residents.Similar to Gautreaux One, Gautreaux Two offered 500 public housing residents the opportunity to use HCVs to move from public housing sites to private market homes largely within the city of Chicago, that had low poverty rates and low numbers of African American households [9,25].However, families that initially moved into targeted neighborhoods were more found to make subsequent moves to communities that were poor and racially segregated [9]. Mobility, Opportunity, and Neighborhood Effects Mobility via HCV subsidies is contextualized within neighborhood effects literature that examines whether neighborhood indicators affect social development of household residents [26,27] and whether, particular indicators are optimal in encouraging positive life outcomes for disadvantaged households [28].Neighborhood effects, viewed as a function of how neighbors influence behavior and attitudes of each other, are said to be transmitted through socialization, modeling, and neighborhood resources [26,[29][30][31]. As discussed above, a general outcome of US federal policy by way of HCV is to deconcentrate poverty, if not relocate it, as an attempt to disrupt spatial distribution of disadvantage via the promotion of mixed income neighborhood policies.Further, families who receive HCV should theoretically be able to use the voucher to secure housing in favorable socially mixed or "high-opportunity neighborhoods." Galster and killen [32] developed the Geography of Opportunity framework that includes general attributes that define high opportunity neighborhoods.The authors contend that geographic neighborhood effects influence opportunity where individuals who reside in communities that are resource deficient will experience difficulty in achieving success.Conversely, individuals who move to communities that are of high quality and offer resources geared towards achieving success will become successful.Geography of Opportunity or highopportunity neighborhoods boast strong employment levels, high employment rates, high performing school districts with better educational resources, low crime rates, a high proportion of non-Hispanic Whites, and access to extensive social and recreational resources [6,17,20,33]. 
Suburban HCV facilitated moves, according to the Geography of Opportunity, should be to high-opportunity neighborhoods.Opportunities resulting from neighborhood effects on residents include employment gains among lowincome residents [34,35], improved outcomes for children [36], and, in certain instances, access to differentiated local services [37].Such relocation opportunities where HCV households can capitalize on suburban social and economic assets are found to produce an increase in positive life outcomes, higher efficacy, and better life chances for targeted families [30]. Generally, HCV suburban relocation is low when compared to urban relocation [38].Urban and intracity HCV relocations maximize access to established social networks, economic and racial diversity, public transportation options, and more affordable housing and employment opportunities [39].On the contrary, suburban HCV relocation is considered optimal in that it offers less dense environments, low crime rates, and better performing schools, on average [40].However, suburban NIMBYism (Not in My Back Yard) restricts HCV relocation through collective and civic action that counters affordable housing development [41] specifically, development that is linked to minorities, the disabled, and singled family head of households using Section 8 (HCV) subsidies [42].Consequently, access to suburban communities, though possibly desired by HCV families, is not a widespread phenomenon. For example, Midwest HCV suburbanization is low compared to suburbanization rates in the West, South, and North [43].Families who do relocate to the suburbs often move to inner ring suburbs as a potential residential preference to remain in close proximity to familiar urban neighborhoods and network ties [44,45].However, not all suburban neighborhoods are high opportunity communities, whether inner ring suburbs or further from the urban center.Contrary to popular belief many suburbs are as poor as their urban counterparts, lack transportation access, promote racism and classism, and have limited housing options [9,23,33,46,47].For instance, between 2000 and 2008, Covington et al. [43] found that though HCV suburbanization to neighborhoods with high employment rates and highincome levels increased, substantially more HCV families live in poor communities with limited access to jobs.Suburbs are no longer the collective epitome of economic advantage. Accordingly, the Suburbanization of Poverty is becoming more pervasive across the country, especially in large, older suburban communities.Within the last several years, the nation's suburbs have become more ethnically and racially diverse and home to the largest and fastest-growing poor population in the nation [48,49].Also, the suburban impoverished increasingly includes the foreign born, persons of all races, persons with limited education, and persons with limited employable skills [48]. 
In Chicago and its surrounding suburbs specifically, research revealed an increase in the number of extreme poverty neighborhoods defined as 20-40% of persons living below poverty [48].Furthermore, racial and ethnic discriminatory practices from redlining to exclusionary zoning limited the migration of low-income minority families to many suburban communities [46,50].Despite the fact that societal attitudes have shifted regarding overt racism and segregation in the suburbs, thereby increasing suburban minority and foreign born populations, African American and Hispanic households are still more likely to live in impoverished lowincome suburban communities [43,51]. In addition to rising poverty rates, suburban communities are also experiencing high rates of unemployment, poor health outcomes, and low educational attainment [52].Unlike their urban counterparts, suburban communities often lack a sophisticated social service delivery system that is able to provide assistance to residents in need of income supports, job readiness, and low-cost health care [53,54].Though suburban communities are comprised of many opportunity neighborhoods, there are a growing number of communities that are mirroring low-income urban neighborhoods of distinctively low opportunity. From a more narrow perspective on HCV relocation, Chicago has been the focus of numerous studies on intracity HCV mobility [22,[55][56][57].Oakley and Burchfield [39] examined HCV relocation within the city of Chicago to determine if voucher families moved to qualitatively better neighborhoods.The scholars determined that voucher holders that relocated between 2000 and 2005 were concentrated in poor African American neighborhoods within the city of Chicago. The literature on CHA traditional HCV relocation to suburban communities is on the rise [43,56,58,59].Many scholars found that the demolition of public housing resulted in an increase of suburban relocation, particularly to the inner ring suburbs [11,60].This study, quasi-modeled after Oakley and Burchfield, adds to suburban literature an examination of the migration of HCV households from Chicago to neighboring suburban communities to determine whether voucher families moved to qualitatively better suburban neighborhoods.Findings from this study, in concert with results from Oakley and Burchfield's analyses, contribute to the literature a more complete perspective of the relocation outcomes of CHA voucher families who moved within the greater Chicago area. 
Methodology Research suggests that HCV households relocate to urban communities characterized as impoverished and racially and ethnically segregated [1,12,14,61,62].Limited research on the suburbanization of HCV families found that householders were less likely to move to communities with high poverty rates and dense African American households [8,38].This research seeks to add to suburban literature analyses of HCV relocation to Chicago area suburbs to address the following research questions.What are the characteristics that define suburban neighborhoods to which CHA HCV households move?Are HCV suburban neighborhoods where CHA HCV households moved characteristically different than urban neighborhoods to which CHA HCV households moved?Study findings add nuance to the literature through the utilization of a granular database detailing household characteristics of HCV relocated families for each year starting in 2000 and ending prior to the Great Recession in 2007.Accordingly, the additional information on households that relocate to suburban communities assists suburban stakeholders in becoming better equipped to serve this population. Study Area and Data. To address research questions, a study area of six counties surrounding Chicago was identified (data for Cook county does not include data for the city of Chicago which is located within, DeKalb, DuPage, Kane, Will, Lake, and Will).The study area is consistent with local regional planning agencies geographic targets enabling study findings to be considered by regional stakeholders interested in housing policy and planning.Within the study area, a cross-section of active Illinois HCV households by census tract was retrieved from the US Department of Housing and Urban Development (HUD) Multifamily Tenant Characteristics System for the years 2000 to 2007 producing a database of census tracts to which Illinois 1,839 HCV households resided. Concerns exist in the use of administrative data in research for reasons such as incomplete data records and issues with input accuracy.To produce the most accurate database possible that data was filtered further to produce a dataset that contained only data on households that relocated from Chicago to the study area.In the filtering process, accuracy was addressed by cross-referencing household cases inputted by CHA with the same household cases received by respective suburban housing authorities.Only 180 households with incongruent data across two housing authorities were eliminated from the database.Further only initial moves from Chicago to a suburban neighborhood were of interest, eliminating an additional 201 households.Any subsequent moves experienced by a household were not considered in this study.Finally, 121 households receiving tenant protection or special mobility program vouchers were removed from the database. 
Through these data review processes, 502 HCV households were eliminated from the final study database, resulting in an analysis of the quality of 385 suburban census tracts (unit of analysis) to which each of the 1,337 Chicago HCV households moved. Though the sample is relatively small, it is consistent with research that has revealed that lower numbers of HCV households relocate to suburban neighborhoods vis-à-vis urban neighborhoods [1,8,63]. Again, the study database contains relocation data from 2000 to 2007. The beginning point of the timeframe coincides with the start of the CHA's Plan for Transformation. This plan seeks to demolish 58 public housing high-rises (a total of 35,000 units), rebuild 25,000 units, and relocate the remaining affected families utilizing vouchers. The endpoint of the study's selected timeframe (2007) precedes the beginning of the Great Recession, minimizing the effect the downturn in the housing market had on study results. Finally, 2000 US Census Tract Summary File 3 data was used to gather specific characteristics of suburban destination communities.

The majority of database cases did not have origination census tract information in Chicago, preventing an urban HCV household origination to suburban HCV household destination comparison. To address this limitation, research conducted by Oakley and Burchfield [39] serves as an adequate urban comparison. Oakley and Burchfield examined 2000 and 2005 Chicago HCV data by census tract for households that relocated within the city of Chicago. Three characteristic categories (concentrated disadvantage, residential instability, and immigrant concentration) framed their analysis.

To compare HCV relocation across urban/suburban spaces, predictor variables measured in the Oakley and Burchfield study, as well as in other studies that examined HCV mobility, were measured in this study, including percentage of families in poverty [8], percentage of families receiving public assistance [61,64], percentage of female-headed households with children [64], percentage of African American and Hispanic residents [8,61], and percentage of rental homes [65]. Proximity to employment centers serves as another viable characteristic of suburban communities and was included as a predictor variable [43,64,66,67]. Previous research on Chicago metropolitan area employment centers formed a database of 127 existing employment centers [68,69]. This database was integrated with the study database to operationalize the proximity to employment opportunities variable (database usage granted by authors).

Study analyses consist of descriptive and inferential analyses (ordinary least squares [OLS] regression analysis; OLS is a generalized linear modelling technique that assesses the relationship between a dependent variable (Y) and a series of independent/explanatory variables (X1, X2, ..., Xn). It is represented through a line of best fit where any variance in the dependent variable is explained to some degree by selected explanatory variables, controlling for evident interaction of said variables). Similar to Oakley and Burchfield, the dependent variable for the regression model is the percentage of HCV households in relationship to all rental houses within a census tract. The predictor variables are stated above. The model predicts voucher housing presence among selected neighborhood and housing characteristics. Proximity to job centers was included in the OLS regression model as a dummy variable. All census tracts located within or adjacent to a job center to which HCV households moved were assigned a value of 1. Adjacency was measured as a 1-mile straight line from the edge of a census tract to the centroid of its closest job center to keep the measurement within a manageable walking distance and simple drive from home to work [70]. All remaining HCV households received a value of 0, indicating that the household was more than 1 mile from a job center.

Results

By providing income-eligible families with housing subsidies that are mobile, the HCV program has the potential to open doors to communities and regions that may otherwise be inaccessible. As previous research has demonstrated, HCV households, particularly inner-city families, capitalize on the mobility characteristic of the program and use their voucher to secure housing in different neighborhoods, including suburban communities. The study data presented below confirm that CHA HCV families have used vouchers to move into surrounding suburban communities.

Characteristics of Chicago HCV Suburban Households. Between 2000 and 2007, 1,337 unique HCV households moved into 385 different census tracts located within a six-county suburban area of Chicago (see Table 1). With an average income of $10,393 and an average family size of 3, HCV heads of household that moved to the suburbs paid approximately $950 towards rental costs. Families were also more likely to move into suburban high-rise apartments (63%) that contained 2 to 3 bedrooms (72% of units) than single-family homes. Finally, 90% of the suburban HCV heads of household were African American and 92% were female-headed households with children under 18 years of age. The next highest percentage of HCV movers (9%) relocated to 42% of the census tracts comprising DuPage County. DuPage is distinctive in that it is the second most populous county (second to Cook County) and the second wealthiest.

The census tracts where CHA HCV households moved were not overwhelmingly renter communities. Collectively, the suburban census tracts where HCV families relocated were comprised of 28% renter households. The census tracts in DeKalb, where the smallest number of HCV households moved (19), contained the highest percentage of renters (42%). On the contrary, only 29% of the census tracts in Cook County, where the largest number of HCV families moved, were comprised of renter households.

Which Characteristics Describe HCV Suburban Neighborhoods?
5.3.1.Income.The median household income (HHI) of census tracts per county within the study area to which HCV households relocated ranged from $23,850 in Cook county to $162,155 in DuPage county (see Table 3).The median HHI for all DuPage county census tracts was the highest compared to other counties.In relationship to poverty, DeKalb census tracts where HCV households moved had the highest percentage of impoverished persons (15%).HCV households that moved into DuPage, however, lived in census tracts with approximately 4% of its respective population living below the poverty line.Collectively, the average percent of people living below the poverty line for the study area was 8%. HCV destination communities where the highest percentage of households receiving public assistance were located in Cook county (3.2%).DeKalb had the lowest percentage of public assistance receiving households (1.72).Data suggests that study area census tracts exhibited a wide range of income diversity. Population Characteristics. The population of study area census tracts was largely Whites (see Table 3).DeKalb's census tracts where HCV households moved had the highest percentage of white households (88%) and Cook's census tracts were HCV households moved had the lowest percentage of white households (59%).The census tracts to which HCV households moved that had the highest percentage of African Americans were located in Cook (26%) and the census tracts to which HCV households moved that had the highest percentage of Hispanics were located in Kane county (36%).Further, HCV households moved into census tracts that were largely comprised of married families with children (37%).Kane county census tracts to which HCV households moved had the highest percentage of female-headed households (68%).One-way ANOVA analyses were performed for all census tracts by county groups for each independent variable.Findings were significant across all independent variables suggesting that at least two of the census tract groupings by county were significantly different from one another (see Table 4).Post hoc tests were computed and, among other relationships, Cook county and DuPage county had the highest number of significantly different means across all independent variables.One can postulate that the neighborhoods that HCV households moved to in Cook county are characteristically different compared to the neighborhoods HCV households relocated to in DuPage county. 
Oakley and Burchfield examined characteristics of high-voucher-density and low-voucher-density Chicago communities to which HCV households moved. Many differences exist between the HCV household urban relocation experience, as presented by Oakley and Burchfield, and the suburban relocation experience included in this study. Oakley and Burchfield's findings suggest that high-voucher neighborhoods were more likely to be comprised of high percentages of rental housing (56%), African American households (93%), and female-headed households with children (61%). Income disadvantage also characterized HCV communities, confirmed by rather high percentages of poverty (35%) and public assistance (27%). HCV suburban neighborhoods, by comparison, scored much lower on all independent variables compared to Chicago's high-voucher and low-voucher density communities (see Table 3). For instance, suburban census tracts experienced lower poverty rates (15%), percentages of households receiving public assistance (3%), percentages of African American families (36%), and fewer renter households (32%). The female-headed household with children variable, however, was higher in one suburban county (Kane) at 68%, surpassing both Chicago high-voucher and low-voucher density communities. Also worth noting, the suburban presence of African American families was higher in selected HCV communities when compared to low-voucher communities in Chicago. Despite the few exceptions, one may conclude from the comparisons that the suburban neighborhoods to which HCV households moved are less disadvantaged than the urban neighborhoods where CHA HCV households relocated.

Notwithstanding the urban/suburban differences, another intent of this study was to determine whether HCV households moved to suburban communities that can be characterized as opportunity neighborhoods. An OLS regression analysis was performed to determine which characteristics (independent variables) are strong predictors of HCV presence and whether significant predictor variables are aligned with characteristics of opportunity neighborhoods. The best-fit regression model produced from study variables revealed the combination of independent variables that provided the strongest predictive power to be the percentage of a community's African American population, households receiving public assistance, persons living below the poverty level, and rental units (F = 50.936, p < .001) (see Table 5). The R² value of .349 indicates an acceptable fit of the model. Predictor variables that were not found to be significant include percentage of female-headed households with children, percentage of Hispanic population, and proximity to job centers. Similar to Oakley and Burchfield, significant predictor variables are associated with disadvantaged neighborhood characteristics. Collectively, study findings suggest that suburban HCV households relocate to communities that are more likely opportunity neighborhoods when compared to urban HCV household communities. However, when considering only suburban neighborhood comparisons, characteristics aligned with disadvantage are a strong predictor of HCV relocation.
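The tract-level model described in the Methodology and summarized above (percentage of HCV households among rental units regressed on neighborhood characteristics plus a 1-mile job-center dummy) can be sketched as follows. The input file and column names are hypothetical, not the study's actual data.

```python
# Hypothetical sketch of the census-tract OLS model: the dependent variable is the
# share of HCV households among rental units; near_job_center is 1 for tracts
# within or adjacent (<= 1 mile) to a job center. Column names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

tracts = pd.read_csv("suburban_tracts.csv")  # hypothetical tract-level data

# Code the proximity dummy from a precomputed distance (miles) to the nearest job center
tracts["near_job_center"] = (tracts["miles_to_job_center"] <= 1.0).astype(int)

model = smf.ols(
    "pct_hcv_of_rentals ~ pct_african_american + pct_hispanic + pct_poverty"
    " + pct_public_assistance + pct_female_headed + pct_rental + near_job_center",
    data=tracts,
).fit()
print(model.summary())  # F-statistic, R-squared, and coefficient t-tests
```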
Discussion This study examined suburban relocation by HCV households to reveal characteristics of the neighborhoods to which housing was secured.The suburban community variables examined in this study, identified as opportunity variables, were race/ethnicity (African American, White, Hispanic), income (household median income, population living below poverty, and population receiving public assistance), household composition (married and female-headed households), and neighborhood characteristics (rental units and employment proximity). Study findings show that HCV households were more likely to move to suburban communities characterized as largely rental communities with high numbers of African American households, of people living below poverty, and of people receiving public assistance.Though relocation to suburban communities produced better results when compared to HCV relocation within Chicago [39], suburban relocation must also be viewed within its separate geographic context.Given the number of communities that comprise suburban Chicago, HCV families that moved into Chicago suburban communities were more likely to have moved to suburban disadvantaged communities rather than suburban opportunity communities.Implications of HCV suburban relocation to such communities are plentiful. Findings suggest that HCV households are not moving to suburban communities that are in close proximity to established job centers.The Spatial Mismatch Hypothesis is a possible explanation for the lack of a job center pull effect [72].Often used to explain limited access to suburban jobs from inner city communities where low skill workers resided [73], the Spatial Mismatch Hypothesis sheds light on why HCV households where not relocating in large numbers to established job centers in the suburbs.HCV households, living in suburban spaces, should no longer be limited in their ability to be employed as they share location with employers, as per the hypothesis.While research has demonstrated this to be true, particularly for low-income, African American males [16,74], other present barriers to employment counter the access basis of the Spatial Mismatch Hypothesis.Within the context of research findings, residential communities adjacent to Chicago suburban job centers may be engaging in housing discriminatory practices that limit access to housing units by low income/poor (HCV) persons [75].In other words, HCV household relocation may not have occurred near established job centers due to persistent suburban economic discrimination.In addition, research suggests that other barriers make securing employment in suburban job centers challenging, including access to public transportation, child care, racial discrimination, and having the skill set required for suburban jobs [35]. HCV households are also more likely to relocate to neighborhoods with high concentrations of African American households.Census tracts that contained high percentages of Hispanics, however, were not found to be locations to which high numbers of HCV households relocated.Though African American households are overrepresented among the renter populations, and similarly among the HCV households examined in this study (see Table 1), the high concentration of African American households in HCV suburban census tracts may also be a function of prevailing segregation.Census tracts examined in this study revealed higher percentages of White residents. 
It is likely that the profusion of HCV households in large African American communities is a function of persistent segregation and racial discrimination. Accordingly, research findings support scholars who argue that African American segregation in suburban communities remains disproportionately high [43,76]. Further, African American households are often subjected to landlord discrimination when attempting to secure housing [77]. It is probable that African American HCV households that attempt to secure housing in integrated census tracts are unsuccessful due to landlord bias and racial discrimination. Finally, suburban census tracts that have high Hispanic populations are more likely to be of higher income, which may explain the low presence of HCV households.

Research also suggests that African Americans are more likely to live in poor and minority neighborhoods [78][79][80]. The general income characteristics of study area HCV suburban communities were found to be, on average, lower than those of surrounding suburban neighborhoods. Consequently, HCV families are more likely to move to communities with large numbers of families living below the poverty line and receiving public assistance. Scholars assert that higher-income communities do not have a high volume of affordable housing units to rent to HCV households and that lower-income communities are more inclined to meet the affordable housing demand [1,14]. While HCV Chicago neighborhoods may also be poor, this study confirms that use of HCV to relocate to suburban neighborhoods may not result in entry to above-poverty neighborhoods. The higher the household income of a census tract, the less likely one will find an HCV household.

The supply of rental housing units often relies upon local decision-making processes and zoning practices, which can be purposefully designed to constrict the availability of rental housing units in suburban communities. Rental housing units that are offered through multifamily housing development are subjected to the specific zoning criteria set forth by each municipality. Municipalities interested in limiting the number of rental housing units will make the development of multifamily housing, particularly affordable multifamily housing, cost prohibitive for a developer. Developers that do embark on multifamily housing development will attempt to develop units that, when priced in the market, will absorb the associated high development costs in order to turn a profit. Often, rental units developed according to this strategy are unaffordable to HCV households. A study on DuPage County, Illinois, found that there was less evidence of HCV household concentration in communities that had restrictive multifamily zoning [81].

Consequences associated with poverty concentration and racial segregation in urban and suburban communities are vast. Wilson [73] contends that social isolation does not afford low-income households, African Americans in particular, contact with successful social networks, access to highly regarded employment opportunities, and the opportunity to move into the middle class. Elevated crime, poor school performance, and widespread community disinvestment are additional consequences associated with residing in segregated and poor communities [82]. Similar results are likely in suburban spaces if policy shifts do not occur.
Along with challenges in securing housing in suburban opportunity communities are challenges in identifying and accessing human, social, and public services in suburbia. Research reveals that the identification of and access to suburban service organizations are challenging and problematic [52,53,83]. Consequently, an HCV move to a suburban community may also limit household members' receipt of much-needed supportive services, potentially making it much more challenging to improve the life outcomes of HCV household members.

Limitations of Research and Recommendations for Future Research

This study contributes valuable findings to the housing literature. Additional studies are recommended, specifically research using large sample sizes to confirm whether HCV households are moving into segregated suburban communities in large numbers. The additional research findings will determine whether the choice factor in the HCV program is being exercised without barriers. If choice is consistently suppressed, then policy action needs to be taken that will open more communities to HCV households. Additional variables, particularly those associated with social services of particular interest or need, as determined by HCV families, should be identified and analyzed in future research. For instance, it would be interesting to assess whether access to child-care centers and recreational opportunities serves as a pull factor for HCV households. Further, variables that include healthcare services will aid in determining whether HCV families are locating within communities that provide them with low-cost healthcare options.

Statistical analyses alone are not sufficient to identify with certainty the rationale behind one's decision to self-select into communities. Mixed methods involving qualitative research will glean specific anecdotal findings on the decision equations used by HCV families in determining to which community to move. Combined with quantitative research, the proposed mixed-method studies will produce more robust evidence about the geographic factors that will lead to success.

Affordable Housing Access and Development. Study results provide scholars with nuanced information on the HCV relocation challenges associated with a population that differs from those of similar studies. Unlike the Gautreaux One, Gautreaux Two, MTO, and HOPE VI special HCV mobility programs, traditional HCV households do not receive services designed to aid them in and through the relocation process. For instance, traditional HCV households are not provided with the housing counseling and guidance in the identification and processing of rental housing, or in the evaluation of potential communities, that special mobility populations receive. Thus, traditional HCV households are often required to make important relocation decisions with little valuable information. Notwithstanding limited access to opportunity communities resulting from race and economic discrimination, housing location choices can be improved with access to viable and valuable community information. Accordingly, housing counseling must also be made available for traditional HCV households to aid them in securing housing in suburban opportunity neighborhoods.
The limited supply of affordable housing units that characterizes suburban spaces, resulting from persistent exclusionary zoning practices, also impedes access to suburban communities [84][85][86]. Remedies exist to address limited affordable housing units, particularly in suburban America. Massachusetts, Connecticut, and Rhode Island provide examples of legislation aimed at increasing affordable housing production through development mandates and streamlined approval processes [87]. In 2003, Illinois passed the Affordable Housing Planning and Approval Act (AHPAA), which encouraged Illinois municipalities found to have less than 10% of housing affordable to working-class families (as per HUD-adopted guidelines) to submit plans dictating procedures to incorporate affordable housing within their respective jurisdictions [88].

Forty-nine of the 1,287 municipalities were identified as not exempt from the law and thus were encouraged to submit plans. Forty-six of the 49 nonexempt municipalities are located within the target area of this study [88]. Further, as of 2011, 12 of the 46 nonexempt municipalities had yet to submit an affordable housing plan [88]. Though the law is a victory in and of itself, it does not contain any enforcement powers and, accordingly, no ability to impose consequences on noncompliant communities. In addition, municipalities that have complied with the law and submitted plans are not compelled to implement them, as there are no consequences associated with inactivity. If Illinois is to realize an equitable increase in the distribution of affordable housing across the state, lawmakers must revise the elements of the AHPAA to include, preferably, economic consequences for noncompliance. Relying upon the "good will" of municipalities is likely to maintain the status quo.

Racism and Classism. Data from this study suggest that racism and classism, in particular oppressive behavior that is made evident through overt, conscious actions and conduct as well as through unconscious, insidious attitudes and perspectives, limit access to the wider suburban geography. Consequently, existing public policies, in the form of housing policies and the HCV Program, are not capable of achieving their stated goals of moving persons to diverse, less impoverished communities. Government, from equity and social justice positions, should invest resources and take deliberate action to eliminate such barriers so that targeted groups enjoy the opportunity of choosing communities in which to live, without being subjected to unfair practices.

Continuous funding of mobility programs is a method that can achieve long-term success in the economic and racial integration of particular regions. Annual funding levels of the HCV program have been on a continuous decline for several years. Recent federal government sequestration cuts and other funding cuts resulted in a decline of $854 million in funding and an estimated 140,000 HCVs that were not renewed for 2013 [89]. Declining HCV budgets thwart the integration efforts availed by the program. Consistent funding enables local public housing authorities to renew vouchers for eligible households and award additional vouchers to new households, ultimately increasing integrated settlement patterns in targeted areas.

Suburban Social Safety Net.
Philanthropic organizations that contribute to the budgets of service organizations have not fully acknowledged the increased service need in suburbia [90]. Consequently, suburban safety net organizations are experiencing difficulty in securing additional financial support to meet increased need. Study findings contribute to the literature on suburban poverty and will inform policy makers and philanthropic organizations of the legitimate need to increase funding support of suburban safety net programs.

In closing, a move to the suburbs by HCV families is not a guaranteed "step-up" along the opportunity ladder until decision makers commit to support and fund appropriate and effective programs and policies for suburban communities.

Figure 1: Frequency of Chicago HCV households that relocated to suburban neighborhoods.

Table 1: Household characteristics of HCV study sample.

Table 2: Study area counties listed by % of HCV relocations.

Table 3: Average neighborhood characteristics of suburban HCV communities by county.

[...] county in Illinois (second to Lake County). DuPage is also the primary county for 38 municipalities in the Chicago metropolitan region that collectively contribute 40% of the jobs to the Chicago metro region and employ over 30% of the Chicago metro region's workforce. In addition, DuPage County is comprised of many social service providers and networks. Moreover, DuPage County's adjacency to Cook County may have contributed to HCV households deciding to bypass Cook County and continue west towards DuPage County.

Table 4: Characteristic differences between HCV communities by county: results of ANOVA and Tukey HSD post hoc test. (a) Dependent variable: HCV.
v3-fos-license
2019-06-13T13:06:36.407Z
2017-01-01T00:00:00.000
187936850
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/35/matecconf_mse2017_01007.pdf", "pdf_hash": "2a359cf62b9aadf24039df59b7c33083b4906f1b", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42617", "s2fieldsofstudy": [ "Engineering" ], "sha1": "61f387d48463169201037d634ab5ee15397ae8b6", "year": 2017 }
pes2o/s2orc
Volume optimization of gear trains with spur gears using genetic algorithm

Gear train volume optimization presents a complex problem tied to practical application in gear train manufacturing. This paper is oriented on the analysis of the problem of gear train volume minimization from a shaft axes positioning aspect. An original mathematical model has been developed where the objective function gives a minimum volume with changed shaft (spur gear) axes positions, while at the same time complying with all physical constraints. An original optimization software has also been developed using the RCGA (Real Coded Genetic Algorithm) optimization method. The general mathematical model was applied to three real conceptions of gear trains, along with a comparative analysis of initial and optimal values. The results show a decrease of volume, which is directly linked to a decrease not only of space but also of material used to make the housing, costs, documentation formulation rate, etc.

Introduction

A gear train presents a complex technical system due to the complexity of its mathematical formulation and the large number of variable values. From a research and practical aspect there is a need for constant improvement of theoretical and practical segments in order to justify gear train application. Achieving better operational gear train characteristics, and a lower mass, volume and cost, are basic objectives when designing a gearbox. Optimization is the most effective way of achieving desired gear train characteristics; it allows for advances in this field as a current and alternative method of improvement. This paper only considers parallel shaft axes gear trains. The optimization method which has proven itself as significant for solving practical problems is the heuristic method of genetic algorithm, which is why it is interesting for this field of research. In this specific case the RCGA (Real Coded Genetic Algorithm) was developed, dedicated to solving general engineering problems. Aside from knowing the construction of the gear train, and the process and method of optimization, the development of a mathematical model is necessary which can be representative of the problem and improve gear train performance. This paper contains relevant examples which can present results of performance improvements with gear trains.

Chong et al. [1] presented a general methodology for optimizing gear ratios, sizes and housing volume for multi-stage gear trains in preliminary design phases. Marjanovic et al. [2] developed a practical approach to optimizing gear trains with spur gears based on a selection matrix, selection of optimal materials, optimal gear ratios and shaft axes positions. Golabi et al. [3] presented gear train volume/weight minimization by optimizing single and multistage gear trains' gear ratios. Mendi et al. [4] aimed to optimize gear train component dimensions to achieve minimal volume, comparing GA results to parameter volumes obtained by an analytic method. Savsani et al. [5] described gear train weight optimization comparing various optimization methods to genetic algorithm (GA) result values. Gologlu and Zeyveli [6] performed preliminary design automation through optimization of gear parameters and properties using a GA-based approach. Tang and Tseng [7] presented a new mutation operator, adaptive direct mutation, that focuses on simplicity, robustness, and efficiency within the context of RCGAs. Shopova et al.
[8] presented an adjustable genetic algorithm designed to deal with engineering optimization problems with both real and integer variables. Musharavati and Hamouda [9] implemented a cyclic crossover operation to modify GA for multiple part manufacturing line optimization. For manufacturing process optimization it is necessary to employ theoretical models as presented in [11]. Such a process is practical [12], economical [13], and relatively easy to use [14].

There is a real need to explain and mathematically formulate the gear train volume optimization problem, to find influential factors, and to find approaches and methods through which optimal gear train characteristics are possible. Motivation is based on gear train performance: achieving a minimum volume, and consequently a smaller mass, smaller embedded measurements, less material used, decreased expenses and other benefits. The results obtained by the method presented here are compared to common concepts through specific examples.

Problem definition

Gear trains with spur gears are most frequently made so their shaft axes are in the same plane, which is most frequently the horizontal plane. Gears placed in this way take up a large volume which can be reduced through optimizing the transmission. An optimal solution defines a new position of gear train shaft axes which results in a decreased overall volume of the gear train [15][16]. Gear train volume can be defined as the product of length, height, and width, based on which a mathematical model can be made. Since the width is considered a constant, due to the calculated values of spur gears and other components' constructive dimensions, the optimization problem becomes planar, based on optimizing the area defined by multiplying the length (L) and height (H) of the transmission. Fig. 1 shows the positions of the gears relative to the x and y axes. The position of the input shaft is constrained to the origin, while the positions of the other shafts are defined by the angles θi, which are also shown in Fig. 1. Gearbox volume is directly dependent on the change of these angles. A constant, c1, is left on all sides as a technical clearance and is included in both the initial and optimal models; it is added to form the overall length and height. Before optimization, in the initial design phase, a minimum-distance condition must be met for paired gears u and v in cases of gear trains with three or more stages. This ensures there is no clash between non-paired elements placed in the same plane. If more than one stage can be placed in the same plane, this will initially give a decrease of volume even before optimization (Fig. 2a).

Mathematical model

Once all initial design conditions are met, and gear dimensions calculated, the typically positioned gear train can be optimized. Gear train volume changes when shaft position angles are changed. Minimum volume is achieved through achieving a minimal area of the profile plane. The difference between the rightmost coordinate of the gear train and the leftmost coordinate, with the origin placed in the first pinion, gives the length of the gear train for the observed angles. In the same way, subtracting the bottommost from the topmost coordinate gives the height of the gear train. The function to determine the area which the gear train occupies is defined from these length and height values, and the optimization approach is based on changing the positions of gear shaft axes in relation to the horizontal plane.
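As a concrete illustration of this objective, the sketch below (not the authors' code; the gear radii, center distances, and stage count are illustrative assumptions) computes the bounding-box area of the profile plane for a given set of shaft position angles, including the technical clearance on every side.

```python
# Hedged sketch of the profile-plane area objective: shaft centers are placed
# sequentially using the stage center distances and position angles theta_i,
# and the bounding box of all gear outlines (plus clearance c1 on every side)
# gives the area to be minimized. All numeric values are illustrative.
import math

C1 = 15.0                      # technical clearance [mm], per the paper
center_dist = [200.0, 260.0]   # assumed center distances of stages 1..n [mm]
tip_radii = [(60.0, 150.0),    # assumed (pinion, gear) outer radii per stage [mm]
             (70.0, 200.0)]

def profile_area(thetas):
    """Bounding-box area of the gear train profile for shaft angles in degrees."""
    centers = [(0.0, 0.0)]                      # input shaft fixed at the origin
    for a, th in zip(center_dist, thetas):
        x0, y0 = centers[-1]
        t = math.radians(th)
        centers.append((x0 + a * math.cos(t), y0 + a * math.sin(t)))

    # Each shaft carries the gear of the previous stage and the pinion of the
    # next; use the largest radius present on each shaft for the bounding box.
    shaft_radius = [tip_radii[0][0]] + \
                   [max(tip_radii[i][1],
                        tip_radii[i + 1][0] if i + 1 < len(tip_radii) else 0.0)
                    for i in range(len(tip_radii))]

    xs_min = min(x - r for (x, _), r in zip(centers, shaft_radius))
    xs_max = max(x + r for (x, _), r in zip(centers, shaft_radius))
    ys_min = min(y - r for (_, y), r in zip(centers, shaft_radius))
    ys_max = max(y + r for (_, y), r in zip(centers, shaft_radius))

    length = xs_max - xs_min + 2 * C1
    height = ys_max - ys_min + 2 * C1
    return length * height

print(profile_area([0.0, 0.0]))      # all shafts in the horizontal plane
print(profile_area([-60.0, 45.0]))   # a candidate repositioning
```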
The angle between shaft axis positions can vary in the range −180° ≤ θi ≤ 180°. The resulting area function represents a general mathematical model. The model includes the addition of 15 mm on all sides of the gear train area as a technical clearance for a housing. This clearance is also included on all sides in the horizontal plane, as well as being a minimal distance between two gears on a single shaft. In order to properly constrain the complex mathematical model, so that the result of the optimization gives a realistic model, many design factors need to be considered. Multilevel gear trains with spur gears can be set up in different ways in the initial design phase depending on spur gear dimensions. In case two sets of gears can be placed in the same plane and maintain a minimal clearance between non-paired gears, the volume can be decreased initially by reducing the overall width of the gear train. Optimization constraints, similarly to the initial condition (Fig. 2a), in this case maintain a minimal clearance between non-paired gears of these stages when the shaft axes angles change (Fig. 2b). For all stages in separate planes there is an obligatory constraint for avoiding clash of shaft and gear, limiting their distance to a minimal possible value of 15 mm where this clash is possible (Fig. 2c). The objective function for all concepts has an addition of a minimum distance between gears and housing on all sides. The minimal volume which is achieved through optimization includes clearances not only in the observed plane, but also along shaft lengths. These technical clearances have constant values. Constraints which ensure against clash of any non-paired elements m and n are given as a minimal center distance between those elements, which must be greater than or equal to the sum of their respective radii and a constant technical clearance; that is, d_mn ≥ r_m + r_n + c1.

Optimization method and software

Gear train volume optimization is a complex problem and as such is not possible without the use of a computer. In order to optimize, it is necessary to use an optimization method flexible enough for complex problems with numerous constraints. In this case, the genetic algorithm method was chosen due to its favorable characteristics, and an original software has been developed for optimization based on this method. A real coded genetic algorithm (RCGA) was used; the algorithm is presented in Fig. 3. The software was developed in C++. Genetic algorithm (GA) is a heuristic optimization method whose operation is based on mimicking natural/evolutionary processes [8]. The algorithm contains three basic operators: selection, crossover, and mutation. Selection is the process of transferring genetic information through generations. Crossover represents the operation between two parents in which genetic information is exchanged and new generations are made. The mutation operator creates a random change in the genetic structure of some individuals to overcome early convergence. Algorithm operation is based on the survival of the fittest individuals, which exchange genetic material through evolution. Selection ranks individuals in the population using values from the fitness function, which defines the ability/quality of the individual. The genetic algorithm, due to its convergence characteristics, has widespread application.
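A minimal sketch of such an RCGA loop is given below; it is not the authors' C++ implementation, and the population size, operator choices, and penalty handling are illustrative assumptions. It assumes the profile_area objective from the earlier sketch is in scope.

```python
# Hedged RCGA sketch: real-coded individuals are vectors of shaft angles in
# degrees; selection/crossover/mutation follow the operator roles described in
# the text. Parameter values and the penalty scheme are illustrative.
import random

LOW, HIGH = -180.0, 180.0      # admissible range of each shaft angle
N_ANGLES = 2                   # one angle per stage in this sketch

def fitness(angles):
    area = profile_area(angles)   # objective sketched earlier
    penalty = 0.0                 # add clearance-violation penalties here,
    return area + penalty         # e.g. whenever d_mn < r_m + r_n + c1

def tournament(pop, k=3):
    return min(random.sample(pop, k), key=fitness)

def crossover(p1, p2):
    # BLX-style blend: each child gene is drawn between the parents' genes
    return [random.uniform(min(a, b), max(a, b)) for a, b in zip(p1, p2)]

def mutate(ind, rate=0.1, sigma=10.0):
    return [min(HIGH, max(LOW, g + random.gauss(0, sigma)))
            if random.random() < rate else g for g in ind]

def rcga(pop_size=60, generations=1500):
    pop = [[random.uniform(LOW, HIGH) for _ in range(N_ANGLES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return min(pop, key=fitness)

best = rcga()
print(best, profile_area(best))
```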
Researchers are inspired to use this algorithm for scientific purposes, industrial applications, business applications and to further increase its use. Before use, the software was tested against available literature and its operation meets all necessary criteria. The original software is intended for all problems formulated by a mathematical model. Input data for the test examples are summarized in Table 1, which also gives the initial gross dimensions of each example with included technical clearances. As the mathematical model aims to find the minimal volume by changing the area normal to the shaft axes, the optimized examples are expected to change in some measure correlating with the change of shaft position angles. Volumes of the test examples in the initial concept arrangement are 9 925 631.1 mm³ for Example 1, 72 341 883 mm³ for Example 2, and 39 697 456.5 mm³ for Example 3. The concept of Example 2 does not have the 1st and 3rd gear pairs on the same axes due to the initial design conditions not being met. Example 3 has the 4th gear pair face aligned with the same side face of the 1st gear pair, which also places it behind the 2nd gear pair due to the width of the 4th gear pair. The initial concept of Example 3 therefore has a smaller initial volume than if the gear pairs were all in their own respective planes.

Results

The mathematical model is formed based on geometric characteristics of multistage gear trains. Optimization of the initial concepts was conducted in the aforementioned software using RCGA. Input values were inserted into the mathematical model from Table 1. As the width of the gear train is unchanged, the mathematical model gives a minimal area. Convergence of the area and shaft axes angles is shown in Figures 4, 5, and 6 for a smaller number of iteration steps to illustrate the rate of convergence. Optima were achieved in 1500 iteration steps for Example 1, 3000 for Example 2, and 6000 for Example 3. The optimization results shown in Table 2 show an overall decrease in volume for all examples. Examples 1 and 2 show an unchanged height and decreased length, while Example 3 has an increased height and significantly decreased length. Initial angles have changed significantly from the original concepts with all shaft axes in the same plane, resulting in optimal concepts with a minimal volume (Fig. 7). Examples in Figure 7 are given as a section view of a simplified housing in order to better show the optimal interior volumes and concepts of the gear trains. Example 3 in Figure 6 is shown with the housing rotated by 90° for better visualization. Unlike in [1], constraints for Example 3, as well as for all other examples, do not allow for an overlap of shafts and gears. All shafts have an intended clearance to allow shafts to be designed in any desired length without forcing the design of a more complex housing to compensate for shaft embedding. Example 1 gives an optimal area (volume) for θ1 in the range between −52.4912° and −72.0083°. All concepts also give optimal volumes for completely mirrored or completely normal values of all angle optima.

Conclusions

Optimization of gear trains with spur gears is a complex process. One such practical approach is presented in this paper. Presented in the paper is a universal mathematical model which can be applied to gear trains with two or more stages. Three examples are shown optimized using the presented mathematical model calculated in an original software based on a GA method. Real coded genetic algorithm (RCGA) is used for optimization.
A large number of iterations is necessary to find optima due to the complexity of the mathematical model. The examples' initial volumes are calculated for the common case where all shaft axes are in the same plane. Comparison of initial volumes and optimization results for the concepts of Example 1 shows a decrease in volume of 10.589%, for Example 2 a decrease of 7.463%, and for Example 3 a decrease of 19.975%. Gear trains with 2, 3 and 4 stages are used as test examples to show the influence of the number of stages on the percentage of volume decrease. Example 2 is used to show that the number of stages does not directly influence volume minimization. In other words, the two-stage gear train (Example 1) has a greater relative decrease in volume from its original conception than the three-stage gear train (Example 2) due to a small difference in pinion sizes relative to their corresponding gears. If pinion-to-gear ratios are greater, the area, and in turn the volume, is expected to show a greater difference for optimal shaft positions. Initial conditions and optimization constraints are incorporated to ensure the optimal concepts are practically applicable. The achieved minimization of volume in turn results in lower costs and weights of the housing for such gear trains.
v3-fos-license
2018-04-03T04:45:44.921Z
2004-11-01T00:00:00.000
25331621
{ "extfieldsofstudy": [ "Psychology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://escholarship.org/content/qt76n267tb/qt76n267tb.pdf?t=p9n381", "pdf_hash": "7536112fba4085d81c4a3030e7d38bf097a5841a", "pdf_src": "WoltersKluwer", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42619", "s2fieldsofstudy": [ "Psychology" ], "sha1": "843d805a0ba7cd2764524cc3871291c48d2ab714", "year": 2004 }
pes2o/s2orc
Computerized Neuropsychiatric Assessment of Geriatric Subjects by Content Analysis of Brief Samples of Their Speech

The life expectancy of people living in the United States is increasing. Are very elderly individuals compromised mentally and physically in comparison to much younger persons? The purpose of this study was to test the efficacy of a computerized program applicable to the content analysis of 5-minute speech samples obtained from a group of individuals 90 years of age and older and to compare the results with those previously obtained in younger people with respect to their mental capacities in terms of a set of diverse neuropsychiatric dimensions. After obtaining informed consent from 26 elderly people, recorded verbal samples were elicited from each individual in response to purposely ambiguous instructions to talk for 5 minutes about any interesting or dramatic personal life experiences. The transcripts of their speech samples were digitized on a computer diskette and processed on the computer program. The computer program compared the scores obtained on each verbal sample with norms obtained on 15 previously validated content-analysis scales from individuals ranging in age from 5 to 80. The norms on these content-analysis scales are different for children (aged 5–10) and adults (aged 11–80). The computerized content-analysis scores obtained from this elderly cohort reveal plausible deviations from the norms for younger people.

BACKGROUND

The extremely elderly are the fastest growing segment of our population. By 2020, more than 3 million Americans will be 90 years of age or older, doubling the current number of people in that age range (Figure 1). Because little is known about those who achieve this milestone, the remarkable increase in the number of the oldest old presents a public health challenge in promoting the quality of life as well as the quantity of such persons. As we approach this challenge, many important questions about our oldest citizens need to be answered. An opportunity to assess the neuropsychiatric status, including the quality of life, of a cohort of elderly subjects, 90 years of age and older, presented itself through an ongoing National Institute on Aging-supported study awarded to one of us, namely, Professor Claudia Kawas, MD. An extension and modification of this study involved using a computerized program capable of easily and rapidly measuring, from small samples of speech
obtained from these individuals, their current mental status in terms of the magnitude of a large set of neuropsychiatric dimensions.

METHODS

These neuropsychiatric dimensions were measured by a computerized content-analysis method that used scales designed by Gottschalk and coworkers. The scales include Total Anxiety (and six subscales), Hostility Outward (Overt and Covert), Hostility Inward, Ambivalent Hostility (hostility perceived as originating from outside oneself), Social Alienation-Personal Disorganization, Cognitive Impairment, Total Depression (and seven subscales), Achievement Strivings, Hope, Human Relations, Dependency and Dependency Frustration, Health/Sickness, and Quality of Life. [2][3][4][5][6] Norms for these content-analysis scales have been derived through years of extensive research and data collection, [2][3][4][5][6] and are based on scores of medically and psychiatrically healthy individuals aged 18-80. Norms were obtained for each content-analysis scale at different occasions when reliability and construct validity studies were being carried out for that content-analysis scale. Data involving normative scores for the Human Relations Scale were initially reported 7 in 1968, for the Hope Scale 8 in 1974, for the Social Alienation-Personal Disorganization Scale 9 in 1961, and for the Depression Scale 10 in 1986. These norms were further described and established with subsequent studies. 2,6 To summarize these studies, the norms (in terms of mean scores and standard deviations) for each scale were obtained from the verbal content-analysis scores of well over 100 normal individuals for each content-analysis scale. These subjects had no medical or psychiatric disorder, were balanced for sex distribution, ranged in age from 18 to 80 for adults and from 4 to 17 for children, and they were well distributed in educational level from some grade school to graduate or professional school. Initially, norms were derived from Caucasian subjects 2,6 and later from African American 6,11 and Hispanic 12 subjects. No significant differences have been found in the normative scores with respect to race and ethnicity, 11,12 nationality 11,13,14 (specifically, German, Chilean, and Australian), educational level, or gender. However, the Cognitive Impairment scores derived from verbal samples are influenced by educational level and age. That is, normal children between the ages of 4 and 8, as compared to older children or adults, have higher average cognitive impairment scores 15 than children aged 9 to 17. Norms for the Health-Sickness Scale were first published by Gottschalk and Gleser, 2,3 and these initial studies were amplified later by further data. 6 These norms were also obtained from content-analysis scores obtained from more than 100 medically and psychiatrically healthy adults. The computer software operates on a PC-class computer using Microsoft Windows. The software system relies on a very large dictionary (>300,000 words) containing part-of-speech information and a large collection of (mostly American) English idiomatic and slang expressions.
Some of the words and all of the idioms are identified as possible indicators of semantic content pertinent to one or more of the content-analysis scales. Syntactic information about the words in the input, such as part of speech and number (i.e., singular or plural), is extracted from the dictionary and used by a software parser, which outputs an analysis of the structure of each input clause. When a word or phrase from the dictionary is noted as a possible marker of an item from a content-analysis scale, it is added to a listing of scoring candidates. This list of candidates is then examined by a set of scale-dependent procedures that consider the clause structure as well as the score marking to decide the validity of each candidate's scoring. Candidate scores approved by the process are emitted as content-analysis scores applicable to the input clause. The dictionary is predefined for any specific content-analysis scoring session, but it can be changed or terms added through the use of a set of interactive dictionary manipulation tools. The software system generates four distinct classes of output, the last three of which are optional. The first and most basic output is an interlinear listing of each grammatical clause and the scores assigned to it. This is printed out as one clause per line, with each clause followed by a line of scores (one or more per scale, but more lines may be used if the clause or score tokens are lengthy). The second class of outputs is a scoring summary for each scale being used. The summary gives tallies of the number of occurrences of the various scores/codes and a word count of the total verbal sample. It derives a single number from the scores/codes that is used to characterize the verbal sample on each scale. The summaries indicate to what extent the verbal sample scores deviate from the norms that have already been obtained for each scale, in terms of standard deviations. The third class of outputs is an analysis or interpretation, in textual form, of the scale scores. It is directed to the clinician making diagnostic use of the scoring output, and it suggests areas for further examination and areas in which significant deviation from the norms has been found. The fourth class of outputs suggests possible neuropsychiatric diagnoses that the user might consider in evaluation of the subject. The suggested diagnoses are taken from the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV). 25

The current study was a preliminary one to assess the mental status of these elderly people by means of computerized content-analysis of a cohort of 26 of these randomly selected elderly individuals, aged 90 or more, who voluntarily gave informed consent to participate. The focus of interest was the extent of deviation of scores on these content-analysis scales from the established norms for younger adults.

Setting

The sites for these geriatric studies were (1) the Center for Aging Research and Education Clinic in Laguna Woods, CA, and (2) the homes of the subjects. Approximately 50% of the speech samples were obtained at the Clinic in Laguna Woods, CA, and 50% were obtained in the homes of the subjects.

Subjects

The subjects were invited to participate voluntarily in this study of speech patterns, and they were informed regarding its goals and methods, with the assurance that their verbal samples would be kept confidential. The subjects were randomly selected; that is, no discrimination was made with respect to gender, racial status, educational level, ethnicity, or setting of interviews in selecting the participants. The subjects (N = 26) ranged in age from 90 to 100. Twice as many females (n = 17) as males (n = 9) participated in this study. All subjects were English-speaking, and none were excluded on the basis of medical illnesses or previous diagnoses of mental disorders.

Procedure

The subjects were requested to give a 5-minute verbal sample in response to the following standard, purposely ambiguous instructions: "This is a procedure to study speaking and conversational habits. I have here a recorder and a video camera to record your speech. I would like you to talk for 5 minutes about any interesting or dramatic personal life experiences you have ever had. If you finish talking about one experience, you can continue to talk about another experience until the 5 minutes are over. I will tell you when to start and stop. While you are talking, I would prefer not to answer any questions until the 5 minutes have passed. However, I will certainly answer any questions you may have before we start. Do you have any questions now? Are you ready to begin?" The 5-minute verbal sample was recorded on both a cassette recorder and a digital camcorder. The audible recordings were transcribed and saved as a text document using Microsoft Word 2002. A single investigator (Ann Hoang) carried out all interviews, recordings, and transcriptions. A computer software program, the Psychiatric Content Analysis and Diagnosis 2000 (PCAD 2000), 16 described above, was used to score the transcribed verbal samples.

RESULTS

The computerized program automatically compares the content-analysis-scale scores obtained from each subject's verbal sample with the norm for each content-analysis scale in terms of (1) no difference from the norm, (2) mild difference (one standard deviation) from the norm, (3) moderate difference (two standard deviations) from the norm, and (4) serious difference (three standard deviations) from the norm. Table 1 summarizes our findings.

DISCUSSION

For a randomly selected sample of 26 quite elderly people, it is somewhat surprising that 18 had quality-of-life scores within normal limits, that is, from the perspective of the norms for these content-analysis scales, which are derived from physically and mentally healthy, much younger people (ranging in age from 11 to 80). It was not unexpected that a sizable portion (n = 15) had complaints and preoccupations with sickness and the state of their health, and that 12 of them were anxious about death and had fears about physical injuries (mutilation anxiety) (n = 3) and separation (n = 3). On the other hand, why eight of them had elevated guilt anxiety scores was not self-evident. It is noteworthy that nine of the 26 subjects show evidence of various degrees of cognitive impairment on the basis of this measure. The incidence of dementias with aging secondary to Alzheimer's and similar diseases, cerebral vascular disorders, physical injuries, and Parkinsonism accounts for some of this evidence of cognitive impairment. The way the detection of cognitive impairment derived from this computerized verbal-behavior content-analysis measure compares to findings obtained from other neuropsychological measures used in this research needs further scrutiny. Only one subject had an elevated total depression score. This is a low number for a random sample of aged individuals of this sort.
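As a rough illustration of this norm comparison (not the PCAD 2000 implementation; the scale names and norm values below are placeholder assumptions, not the published norms), a raw scale score can be graded by how many standard deviations it falls from the norm:

```python
# Hedged sketch of the norm-deviation grading described above: a raw scale score
# is converted to a z-score against an adult norm and labeled as no, mild (1 SD),
# moderate (2 SD), or serious (3 SD) deviation. Norm values are placeholders.
NORMS = {
    "total_anxiety": {"mean": 1.4, "sd": 0.6},   # illustrative numbers only
    "quality_of_life": {"mean": 0.9, "sd": 0.5},
}

def grade_deviation(scale, score):
    norm = NORMS[scale]
    z = (score - norm["mean"]) / norm["sd"]
    magnitude = abs(z)
    if magnitude < 1:
        label = "no difference from the norm"
    elif magnitude < 2:
        label = "mild difference (one standard deviation)"
    elif magnitude < 3:
        label = "moderate difference (two standard deviations)"
    else:
        label = "serious difference (three standard deviations)"
    return z, label

print(grade_deviation("total_anxiety", 2.7))  # ~2.17 SD -> moderate difference
```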
Some of these subjects were being administered psychoactive drugs, including antidepressant and antianxiety agents, which most likely influenced these findings. The finding that 12 of the subjects had significantly decreased human relations scores indicates that the life situations and/or the social circumstances and settings in which these subjects were living, in addition to aging and sickness, possibly contributed to these phenomena. To obtain accurate norms on our content-analysis scales for people aged 90 and older, a subgroup of such individuals who are evaluated as medically and mentally healthy needs to be selected for further study, rather than a sampling of these very elderly subjects regardless of whether or not they have a diagnosable medical or mental illness. In any event, the ease of administration, speed of assessment, and utility of this computerized verbal content-analysis measure used with these elderly subjects are illustrated by our findings.
v3-fos-license
2020-08-13T10:02:12.387Z
2020-08-07T00:00:00.000
225402374
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s13296-020-00379-5.pdf", "pdf_hash": "75663dec737eb1aed5fed32a18e497ea4639072b", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42622", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "sha1": "050a4431192915bb03aeec6ce7a59e03799fec9d", "year": 2020 }
pes2o/s2orc
Experimental Study of the Ductility of a Submerged Arc Welded Corner Joint in a High-Performance Steel Built-Up Box Column

Submerged arc welding, which is generally used for the corner joint of box-section columns, is a welding process with a high heat input. The influence on the strength and toughness of the heat-affected zone is an important concern, especially when used with a high-performance steel that may be more susceptible to heat input. The ductility of the welded corner joint is one of the important factors to ensure safety against external forces, such as during severe earthquakes. In this study, a series of material and cyclic loading tests of the corner joint comprising SA440C high-performance steel fabricated by submerged arc welding were conducted. The experimental results indicated that the welded corner joint comprising SA440C steel is ductile enough to dissipate input energy caused by the strong ground motion from an earthquake.

Introduction

In Japan, square hollow section (SHS) members are used as columns in steel moment-resisting frames in most building structures because the SHS behaves similarly against the bending moment around any axis and it is easy to design a bidirectional moment frame with high seismic resistance. For low- and middle-rise buildings, cold-formed SHS steel tubes are used in many cases. However, for high-rise buildings, built-up box sections fabricated using thick plates of high-strength or high-performance steel are often used because such large SHS steel tubes produced by cold-form processes are not available. Currently, the strongest steel generally used for built-up box columns in Japan is SA440C, with a tensile strength of 590 N/mm² (The Japan Iron and Steel Federation (JISF) 2004). Because SA440C is a heat-treated steel, heat input by welding may substantially affect the mechanical properties, i.e., the strength and toughness of the heat-affected zone (HAZ). Built-up box members are fabricated by assembling four steel plates and welding by submerged arc welding (SAW), as shown in Fig. 1. Because SAW has a high heat input, the influence on strength and toughness at the HAZ is of concern. Based on many studies of SAW of high-strength steel (Yoshida et al. 1963; Wada et al. 1998; Kojima et al. 2004), welding conditions that ensure the strength and toughness of welded parts have been specified and are used in Japan (JISF 2016). Previous studies focused on the tensile strength of the butt welding or the HAZ toughness of the corner joint by SAW. However, the stress and strain conditions around the corner joint of the box column are completely different. Figure 2b schematically shows the deformation of the box column around the beam-to-column joint. This is the possible worst case, where there is no diaphragm (or continuity plate) and the column face plate is subjected to a tensile force in the out-of-plane direction, as shown in Fig. 2a. The corner of the column may be subjected to an opening and closing force, as shown in Fig. 2c. However, it has not been verified whether or not a corner joint in columns has enough ductility to withstand such force. In this study, to investigate the ductility of a welded corner joint comprising SA440C steel box columns fabricated by SAW, a series of material tests and cyclic loading tests were conducted (Table 1).

Fabrication of Built-Up Box Column

First, a built-up box column was fabricated, and test specimens were extracted from it later. The shape of the column and welding detail are shown in Fig. 3.
This is the same column used in the electro-slag welding (ESW) test for SA440C steel (Iyama et al. 2019). It has a section with dimensions of 800 mm × 800 mm × 36 mm and is fabricated with SA440C steel plates; the welding conditions are given in Table 2. This is the current standard welding condition for high-performance steel SA440C in Japan according to JISF (2016). The welding wire is according to JIS (Japanese Industrial Standards) Z3351, and the welding flux is according to JIS Z3352. The chemical compositions of the base metal (column flange plate), backing strip and weld metal, obtained by spectroscopic analysis of samples cut from the box column, are shown in Table 3. The chemical composition of the flux is shown in Table 4.

Specimen for the Coupon Test

The A2-type specimen defined in JIS Z3111 was used for the tensile coupon test, which is shown in Fig. 4. Specimens were cut from the base metal (BM), weld metal (WM), and heat affected zone (HAZ). The BM specimen was cut from SA440C steel, which was the surplus of the built-up box column. The locations where specimens of WM and HAZ were cut from are shown in Fig. 5. WM specimens were cut from the center of the weld metal at 16 mm from the surface. HAZ specimens were cut from the center of the heat affected zone at 12 mm from the surface. The locations of the centers of the HAZ specimens from the fusion line were determined by the macroscopic test.

Testing Condition of the Coupon Test

The tensile test was conducted in a temperature-controlled chamber at 0 °C in order to confirm the yield point and tensile strength under the same temperature as the bending test of the corner welding. The temperature of the test specimens was continuously monitored by two thermocouples, and the tensile test was initiated after confirming that the temperature reached 0 °C.

Results of the Coupon Test

The stress-strain relationships of the BM, WM, and HAZ are shown in Fig. 6. The obtained test results are summarized in Table 5. For the HAZ specimens, because yield points did not clearly appear in the stress-strain relationships, the 0.2% offset strength was taken as the yield point in the table. For the WM, both the yield point and tensile strength were equivalent to those of the base metal. For the HAZ, both the yield point and tensile strength were higher than the minimum specified strengths for SA440C (base metal), as shown in Table 1 (JISF 2004). However, the softening by welding heat input caused the yield point and tensile strength in the HAZ to be below the actual yield and tensile strengths of the base metal.

Specimen for the Charpy Impact Test

Specimens for the Charpy impact test were the V-notch type, as shown in Fig. 7. According to JIS Z2242, the size of each specimen was 10 mm × 10 mm × 55 mm. Specimens were cut from the BM and submerged arc weld regions. For the BM, the specimen was cut out from the 1/4 thickness location of the plate (Iyama et al. 2019). For the submerged arc weld region, specimens were fabricated from the bottom of the welded part, as shown by the hatched area in Fig. 8. The specimens named SA-WM, SA-D1, SA-B, and SA-H1 were prepared with different notch locations, as shown in Fig. 9. The notch of the SA-WM specimen was located at the center of the WM, that of SA-D1 was located 1 mm toward the WM from the fusion (bond) line, that of the SA-B specimen was located on the fusion line, and that of SA-H1 was located 1 mm from the fusion line toward the BM.

Testing Condition of the Charpy Impact Test

Results of the Charpy Impact Test

The Charpy impact test results from the BM samples are shown in Fig. 10,
and the test results from the SA-WM, SA-D1, SA-B and SA-H1 samples are shown in Fig. 11. The Charpy absorbed energy vE is shown in the left column, the percent cleavage fracture (in ISO 148-1; the term is also referred to as the percent brittle fracture in JIS G0202) BA is shown in the right column, and the test temperature is shown on the horizontal axis. In each graph, the filled circles denote […].

The BM exhibited an absorbed energy of vE = 280 J and a percent brittle fracture of BA = 0% at 0 °C. Both the energy transition temperature and the fracture appearance transition temperature are less than −60 °C. These results indicate that it is a material with considerably high notch toughness. On the other hand, the absorbed energy and percent brittle fracture of the WM at 0 °C are vE = 58 J and BA = 52%, respectively. The fracture appearance transition temperature of the WM is approximately 0 °C. Furthermore, the absorbed energy and percent brittle fracture of the fusion line at 0 °C are vE = 29 J and BA = 90%, respectively. The fracture appearance transition temperature of the fusion line is more than 20 °C. Compared with the BM, the notch toughness of the fusion line is very low, caused by grain coarsening due to the large heat input of the SAW process.

Testing Method for the Vickers Hardness Test

To investigate the mechanical properties around the submerged arc weld, a Vickers hardness test was conducted with a test load of 5 kgf (49.03 N). As shown in Fig. 12a, the Vickers hardness was measured along three lines with x = 2, 16, and 32 mm. The pitch of the measured points was determined according to the distance from the fusion line, as shown in Fig. 12b.

Test Results of the Vickers Hardness Test

The distribution of the Vickers hardness data is shown in Fig. 13. Along all the measured lines, the hardness of the HAZ is lower than those of the BM and WM. The low hardness was caused by a large heat input during the SAW process and a slow cooling rate. On the other hand, the hardness of the WM is almost the same as that of the BM. These results correspond to the results of the coupon test described in Sect. 3.1.

Specimen for the Bending Test

To investigate the toughness of the SA440C corner weld fabricated by SAW, a bending test was conducted. Two L-shaped specimens (S1, S2) cut from the corner of a built-up box column with dimensions of 800 × 800 × 36 (SA440C) were used, as shown in Fig. 14. The macroscopic cross-section photos in the vicinity of the positions where both specimens were cut from are shown in Fig. 15. In ultrasonic testing (UT) performed prior to the bending test, a welding defect of incomplete penetration was found in Specimen 2 (S2). The length of the defect is larger than the allowable value in the Japanese standard for weld defects (AIJ 2008). The location of the defect is shown in Fig. 16.

Setup and Loading Protocol

The test apparatus is shown in Fig. 17. The bottom of the specimen is fixed to the reaction beam. The top of the specimen is connected to the actuator via the loading beam, and it is loaded in the horizontal direction. The actuator, the loading beam, and the specimen are connected with a 14 mm thick plate at the location shown as "pin" in the flange. Although the "pin" is not a mechanical pin joint, the rotational stiffness is small enough to be considered a hinge.
Simulating the out-of-plane deformation of the column flange under earthquake loading, cyclic loading was applied with an amplitude of 0-45 mm for the displacement of the tip of the actuator, δact, as shown in Fig. 18. To investigate whether or not the specimen would fracture in a brittle manner under low temperature, the specimen temperature was controlled during the first cycle. Before the first cycle, the specimen was cooled with dry ice. The first loading cycle was initiated when the temperature of the specimen, measured by two thermocouples placed in small holes at both ends of the backing bar, was in the range of 0 °C ± 3 °C. It was confirmed that the temperature was in the proper range throughout the first loading cycle, but during the following cycles the temperature was not controlled, as shown in Fig. 18. The cyclic loading continued until the peak load in a cycle decreased to 50% of the maximum load the specimen had experienced.

Test Results

The load-deformation relations for both specimens are shown in Fig. 19. The first half-cycle of specimen S1 is shown in Fig. 20. The vertical axis represents the load P applied by the actuator, and the horizontal axis represents the displacement δact of the tip of the actuator. From Fig. 20, it is confirmed that the loading amplitude of δact = 45 mm is 4 times larger than the yield deformation of the specimen. For specimen S1, without a weld defect, no cracks were observed in the 1st cycle, in which the test temperature was controlled in the range of 0 ± 3 °C. In the 2nd and subsequent cycles, the peak load in each cycle slowly decreased. During the 35th cycle, sudden drops of the load occurred twice with an audible sound from the metal. Small cracks were observed in the slit between the backing strip and the flange plate. In the 51st cycle, the peak load decreased to 50% of that of the 1st cycle, and the test was terminated. For specimen S2, similar to specimen S1, no cracks were observed during the 1st cycle, in which the test temperature was controlled to 0 ± 3 °C. In the 2nd and subsequent cycles, the peak load in each cycle slowly decreased. In the 21st cycle, a sudden drop of the load was observed with an audible sound from the metal, but a small crack was observed in the slit between the backing strip and the flange plate after the 22nd cycle. After the crack gradually appeared on the surface in the 49th cycle, the test was terminated because the peak load had decreased to 50% of that of the 1st cycle. The cracks on both specimens after the last loading cycle are shown in Fig. 21. The shapes of the cracks for both specimens are almost the same. The crack initiated at the slit tip between the backing strip and the flange plate and propagated in the direction of 45 degrees through the HAZ, whose strength was decreased by the heat input of welding. The notch toughness of the fusion line was very low, vE = 29 J, as shown in Fig. 10, but no crack or brittle fracture occurred along the fusion line, probably because its strength is higher than that of the HAZ, as shown in Fig. 13. The change in the peak load of each cycle for both specimens is shown in Fig. 22. The slopes of the peak load for both specimens are similar until a sudden load drop was observed due to crack initiation. The load drop occurred earlier in the S2 specimen than in the S1 specimen, likely because of the welding defect, but the fracture was not brittle in nature.
The degradation of the strength after crack initiation was slow, as approximately 50 cycles were necessary until the peak load decreased to 50% of the maximum. From this result, it can be concluded that the ductility of the submerged arc welded joint at the corner of the built-up box column was sufficiently large against the tensile force from the beam flange, even though it contained a welding defect that is not acceptable according to the Japanese standard for weld defects (AIJ 2008).

Conclusion

To estimate the ductility of the welded corner joint of high-performance SA440C steel fabricated by SAW under standard welding conditions in Japan, a series of material tests and cyclic loading tests of the corner joint were conducted. The obtained results are summarized as follows. From the coupon test and the Vickers hardness test, it was observed that the strength of the WM was equivalent to that of the BM. Although the strength of the HAZ satisfied the specification, it was lower than those of the BM and WM due to the high heat input, whereas the strength of the fusion line was higher than those of the BM and WM. From the Charpy impact test, it was observed that, although the notch toughness of the BM was very high, those of the WM and HAZ were not; the notch toughness was especially poor at the fusion line. In the cyclic loading test of the corner joint, two specimens were examined: a specimen without a weld defect and a specimen with incomplete penetration. In both specimens, even though the strength decreased with the generation and propagation of cracks, brittle fracture did not occur. From this result, it was confirmed that the submerged arc welded corner joint of high-performance SA440 steel has sufficient ductility.
A nonconvex TV_q-l_1 regularization model and the ADMM based algorithm

The total variation (TV) regularization with l_1 fidelity is a popular method to restore the image contaminated by salt and pepper noise, but it often suffers from limited performance in edge-preserving. To solve this problem, we propose a nonconvex TV_q-l_1 regularization model in this paper, which utilizes a nonconvex l_q-norm (0 < q < 1) defined in the total variation (TV) domain (called the TV_q regularizer) to regularize the restoration, and uses l_1 fidelity to measure the noise. Compared to the traditional TV model, the proposed model can more effectively preserve edges and contours since it provides a more sparse representation of the restoration in the TV domain. An alternating direction method of multipliers (ADMM) combining with majorization-minimization (MM) scheme and proximity operator is introduced to numerically solve the proposed model. In particular, a sufficient condition for the convergence of the proposed algorithm is provided. Numerical results validate the proposed model and algorithm, which can effectively remove salt and pepper noise while preserving image edges and contours. In addition, compared with several state-of-the-art variational regularization models, the proposed model shows the best performance in terms of peak signal to noise ratio (PSNR) and mean structural similarity index (MSSIM). We can obtain about 0.5 dB PSNR and 0.06 MSSIM improvements against all compared models.

Images are often contaminated by additive noise during the formation, transition or recording process, usually modeled as:

f = u + n, (1)

where u is the original true image, f is the corresponding noisy version, and n represents additive noise.
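To make the setting concrete, the following minimal sketch (Python/NumPy) builds an observation f contaminated by salt and pepper noise from a toy image u and evaluates the PSNR quality measure used later in the paper; the image, noise density and random seed are illustrative assumptions, not the paper's actual test data.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_salt_pepper(u, density=0.03):
    """Corrupt a [0, 1] image with salt (1) and pepper (0) noise."""
    f = u.copy()
    mask = rng.random(u.shape) < density
    f[mask] = rng.integers(0, 2, size=mask.sum())  # 0 or 1 with equal probability
    return f

def psnr(u, v, peak=1.0):
    """Peak signal to noise ratio between two images with values in [0, peak]."""
    mse = np.mean((u - v) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative piecewise-constant test image: a bright square on a dark background.
u = np.zeros((128, 128)); u[32:96, 32:96] = 1.0
f = add_salt_pepper(u, density=0.03)
print(f"PSNR of the noisy observation: {psnr(u, f):.2f} dB")
```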
Solving for u from the linear system (1) is a classical inverse problem, and an ill-posed one, since the solution u is non-unique and very sensitive to the initialization. A natural method to address this problem is the regularization technique, i.e., functional minimization that introduces some prior information on the restoration [1-5], usually formulated as:

min_{u∈U} R(u) + λ F(f − u),

where R(u) is the regularization term that embodies the priors, F(f − u) is the fidelity term that forces the closeness of the restoration u to the observation f, U is a function space modeling the restoration u, and λ > 0 is a tuning parameter that controls the tradeoff between the two terms. For the regularization term, the earliest regularizer is the Tikhonov regularization term proposed by Phillips 6 and Tikhonov 7 in the 1960s, which is defined as a quadratic functional of the l₂ norm of |∇u|, i.e., ‖∇u‖₂². Tikhonov regularization has a strong ability to remove noise. However, it often overly smoothes the image edges. Rudin, Osher and Fatemi in 1992 8 proposed total variation (TV) regularization to address this over-smoothing problem. The functional measured by TV allows for discontinuities along curves during the functional minimization, and therefore edges and contours can be preserved in the restoration u. Later on, many scholars have done a lot of research on TV regularization and proposed many improved TV-based regularization terms, such as high-order TV 9,10, hybrid TV 11,12, non-local TV 13,14, overlapping TV 15,16, and anisotropic TV 17. We note that the TV-based regularizers mentioned above are convex functionals.

In the last decades, nonconvex regularization based on sparse priors has attracted much attention and found wide applications. It is based on the observation that signals (and images) usually have a very sparse representation in some transformed domain (such as the Fourier or cosine transform), or in some dictionary (such as a wavelet dictionary, framelet dictionary, or self-adaptive dictionary) 18,19. It is well known that the l₀-norm, measured by the number of nonzero entries, is the exact measurement of sparsity. However, it is difficult to solve in practice. A popular method to attack this problem is to use the l₁-norm as a relaxed measurement, which is a convex functional and makes the problem easier to solve. It has been shown that, under some assumptions, regularization problems with such l₁ relaxation lead to a near optimal sparse solution. To further encourage the sparsity of the solutions, some nonconvex regularizers have been proposed, since nonconvex functions are much closer to the l₀-norm than their convex counterparts 20,21. Since the seminal work of Geman and Geman in 22, various nonconvex regularization models have been proposed, such as [23-27]. Although nonconvex optimization problems cannot guarantee the existence and uniqueness of the solution, and lead to more complex numerical calculation, a variety of applications (e.g., [28-31]) have shown that nonconvex regularization models outperform their convex counterparts and yield restorations of high quality with sharp and neat edges. In addition, Nikolova et al. 25,30 provided a theoretical explanation for this phenomenon. For the fidelity term, one usually uses the l₂-norm ‖f − u‖₂² to measure the closeness between the restoration u and the observation f [9-16].
It is well known that such least-squares fitting using the l₂-norm corresponds to mean filtering, which is only suitable for removing additive Gaussian noise but fails for salt and pepper noise. In contrast, least-absolute fitting using the l₁-norm leads to median filtering, which is less sensitive to outliers. Therefore, the l₁-norm fidelity term ‖f − u‖₁ is suitable for removing salt and pepper noise. Many regularization models with l₁-norm fidelity have been proposed for salt and pepper noise removal, such as 3,26,32-34. In addition, Chan and Esedoglu in 34 demonstrated that TV regularization with the l₁-norm fidelity term (TVL1) is contrast invariant, as opposed to that with the l₂-norm fidelity term. However, the TVL1 model has limited performance in edge-preserving due to the use of the convex TV regularizer. We note in passing that Meyer in 35 suggested using some weaker-than-l₂ (and l₁) norms as the fidelity term to measure the residual. He introduced three functional spaces, G, E and F, to model oscillatory functions, which are very suitable for image cartoon-texture decomposition but not for salt and pepper noise removal.

Based on the above analysis, we note that: (1) TV regularization with l₁ fidelity can successfully remove salt and pepper noise, but lacks the ability of edge-preserving; (2) although nonconvex regularization can preserve image edges well, few studies concern salt and pepper noise removal. In order to solve these problems, and to effectively remove salt and pepper noise while better preserving image edges and contours, a nonconvex TV_q − l₁ regularization model is proposed in this paper. It utilizes a nonconvex TV_q regularizer defined in the TV domain to model the restoration u, and employs the l₁-norm as the fidelity term for the noise f − u. The proposed model can therefore remove salt and pepper noise while preserving image edges and contours, owing to the combination of nonconvex regularization and the l₁-norm fidelity term. A first-order algorithm based on the alternating direction method of multipliers (ADMM), combined with an MM scheme and the proximity operator, is developed to numerically solve this nonconvex model. In addition, a sufficient condition for the convergence of the proposed algorithm is provided. The main contributions of this work are as follows:

• A nonconvex TV_q regularization variational model with l₁-norm fidelity is proposed. Although much research has been done on nonconvex regularization and the l₁-norm fidelity term separately, to the best of our knowledge, there are very few studies on their combination. A few recent works can be seen in 3,26,32. We note that the nonconvex regularizers in these works are defined in the image domain itself, or in the coefficient domain on a basis, whereas our regularizer is defined in the TV domain. Compared with the models in 3,26,32, nonconvex TV regularization has superior performance in edge-preserving; we refer the readers to 25,30 for more details.

• A first-order algorithm based on ADMM combined with the proximity operator is introduced for the nonconvex model. In addition, the convergence property of the proposed algorithm is analyzed. We note that for the "nonconvex regularization + l₁-norm fidelity" models, the authors in 3,26,32 also used the ADMM framework, but 3 did not give a convergence analysis, and the authors in 26,32 derived a convergent algorithm by smoothing the l₁-norm fidelity term.
Different from the methods in 26,32, we give a convergence analysis under some natural assumptions on the proposed functional and the parameters.

The structure of this paper is organized as follows. "Introduction" section presents the background and starting point of this study. "Related work" section gives some background knowledge involving the TVL1 model, nonconvex regularization and the proximity operator. "Methods" section details the proposed nonconvex TV_q − l₁ regularization model and introduces an efficient numerical algorithm for it; a sufficient condition for the convergence of the proposed algorithm is also provided in this section. "Results" section discusses the performance of the proposed model and algorithm, and the work ends with concluding remarks in the "Conclusion" section.

Related work

In this section, we recall some background knowledge closely related to the present work: the TVL1 model is the seminal work for salt and pepper noise removal under the variational regularization framework, nonconvex regularization guides the design of the regularizer of the proposed model, and the proximity operator is used to solve the nonconvex subproblems in the iterative algorithm.

TVL1 model. Rudin et al. 8 proposed the following TV regularization model with l₂-norm fidelity term (TVL2) to address the over-smoothing problem often arising in Tikhonov regularization,

min_u ‖∇u‖₁ + (λ/2)‖f − u‖₂², (2)

where ‖∇u‖₁ = ∫|∇u|dx is the regularization term, and ‖f − u‖₂² is the l₂-norm fidelity term. Model (2) is convex with respect to u and easy to solve in practice. The TVL2 model (2) is suitable for Gaussian noise removal. In addition, the TV energy does not penalize discontinuities of the function along contours, so the edges can be preserved in the restoration u by model (2). Chan and Esedoglu in 34 proposed the following TV regularization model with l₁-norm fidelity term (TVL1),

min_u ‖∇u‖₁ + λ‖f − u‖₁. (3)

The TVL1 model (3) is more suitable for salt and pepper noise removal than the TVL2 model (2). In addition, compared to TVL2, the TVL1 model is contrast invariant. The authors in 34 gave a simple but illustrative example to show the characteristics of the solutions of the TVL1 and TVL2 models. Assuming the observed image f(x) to be the characteristic function 1_{B_r(0)}(x) of a disk B_r(0) centered at the origin with radius r, they derived closed-form solutions of (2) and (3). The solution of the TVL2 model (2) can be written as:

u_TVL2 = max(1 − 2/(λr), 0) · 1_{B_r(0)}(x). (4)

Assuming the minimizer of the TVL1 model (3) has to be of the form c·1_{B_r(0)}(x) for some constant c ∈ [0, 1], they get the solution as:

u_TVL1 = 0 if r < 2/λ; u_TVL1 = c·1_{B_r(0)}(x) with any c ∈ [0, 1] if r = 2/λ; u_TVL1 = f if r > 2/λ. (5)

From (4) and (5), we observe that the disks in both the TVL2 and TVL1 solutions vanish if the radius is less than 2/λ. But for disks whose radius r is greater than 2/λ, the TVL1 model preserves the disk intact, i.e., u_TVL1 = f, in contrast to the "contrast loss" phenomenon of the TVL2 model, where the loss is inversely proportional to r. This intuitive example indicates that TV regularization with l₁-norm fidelity can better preserve the contrast of the image than that with l₂-norm fidelity in image restoration applications.

Nonconvex regularization. From the viewpoint of sparse representation, the TV energy is actually the l₁-norm of the gradient modulus, which can be seen as a relaxation of the l₀-norm, the exact measurement of sparsity. To promote the sparsity of the entries, a nonconvex measurement is a good candidate since it approximates the l₀-norm more closely than the l₁-norm.
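The following small sketch (Python/NumPy, on an arbitrary toy image of my own choosing, not the paper's data) makes this point concrete in the TV domain: it evaluates the l₀ count of nonzero gradient magnitudes, the convex TV measure Σ|∇u|, and a nonconvex Σ|∇u|^q with q = 1/2. The TV_q value varies much less with the edge contrast than the TV value does, behaving more like the contrast-independent l₀ count.

```python
import numpy as np

def grad_mag(u):
    """Forward-difference gradient magnitude |∇u| with replicate boundaries."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sqrt(gx**2 + gy**2)

def measures(u, q=0.5):
    g = grad_mag(u)
    l0 = np.count_nonzero(g)   # exact sparsity of the gradient
    tv = g.sum()               # convex TV = l1-norm of |∇u|
    tvq = (g ** q).sum()       # nonconvex TV_q measure, 0 < q < 1
    return l0, tv, tvq

high = np.zeros((64, 64)); high[16:48, 16:48] = 1.0   # high-contrast square
low = np.zeros((64, 64)); low[16:48, 16:48] = 0.2     # same edges, low contrast

for name, img in [("high-contrast", high), ("low-contrast", low)]:
    l0, tv, tvq = measures(img)
    print(f"{name:13s}  l0 = {l0:5d}   TV = {tv:8.2f}   TV_1/2 = {tvq:8.2f}")
```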
Nikolova et al. 25,30 proposed the following nonconvex TV regularization model with l₂-norm fidelity term, called the NTVL2 model in the following:

min_u ‖ϕ(|∇u|)‖₁ + λ‖f − u‖₂², (6)

where ϕ(t) is a nonconvex potential function, and ‖ϕ(|∇u|)‖₁ = ∫ϕ(|∇u|)dx is the nonconvex regularization term. Since the nonconvex function ϕ(t) is closer to the l₀-norm than the l₁-norm, the NTVL2 model (6) can obtain a more sparse representation of |∇u| than the TVL2 model (2). Furthermore, compared to the TVL2 model, the NTVL2 model increases the penalty on patterns of small variation, while decreasing the penalty on patterns of large variation. So, the NTVL2 model has superior performance in noise removal and edge-preserving compared with the classical TVL2 model. However, the NTVL2 model (6) is only suitable for Gaussian noise removal due to the use of the l₂-norm fidelity. To achieve sparse recovery in the presence of salt and pepper noise, some nonconvex regularization models with l₁-norm fidelity, called NRL1, have recently been proposed 3,26,32, which are defined as follows:

min_u P(u) + λ‖Au − f‖₁, (7)

where P(·) is a nonconvex function for sparsity promotion. If A is an identity matrix, model (7) recovers the sparse image u. If A is a sensing matrix accumulated from a basis, model (7) recovers the image Au, which has the most sparse representation on this basis. We note that the nonconvex regularizer in (7) is defined in the image domain itself, or in the coefficient domain on a basis. To inherit the advantages of nonconvex TV regularization in image restoration, we propose a generalized nonconvex regularization variational model for salt and pepper noise removal. It utilizes a generalized nonconvex regularizer defined in the TV domain as the prior to model the restorations, and employs the l₁-norm as the fidelity term to measure the noise. The new model can effectively remove salt and pepper noise due to the use of the l₁-norm fidelity, and preserves image edges and contours well due to the use of nonconvex TV regularization.

Proximity operator of the l_q function. The proximity operator is a generalized form of the projection operator, often used to solve non-differentiable optimization problems. In this paper, we use it to solve the nonconvex subproblems in the iterative algorithm. For a proper and lower semi-continuous function P(x), the corresponding proximity operator is defined as 36,37

prox_{P,ρ}(t) = argmin_x { P(x) + (ρ/2)‖x − t‖₂² }. (8)

Intuitively, the proximity operator prox_{P,ρ}(t) approximates the point t with some other point x under the norm ‖x − t‖₂² and the penalty P(x). The positive parameter ρ > 0 is introduced as a means to control the approximation. In the following, we review the proximity operator for the l_q (0 < q < 1) function, which will be used in our numerical implementation. When the penalty is given as the l_q-norm (0 < q < 1), i.e., P(x) = |x|^q, the proximity operator does not have a closed-form expression except for the two special cases q = 1/2 and q = 2/3. When P(x) = |x|^{1/2}, the corresponding proximity operator is an l_{1/2} thresholding function 38,39. When P(x) = |x|^{2/3}, the corresponding proximity operator is an l_{2/3} thresholding function 39,40. For any other q, the authors in 41 give a semi-implicit expression of the proximity operator with l_q-norm penalty, which is defined as:

prox_{P,ρ}(t) = 0 if |t| ≤ τ, and prox_{P,ρ}(t) = sign(t)·y* if |t| > τ. (10)

In (10), the threshold τ satisfies τ = β + qβ^{q−1}/ρ with β = (2(1 − q)/ρ)^{1/(2−q)}, and y* is the shrinkage amount, which has no explicit expression. It is a zero point of the non-linear function h(y) = qy^{q−1} + ρy − ρ|t| over the region (β, |t|).
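A minimal numerical implementation of this l_q proximity operator is sketched below (Python/NumPy, written directly from the threshold and first-order condition stated above). The paper later solves h(y) = 0 with Newton's method; plain bisection is used here as a simpler, equally valid choice. This is an illustrative sketch, not the authors' code.

```python
import numpy as np

def prox_lq(t, q, rho, iters=60):
    """Proximity operator of P(x) = |x|**q (0 < q < 1) with parameter rho,
    i.e. argmin_x |x|**q + (rho/2) * (x - t)**2, following the semi-implicit
    thresholding description: below the threshold tau the result is 0; above
    it the result is sign(t) * y*, where y* is the zero of
    h(y) = q*y**(q-1) + rho*y - rho*|t| on (beta, |t|)."""
    beta = (2.0 * (1.0 - q) / rho) ** (1.0 / (2.0 - q))
    tau = beta + q * beta ** (q - 1.0) / rho
    a = abs(t)
    if a <= tau:
        return 0.0
    # Bisection on [beta, a]: h(beta) < 0 and h(a) > 0 whenever a > tau.
    lo, hi = beta, a
    for _ in range(iters):
        y = 0.5 * (lo + hi)
        if q * y ** (q - 1.0) + rho * y - rho * a > 0.0:
            hi = y
        else:
            lo = y
    return float(np.sign(t)) * 0.5 * (lo + hi)

# Example: small inputs are set to zero, large inputs are shrunk.
for t in (0.05, 0.5, 2.0, -2.0):
    print(f"prox_lq({t:+.2f}) = {prox_lq(t, q=0.5, rho=4.0):+.4f}")
```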
Methods The main purpose of this paper is to effectively remove salt and pepper noise while successfully preserve image edges and contours in image restoration under the variational framework. Firstly, a variational regularization model combining nonconvex regularization and l 1 fidelity is proposed, which is actually a minimization problem. And then, the classical ADMM algorithm is developed to numerically solve the proposed model, which is programmed by MATLAB software in the experiments. Finally, some commonly used test images and datasets are used to validate the proposed model and algorithm. The PSNR and MSSIM indexes are used as the means to quantitatively evaluate the performance. Figure 1 shows a flow chart to clarify the study design of the present work. The proposed nonconvex TV q − l 1 regularization model. In this section, we present a nonconvex TV q − l 1 regularization variational model, called NTVL1 in the following, which is defined as: www.nature.com/scientificreports/ where P(Bu) is the regularization term, in which P(·) : R + → R + is a continuous, increasing and nonconvex l q function for sparsity promotion, and B is the gradient operator |∇| = (∇ 2 x + ∇ 2 y ) 1/2 . We note in passing that B can be choose some other difference operators, such as x-directional difference ∇ x , y-directional difference ∇ y , and anisotropic difference operator ∇ x + ∇ y . The l 1 -norm �f − u� 1 is the fidelity term. U is a function space (e.g., Sobolev space, bounded variation space). And is a positive tuning parameter, which balances the regularization term and fidelity term. Model (11) combines the advantages of nonconvex TV regularization and l 1 -norm fidelity. It can effectively remove the salt and pepper noises by l 1 -norm fidelity term, while preserving the valuable edges and contours via nonconvex TV regularization. Remark 1 Although much research has been done on the nonconvex regularization and l 1 -norm fidelity term separately, to the best of our knowledge, there are every few studies on the combination of them. A few works can be seen in 26,32 , called NRL1, which are defined as follows: where P(u) is a nonconvex regularization term to measure the sparsity of u, and A is a transformation matrix. If A is a identity matrix, NRL1 model is to recover the sparse image u, and if A is a sensing matrix accumulated by a basis, it recovers the image Au that has the most sparse representation on the basis A. But differed from NRL1 regularizer P(u), our nonconvex regularizer is P(Bu) that is defined in TV domain. In image restoration application, such scheme can better preserve edges and contours than NRL1. Remark 2 The model (11) can effectively preserve edges and contours in the restoration u due to the use of nonconvex TV regularization term. Using a basis of a local framework (N, T), where N is normal direction defined as N = ∇u/|∇u| , and T is the corresponding tangent direction defined as T = ∇u ⊥ /|∇u| , we derive the Euler-Lagrange equation associated with (11), where u TT and u NN are the second derivatives of u in T and N directions, and P ′ (|Bu|)/|Bu| and P ′′ (|Bu|) can be seen as the adaptive diffusion velocity along T-direction and N-direction, respectively. It is obviously that P ′ (t) is a monotony decrease function and satisfies P ′′ (t) < 0 since P(t) is a nonconvex increasing function. Along T-direction, for the image pixels where |Bu| ≈ 0 (homogeneous regions), the diffusion Eq. 
(12) has strong smoothing effect since the diffusivity P ′ (|Bu|)/|Bu| is of a large value. And for the image pixels where the value of |Bu| is large (edges), the model (12) has weak smoothing effect since the value of diffusivity P ′ (|Bu|)/|Bu| is small. Along N-direction, the adaptive diffusivity always satisfies P ′′ (|Bu|) < 0 for each image pixel, which means that diffusion in normal direction is always reverse. Based above, we can conclude that the proposed model can effectively smooth the image homogeneous regions, while still preserving the edges and contours very well. www.nature.com/scientificreports/ The proposed algorithm. Obviously, model (11) is a nonconvex and nonsmooth optimization problem since the first term is nonconvex, and the second term is nonsmooth. In this section, we propose an efficient first-order algorithm to solve this model using ADMM framework. ADMM algorithm decouples the variables and makes the global problem easy to tackle, which is very suitable to solve the distributed optimization and high-dimensional optimization problems. With the use of an auxiliary Bu = v , ADMM algorithm is to solve the following linearly constrained reformulation of (11): Transforming (13) into an augmented Lagrangian formulation, we obtain where u and v are primal variables, p is the Lagrangian multiplier, also called dual variable, and ρ > 0 is a penalty parameter. Functional (14) can be simplified as: where C = 1 2ρ �p� 2 2 that can be neglected in the minimization problem. Then, we alternatively minimize (15) with respect to u and v, and then update the multiplier p. Specifically, the minimization solutions (u k+1 , v k+1 ) are obtained alternatively while the other variables are fixed, which leads to the following iteration scheme. • Step 1. Fixing variables v and p, we minimize the energy L(u, v; p) with respect to u. • Step 2. Fixing variables u and p, we minimize the energy L(u, v; p) with respect to v. • Step 3. Updating Lagrangian multiplier p as follows: ADMM algorithm solves the original model (11) by alternatively updating the above steps. In the following, we solve the subproblems (16) and (17) in detail. Solve the subproblem (16) with respect to u. Using an auxiliary variable w = u − f , we convert the minimization problem (16) into an equivalent form, Then, the optimal u k+1 can be computed by u k+1 = f + w k+1 . The w-subproblem (19) is actually l 1 -regularized least squares problem. We use a majorization-minimization (MM) scheme to solve this subproblem approximately. Specifically, let n k = v k − Bf − p k ρ , we majorize the quadratic functional �Bw − n k � 2 2 in the objective functional (19) with a simple surrogate functional by linearizing it at point w k , where d w k is the gradient of the quadratic functional �Bw − n k � 2 2 at point w k , computed by d w k = B T (Bw k − n k ) , and τ > 0 is a proximal parameter. Using such an approximation of Bw − n k 2 2 in (19), we denote the new energy as F(w, w k ) . Obviously, when the proximal parameter τ satisfies 1/τ > max B T B , where max B T B denotes the maximum eigenvalue of the matrix B T B , the new energy F(w, w k ) satisfies the classical MM conditions: (i) F(w, w k ) ≥ F(w) for all w, and (ii) F(w k , w k ) = F(w k ) . 
Minimizing the surrogate energy F(w, w k ) in stead of the original energy F(w), and neglecting the constant in the F(w, w k ) , we obtain the following minimization problem, (20) is a classical l 1 + l 2 minimization problem, which can be explicitly solved by a soft thresholding with the shrink operator, i.e., where shrink operator is defined as: Solve the subproblem (17) with respect to v. Let m k = Bu k+1 + p k ρ , the v-subproblem (17) can be computed by the proximity operator, i.e., where prox P,ρ is the proximity operator for the function P(·) with penalty ρ. The above states the algorithm that solves the augmented Lagrangian formulation L(u, v; p) with a fixed k. At last, in order to incorporate the algorithm into ADMM framework to solve the original nonconvex model (11), starting with the initial assignment as k = 0 , u 0 = f , v 0 = Bf and p 0 = 0 , we reiterate the above computing processes, each time updating the value of k as k + 1 . Consequently, the ADMM algorithm to our nonconvex variational model (11) is written as follows (Algorithm 1). We note that for the Algorithm 1, it only needs one loop to iteratively update the function values. The computation load in each iteration is matrix multiplication. So the complexity of the Algorithm 1 is O(mn), where m is the size of the input images, and n is the number of the loop iterations. Convergence analysis. In this subsection, we analyze the convergence property of the Algorithm 1. Note that the convergence issue of ADMM algorithm for the convex models has been well addressed, such as [42][43][44][45] , while there are very few studies on the nonconvex cases. Inspired by the approaches and conclusions in 46,47 , we derive the following results for convergence of the Algorithm 1. Firstly, several assumptions are introduced, which will be used in the following convergence analysis. Assumption 1 Function P(·) is closed, proper and lower semicontinuous. Assumption 2 The gradient of P(·) is Lipschitz continuous, i.e., for any x and y, there exists a positive constant K > 0 , such that: We note that here we use gradient ∇ rather than derivative since gradient is a generalization of the derivative. Assumption 3 The penalty parameter ρ is chosen large enough such that ρ > K . In this case, the v-subproblem (17) is strongly convex. www.nature.com/scientificreports/ Assumption 4 The energy E(u) is bounded below, i.e., E = min E(u) > −∞. We first show that the difference of the dual variable p in the iteration can be bounded above by that of the primal variable v. Lemma 1 Let (u k , v k ; p k ) be the sequence obtained by Algorithm 1, then we have following: Proof From the v update step (17), we have the following optimality condition: Combining with the dual variable update step (18), i.e., We have By the assumption that the gradient of P is Lipschitz continuous, we have The desired result is obtained. Next, we show that the augmented Lagrangian function L(u, v; p) is monotonically decreasing in the iterative process. Lemma 2 Let (u k , v k ; p k ) be the sequence obtained by Algorithm 1, then we have following: where Proof In (19), let In w-subproblem (20), we actually minimizes the following approximated objective of (19), Because w k+1 is the minimizer of F w, w k , we have which implies that: where max B T B denotes the maximum eigenvalue of the matrix B T B . The inequality (24) combining with (25) yields: The desired result is obtained. 
Lemma 3 Let (u k , v k ; p k ) be the sequence obtained by Algorithm 1, then we have following: where γ 2 is a positive constant associated with ρ. Proof By the assumption, ρ > K implies that L u k+1 , v; p k is strongly convex respect to the variable v. So, we can deduce that there must exist a positive constant γ 2 (ρ) such that: Because v k+1 is a minimizer of L u k+1 , v; p k , we have It follows from (27) and (28) that: The desired result is obtained. Lemma 4 Let (u k , v k ; p k ) be the sequence obtained by Algorithm 1, then we have following: Proof We first split the difference of the augmented Lagrangian function by: The first term in right side of (29) can be computed by: The second term in right side of (29) can be split as: which together with Lemmas 2 and 3 yields: Combining Eqs. (33) and (34), we obtain The desired result is obtained. Lemma 4 implies that if the condition ργ 2 > 2K 2 is satisfied, then which implies that the value of the augmented Lagrangian function will always decrease with the iteration progressing. We note that as long as parameter γ 2 = 0 , one can always find a suitable ρ large enough such that the condition ργ 2 > 2K 2 is satisfied, since ργ 2 is monotonically increasing with respect to ρ , and 2K 2 is a constant associated with the function P(·). Lemma 5 Let (u k , v k ; p k ) be the sequence obtained by Algorithm 1, then we have following: where E is the lower bound of E(u) defined in Assumption 4. Lemma 4 shows that the augmented Lagrangian function L u k , v k ; p k is monotonically decreasing, and Lemma 5 shows that L u k , v k ; p k is bounded below. So, we can conclude that the augmented Lagrangian function L u k , v k ; p k is convergent as k → ∞. Theorem 1 Let (u k , v k , p k ) be the sequence obtained by Algorithm 1, suppose that ρ > K , ργ 2 > 2K 2 and 1/τ > max (B T B) , then we have following: If U is a compact set, then the sequence z k = (u k , v k , p k ) converges a limit point z * = (u * , v * , p * ) . In addition, z * is a stationary point of the augmented Lagrangian function L(u, v; p). Proof We first prove part (i) of the theorem. By Lemmas 4 and 5, we can conclude that the augmented Lagrangian function L u k , v k ; p k is convergent as k → ∞ , which implies that: By Lemma 4, we have Since ργ 2 > 2K 2 and γ 1 > 0 , taking limit for (40), and combining (39), we have By Lemma 1, we further obtain With the fact that p k+1 = p k + ρ Bu k+1 − v k+1 , using (41), we have which implies that: Next, we prove part (ii) of the theorem. We first show that there exists a limit point for the sequence z k = (u k , v k , p k ) . Since U is a compact set, and lim k→∞ u k+1 − u k 2 2 = 0 , there must exist a convergent subsequence u k i 1 of u k such that u k i 1 → u * . Since B is a bounded linear operator, and U is a compact set, we can deduce that the map set BU = {v : Bu = v, u ∈ U} is also a compact set. With the fact that lim k→∞ Bu k − v k 2 2 = 0 and lim k→∞ v k+1 − v k 2 2 = 0 , we can deduce that v k also lies in the compact set, and exists a convergent subsequence v k i 2 such that v k i 2 → v * . Note that ∇P(v) is Lipschitz continuous, and BU is a compact set, we can deduce that ∇P(v)(v ∈ BU) is bounded, which implies that ∇P(v k ) is a bounded sequence. 
With the fact that p k = ∇P(v k ) and lim k→∞ p k+1 − p k 2 2 = 0 , there must exist a convergent subsequence p k i 3 such that p k i 3 → p * , Selecting the same indexes from Next, we show that any limit point of the sequence z k is the a stationary point of the augmented Lagrangian function L (u, v, p). By the optimality conditions, the sequence z k = (u k , v k , p k ) satisfies that: Since z k i → z * = (u * , v * , p * ) as k i → ∞ , passing to the limit in (42) along the subsequence z k i , we obtain (37) P(v k+1 ) + ∇P(v k+1 ), Bu k+1 − v k+1 P(Bu k+1 ). www.nature.com/scientificreports/ which implies that z * = (u * , v * , p * ) is a stationary point of the augmented Lagrangian function L (u, v, p). The desired result is obtained. Results In this section, we show the effectiveness of the proposed model and algorithm in image denoising application. The programs are coded in MATLAB, and run on a PC with Intel Core i5 2.5G CPU and 4.00G RAM. The peak signal to noise ratio (PSNR) and mean structural similarity (MSSIM) index 48 are used as the means of judging the performance. The main experimental content of this paper is as follow: (1) The effectiveness of the proposed model, and the convergence of the algorithm. (2) The effect of the nonconvex parameter q in the proposed model. In all experiments, the difference operator B in the model (11) is chosen as the gradient operator |∇| . Then, P(|∇u|) is the nonconvex total variation measure of the input u. Here, we give the definition of ∇ in the discrete case. Rearranging the two-dimensional image matrix u in (11) into a vector by scanning the column one by one, we define the gradient operator ∇ in a matrix form, where I n is the n-dimensional identity matrix, ⊗ denotes the Kronecker product, and ∇ 1 is difference elementary matrix defined as: Then, let u be an image in R n 2 . The gradient of u can be computed as: In the proposed model (11), we use l q -norm as the nonconvex penalty function, i.e., P(x) = |x| q . Here, we only use the l q -norm penalty since (1) it has a flexible parametric form; (2) it's proximity operator corresponds to a thresholding function that is easy to compute in the practice; (3) the popular hard-and soft-thresholding is the special cases of our l q thresholding. By Theorem 6, the parameter ρ must be chosen large enough to guarantee the convergence conditions. However, the ADMM algorithm would be every slow and impractical if with a very large value of ρ . In this paper, we adopt the scheme in 26 to address this problem. Starting with a properly small value of ρ , we gradually increase the values of ρ in the iteration until reaching the target value, i.e., 0 < ρ 0 < ρ 1 < · · · < ρ k · · · . The stopping criterion for the proposed algorithm is that the relative-change between the restored images of the successive iterations is smaller than ε = 10 −3 . The parameter τ is set as τ = 0.9/ max (B T B) ; is manually tuned such that the restoration achieves the largest PSNR value. The effectiveness of the proposed model. The first experiment aims to show the effectiveness of the proposed model and algorithm in image denoising application. The nonconvex regularization function is chosen as P(x) = |x| 1/2 whose corresponding proximity operator is computed by (9). Test images shown in the first column of Fig. 2 are two synthetic images and two real images with the size of 256 × 256 . The second column of Fig. 
2 shows the corresponding noisy versions obtained by adding salt and pepper noise with a density of 0.03 to the clean data. Here, the Matlab built-in function imnoise is used to contaminate the images. The denoising results are shown in the last column of Fig. 2. From the results, we observe the following: (1) the proposed model is very effective for salt and pepper noise removal due to the use of the l₁ fidelity term, and almost all salt and pepper noise is removed in the restorations; (2) the image edges and contours are preserved well by using nonconvex TV regularization.

Next, we demonstrate the convergence property of the proposed algorithm by plotting two measures of the sequence u^k produced by Algorithm 1. Here, the test data are the images of the first experiment. Figure 3 shows the plots of the relative change of the restorations versus iterations, where the relative change of the restoration u in the iteration is computed by ‖u^{k+1} − u^k‖₂ / ‖u^{k+1}‖₂. Figure 4 shows the plots of the energy E(u^k), computed by (11), versus iterations. From Fig. 3, we can see that the relative change of u decreases significantly in the first few steps and then converges to zero, which implies that u^{k+1} → u^k as k → ∞ in the l₂ topology. From Fig. 4, we observe that the energy E first decreases as the iteration progresses and then converges to a constant, which implies that the limit point of the sequence u^k is a local minimum point of the functional E(u). These two figures support the convergence analysis of the proposed algorithm in the "Convergence analysis" subsection.

The test of different nonconvex parameter q. In this section, we test the l_q-norm nonconvex penalty functions with different q-values in the interval (0, 1). In the proposed algorithm, the v-subproblem is updated by the proximity operator with l_q-norm penalty. We note that when q = 1/2 and q = 2/3, the corresponding proximity operators are the l_{1/2} and l_{2/3} thresholding functions, which can be explicitly computed by (9) and (10), respectively. For any other value of q, the corresponding proximity operator is computed by the semi-implicit expression, in which we need to solve for a zero point y* of the non-linear function h(y) = qy^{q−1} + ρy − ρ|t|. In the numerical implementation, the zero point y* is found by Newton's method since h(y) is a convex function. Figure 5 shows the denoising results of the proposed model with different values of q, q ∈ {0.2, 0.5, 0.7, 0.9}, for two test images (Synthetic image A and Cameraman) with a size of 256 × 256, where the noisy images are obtained by adding salt and pepper noise with a density of 0.03 to the clean data. We observe that, with these different nonconvex functions, the models can all remove the salt and pepper noise while preserving edges and contours in the restorations. However, the PSNR values listed in Table 1 show that in restoring the synthetic image, q = 0.2 yields the best performance, whereas for the Cameraman image q = 0.7 yields the best performance. In our opinion, this is because real images are not as strictly sparse in the TV domain as the synthetic sparse images.

The comparison experiment. In this subsection, we compare the proposed model with several state-of-the-art models in the denoising application.
From the numerical results, we have the following conclusions:

• The l₁-norm fidelity term is more effective for removing salt and pepper noise and outliers than the l₂-norm. We observe that the TVL2 and NTVL2 models perform well for Gaussian noise removal, but fail for salt and pepper noise: some impulsive points still remain in the restorations obtained by the TVL2 and NTVL2 models. The TVL1 and NTVL1 models, however, remove the salt and pepper noise successfully, and almost all impulsive points are removed from their restorations.

• Nonconvex regularization preserves edges and contours better than convex regularization. Comparing the restorations of the TVL2 and NTVL2 models, we observe that the NTVL2 model keeps sharp features better than the TVL2 model; even the impulsive points are more prominent in the restorations of the NTVL2 model. The NTVL1 and TVL1 models can remove the impulsive points due to the use of the l₁-norm fidelity, but the NTVL1 model clearly outperforms the TVL1 model in preserving sharp contours and details. For example, in the face and camera of the cameraman, the NTVL1 model restores more details and features than the TVL1 model. In addition, Table 3 shows that restorations by the NTVL1 model have slightly greater PSNR and MSSIM values than those of the TVL1 model, which further demonstrates that nonconvex regularization performs better than convex regularization.

• The proposed NTVL1 model has the largest PSNR and MSSIM values among these four models. This indicates that the combination of nonconvex regularization and l₁-norm fidelity is promising for restoring images contaminated by mixed Gaussian and salt and pepper noise.

Next, we apply the proposed model to several real images. The test images, shown in Fig. 5, are the "Pepper", "House", "Boats" and "Man" images with a size of 256 × 256, which contain many edges, contours, details, textures, inhomogeneous regions and features of low contrast. The noisy versions are obtained by adding mixed Gaussian noise with σ = 0.01 and salt and pepper noise with d = 0.03 to the clean data. The PSNR and MSSIM values of the noisy test images are shown in Table 2. Again, we compare the proposed model with the TVL2, NTVL2 and TVL1 models. Figure 10 shows the restoration results. One can clearly see that the TVL2 and NTVL2 models successfully remove Gaussian noise while preserving edges and contours, but they fail to remove salt and pepper noise, and some impulsive points remain in the restorations. TVL1 and NTVL1 can simultaneously remove Gaussian noise and salt and pepper noise while preserving the edges, but the proposed model preserves more image contours and details than TVL1. Table 3 and Figure 11 show that the NTVL1 model has the largest PSNR and MSSIM values, which further demonstrates that, among these four models, our model performs best in restoring images contaminated by mixed Gaussian and salt and pepper noise, due to the combination of nonconvex TV regularization and l₁-norm fidelity.

The comparison with TGV, NLTV, NRL1, ASWMF and BM3D.
In this experiment, we compare the proposed model with the well-known total generalized variation (TGV) 10, nonlocal total variation (NLTV) 13, adaptive switching weighted median filter (ASWMF) 49, nonconvex regularization model with l₁-norm fidelity (NRL1) 26, and block-matching and 3D filtering (BM3D) 50. TGV is a high-order variation regularization model, which can restore piecewise smooth regions well while preserving the edges. NLTV uses patch distance rather than point distance to measure the nonlocal similarity of the image, which can restore image details better than classic TV-based models. ASWMF is based on median filtering, which can remove salt and pepper noise well. NRL1 is a robust sparse recovery model with l₁-norm fidelity, which can restore sparse images, or images with a sparse representation on some bases, well. In the experiment, as in 26, the sensing matrix A in NRL1 (8) is chosen as a partial discrete cosine transformation matrix. BM3D is a hybrid model, which combines block-matching, 3D linear transform thresholding, and Wiener filtering; it is probably one of the best methods so far in image denoising applications. We use three images ("House", "Boats" and "Cameraman") with a size of 256 × 256 as the test data for the comparisons. All images are contaminated by mixed Gaussian noise with σ = 0.015 and salt and pepper noise with d = 0.04. The restoration results are shown in Fig. 12. To save space, we only show the result for House. We can see that TGV and NLTV can remove the Gaussian noise, but fail to remove the salt and pepper noise; NLTV obtains results with higher visual quality than TGV. ASWMF removes the salt and pepper noise well, but blurs the edges. Our model and NRL1 can successfully remove the mixed noise while preserving the image edges and contours well. BM3D has the best performance in terms of visual quality. The PSNR values are listed in Table 4 and Fig. 13. From the results, we note that the proposed model obtains results with higher PSNR values than TGV, NLTV, ASWMF and NRL1, while BM3D has the largest PSNR values among these six models. Although BM3D works better than the proposed model, we think that the proposed model is still worthy of consideration since it has lower computational complexity than BM3D and outperforms the other popular models.

The comparison on the Set5 and Set13 datasets. In the last experiment, to further show the effectiveness and adaptability of the proposed model, we test it on the Set5 and Set13 datasets 28. The test images in these two datasets are contaminated by mixed Gaussian noise and salt and pepper noise. Again, we compare the proposed model with five TV-based models: TVL2, NTVL2, TVL1, TGV and NLTV. The PSNR values of the results are shown in Tables 5 and 6. The second column in the tables is the noise level: the two numbers are the variance of the Gaussian noise and the density of the salt and pepper noise, respectively. Fig. 14 shows the line chart of the average PSNR on the two datasets. From the results, we observe that the proposed model achieves the best average PSNR among the compared TV-based models on both datasets.

Conclusions

This paper introduces a novel variational regularization model to restore images contaminated by salt and pepper noise. Different from the well-known TVL1 model, the proposed model uses a nonconvex total variation TV_q (0 < q < 1) as the regularizer, which enables the model to be more effective for edge-preserving.
A first-order algorithm based on ADMM, combined with an MM scheme and the proximity operator, is developed to solve this nonconvex minimization problem. In addition, a sufficient condition for the convergence of the proposed algorithm is provided. Numerical results demonstrate that the proposed model can effectively remove salt and pepper noise while preserving image edges and contours. Moreover, compared with the TVL2, NTVL2, TVL1, TGV, NLTV, NRL1 and ASWMF models, the proposed model shows the best performance in terms of PSNR and MSSIM values, yielding about 0.5 dB PSNR and 0.06 MSSIM improvements over all compared models. It should be pointed out that our nonconvex TV_q regularization may lead to undesired artificial staircase effects in the restorations. In the future, we will focus on solving this problem by introducing nonconvex high-order TV regularization. In addition, the ADMM algorithm used in this paper cannot guarantee finding the global optimum of the model. Therefore, another line of future research is to combine it with other algorithms, such as nature-inspired heuristic algorithms [51-54] and arithmetic optimization algorithms 55.

Data availability

The data that support the findings of this study are available on request from the corresponding author.
Different Families of Retrotransposons and DNA Transposons Are Actively Transcribed and May Have Transposed Recently in Physcomitrium (Physcomitrella) patens Similarly to other plant genomes of similar size, more than half of the genome of P. patens is covered by Transposable Elements (TEs). However, the composition and distribution of P. patens TEs is quite peculiar, with Long Terminal Repeat (LTR)-retrotransposons, which form patches of TE-rich regions interleaved with gene-rich regions, accounting for the vast majority of the TE space. We have already shown that RLG1, the most abundant TE in P. patens, is expressed in non-stressed protonema tissue. Here we present a non-targeted analysis of the TE expression based on RNA-Seq data and confirmed by qRT-PCR analyses that shows that, at least four LTR-RTs (RLG1, RLG2, RLC4 and tRLC5) and one DNA transposon (PpTc2) are expressed in P. patens. These TEs are expressed during development or under stresses that P. patens frequently faces, such as dehydratation/rehydratation stresses, suggesting that TEs have ample possibilities to transpose during P. patens life cycle. Indeed, an analysis of the TE polymorphisms among four different P. patens accessions shows that different TE families have recently transposed in this species and have generated genetic variability that may have phenotypic consequences, as a fraction of the TE polymorphisms are within or close to genes. Among the transcribed and mobile TEs, tRLC5 is particularly interesting as it concentrates in a single position per chromosome that could coincide with the centromere, and its expression is specifically induced in young sporophyte, where meiosis takes place. INTRODUCTION Mosses are one of the oldest groups of land plants, forming a sister clade with vascular plants (Leebens-Mack et al., 2019). Since the demonstration, in 1997, that gene targeting via homologous recombination was possible in Physcomitrium (Physcomitrella) patens (Schaefer and Zrÿd, 2001) this moss has become a leading plant model for answering essential questions in life sciences and in particular for understanding the evolution of biological processes of land plants. The draft of the P. patens genome was published in 2008 (Rensing et al., 2008), and a chromosome-scale assembly of the P. patens genome has been published (Lang et al., 2018), highlighting the similarities and differences with other plant genomes. Transposable Elements (TEs) account for the 57% of the 462,3 Mb of the assembled P. patens genome. This TE coverage is not very different from that of other plant genomes of similar size (Tenaillon et al., 2010). On the contrary, the distribution of TEs in P. patens is unusual as compared to other plants. TE-rich regions alternate with gene-rich regions all along the P. patens chromosomes (Lang et al., 2018) whereas in most plant genomes TEs accumulate in pericentromeric heterochromatic region on each chromosome. Interestingly, in spite of the general patchy TE distribution, a family of retrotransposons of the copia superfamily, RLC5 (comprised of full length, from now on RLC5, and truncated, tRLC5, elements), clusters at a single location in each chromosome that could correspond to the centromere (Lang et al., 2018). The TE-rich regions distributed all along the chromosomes are mainly composed of a single family of LTR-retrotransposons of the gypsy superfamily named RLG1 (Lang et al., 2018). 
RLG1 integrase contains a chromodomain, a type of protein domain that has been previously found To direct retrotransposon integration into heterochromatin (Gao et al., 2008), suggesting that RLG1 could target heterochromatic TE islands for integration. Although most TE copies are located in heterochromatic TE islands, gene-rich regions also contain some TE copies, with some of them that inserted recently and are polymorphic between the Gransden and Villersexel accessions (Lang et al., 2018). Moreover, the RLG1 retrotransposon is transcribed in P. patens protonema cells, suggesting that it can transpose during P. patens development (Vives et al., 2016;Lang et al., 2018). Although these data suggest that TE activity may have shaped the genome of P. patens and may continue to generate variability that potentially impact P. patens evolution, the global analysis of the capacity of P. patens TEs to be expressed and transpose is still lacking. Here we present an unbiased analysis of TE expression in P. patens based on RNA-Seq analyses and confirmed by qRT-PCR, that has allowed uncovering the developmentally or stress-related expression of different TE families, including class I (retrotransposons) and class II (DNA transposons) TEs. The data presented here reinforce the idea that TEs have shaped the genome of P. patens and show that they continue to drive its evolution. RNA-Seq Data Used RNA-Seq data were obtained from the P. patens Gene Atlas library (Perroud et al., 2018). In particular, we used RNA-Seq data obtained from stress-treated tissues (protoplasts, ammonium treatment, de-and rehydration, heat stress, and UV-B), different developmental stages, including protonemata in BCD, BCDA or in Knopp medium, protonemata in liquid and solid medium, gametophores, leaflets, and sporophytes (green and brown stages) and some hormonal treatments (Auxin, ABA or the Jasmonic acid precursor OPDA). A complete list of the data set used can be found in Supplementary Table 1. Transposable Element Transcriptome Assembly and Quantification All selected reads where trimmed by quality using BBduk (https://sourceforge.net/projects/bbmap/). Reads mapping to the chloroplast, mitochondria or rRNA were discarded from the analysis. The remaining reads were mapped to the transposable element annotation (Hiss et al., 2017) using Bowtie2 (Langmead, 2013). All the reads that mapped were extracted using Samtools ). These reads were assembled to contigs using Trinity (Grabherr et al., 2011). In order to characterize and filter the assemblies, we aligned them to the TE library described in (Lang et al., 2018) using BLASTn (Altschul et al., 1990) with an e-value cutoff of 10 −5 . For transcripts corresponding to class I TEs, we kept only those showing alignments longer than 1000 nt. Manual inspection allowed discarding assemblies corresponding to poorly annotated TEs (i.e. repetitive genes like Leucine-Rich Repeat genes), solo LTR or chimeric TEs. The potentially coding domains of the selected assemblies were identified by a CDDsearch, which allowed defining the orientation of the potentially expressed TEs (Marchler-Bauer et al., 2015). In order to estimate the levels of expression of the elements corresponding to the selected assemblies, RNA-Seq reads were mapped to the selected assemblies using bowtie2 and only the reads potentially corresponding to sense transcripts were kept. 
To quantify the expression the number of mapping reads was normalized by the length of the assembly (Kb) and the total amount of trimmed reads for each condition without aligning the reads to the genome. The normalized expression data of each transcript and the sequence of the selected transcripts can be found in Supplementary Table 1. Plant Material P. patens Gransden accession was used for all the samples used, with exception of the protonema vs sporophytes induction test where the P. patens Reute accession (Hiss et al., 2017) was used. Protonemata were fragmented and plated on BCDAT medium overlaid with a cellophane disk in long-day conditions (16 h light 15 W m −2 to 8 h darkness) at 24°C for 7 days. Samples were collected at day 7 after 4 h of light. All the samples were frozen in liquid nitrogen immediately after harvesting and were kept at −80°C. Protoplasts were isolated from 6 days old protonemal cultures after 30 min incubation in 1% driselase (Sigma D8037), 0.48 M mannitol. The suspension was filtered through two superposed sieves of 80 and 40 µm. Protoplasts where sedimented by lowspeed centrifugation (600g for 5 min) and washed in 0.48 M mannitol. The ABA treatment was performed as previously described (Perroud et al., 2018). Briefly, protonemal cultures were grown for 6 days on a cellophane disk on BCD medium. At day 6, the cellophane disks containing the protonemata tissues were transferred to BCD medium as control or to BCD containing 50 µM abscisic acid (Sigma A1049) for 24 h before harvesting. Sporophyte RNA was obtained from Reute P. patens. Seven days old regenerated tissue from two consecutive rounds of a week old grinded material grown on solid BCDAT medium covered with cellophane was used as starting material. Six similar size small dots of moss tissue were plated in a 25 mm height petri dish (WVR international) containing BCD solid medium. They were grown for 40 days at 30 µmol m −2 s −1 constant white light regime and 25°C in a Sanyo MLR chamber. Then, plants were transferred to a Sanyo MLR chamber at an 8-h to 16-h light-dark cycle, 30 µmol m −2 s −1 light intensity and 15°C for reproductive gametangia induction and sporophyte development. After 20 days of post-reproductive induction (dpri), plants were submerged overnight in water to increase fertilization. Sporophyte samples were collected at 45 dpri showing a green round shape developmental stage. Each sporophyte was dissected under a Leica MZ16 stereomicroscope. Gametophyte tissue was discarded as much as possible and sporophyte was quickly frozen in liquid nitrogen. 40 dissected sporophytes were collected and used for RNA extraction. RNA Extraction and cDNA Production Sporophyte RNA was obtained using the QIAGEN RNeasy mini kit following manufacturer's protocol. DNA was removed by treating the samples with Ambion ™ DNAseI kit (AM2222) following the manufacturer's protocol. For all other tissues, RNA extraction and DNAse treatment was done using the Maxwell ® RSC Plant Kit (Promega). 500 ng of total RNA was used to synthetize the first-strand cDNA using the SuperScript ™ III reverse transcriptase (Thermofisher). qRT-PCR Quantitative real-time PCR were done in 96-well plates using the Roche LightCycler II instrument. SYBER Green I Master Mix (Roche Applied Science), primers at 1 µm and 1/20 dilution of the cDNA obtained from the reverse transcription were used for the qRT PCR. Each sample was run per triplicate with negative reverse transcriptase and non-template controls. 
The amplification conditions were: 95°C for 5 min, followed by cycles of 95°C for 10 s, 56°C for 10 s, and 72°C for 10 s, ending with a melting curve to check the specificity of the qRT-PCR. The housekeeping gene adenine phosphoribosyl transferase (APT) (Schaefer et al., 2010) was used to normalize the qRT-PCR results. The primers used to check TE expression were designed using the Primer3Plus software (Untergasser et al., 2012). The list of primers used in this study can be found in Supplementary Table S2.

Detection of Potentially Expressed TE Copies in the Genome and LTR-Retrotransposon Age Estimation

The TE copies most similar to the RNA assemblies, potentially representing the expressed elements, were identified by aligning the assemblies to the genome using BLASTn with an e-value cutoff of 10⁻⁹⁰. However, in many cases the RNA assembly is obtained from reads potentially generated by the expression of similar but distinct copies, and therefore this approach may not be suitable. In order to identify the subset of elements potentially expressed in those cases, we also searched for elements showing at least 80% similarity over at least 80% of the assembly sequence. In those cases, we estimated the age of the subset of elements most similar to the assembled transcript and compared it to the age of all the complete elements of the same family annotated in the genome. To do that, we first estimated the Kimura two-parameter distance (Kimura, 1980) between the two Long Terminal Repeats (LTRs) and then estimated the age using the formula T = K/(2r), where T is the time since divergence, K is the divergence, and r is the substitution rate (Bowen and McDonald, 2001), taking an estimated substitution rate of 9 × 10⁻⁹ (Rensing et al., 2007).

Transposable Element Polymorphisms Annotation

The publicly available DNA-seq resequencing data of three accessions of P. patens (Kaskaskia, SRX2234698; Reute, SRX1528135; and Villersexel, SRX030894) were used to look for TE polymorphisms with respect to the Gransden reference genome. Paired-end reads were mapped to the reference genome using BWA-SW (Li and Durbin, 2009). TE insertions were detected using PoPoolationTE2 (Kofler et al., 2016) in the separate mode. For the analysis, we kept only the non-reference insertions (insertions absent from the Gransden reference genome) predicted with a zygosity of at least 0.7. To establish the distance of these insertions to the closest genes, the polymorphic TE positions were intersected with those of the annotated genes using the bedtools closestBed function (Quinlan and Hall, 2010).

Phylogenetic Analyses

To look for sequences similar to P. patens TEs in other genomes, we first performed a blastn search against the complete NCBI nucleotide database. As this only retrieved sequences with significant similarity to the RLG1 element, we complemented it with a blastx search of the P. patens TEs, first against the complete NCBI non-redundant protein sequence database excluding P. patens and subsequently, in order to increase the chance of detecting plant sequences, against the NCBI green plant database (taxid:33090). We performed the tblastx search with default parameters and a maximum of 250 target sequences. The most similar sequence for each species was chosen as representative of that species. All the protein sequences were aligned using Mafft (Katoh and Standley, 2013) and trimmed using TrimAl (Capella-Gutiérrez et al., 2009).
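As an aside on the LTR-divergence dating formula given above, a worked instance may help fix orders of magnitude; the divergence value K used below is illustrative only and is not taken from this study.

\[ T \;=\; \frac{K}{2r} \;=\; \frac{0.018}{2\,\times\,9\times10^{-9}\ \mathrm{substitutions\ site^{-1}\ year^{-1}}} \;=\; 1\times10^{6}\ \mathrm{years} \]

That is, under the assumed substitution rate, a pair of LTRs differing by 1.8% would point to an insertion roughly one million years old.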
A phylogenetic tree was constructed using FastTree (Price et al., 2010) and visualized in iTOL (Letunic and Bork, 2019). RESULTS A New Approach to Measure the Expression of P. patens Transposable Elements More than half of the P. patens genome (57%) is occupied by TEs, a figure that is similar to that of other genomes of similar size (Tenaillon et al., 2010). As an example, the P. patens TE content is similar to that of two other genomes of similar sizes and for which the TE content has been annotated using the same REPET package (Flutre et al., 2011), such as rice (46.6%) (Ou et al., 2019) and melon (45,2%) (Castanera et al., 2020). However, P. patens has a very different TE composition as compared with these two genomes. Indeed, class II TEs account for 21.06% of the rice genome and 15.42% of the melon genome, in P. patens they only represent 6% of the genome ( Figure 1). More strikingly, a single retrotransposon family, RLG1 accounts for almost half (47.44%) of the genome space occupied by class I elements (Lang et al., 2018). RLG1 is actively expressed in nonstressed protonema cells, and it may have transposed recently during P. patens evolution, as some of its copies are polymorphic between P. patens Gransden and Villersexel ecotypes (Vives et al., 2016;Lang et al., 2018). RLG1 copies are concentrated in TE-rich heterochromatic islands and RLG1 transposition has therefore a limited capacity to induce gene variability. In order to explore the possibility that other TE families, apart from RLG1, could be expressed in particular developmental stages or stress situations, and could therefore generate new variability in gene regions, we took advantage of the large collection of P. patens RNA-Seq data available from the recently published P. patens gene atlas (Perroud et al., 2018), which includes data from different developmental stages and stress conditions. In addition of complete TEs, eukaryote genomes, and in particular those of plants, usually contain large amounts of defective and truncated elements that may be included in transcripts that are not the result of a genuine TE expression (Anderson et al., 2019). These transcripts can be sense or antisense with respect to the TE orientation and may in some cases participate in TE regulation, but cannot be considered as productive TE transcripts potentially involved in transposition. In P. patens, as it is common in eukaryote genomes and in particular in plants (Hoen et al., 2015;Bennetzen and Park, 2018), the fragmented and degenerated copies of TEs outnumber the complete and potentially functional copies. As a consequence, a quantification of the level of expression based on the number of RNA-Seq reads mapping to all TE-related sequences can lead to an overestimation of the expression of the different TE families. We have therefore decided to follow a strategy based on the detection of potentially complete transcripts obtained from an assembly of RNA-Seq reads, similar to what has previously been described for the analysis of the expression of human TEs (Guffanti et al., 2018). We used Trinity RNA-seq de novo assembly (Grabherr et al., 2011) to assemble reads showing similarity to annotated TEs (Lang et al., 2018). The 696 assemblies obtained were blasted back to the TE annotation to classify them. The vast majority (94%) of these 696 assemblies showed similarity to LTR-RT annotations, and an important fraction of them (72%) were short (less than 1000 nt) and corresponded to fragments of LTR-RTs, such as the LTRs. 
As an example, the assembly TRINITY_DN331_c0_g1 showed high sequence similarity to the LTR of RLC5 elements. A search for the genomic sequence most similar to that of the assembly identified an RLC5 solo-LTR located in the downstream proximal region of the Pp3c4_32070 gene annotation (Supplementary Figure 1). Interestingly, an analysis of the expression data available from the P. patens gene atlas (Perroud et al., 2018) showed that both the RLC5 solo-LTR and the annotated Pp3c4_32070 gene are specifically induced in gametophores treated with ABA, which strongly suggests that this solo-LTR is expressed as a consequence of read-through transcription from the gene promoter.

FIGURE 1 | TE content of the P. patens, rice, and melon genomes. Genome coverage of class I and class II TEs is shown as red and blue boxes, respectively.

In order to eliminate assemblies corresponding to the expression of fragments of LTR-RTs, and taking into account that typical complete LTR-RTs are several kb long, we discarded all the LTR-RT assemblies shorter than 1,000 nt. The remaining 172 transcripts were analyzed for the potential presence of regions coding for the typical class I and class II TE protein domains, and their alignments to annotated TE sequences were manually inspected to discard those showing similarities to poorly annotated transposable elements or to truncated or chimeric elements. As an example, Supplementary Figure 2 shows the analysis of TRINITY_DN99_c0_g1_i5, which corresponds to a complex region containing different degenerated TE fragments that seem to be transcribed as a single transcription unit. Among the 22 assemblies retained, some corresponded to the antisense strand of annotated TEs. After manual inspection, some of these were shown to correspond to LINE elements (see Supplementary Figure 3 for an example). These transcripts may participate in the control (e.g., silencing) of TE expression but cannot be considered products of genuine TE transcription. The assemblies corresponding to potential antisense transcripts were discarded. An analysis of the remaining assemblies showed that they corresponded to 9 different potentially complete annotated TEs, which were selected for further analysis.

Both Retrotransposons and DNA Transposons Are Expressed in P. patens

The analysis of the transcript assemblies showed that they correspond to 9 different P. patens TEs: two LTR retrotransposons of the gypsy superfamily (RLG1 and RLG2); two of the copia superfamily (RLC4 and RLC5), one of them potentially corresponding to the two different forms of RLC5, the full-length and the truncated form (RLC5/tRLC5); and two different DNA TEs belonging to the Mariner superfamily that were not properly annotated in the P. patens TE annotation (Lang et al., 2018) but had been previously identified as PpTc1 and PpTc2 (Liu and Yang, 2014). In addition, manual inspection of the alignments of the transcript assemblies with the annotated TEs allowed us to refine the annotation of two elements annotated as unclassified non-LTR retrotransposons, which we could identify as potentially expressed complete LINEs (LINE-1 and LINE-2). The RNA-Seq reads obtained from the RNAs generated by the expression of a TE family show a certain degree of sequence variability and, therefore, not all of them are identical to the assembly that represents the complete RNA of the family. On the other hand, this assembly is in most cases not identical to any of the TE copies of that particular TE family.
This suggests that, for most TE families, different elements are concomitantly expressed and that the RNA assembly should be considered as a consensus of the expressed RNAs. These results suggest that different families of both retrotransposons and DNA transposons are transcribed in P. patens. P. patens Contains TEs Closely Related to Fungal TEs A preliminary characterization of the two Mariner-like elements found to be potentially expressed suggested that these elements were different from other plant Mariner-like elements, they being more closely related to fungal TEs of the Mariner superfamily. As this result was somehow surprising, we searched for sequences potentially corresponding to transposases of similar elements in the phylogenetically related liverwort Marchantia polymorpha and in well-characterized dicotyledonous and monocotyledonous plants such as Arabidopsis and rice. These searches did not retrieve significant hits, suggesting that these genomes do not contain sequences related to Mariner-like elements similar to those found in P. patens. A phylogenetic analysis of the potential transposases of Mariner-like sequences present in public databases more similar to those of the two Mariner-like elements found in P. patens, and including other Mariner-like sequences from plants, shows that the P. patens elements are closely related to elements found in fungal genomes, and are not related to Marchantia polymorpha or other plant sequences (Figure 2). These results may indicate a horizontal transfer of these TEs from fungi. In order to explore whether other TEs may have also experienced a similar phenomenon, we extended the phylogenetic analysis performed for the two Mariner-like elements to the other P. patens TE families here described. These analyses showed that, in contrast to what happens for the two Mariner-like elements, databases contain plant sequences with significant similarity to the rest of TE families here described. However, the phylogenetic analyses performed show that whereas the trees obtained for P. patens RLG2, RLC4, LINE-1 and LINE-2 retrotransposons are congruent with the phylogenetic relationships of the species, this is less obvious for RLG1 and tRLC5 ( Supplementary Figures 4-8). This may suggest that, in addition to the two Mariner-like elements, other P. patens TEs may have been transferred horizontally from fungal species. Developmental and Stress-Related Expression of P. patens TEs The availability of RNA-Seq data from different developmental stages and stress conditions (Perroud et al., 2018) allowed us to perform and unbiased analysis of the patterns of expression of the different transcribed P. patens TEs. We have previously shown that RLG1 is expressed in non-stressed protonema cells and its expression is reduced in protonema-derived protoplasts. RLG1 seems thus to be repressed by stress, in clear contrast with the stress-related expression of most TEs, as already discussed (Vives et al., 2016;Lang et al., 2018). Here we confirm that RLG1 is expressed in protonema, its expression increasing as the protonema develops and decreasing when gametophores develop, and is repressed in protoplasts (Figures 3 and 4). On the other hand, RLG1 does not seem to be expressed in other tissues and it is repressed by several of the stresses analyzed, in particular by heat shock and UV-B light (Figures 3 and 4). We confirmed the RLG1 expression in protonema cells and its repression in protonema-derived protoplasts by qRT-PCR (Supplementary Figure 9). 
A comparison of the RLG1 assembled RNA with all the RLG1 genomic copies suggests that only a subset of the RLG1 elements is expressed ( Table 1). An analysis of the putative ages of these elements, by analyzing the sequence differences between the two LTRs of each element, suggests that only the youngest RLG1 elements are transcribed ( Figure 5A). RLG1 is the TE expressed at the highest level in P. patens but, as already mentioned, we show here that other TEs are also expressed during P. patens development or under particular environmental conditions. The second Gypsy-like LTR-RT family found to be expressed, RLG2, is also expressed in protonema cells, and its expression increases in gametophores ( Figure 3). On the other hand, the expression of RLG2 is strongly induced by ABA and heat stress in protonema, and repressed when gametophores are submitted to dehydratation and rehydratation ( Figure 4). We confirmed the induction of RLG2 expression by ABA by qRT-PCR analyses (Supplementary Figure 10). Similarly, to RLG1, the comparison of the RLG2 assembled RNA with the RLG2 genomic copies shows that only the youngest RLG2 elements are transcribed in the conditions tested (Table 1 and Figure 5B). The two copia retrotransposon families found here to be expressed, show low levels of expression during P. patens development. RLC4 seems to be particularly expressed in gametophores, whereas tRLC5 seems to be more expressed in sporophytes. RLC4 expression seems to be repressed in most stress conditions, although the levels of expression are very low in all cases. tRLC5 is a particularly interesting family of TEs, as tRLC5 copies have been proposed to mark the centromere and participate in the centromeric function (Lang et al., 2018). The data presented suggest that tRLC5 may be particularly expressed in green sporophytes (Figure 3). In order to confirm this pattern of expression we performed qRT-PCR experiments. As the Gransden ecotype produces few sporophytes, which makes it difficult to analyze sporophyte-specific expression, we used Reute tissues, as this ecotype produces many more sporophytes in laboratory conditions (Hiss et al., 2017). This analysis confirmed that tRLC5 expression is induced in young sporophytes (Supplementary Figure 11). A comparison of the tRLC5 assembled RNA with the tRLC5 genomic copies suggests that only the youngest tRLC5 elements are transcribed (Table 1 and Figure 5C). LINE-1 seems to be expressed at a very low level in all conditions and we have not detected any relevant change in expression upon stress (not shown). On the other hand, LINE-2 is also expressed at a low level in most tissues but shows an increased expression in sporophytes and germinating spores (Figure 3). A comparison of the LINE-2 assembled RNA with the genomic copies suggests that the expressed LINE-2 is located in the close vicinity of an annotated gene (Pp3c16_3270) and the mapping of the RNA seq reads to this region suggests that LINE-2 could be expressed as the result of a readthrough transcription of this gene (Supplementary Figure 12). Indeed, although there are some minor differences, the patterns of expression of Pp3c16_3270 and LINE-2 during development or under particular stress conditions are mostly coincident (not shown). Finally, of the two Mariner-like elements analyzed, only PpTc1 is expressed in non-stressed tissues, with a particularly high expression in gametophores and leaflets (Figure 4), but both PpTc1 and PpTc2 are strongly induced by stress. 
PpTc1 expression is particularly induced by heat stress, whereas PpTc2 is only expressed after ABA induction or after dehydration or rehydration of gametophores (Figure 4). A comparison of the two Mariner-like assembled RNAs with their genomic copies identified the two elements potentially transcribed. Both elements are located close to a gene, and the analysis of the patterns of expression of both genes provides information on the possible expression of the two Mariner-like elements. In the case of PpTc1 the TE is only expressed in the conditions where the gene (Pp3c20_23510V3.1) is expressed (Supplementary Figure 13), which suggests that the expression detected for PpTc1 could be the result of read-through transcription from the neighboring gene. On the contrary, the expression of PpTc2 and the gene located nearby (Pp3c9_17220V3.1) do not overlap. Indeed, only PpTc2, and not the gene located nearby, is expressed in gametophores submitted to dehydration and rehydration and its expression is strongly induced in protonema treated with ABA which is not the case for the close by gene (Supplementary Figure 14). We confirmed the induction of PpTc2 by ABA by qRT-PCR (Supplementary Figure 15). Therefore, whereas we cannot rule out the possibility that PpTc1 expression could be the result of a readthrough expression from a neighboring gene, the transcript corresponding to PpTc2 seems to be the result of a genuine TE transcription. Moreover, the sequence variability of the RNA-Seq reads corresponding to PpTc2, suggests that other PpTc2 elements may also be expressed. Indeed, although the PpTc2 copy located in the vicinity of the Pp3c9_17220V3.1 gene is almost identical to the RNA assembly (99.4%), other PpTc2 copies also show high similarity to the assembly (Table 1) and may also be expressed. TE Mobility During Recent P. patens Evolution The transcription of a copy of the TE in case of retrotransposons, and/or of the proteins necessary to mobilize the element, is the first and obligatory step of TE transposition. Therefore, the transcription of the different TEs reported here suggests that different TEs may have recently moved during P. patens evolution. We have already reported that this is indeed the case for RLG1, as RLG1 elements are polymorphic between the Gransden and Villersexel accessions. Here we decided to expand the analysis for possible insertion polymorphisms to all P. patens TEs using data from 4 different P. patens accessions, Reute, Kaskaskia, Villersexel, and the one from which the reference genome has been obtained, Gransden. To this end we used PopoolationTE2 to look for TE polymorphisms among these accessions using paired-end short-read resequencing data from Reute, Kaskaskia, Villersexel, that we mapped to the Gransden reference genome. We found an important number of RLG1 polymorphisms in the three analyzed accessions with respect to Gransden ( Table 2). The number of polymorphisms in Reute was much smaller than in the two other accessions, which is in accordance with the close genetic relationship between Gransden and Reute (Hiss et al., 2017). Interestingly, in addition to polymorphisms related to RLG1 elements, we also detected polymorphic insertions of RLG2, RLG3, tRLC5/RLC5 and PpTc1 (Table 2). In general, the number of polymorphisms is higher in Villersexel and smaller in Reute, as seen for RLG1. The number of polymorphic insertions was particularly high for RLG3 and tRLC5/RLC5. 
In order to start analyzing the potential impact of the polymorphic insertions described here in the phenotypic differences between the four P. patens ecotypes, we analyzed the locations of the polymorphic TE insertions (Supplementary Table 3) and found that 20% of them are located close to genes, with potential consequences on their coding capacity or expression ( Table 3). The Challenging Analysis of TE Transcription Different programs to measure TE transcription from NGS data exist (Jin et al., 2015;Lerat et al., 2017). These programs usually rely on mapping RNA-Seq reads to a TE annotation or a consensus of a TE family. Although these programs can be very useful for certain genomes and particular TE families, they may not be adequate in others. Indeed, most eukaryote genomes, and in particular those of plants, contain an important number of fragmented or degenerated TE copies in addition to full copies of TEs. As the TE fragments can also be included in transcripts, and outnumber the complete copies (Hoen et al., 2015;Bennetzen and Park, 2018), an estimation of the expression of TEs that would not discriminate between transcripts corresponding to TE fragments or to complete elements will overestimate the expression of certain families and will lead to erroneous results. This is what we came across when starting to analyze the expression of P. patens TEs. As an example, as already explained, among the short assemblies discarded there was one (TRINITY_DN331_c0_g1) corresponding to a RLC5 solo-LTR. An analysis of the RNA-Seq reads matching this assembly showed their specific accumulation in ABA-treated protonema cells and in gametophores under dehydration/ rehydration stress. The results presented here show that the RLC5 solo-LTR is expressed as the result of read-through transcription from the ABA-induced Pp3c4_32070 gene located just upstream of it. An analysis of RLC5 expression based solely on mapping RNA-Seq reads to the TE annotation would have led to the wrong conclusion that RLC5 is induced by ABA and drought stresses. On the contrary, the approach described here, which is similar to the one previously described for the analysis of the expression of human TEs (Guffanti et al., 2018), allows for the assessment of the expression of RNAs corresponding to complete elements potentially resulting from genuine TE transcription. Different Retrotransposon and DNA Transposon Families Are Transcribed in P. patens The results presented here show that at least four LTR-RTs (RLG1, RLG2, RLC4 and tRLC5) and one DNA transposon (PpTc2) are expressed in P. patens. Among those, RLG1 and RLG2 are highly expressed during normal P. patens development, RLG1 being expressed mainly in protonema tissues whereas the expression of RLG2 is increased in gametophores. RLC4 seems also to be expressed in gametophores, albeit at a low level, and tRLC5 is expressed in young sporophytes. Therefore, during P. patens development, there is an important expression of different transposons. In addition, although RLG1 seems to be repressed by most stresses, different TEs are activated by stress. RLG2 is overexpressed under heat shock and ABA treatment, and PpTc2 is induced by ABA and by dehydration and rehydration treatments. Mosses are known to be tolerant to dehydration and rehydration (Cuming et al., 2007;Cui et al., 2012), which, together with the associated changes of temperature, are part of their natural lifestyle. 
The dehydration/rehydration stresses and the ABA treatment, known to mediate the responses to those stresses (Cuming et al., 2007), and to some extent heat stress, could thus be considered as part of the normal development of P. patens or, at least, frequent stresses P. patens has to face. Recent Mobilization of P. patens TEs The expression of different TEs in normal P. patens growing conditions could allow the mobilization of TEs and the generation of genetic variability that could potentially affect gene expression/function in this haploid species. The analysis presented here shows that many TE insertions are polymorphic between different P. patens accessions. Indeed, we have detected an important number of polymorphic insertions of RLG1, RLG3 and tRLC5/RLC5 elements. The high number of polymorphisms related to RLG3 is intriguing as we did not detected expression. RLG3 may therefore be expressed under different environmental conditions not tested here. Alternatively, RLG3 may have lost the ability to transcribe and transpose recently during evolution. In all cases, the highest number of polymorphisms with respect to the Gransden accession is found in Villersexel and the lowest in Reute, which is in accordance with the number of SNPs these accessions show with respect to the Gransden reference genome (Lang et al., 2018). We have also found a small number of polymorphic insertions of RLG2, RLC4 and PpTc1. The number of detected TE polymorphisms with respect to the Gransden reference genome in these accessions is probably underestimated, as none of the programs available to look for TE polymorphisms, including the one used here, can detect polymorphic TE insertion sitting in repetitive regions (Vendrell-Mir et al., 2019). In any case, the polymorphisms detected here illustrate the potential of TEs to generate genetic variability in P. patens. Moreover, an important fraction of the polymorphisms detected are within or close (less than 1 Kb) to a gene, which suggests that TE movement may have impacted gene coding or gene regulation, and therefore may have contributed to the phenotypic variability of P. patens. The Heterochromatic tRLC5 Elements Are Transcribed in Sporophytes In addition to generate new alleles or new gene regulations, TEs are also involved in chromosome structure and function. In plants, TEs have been shown to provide origins of replication in heterochromatic regions (Sequeira-Mendes et al., 2019), and are frequently part of centromeres (Lermontova et al., 2015). Different retrotransposon have been found to specifically accumulate in the centromeres of the green algae Coccomyxa subellipsoidea (Blanc et al., 2012) or the liverwort M. polymorpha (Diop et al., 2020) were they could support centromere function. Interestingly, tRLC5 was previously proposed to mark the centromere and participate to the centromere function in P. patens (Lang et al., 2018). We show here that tRLC5 is transcribed in P. patens. In spite of its heterochromatic nature, centromere sequences have been shown to be transcribed in yeast, animals and plants and this transcription seems vital for the maintenance of the centromere chromatin identity and in several aspects of centromere function (Chan and Wong, 2012;Perea-Resa and Blower, 2018). Young sporophytes are a key developmental stage of P. patens where meiosis takes place (Charlot et al., 2014). 
We show here that most meiosis-specific genes (Mercier et al., 2015) are highly induced in green sporophytes (Supplementary Figure 16), the developmental stage where tRLC5 is expressed. It has been proposed that demethylation of centromeric DNA during meiosis may allow the transcription of centromeric sequences, which could serve as markers recognized by other factors and allow centromere assembly (Liu et al., 2015). The expression of tRLC5 in the centromere, at the moment meiosis takes place, could thus play a role in centromere assembly and function during this key process. On the other hand, the transcription pattern of tRLC5, specifically activated in young sporophytes, is reminiscent of the expression of the Athila retrotransposon of Arabidopsis, which also concentrates in the centromere and is expressed in the pollen grain (Keith Slotkin, 2010). It has been proposed that TE expression in the vegetative nurse cells of the pollen may allow re-establishing its silencing in the sperm cells (Martıńez et al., 2016). The expression of tRLC5 in the sporophyte could also fulfill a similar role. Further experimental work will be required to explore any of these two non-exclusive hypotheses. Are Some of the P. patens TE Families the Result of a Horizontal Transfer from Fungal Species? In addition to the characterization of the transcriptional activity of P. patens TEs, the work presented also allowed us to better characterize two Mariner-like elements. These P. patens elements, that are transcribed and mobile, are more closely related to fungal elements than to any Mariner-like element found in plants, suggesting that they may have been horizontally transmitted from fungi. Interestingly, another P. patens Mariner-like element already described was also shown to be closely similar to fungal TEs (Castanera et al., 2016), which suggest that the horizontal transfer of Mariner-like elements from fungi to P. patens may have been a frequent event. The Mariner TE family is ubiquitous in the genomes of virtually all extant eukaryotic species and seem to be particularly prone to horizontal transfer, probably because they contain a transcriptionally promiscuous "blurry" promoter (Palazzo et al., 2019). Early land plants were aided by mutualistic interactions with fungi and these symbiotic interactions with fungi have been maintained in some bryophytes such as M. polymorpha (Humphreys et al., 2010). Surprisingly, although P. patens contains the strigolactone signaling pathway, which induce mycorrhizal signaling, it has not been shown to establish mycorrhizal interactions (Delaux et al., 2013;Field and Pressel, 2018;Rensing, 2018). The potential horizontal transfer of Marinerlike elements could be a remnant of this lost interaction, although an ulterior close contact between P. patens and different fungi may have also be at the origin of these horizontal transfers. It is interesting to note that P. patens is the only plant that shares with fungi the traces of past infections of giant virus relatives (Maumus et al., 2014), which also highlights the close relationship with fungi that P. patens has maintained during its recent evolution. CONCLUSION In summary, the results presented here show that TEs have an important activity in P. patens, with the transcriptional activation of different TE families in normal P. patens growing conditions, suggesting that TEs may have shaped P. 
patens genome and may continue to contribute to its function, including adaptation to stresses and intraspecific genetic variability.

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the article/Supplementary Material.
Future accelerator projects: new physics at the energy frontier

High-energy colliders provide direct access to the energy frontier, allowing searches for new physics at scales as high as the machine's center-of-mass energy, precision measurements of the Standard Model (SM) parameters, including those related to the flavor sector, and determination of the Higgs boson properties and their connection to electroweak symmetry breaking. Each proposed future collider option has its own specific science goals and capabilities, depending on the designed running energy (or energies), amongst other parameters. In this paper, an overview of the discovery potential of future circular and linear colliders is presented. Results from searches for beyond the Standard Model (BSM) phenomena at proton–proton, proton–electron, electron–positron, and muon–antimuon colliders are summarized.

Introduction

Particle physics advances the fundamental description of "nature" at the smallest scales, leading and influencing global scientific efforts. The Large Hadron Collider (LHC) endeavor remains by far the major focus of the efforts and engagement of the particle physics community. During Run 1 and Run 2 of the LHC, the ATLAS and CMS experiments have produced first observations of fundamental processes, including the discovery of the Higgs boson in 2012 and the determination of its properties and couplings, and hundreds of Standard Model (SM) measurements and searches for new physics. Hints of lepton flavor universality violation (LFV) have been reported by the dedicated LHCb experiment, which also provides improved knowledge of quark mixing matrix parameters, such as the CKM angle, γ, and the discovery of many new hadronic states. Heavy ion studies are also pursued at the LHC by the specialized ALICE experiment, as well as by ATLAS and CMS. During the almost 12 years of LHC operation, there have been many experimental and theoretical advancements: it is remarkable that the precision of SM measurements and the sensitivity reach of new physics searches have exceeded, in some cases by far, the pre-LHC era expectations.
The experimental success of the LHC is certainly a result of the excellent performance of the detectors and the accelerator complex, and its high luminosity upgrade, the HL-LHC, will maximize its potential.Scheduled to run until 2038-2040, the HL-LHC program will allow the general purpose detectors to collect an integrated luminosity of 3 ab −1 of proton-proton collisions at a center-of-mass energy (or s √ ) of 14 TeV.The determination of the Higgs boson properties, and their connection to electroweak symmetry breaking (EWSB), is one of the primary science goals of ATLAS and CMS [1].Outstanding opportunities will emerge for measurements of fundamental importance, such as the first direct constraints on the Higgs trilinear self-coupling and its natural width.SM processes and parameters, including those related to the flavor sector, will be tested by performing measurements with unprecedented precision [2], such as the production of pairs or triplets of EW gauge bosons, the effective weak mixing angle, and the masses of the top-quark and W-boson.On the latter, it is noticeable that the Large Hadron electron Collider (LHeC) [3], designed to collide an intense electron beam produced by an energy recovery linac with the HL-LHC proton beam, would allow to reduce the uncertainty on this fundamental parameter to 2 MeV.The search for physics beyond the SM (BSM) will remain the main driver of the exploration program at the HL-LHC [4].In most BSM scenarios, the HL-LHC will increase the present reach in mass and coupling by at least 20%-50% and will allow searches for, among others, additional Higgs bosons in EWSB scenarios, new resonances, candidates for dark matter (DM), and hidden sectors to be performed.The LHCb program will continue to focus on heavy-flavor physics throughout the HL-LHC phase [5], possibly confirming the anomalies in the lepton-flavor sector and, as such, establishing LFV and opening a new era of discoveries.LHCb will also search for feebly interacting new particles arising in hidden sectors, complementing ATLAS, CMS, and other beyond-collider experiments in these searches. The conclusion of the European Strategy for Particle Physics update (ESPPU) and the ongoing Snowmass and Particle Physics Project Prioritization Panel (P5) process presents an opportunity to revisit the landscape of future accelerators beyond the HL-LHC.Several lepton and hadron collider options have been considered, each with merits and challenges, and emphasis has been given on attainable physics targets as well as technological requirements and drivers. This review focuses on the proposed accelerators' potential for discovering physics beyond the SM, and it is primarily based on feasibility studies reported in the literature at the time of the ESPPU process.If they were available at the time of writing, updates on searches and measurements submitted through the Snowmass process have been included as well.Considerations regarding the status of the proposed machines are also presented. Overview of proposed collider options Extensive activities have been carried out worldwide to assess the future of collider experiments beyond the HL-LHC.A summary of the accelerator-based projects proposed by the community in recent years and considered in this paper is presented in Table 1. 
TABLE 1 | Collider options whose physics case is discussed in this paper. Configurations and parameters are those proposed at the time of writing (see Section 10 for more details). In case multiple stages are foreseen for a specific project, all center-of-mass energies and benchmark luminosity values are reported. In the text and figures, the ILC stages are indicated as ILC 250, ILC 500, and ILC 1000; CLIC's three phases are referred to as CLIC 380, CLIC 1500, and CLIC 3000. A similar nomenclature is adopted for FCC-ee and the muon collider (MuC) where relevant. The integrated luminosity is per interaction point (IP), except for the FCC-ee and the CepC, where it corresponds to two IPs.

Electron-positron colliders (linear or circular) with O(100 GeV) center-of-mass energy, such as the Future Circular Collider e+e−, FCC-ee [6], the International Linear Collider, ILC [64], and the Compact Linear Collider, CLIC [7], primarily represent Higgs factories providing unprecedented precision measurements of the Higgs boson properties. At the FCC-ee, this would come in combination with improvements in the knowledge of the SM couplings from the planned Tera-Z facility, also enabling an interesting flavor physics program. At linear colliders, a significant advance in precision is also expected thanks to the available beam polarizations. The e+e− machines are also discovery machines, with high center-of-mass energy options such as CLIC 3000 extending the sensitivity to high-mass phenomena and, in some areas, yielding a reach comparable to high-energy pp colliders. For similar center-of-mass energies and integrated luminosities, the CepC [8] physics program and potential are comparable to those of the FCC-ee, chosen here as a representative case.

The future 100 TeV center-of-mass energy proton-proton collider at the FCC, FCC-hh [9], expected to run after the completion of the FCC-ee stages, offers several unique possibilities for a breakthrough in particle physics. Aiming to collect integrated luminosities of up to 20 ab⁻¹ per experiment in 10 years of operation, FCC-hh will allow measurements of the Higgs self-coupling at the few percent level, measurements of the quartic Higgs self-coupling, and direct searches for new physics at the highest energy, with the possibility to extend by an order of magnitude the LHC sensitivity above the EWSB scale. Precise differential cross-section measurements of high-transverse-momentum Higgs boson production and measurements of rare processes will be possible due to the large datasets. With the addition of an energy recovery electron linac of 60 GeV, electron-proton interactions could be explored [10], providing additional inputs to achieve the ultimate Higgs physics precision at the FCC-hh, QCD precision measurements, and searches for new physics. The FCC hadron complex would also allow for a dedicated heavy ion program, with lead-lead and proton-lead, and electron-lead collisions possible at the FCC-hh and FCC-eh, respectively.
1An alternative idea for a pp collider considers the possibility of increasing the energy of the LHC machine up to 27-30 TeV, turning the current accelerator into a high-energy machine (HE-LHC [12]).This would allow utilizing the current tunnel and the entire CERN infrastructure with future magnet technologies to collect large datasets at s √ at least two times the one of the HL-LHC.A μ + μ − collider, MuC [13], could give the opportunity to achieve a multi-TeV energy domain beyond the reach of the e + e − colliders and within a much shorter circular tunnel than for a pp collider.The picture emerging from studies of the past years is that a 10 TeV muon collider could combine the advantages of pp and e + e − colliders due to the large s √ available for direct exploration and to the achievable accuracy for precise measurements of the SM.By exploiting the copious rate of vector-boson fusion and vector-boson scattering processes, a MuC provides the opportunity to probe details of the EWSB mechanism.Muonphilic new physics scenarios, possibly explaining the g − 2 [14] and B-physics anomalies [15], are additional natural targets.Because a muon production and cooling complex could be used at all energies, and muon acceleration proceeds through a sequence of rings, a μ + μ − collider can be built in stages, with 3 TeV center-ofmass energy foreseen as the first stage. Future collider concepts [16] not explicitly listed previously are also being considered within the Snowmass/P5 process.Among those, the C 3 linear collider project [17] could fit on the Fermilab site and would have a similar potential to that of the aforementioned O (100 GeV) machines, with a starting center-of-mass energy of 250 GeV to be potentially increased to 550 GeV and to 3 TeV by extending the accelerator's length.Other options based on highgradient superconducting radio frequency (SRF) technology, such as the compact SRF Higgs-Energy LEptoN (HELEN) linear collider, are also being investigated [18].In terms of circular machines, a 16-km circumference circular e + e − collider with center-of-mass energy between 90 GeV and 240 GeV is being examined with Fermilab as a potential site, as well as a possible proton-proton collider with center-of-mass energy between 24 and 27 TeV located in the same tunnel, with a reach similar to that of a HE-LHC.Finally, a Super proton-proton Collider (SppC [19]) is proposed as a machine located in China, running after the CepC, and using the same tunnel complex and infrastructure in a multi-staged approach similar to that envisaged for the CERN FCC.More details are given in Section 10. The planning spans a 30-year horizon, as major acceleratorbased projects require developments on that timescale.Comparing the physics potentials, the required technology and prospects for its availability, and the cost-to-benefit ratio of the proposed machines is extremely challenging.Each collider program, to varying degrees and dependent in part on the center-of-mass energy considered, gives good coverage of almost all fundamental physics questions.They also have unique synergies with the neutrino and precision frontiers, as well as with astrophysics and cosmological investigations ongoing or planned during the next decades.An overview of those complementarities is, however, beyond the scope of this paper. 
Searches at colliders: physics landscape The SM has been proven very successful in describing elementary particles and their interactions.It has been validated extensively through precision experiments, and the discovery of the Higgs boson has certainly been a major milestone in this respect.However, there are a number of shortcomings and several open questions that the SM fails to answer.Severe fine-tuned cancellations of large quantum corrections are required to obtain a Higgs boson mass close to the EW scale, leading to the so-called hierarchy problem.The SM does not incorporate gravity as described by general relativity, or account for the accelerating expansion of the universe.It does not contain any viable dark matter particle and fails to explain in full baryon asymmetry, neutrino oscillations, and nonzero neutrino masses.As such, a plethora of theories beyond the SM have been developed in the past decades, and the search for them is at the core of the particle physics community's experimental activities.While formulating an exhaustive and complete classification of all existing BSM models is not possible, it is evident that the exploration of the unknown is one of the main drivers of all future colliders: Several DM models predict the presence of mediator particles, whose exchange may be responsible for the annihilation processes that determine the DM particle abundance and can be directly searched at colliders.If the DM particle is lighter than m h /2 and it is coupled to the Higgs, a compelling exploration channel is an invisible Higgs decay.Of particular interest are the cases of spin-1/2 particles transforming as doublets or triplets under SU(2) symmetry.• An alternative possibility for new physics is that particles responsible for the still unexplained phenomena have not been detected because they interact too feebly with SM particles.These particles could belong to an entirely new sector, the socalled hidden or dark sector.While masses and interactions of particles in the dark sector are largely unknown, the mass range between the keV and tens of GeV appears interesting, both theoretically and experimentally.• Heavy new physics can induce, through the exchange of virtual particles, processes that are extremely rare in the SM, such as flavor-changing neutral current (FCNC) effects in the top-quark sector.The expected intensity of some of the future lepton collider proposals at critical production thresholds will allow improvements in sensitivity.Hadron colliders at very high luminosities and muon colliders at multi-TeV center-of-mass energy are also complementary when looking for rare processes.• In the absence of evidence for new physics at low energy and assuming that BSM is realized at a scale Λ much larger than the collider s √ , the effective field theory (EFT) formalism is adopted as a framework to study BSM physics with a modelindependent approach.Several EFT representations exist, and a subset of those is considered in this review.This program is continuously evolving and broadening in response to results from the current LHC, the HL-LHC, and other ongoing and future non-collider experiments.Beyond-collider projects in construction, planned, or proposed to further exploit the LHC accelerator complex will significantly boost the discovery potential in the next two decades, offering complementarities and synergies in the quest for new physics.Among those, FASER [20] and SND@LHC [21] will start operations during the Run 3 of the LHC; others like MATHUSLA [22], CODEX-b 
[23], MilliQan [24], and the LHeC [25] are foreseen to operate in parallel to the HL-LHC.An extensive proposal on the Forward Physics Facility at CERN has been presented recently [26], while Ref. [27] presents detectors for fixed-target experiments and beam-dump experiments at the ILC complex. A subset of representative prospective results is reported in the rest of this review.They originate from hundreds of studies of varying degrees of sophistication carried out over several years.Detailed or fast simulations are used in some cases, whilst simple detector parameterizations, direct extrapolations of results from existing data, or even simple rescaling are performed in other cases.The reader is referred to the original publications for details on the analyses and on the approach and hypotheses made. New resonances On-shell resonances decaying into visible SM particles are a distinct signature of several BSM theories, ranging from new models of EWSB to extensions of the SM gauge group.Classic scenarios include singly produced resonances with integer spin or pairproduced heavy resonances.Direct access requires the center-ofmass energy of the collider to be large enough to produce them.Performances can then be evaluated considering the reach in mass, or the reach in mass vs. coupling, with the results depending on the assumptions on the couplings of the new particle to quarks and leptons.If the colliders' center-of-mass energies are below the mass of the new hypothetical resonance, indirect access can be achieved by studying deviations in SM observables. One of the most widely used benchmark scenarios predicts a new high-mass vector (spin-1) boson, the Z′.Examples [28] are the sequential SM (SSM), B− L, and E 6 Z′ models, as well as models of little Higgs or extra dimensions.The primary discovery mode for a Z′ at hadron colliders is the Drell-Yan production of a dilepton resonance, but hadronic final states are also widely studied.The mass reach is typically in the (0.3 − 0.5) s √ range, given sufficient statistics.FCC-hh [29] could discover a SSM Z′ with a mass of up to 43 TeV if it decays into an electron or muon pair, assuming 30 ab −1 of luminosity.Masses between 20 and 30 TeV could be reached if decays to τ + τ − or to t t are instead considered.A multi-TeV MuC could become competitive in accessing directly a Z′, especially in the case of muon-philic models [30], where the new vector boson dominantly couples to μ + μ − , e.g., via lefthanded currents. Given the current mass limits from the LHC experiments, a direct observation of these new resonances is not expected at the currently planned linear and circular e + e − accelerators for most of the scenarios considered in the literature.Nonetheless, the presence of high-mass resonances can be inferred indirectly using an EFT approach to describe BSM virtual effects.In the EPPSU studies, a benchmark model dubbed "Y-Universal Z′" has been used for a quantitative assessment of the potential of future colliders to search for new gauge bosons, directly and indirectly [31].Figure 1 (left) shows the 95% confidence level (CL) limits in mass vs. 
coupling at various colliders.The model assumes the same couplings, g Z′ , to quarks and leptons, and it was chosen because it allows for a fair comparison between hadron and lepton colliders.The direct constraints from FCC-hh are the most stringent at low g Z′ , while indirect reaches of both e + e − and pp colliders become superior at high g Z′ .At lepton colliders, an EFT framework allows to achieve sensitivity thanks to the EW precision measurements of the oblique parameter S [33].At hadron colliders, Drell-Yan predictions are also sensitive to the ratio between g Z′ and the Z′ mass.As such, very precise parton distribution function (PDF) fits obtained using ep collisions would further improve the sensitivity [3].Finally, the muon collider reach, not reported in this figure, is estimated to be similar to that of CLIC 3000 for s √ 3 TeV and exceeding it proportionally to the increase in center-of-mass energy [34]. Expected sensitivity on the production and decay of spin-0 and -2 particles decaying into several different SM final states has also been studied at high-energy lepton and hadron colliders.Models considered include, among others, resonant double-Higgs production and heavy scalar singlets that could mix with the Higgs boson, i.e., see Refs [35][36][37]. Leptoquark (LQ) models, alongside Z′ models, have gained considerable renewed interest in recent years as they can give rise to lepton universality violating decays of heavy mesons at the tree level, provided that couplings are generation-dependent and they couple to the second and third generations of quarks.LQs are hypothetical particles that carry both baryon and lepton quantum numbers.They are color-triplets and carry fractional electric charge.The spin of a LQ state is either 0 (scalar) or 1 (vector).Models predicting a rather light LQ coupled predominantly to the third generation are a natural target for hadron colliders where scalar or vector LQs are pair-produced via strong interaction and results [31] are independent of the coupling to the lepton quark current.If discovered, FCC-eh could contribute to their characterization, assuming that the coupling to the first-generation quark is non-negligible and can be produced as an s-channel resonance [3].Muon colliders have the best sensitivity for a LQ model via μ + μ − → bs.With a few to 10 TeV center-of-mass energy and predicted luminosities of 1-10 ab −1 , a MuC could cover the entire parameter space that explains the flavor anomalies for both scalar and vector LQ.Results are shown in Figure 1 (right), from Ref. [32]. 
FIGURE 1 | Left: Exclusion reach of different colliders on the Y-Universal Z′ model parameters [31]. Right: Muon collider sensitivity contours at 95% CL for scalar (upper panel) and vector (lower panel) LQ models via the process μ+μ− → bs, where y QLij indicates the coupling between the i-generation lepton and the j-generation quark. For the various setups considered, see Ref. [32].

If new particles arising in BSM theories are much heavier than the energy reach for on-shell production even at future colliders, their existence can still be formalized through contact interactions (CIs). An effective four-fermion CI could represent the exchange of a virtual heavy particle, such as an LQ, a Z′, or elementary constituents of quarks and leptons in composite models. The effective CI scale represents the typical mass scale of the new particles, and the experimental sensitivity increases significantly with √s. Lepton colliders are powerful in testing the neutral-current case, owing to the precision that can be achieved in analyses of di-fermion final states with suitable statistics. Linear colliders can also exploit different longitudinal polarizations of the two beams. Hadron colliders have excellent sensitivity up to their √s via Drell-Yan production for both neutral and charged currents. The highest reach reported in the ESPPU studies [31] is up to 120 TeV (CLIC 3000). The so-called two-fermion/two-boson CIs are also phenomenologically relevant for BSM theories of EWSB because they describe new physics effects in the interaction between the gauge and Higgs sectors. In this case, estimated reaches [31] are, at best, 30-35 TeV. Precision differential measurements of ZH production provide the leading sensitivity for lepton colliders. Hadron colliders' sensitivity mostly comes from precision measurements of SM diboson production observables, as used in the FCC-hh studies. Additional studies on CIs related to new physics models possibly contributing to the muon g − 2 and to high-energy scattering processes have also been carried out at the muon collider [30].

Composite Higgs and extended sectors

The role of the Higgs boson could be even more complex than that known so far in the SM formulation, and hence it is logical to also question its nature and whether or not it is a point-like particle. Composite Higgs models (CHMs) predict that the Higgs is not an elementary particle and that new particles might arise as excitations of the composite Higgs, with masses possibly at the O(TeV) scale. The foundation of composite Higgs models is that the Higgs emerges as a bound state of a new strongly interacting confining composite sector, analogous to QCD, but with a much higher confinement scale. The Higgs, similarly to the pions in QCD, emerges as a Goldstone boson associated with a spontaneously broken global symmetry of the composite sector. The phenomenology of CHMs is mainly controlled by two parameters: the mass scale m*, which controls the mass of the new resonances, and the coupling g*, representing the interaction strength among particles originating from the composite sector. EFT operators that describe the indirect effects of Higgs compositeness at low energy are then defined, and their scale is set by g*. Figure 2 (left) shows the exclusion reach on m* and g* for FCC-hh, FCC-ee, and the high-energy stages of CLIC. Contours for the reach of HE-LHC, ILC, CepC, and CLIC 380 are also available in Ref.
[31].The 95% CL exclusion contours of each collider project arise from effects on coupling measurements [38] of the Higgs boson related to its possible composite nature and, for the FCC-hh and the HL-LHC, also from direct searches for an EW triplet ρ vector resonance in dilepton and diboson final states [43].Figure 2 (right) presents the exclusion reach for the 10 TeV stage of the muon collider.The reach of HL-LHC is also reported and statistically combined in the global result [30].Other curves denote the contributions to the constraints from different processes, including that of searches for composite Higgs fermionic top partners. Theories predicting an extended Higgs sector acquired significant prominence in the experimental programs of collider experiments, with searches targeting a broad spectrum of models.In minimal scenarios, the Higgs sector is augmented by a singlet massive scalar field which, e.g., can mix with the SM Higgs boson with a mixing parameter γ.The presence of the singlet can either modify the SM Higgs boson properties or be detected as single production of the massive particle associated with the field, S, which subsequently decays into SM particles.Figure 3 (left) summarizes the reach [31], in the mass-sin 2 γ space, of direct searches and indirect constraints derived from the Higgs boson couplings measurements (horizontal lines).Among the indirect searches, those performed at CLIC 3000 are the most sensitive searches and allow to probe mixing angles for values as low as sin 2 γ ~10 -3 for any value of m S .In contrast, the reach of direct searches depends on the singlet's mass.The muon collider at s √ = 14 TeV explores masses as high as 9 TeV while extending the sensitivity to sin 2 γ by almost one order of magnitude with respect to the best indirect constraint from e + e − colliders.Thanks to the larger center-of-mass energy, the FCC-hh is sensitive to higher masses but yields a more limited reach in the mixing values. Under the no-mixing assumption, the singlet-associated particle S would be stable and thus searched for in events with significant missing transverse momentum.The best sensitivity is yielded by indirect searches based on the precision measurement of the SM Higgs couplings at CLIC 3000 , probing masses between 50 and 350 GeV and λ HS between 0.1 and 1, where λ HS is the coupling term in the potential V ~λHS |H| 2 S 2 [31].Experiments at the FCC-hh achieve a similar sensitivity through direct searches for the pair production of S. It is interesting to note that this region of phase space is compatible with a strong first-order EW phase transition, demonstrating that colliders have the potential to test models predicting the baryon asymmetry in the universe and gravitational waves.As such, the energy frontier complements the program at cosmology experiments, like the future gravitational wave experiment LISA.Another example is presented in Ref. [30], where the reach of a 3 TeV muon collider is compared to that of LISA. 
FIGURE 2 Left: Exclusion reach on the composite Higgs model parameters of FCC-hh, FCC-ee, and of the high-energy stages of CLIC [31]. Right: Comparison of the global reach for universal composite Higgs models at the HL-LHC and at a 10 TeV muon collider. For the latter, contributions to the constraints from different processes are also shown [30].

More complex scenarios extending the Higgs sector by a new SU(2) doublet, e.g., supersymmetry or more generically type-II two-Higgs-doublet models, predict the existence of two CP-even scalars, h and H, one CP-odd scalar, A, and a charged scalar, H±. This rich phenomenology leads to a variety of probes at future machines. As an example, Figure 3 (right) shows the constraints on m_A as a function of tan β, the ratio of the vacuum expectation values of the two Higgs doublets. While precision measurements of the Higgs couplings to third-generation fermions offer sensitivity to models at low values of m_A (~1 TeV), the FCC-hh probes the existence of the new states for masses as high as 10-20 TeV for any value of tan β. Assuming efficient signal detection and background suppression, the sensitivity to the pair production of new states at lepton colliders goes up to m ≤ √s/2. Measurements of flavor physics observables can also lead to constraints on type-II two-Higgs-doublet models (see, e.g., Ref. [39]).

Supersymmetry

The phenomenology of SUSY is mostly driven by its breaking mechanism and breaking scale, which define the SUSY particle masses, the mass hierarchy, the field contents of physical particles, and thus their cross sections and decay modes. In addition, signal topologies crucially depend on whether R-parity, defined as R = (−1)^(3(B−L)+2S), where B and L are baryon and lepton numbers, respectively, and S is the spin, is conserved or violated.

Indirect constraints from flavor physics experiments, high-precision electroweak observables, including the discovery of the 125 GeV Higgs boson, and astrophysical data impose strong constraints on the allowed SUSY parameter space. Still, SUSY can be the key to understanding Higgs naturalness, and in R-parity-conserving scenarios, the lightest supersymmetric particle (LSP) is an excellent candidate for DM. These are certainly strong motivations to search for colored SUSY particles, squarks and gluinos, for EW gauginos and Higgsinos that mix into neutralino and chargino mass states, collectively referred to as electroweakinos (EWkinos, χ), and for the superpartners of charged and neutral leptons, the sleptons.

Squarks and gluinos are produced via the strong interaction and have the highest cross sections at hadron colliders. The scalar partners of the left-handed and right-handed chiral components of the bottom quark and the top quark mix to form mass eigenstates, for which the bottom and top squarks are defined as the lighter of the two (b̃₁ and t̃₁, respectively) and might be significantly lighter than the other squarks and the gluinos. EWkino cross sections depend on mixing parameters and are typically much smaller than those of colored superpartners at hadron colliders. For this reason, the EW sector remains more difficult to test at hadron machines, and searches at e⁺e⁻ colliders would complement the SUSY parameter space coverage. Similar considerations can be made for slepton pair production, as cross sections can be up to two orders of magnitude smaller than those for EWkino pair production.
Prospects for SUSY searches are presented in terms of mass exclusion limits at 95% CL. The corresponding definitive observation with a significance of 5σ would be 5%-10% lower, depending on the process. High-energy pp colliders provide the most stringent bounds on first- and second-generation squarks and gluinos. In R-parity-conserving scenarios, gluino (squark) masses up to 17 (9) TeV could be reached by the FCC-hh exploiting the typical multijet plus missing transverse momentum SUSY signature for a massless LSP, while monojet-like analyses, where the SUSY particles recoil against an initial-state-radiation (ISR) jet, are most effective for compressed scenarios. Lepton colliders are ineffective in searches for gluinos, which are neutral with respect to the EW interaction, while current limits on first- and second-generation squark masses make the results not competitive. Similar conclusions can be drawn for top-squark pair-production searches if the preferred decay is t̃₁ → tχ⁰₁ and Δm(t̃₁, χ⁰₁) ≫ m_t, where χ⁰₁ is the lightest neutralino and m_t is the top-quark mass. On the other hand, for small mass splitting, the sensitivity of pp colliders significantly degrades, so that high-energy lepton colliders, e.g., CLIC 3000 and a MuC at 3-10 TeV, become competitive [34]. Their stop mass reach is close to √s/2 even for low Δm(t̃₁, χ⁰₁), although a loss in acceptance and efficiency could be expected for mass differences of the order of 50 GeV. The exclusion limits are summarized in Figure 4; see [31] and references therein for details on the assumptions.

FIGURE 4 Exclusion reach for the top squark at different hadron and lepton colliders [31].

At hadron colliders, the largest production rates for EWkinos are obtained when the lightest chargino (χ±₁) and next-to-lightest neutralino (χ⁰₂) are Wino-like, forming an approximately mass-degenerate SU(2) triplet referred to as the next-LSP (NLSP). Exclusion reaches for R-parity-conserving Wino-like scenarios show that NLSP masses up to 3.3 TeV can be excluded at the FCC-hh for a massless LSP, to be compared to a sensitivity up to EWkino masses equal to √s/2 for high-energy e⁺e⁻ and μ⁺μ⁻ colliders, even for Δm(χ±₁, χ⁰₁) as low as 1 GeV, with no loss in acceptance. If the Higgsino mass is much smaller than the gaugino masses, the production rates are smaller, χ⁰₁, χ⁰₂, and χ±₁ form an approximately mass-degenerate SU(2) doublet, and the EWkino spectrum is compressed. Feasibility studies and projections are summarized in Figure 5 (for details, see [31] and references therein). The sensitivity of lepton colliders depends only weakly on the nature of the LSP, as cross sections are less dependent on the choice of mixing parameters. The high-energy lepton colliders allow a reach close to the pair-production threshold, approximately 1.3 TeV for CLIC 3000, with the mass splitting down to approximately 0.5 GeV, and it is similar (not shown) for a MuC of 3 TeV center-of-mass energy. Beam polarization effects might also play a crucial role in searches for EW SUSY at linear colliders [40,41]. Analyses exploiting ISR jets and/or soft-momentum leptons show good prospects at hadron colliders in the case of Higgsino-like scenarios: χ±₁ and χ⁰₂ masses up to approximately 600 GeV can be probed at the HE-LHC for mass splittings Δm ≡ Δm(χ⁰₂, χ⁰₁) ≈ Δm(χ±₁, χ⁰₁) between 7 and 50 GeV. FCC-hh projections show expected 95% CL limits up to 1.3 TeV, also depending on Δm, with monojet searches possibly complementing the reach for very compressed scenarios. Prospects for ep
colliders (LHeC and FCC-eh) performed using monojet-like signatures are also shown.Finally, if the lightest neutralino is either pure Higgsino or Wino, EWinos' mass splittings are theoretically calculated to be approximately 340 MeV and 160 MeV, respectively.In these cases, taking advantage of the long lifetime of the charginos, searches for disappearing charged tracks can be performed at hadron and electron-hadron colliders.Analyses exploiting displaced decays of the charged SUSY state have also been studied for lepton colliders.Results can be interpreted in the context of generic DM models and are reported in Section 7. Significant sensitivity to sleptons is expected at future accelerators.High-mass selectrons, smuons, and staus are best accessed by hadron colliders for large mass splitting between the slepton and the LSP masses, with limits up to or in excess of 5 TeV for the FCC-hh [42], significantly depending on the assumptions of slepton handedness and mass degeneracy.Dedicated searches for staus, on the other hand, might be particularly challenging at pp colliders because of the potentially high rate of misidentified tau leptons.The HE-LHC would provide sensitivity of up to 1.1 TeV, and an additional three-fold increase is expected for the FCC-hh [31].Prospect studies at linear lepton colliders [43][44][45] show excellent expected sensitivity to slepton masses up to close to s √ /2 and good potential for characterizing the nature of the new particles in case of discovery by exploiting beam polarization.The SUSY EW sector, comprising sleptons, can also account for the longstanding discrepancy of (g − 2)μ.Feasibility studies focusing on the relevant parameter region have been reported in the past year [46], showing good complementarity between HL-LHC and high-energy electron-positron colliders.Sensitivity to staus at lepton colliders would again be complementary to pp colliders in case of compressed scenarios, with substantial dependency on the assumptions on τ handedness and the beam polarization conditions.A multi-TeV muon collider would push the sensitivity up to half the center-of mass energy [34]. A systematic study of the potential of lepton and hadron colliders for R-parity violating (RPV) SUSY scenarios has not been attempted.RPV models might lead to very diverse signatures depending on which couplings are different from 0 and on their strength.The lightest neutralino, as LSP, would decay into SM particles so that final state events present high lepton and/or jet multiplicities and modest or no missing transverse momentum.If RPV couplings are small, particles might travel macroscopic distances before decaying and be long-lived.Searches for high-mass long-lived particles (e.g., gluinos and top squarks) at high-energy pp colliders can exploit the capability of reconstructing unconventional signatures such as massive displaced vertices or displaced leptons similarly to current and planned searches at the LHC and HL-LHC, respectively.Similar techniques can be adopted at lepton colliders for EWinos.To illustrate the potential to discover low-mass SUSY particles decaying non-promptly and as such eluding LHC constraints, interesting prospect studies have been made for O (100 GeV) center-of-mass energy e + e − colliders [47] and at ep colliders [48]. 
Dark matter Collider experiments could be sensitive to many possible thermal freeze-out scenarios which assume that DM was produced through interactions of unknown nature with SM particles in the early universe.If DM is part of an extended BSM sector and carries SM charges, a mechanism should exist, as in SUSY, to make it stable or very weakly interactive (WIMP).In case of DM being part of a richer hidden sector, several new massive particles might arise, and one or more could mix with SM particles.A hidden sector that contains DM is generically called a dark sector and can be connected to the SM by small but non-zero couplings through a mediator.The operator interacting or mixing with it is referred to as a portal. Depending on the model assumptions, the nature of DM and the new-physics phenomenology could be profoundly different.For instance, dark sectors might be characterized by an abundance of feebly interacting particles.Feasibility studies on these scenarios are reported in Section 8. WIMP DM is invisible to detectors due to the weak strength of its interaction with SM particles, and hence the main signature at colliders is the missing transverse momentum carried by the DM particle.Consequently, searches focus on the associated production of the undetectable DM with visible SM particle(s) like one (or more) jet(s), a Z boson, a photon, or a Higgs boson.Additional BSM mediators can lead to a variety of even more complex collider signatures in visible channels, i.e., involving heavy-flavor quarks. A straightforward model of DM thermal relic is that of a massive particle with EW gauge interactions only.The case of spin-1/2 particles transforming as doublets or triplets under SU (2) symmetry is considered an excellent benchmark model for future colliders.The production rate of the charged state in the DM EW multiplet is high, but it decays into the invisible DM plus a soft undetectable pion.The sensitivity to these models, usually referred to as Higgsino and Wino, respectively, is summarized in Figure 6. The direct reaches through the so-called disappearing track analyses are compared with indirect reaches at lepton colliders, derived from the sensitivity to the EW parameters W and Y. FCC-hh can conclusively test the hypothesis of thermal DM for both the Higgsino and Wino scenarios, while CLIC 3000 could cover in full the Higgsino case.A 3 TeV muon collider would reach masses slightly lower than CLIC 3000 for the Wino case, while a 10 TeV machine would be competitive with the FCC-hh [34].As usual, several caveats must be considered when comparing these projections.For instance, projections for future direct DM detection might suffer from uncertainties on the Winonucleon cross section, whilst indirect constraints might suffer from unknown halo-modeling uncertainties.More details can be found in Ref. [31] and references therein. 
If DM belongs to a richer BSM sector, the phenomenology might be very diverse.Simplified models are therefore used as benchmarks for collider searches to minimize the number of unknown parameters: a single mediator is introduced, which is either a new BSM particle or a SM particle such as the Higgs boson or the Z boson.In the models considered by the EPSSU studies, based on widely accepted benchmark proposals [49], the DM particle is a massive Dirac fermion (χ), and the mediator is either a spin-1 (axialvector) or a spin-0 (scalar) BSM particle.Figure 7 (left) reports the 2σ sensitivity on the mediator mass of collider experiments for axialvector models.Results are strongly dependent on the choice of couplings (indicated in the figure), and hence it is difficult to compare among accelerator projects. The sensitivity at pp colliders is driven by dijet and monojet searches, which decreases if couplings to quark decrease.Lepton colliders might reach reasonably high mediator masses through mono-photon analyses, so the achievable sensitivity depends on the strength of the mediator coupling to leptons.Similar results to those of CLIC 3000 can be achieved by a MuC of the same center-of-mass energy, with sensitivity provided by mono-photon and mono-Wboson analyses [30]. Mediators could also be SM particles, and a remarkable example is given by models where the Higgs boson acts as a mediator (or portal).If the DM mass is below half of the mass of the Higgs boson, the latter could decay into a DM pair.As such, precision measurements of the branching ratio (BR) of the Higgs boson decaying into invisible particles can be turned into exclusion limits on the spin-independent WIMP-nucleon scattering cross section.This is illustrated in Figure 7 (right): 90% CL limits for a simplified model with the Higgs boson decaying to Majorana DM particles are compared to current and future DM direct detection experiments.Low-energy e + e − colliders are particularly competitive in this scenario, thanks to unprecedented precision expected in measuring Higgs couplings, whilst hadron colliders remain competitive thanks to the large datasets and high production rates. 
Feebly interacting particles

BSM theories extending the SM with a hidden sector populated by feebly interacting particles (or FIPs) are gaining significant attention as they can provide, depending on the model's implementation, an explanation for the origin of neutrino masses, matter-antimatter asymmetry in the universe, and cosmological inflation, as well as insights into the EW hierarchy and the strong CP problem. A comprehensive overview of the vast program at both current and future collider-based, fixed-target, and beam-dump experiments can be found in Refs [31,51]. In this review, the focus is on the minimal portal framework introduced in the aforementioned references. In these models, the FIPs, which are not charged under the SM gauge groups, interact with the SM through portals that can be classified based on the type and dimension of the mediator. The most studied cases, classified according to the operator's spin, are the vector, Higgs, axion, and neutrino portals. Here F′_μν is the field strength of the dark vector, which mixes with the hypercharge field strength B_μν; S (sometimes referred to as ϕ) is the dark Higgs, a new scalar singlet that couples to the SM Higgs doublet; a is the axion or axion-like particle that couples to gauge and fermion fields (pseudoscalar portal, where f_a is the axion decay constant); and N is a heavy neutral lepton (HNL) that couples to the SM left-handed leptons.
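The display equations defining these portal couplings did not survive text extraction. As a reference, a sketch of the standard minimal-portal operators is reproduced below in the conventions commonly used in the EPPSU and Physics Beyond Colliders literature; the scalar and neutrino couplings also appear in the portal summary table later in this section, while the explicit normalizations (such as the 1/(2 cos θ_W) factor) are conventional assumptions rather than quotes from the original text.

```latex
% Standard minimal-portal couplings (reconstruction; normalizations vary in the literature)
\begin{aligned}
\text{Vector portal:}               &\quad -\frac{\epsilon}{2\cos\theta_W}\, B_{\mu\nu}\, F'^{\mu\nu} \\
\text{Higgs (scalar) portal:}       &\quad \left(\mu S + \lambda_{HS} S^{2}\right) H^{\dagger} H \\
\text{Axion (pseudoscalar) portal:} &\quad \frac{a}{f_a}\, F_{\mu\nu}\tilde{F}^{\mu\nu}, \qquad \frac{\partial_\mu a}{f_a}\, \bar{\psi}\gamma^{\mu}\gamma_{5}\psi \\
\text{Neutrino portal:}             &\quad y_N\, L H N
\end{aligned}
```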
In the minimal vector portal, the interaction between the SM and the hidden sector takes the form of a kinetic mixing between one dark and one visible Abelian gauge boson. In selected realizations of the portal, the new U(1) gauge boson in the hidden sector is a dark photon, A′, either massive or massless, with ϵ being the mixing coupling parameter between the dark and ordinary photon. Figure 8 presents the sensitivity of various experiments, demonstrating that future colliders can probe the MeV to TeV mass region, compatible with the hypothesis of DM as a thermal relic. Through searches for Drell-Yan production, pp → A′ → ℓ⁺ℓ⁻, high-energy hadron colliders explore scenarios with large couplings and heavy dark photons, with the FCC-hh yielding a sensitivity to ϵ as small as 10⁻³ for masses in the 10-100 GeV range [52] and thus complementing the reach of future LHCb upgrades [4]. An integrated program of precision measurements of the Z boson properties and of direct searches exploiting the radiative return process (e⁺e⁻ → γA′) enables electron-positron colliders to significantly extend the sensitivity to lower couplings and lower masses, with the FCC-ee probing couplings close to ϵ ~ 10⁻⁴ for dark photon masses below the Z mass. For higher masses, the sensitivity of circular and linear e⁺e⁻ colliders with similar luminosities is comparable. Searches for long-lived dark photons produced in ep → eA′ and decaying into two charged fermions enable the FCC-eh to probe masses below 1 GeV and couplings between 10⁻⁵ and 10⁻³ [53], filling the gap between LHCb, future e⁺e⁻ and pp colliders, and low-energy experiments. Recent results from searches at a muon collider are presented in Ref. [30], while Ref. [54] offers a comprehensive review of more general dark-photon models and corresponding searches. It is noted that hadron and lepton colliders could offer significant sensitivity to non-minimal models where dark photons are produced through BSM Higgs decays, as shown for the HL-LHC in Ref. [4]. A detailed discussion of the sensitivity to non-minimal scenarios is, however, outside the scope of this review.

FIGURE 8 Exclusion reach for dark photons at various experiments as a function of the dark photon mass m_A′ and the mixing parameter between the dark and SM photon, ϵ. Exclusion limits are computed at 95% CL in the case of FCC-ee, FCC-hh, and CepC, while the contour for the FCC-eh corresponds to the observation of 10 signal events. All other limits are computed at 90% CL [31].

The minimal dark Higgs model originates from the extension of the SM Higgs sector by the addition of a scalar singlet which mediates the interaction between the SM Higgs boson and the dark sector. In the context of general extended Higgs models, the Lagrangian contains a term proportional to sin θ (referred to as sin γ in Section 5), enabling the mixing between the SM Higgs and the new particle S associated with the singlet field, with mixing angle θ. The Lagrangian also contains a term proportional to λ_HS S², leading to the coupling between the h and two S particles. If either of the couplings sin θ or λ_HS is non-zero, a rich phenomenology is expected. If the new scalar mixes with the SM Higgs boson, S can either be produced like a SM Higgs boson or originate from exotic decays of the SM Higgs boson. The corresponding cross sections and branching fractions would depend on the mixing angle. Once produced, the new scalar could decay like a SM Higgs boson, with probabilities reduced by sin θ, and into the SM Higgs boson itself if m_S > 2m_h. In the no-mixing scenario, S can only be pair-produced through an off-shell or on-shell Higgs boson. The new scalar is stable in the no-mixing minimal models, leading to signatures with missing transverse momentum. The region of parameter space with larger couplings, sin²θ ≥ 10⁻⁴, is explored by searches for the associated production of ZS conducted using the recoil technique at e⁺e⁻ colliders. The Tera-Z configuration of the FCC-ee extends the reach in couplings by one order of magnitude for masses between a few GeV and half of the Z boson mass by exploring the exotic decays Z → ℓ⁺ℓ⁻S.
Precision measurements of the Higgs couplings place constraints on the mass over a large range of sin²θ values: for a fixed luminosity, e⁺e⁻ colliders yield a better sensitivity than the proton-proton machines included in this study, with CLIC 3000 covering masses as low as 6 GeV for sin²θ ≤ 10⁻⁵. Searches for h → SS in visible final states at the FCC-eh allow the experiments to probe intermediate values of sin²θ for masses m_S between 3 and 30 GeV, while similar analyses at CLIC extend the sensitivity to lower values of the couplings for masses between 10 and 60 GeV. These searches assume λ_HS ~ 10⁻³, corresponding to the level of precision on the measurements of the SM Higgs coupling expected at future e⁺e⁻ colliders. A summary is presented in Figure 9, which also includes the relation between the relevant parameters under the maximal mixing assumption in relaxion models, as they exhibit a similar phenomenology via relaxion-Higgs mixing.

The portals and their couplings are summarized as follows:
Portal | Coupling
Vector (dark vector, A′_μ) | kinetic mixing of F′_μν with the hypercharge field strength B_μν, with strength ε
Scalar (dark Higgs, S) | (μS + λ_HS S²) H†H
Pseudo-scalar (axion, a) | couplings to gauge and fermion fields suppressed by the decay constant f_a
Fermion (sterile neutrino, N) | y_N LHN

Axion-like particles (ALPs, a) are gauge-singlet pseudoscalar particles with derivative couplings to the SM. ALPs can mediate the interactions between the SM and the hidden sector by coupling to photons, gluons, W and Z bosons, and fermions. The interactions with the Higgs boson are suppressed since there is no dimension-5 operator at the tree level in the models considered here. At high-energy colliders, ALPs emerge from either resonant production or from exotic decays of the Z or Higgs bosons (Z → aγ, h → aZ, aa). In addition, they can be produced via vector-boson fusion at pp colliders and in association with a gauge or Higgs boson at lepton colliders (e⁺e⁻ → aX with X = γ, Z, h). In ep machines, the incoming electron interacts with a photon from the proton, leading to e⁻γ → e⁻a. For ALP masses, m_a, below the Z mass, the dominant decay modes are into gluons and photons, where the latter has received the most attention to date. Results from recent searches are therefore presented as a function of the ALP mass and coupling to photons (Figure 10). Thanks to excellent sensitivity to the process e⁺e⁻ → Z → aγ(γγ), the Tera-Z configuration of the FCC-ee reaches the best sensitivity for ALP masses between ~1 GeV and the Z mass, probing couplings g_aγγ as small as 10⁻⁸. Searches for the same rare decay at the FCC-hh probe a similar mass range but with somewhat more limited coverage in couplings, as expected. On the other hand, hadron colliders offer excellent sensitivity to scenarios where, e.g., the ALP originates from rare Higgs decays [55]. The e⁺e⁻ linear colliders extend the reach at larger masses thanks to their higher center-of-mass energy and probe couplings as small as 10⁻⁵. Experiments at ep colliders have a reach similar to that of low-energy e⁺e⁻ linear colliders by searching for the process eγ → ea. A detailed overview of the subject, comparing the reach at various machines, is provided in Ref. [55],
while Ref. [56] goes into the details of searches at the FCC-ee, exploring all combinations of ALP production modes with visible and invisible decay modes, including those associated with long-lived ALPs. Recent studies at a √s = 10 TeV muon collider, performed using a modified nomenclature, indicate a discovery potential up to an effective energy scale Λ of 238 TeV, where Λ controls the strength of the interactions [30].

Heavy neutral leptons (also referred to as heavy neutrinos or sterile neutrinos) are one of the most promising extensions of the SM to generate the light neutrino masses observed in neutrino oscillation experiments. At colliders, HNLs can emerge from leptonic decays of the W, Z, and Higgs bosons with a probability proportional to the mixing with the SM neutrinos, where the mixing angles can be expressed as θ_i = y_νi·VEV/M and their magnitude as |Θ|² = Σ_i θ_i² (the y_νi are the Yukawa couplings, M is the sterile neutrino degenerate mass, and VEV is the vacuum expectation value). Additional production mechanisms include the W-exchange t-channel at both e⁺e⁻ and ep colliders (eq → Nq; e⁺e⁻ → Nν) and γW fusion at e⁺e⁻ machines (eγ → NW). Subsequent decays of N occur via emission of a W, Z, or h boson, if kinematically allowed. Depending on the value of the couplings and masses, the decay may be prompt or delayed. This rich phenomenology [57] offers opportunities for both direct searches for these new states, e.g., Z → Nν or W → Nℓ, as well as indirect searches through precision measurements of the gauge and Higgs bosons' properties. The h → WW channel, e.g., is used to place indirect constraints on h → Nν.

Figure 11 (left) shows that the best sensitivity to the mixing parameter between the electron neutrino and the HNL in the region between a few GeV and the Z mass is yielded by a combination of the conventional and the displaced-vertex searches performed at the FCC-ee at the Z-pole. For larger masses, the FCC-eh provides the best sensitivity, for couplings as small as 10⁻⁶, through searches for lepton-flavor-violating decays.

Further studies in Ref. [57] are presented in Figure 11 (right), where θ_e = θ_μ = θ_i, θ_i ≠ 0, and θ_τ = 0. In these models, the sensitivity of indirect searches pushes the reach of the FCC-ee to the TeV scale. Muon colliders could complement the FCC-ee capability in hierarchical scenarios where the mixing to the second generation is dominant. Finally, the recent work in Ref. [56] provides in-depth considerations about the reach of searches for long-lived HNLs, the potential to discriminate between the Dirac and Majorana hypotheses, to measure the mass, and to probe regions of parameter space consistent with leptogenesis. The experimental sensitivity to heavy neutrinos embedded in UV-complete theories, like supersymmetry or type-III 2HDM, is discussed, e.g., for the ILC, in Ref. [27].
Rare processes and indirect BSM physics searches

The presence of new phenomena at a high energy scale can impact the production rate of processes that are otherwise very rare in the SM. Examples already mentioned in this review are the anomalies in measurements such as R_K and R_K* at LHCb [15] that can be explained by the presence of LQs or a Z′. Flavor-changing neutral-current effects in the heavy-quark and gauge-boson sectors are another case extensively investigated at future colliders. Prospect studies can be found in Ref. [32] and references therein, with the expected sensitivity of future lepton collider proposals at critical production thresholds complementary to that of hadron colliders at very high luminosities.

In the absence of evidence for new physics, the formalism of EFT can be adopted as a global framework to perform model-independent searches. Two effective field theory approaches are considered here. The first one, the Standard Model EFT or SMEFT, extends the SM with operators (O_i) of higher canonical dimension d, constructed as combinations of SM fields and invariant under the Lorentz and SM gauge symmetries. If lepton and baryon number conservation is imposed to reduce the otherwise very large number of possible new operators, the first corrections to the SM are provided by operators of dimension six. BSM physics at energies below Λ is then described by a Lagrangian L = L_SM + L_BSM, where L_BSM is built from the higher-dimensional operators with Wilson coefficients c_i that depend on the structure of new physics. Since BSM-induced corrections to the SM parameters can be grouped into sets of models, any deviations of the SM parameters from their expectations could provide an indication about c_i/Λ². Thanks to its linearized Lagrangian, SMEFT is an excellent tool to probe for weakly coupled theories. Recent studies based on global fits to SMEFT operators are documented in Refs [30,38] and shown in Figure 12: these selected results, shown as a relative improvement compared to the HL-LHC results, indicate that BSM scales between 1 and several tens of TeV can be probed at future colliders under the assumption of c_i ~ 1. Precision measurements at future lepton colliders, in particular where Z-pole runs are planned, contribute substantially to the extraction of EW parameters but also of triple-gauge-coupling parameters. Diboson and Zh, h → bb̄ measurements in the boosted regime are among the probes most relevant for high-energy pp colliders. For muon colliders, high-energy measurements in two-to-two fermion processes as well as single-Higgs and di-Higgs precision measurements have been considered. Even though the SMEFT provides a consistent framework to describe the impact of BSM physics, it is important to highlight that the results of the global fit depend on the choice of operators, basis, selected observables and their correlations, experimental and theoretical systematic uncertainties, etc. A detailed discussion on the subject can be found in Ref. [58]. For instance, off-diagonal flavor structures are not considered. If the imposed benchmark flavor symmetry is relaxed, top-quark FCNC interactions can be incorporated [59] into the SMEFT framework.
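For orientation, the dimension-six expansion sketched above is conventionally written as below; this is a standard schematic form added here because the display equation did not survive extraction, and the normalization of the Wilson coefficients c_i is convention-dependent.

```latex
% Schematic dimension-six SMEFT expansion and the resulting scaling of deviations
\mathcal{L}_{\mathrm{SMEFT}} = \mathcal{L}_{\mathrm{SM}}
  + \sum_i \frac{c_i}{\Lambda^{2}}\,\mathcal{O}_i^{(6)} + \mathcal{O}\!\left(\Lambda^{-4}\right),
\qquad
\delta_i \sim c_i\,\frac{v^{2}}{\Lambda^{2}} \quad \text{or} \quad c_i\,\frac{E^{2}}{\Lambda^{2}} .
```

In this picture a relative precision δ on an observable probes scales Λ ≳ v√(c_i/δ), or E√(c_i/δ) for observables whose sensitivity grows with energy, which is why both Z-pole precision runs and high-energy measurements drive the reach shown in Figure 12.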
The second EFT approach is the so-called Higgs EFT (HEFT). In this framework, the Higgs boson is not required to belong to an exact SU(2)_L doublet, and the physical Higgs and the three EW Goldstone bosons are treated as independent objects. The physical Higgs is typically assigned to a singlet representation of the SM gauge groups. The HEFT, with a non-linear realization of EWSB, offers the most general description of the Higgs couplings, and it is suitable to investigate a large set of distinct theories, including composite Higgs models and scenarios with modified trilinear Higgs couplings. Even though it is outside the scope of this review, the reader is invited to find detailed comparative overviews of SMEFT and HEFT in Refs [60,61].

FIGURE 12 Probability reach for the Wilson coefficients computed at 68% CL from the global fit (solid bars). The vertical "T"-shaped lines report the results obtained if only one operator is generated by the UV dynamics. Left: Reach of all options considered in Ref. [38]. Right: Sensitivity of the muon collider at both √s = 3 and 10 TeV, compared to that of the HL-LHC [30].

State of the art of the proposed colliders

The broad and ambitious science program presented in the previous sections, and references therein, depends critically upon the performance of the accelerators and experiments, including both the instrumentation and the software and computing elements. Advances in theoretical methods are also essential for the full exploitation of these machines' discovery potential. The technology needed to realize these projects might not exist yet, and cutting-edge and vigorous R&D is therefore being pursued by the global HEP experimental community. Extensive studies on detector concepts are ongoing within the broader worldwide detector R&D programs, as recently presented in Ref. [62]. A succinct summary of the status of the machines considered in this paper, as well as recently proposed modifications and upgrades, is presented in the following paragraphs. The reader is invited to consult dedicated reviews to learn about the latest developments in advanced accelerator techniques, instrumentation, software and computing, and theoretical calculations and methods.

The technical design report (TDR) of the ILC, released in 2013 [64], focused on the 250-500 GeV option (with a possible energy upgrade to 1 TeV). The recent comprehensive report submitted to Snowmass [27] reviews in detail the accelerator design, proposes new luminosity and energy running conditions (including technology options for multi-TeV upgrades), and presents robust solutions to deliver electron and positron beams in the energy region of the Higgs boson. In addition, it updates the proposal made in the detector TDR [63] for two detectors at the interaction region, the SiD and the ILD detector, and outlines that further R&D is needed. These new detector designs have been carried out at the level of a conceptual design report (CDR). In 2020, ICFA approved the formation of the ILC International Development Team as the first step toward the preparatory phase of the ILC project, with a mandate to make preparations for the ILC Pre-Lab in Japan [64] while waiting for a decision by Japan about hosting the facility. If approved, the machine is expected to deliver physics data in the mid-2030s.
As a direct response to a high-priority request from the 2013 update of the European Strategy for Particle Physics, CERN developed the design for the Future Circular Collider.The CDR for the FCC housed in a 100-km-long tunnel at CERN was delivered in 2018 with preliminary cost estimates and feasibility assessments [6,9,42].Updates were presented within the 2018 process for the European Strategy and, more recently, in the context of the Snowmass community planning process.The latest proposals [65][66][67] include a phased approach with an e + e − machine at various center-of-mass energies (including at the Higgs mass), followed by the 100 TeV pp, the ep, and the heavy ion programs.Under the auspices of CERN, the FCC Collaboration is also considering a tunnel of 91 km.Assuming a timely completion of the R&D for the FCC-ee, start of operations is expected in the 2040s, with data taking lasting till 2060.The FCC-hh is scheduled to run between 2070 and 2090.The program may be modified to focus on the pp, the ep, and the heavy ion programs if an e + e − collider other than the FCC-ee is approved for construction.Two to four experiments could be hosted by the FCC complex at each stage. After the release of the CepC CDR and subsequent studies documented in Ref. [68], the CepC accelerator study group entered the TDR phase, expected to be completed by the end of 2022.Meanwhile, an update to the design of the CepC and SppC, including a target center-of-mass energy of 125 TeV for pp collisions, is discussed in Ref. [69] and Ref. [70], respectively.According to the currently envisioned schedules, the CEPC (SppC) starts operations in the mid-2030s (2050s). The CLIC CDR [7], documenting the 3 TeV machine, dates back to 2012, while a project implementation plan, PIP [71], was finalized in 2018 and included the option for the 380 GeV running.The recent Snowmass report [72], building on both the CDR and the PIP, describes recent achievements in accelerator design, technology development, system tests, and beam tests for CLIC, demonstrating that the performance goals are realistic.In addition, results from ongoing R&D are likely to allow for further upgrades, both in s √ and instantaneous luminosity.Assuming project approval in 2028 (after the next ESPP), construction can start in ~2030 and operations ~7 years later. The option of a muon collider has gained substantial interest in the past two years.Documented in Ref. [73] is the latest proposal for a muon collider with three tentative target center-of-mass energies: 3, 10, and 14 TeV.Other energy conditions are also explored, e.g., operations at s √ 125 GeV.It is recognized that the muon collider is not as mature as the other high-energy lepton collider options listed previously.However, no major technical limitations are identified to date, and the outlined R&D path to address the remaining challenges makes the 3 TeV viable, with start of data taking in the mid-2040s. 
As mentioned in Section 2, the newly proposed C³ linear accelerator [17] benefits from recent advances that increase the efficiency and operating gradient of a normal-conducting accelerator and provides a path to a multi-TeV machine. The current proposal is for a compact 8-km-long cold-copper distributed-coupling complex that could fit on the Fermilab site. The Snowmass 2021-contributed paper [18] instead discusses the Higgs-Energy LEptoN (HELEN) e⁺e⁻ linear collider, based on advances in superconducting radio-frequency technology, with potential cost and AC power savings and a smaller footprint relative to the ILC. If the ILC cannot be realized in Japan in a timely fashion, the HELEN collider would be another viable option for a Higgs factory in the U.S.

Conclusion

Several collider projects have been proposed and discussed in recent years. Each proposal offers compelling opportunities for precision measurements and searches for new physics, albeit carrying challenges in accelerator, detector, and computing technologies. The ESPPU and the Snowmass/P5 processes have outlined future prospects at linear and circular e⁺e⁻, high-energy pp, ep, and high-energy μ⁺μ⁻ colliders, along with their capability to solve long-standing problems, such as the understanding of the EWSB mechanism, the origin and nature of dark matter, the flavor problem, the origin of neutrino masses, the strong CP problem, and baryogenesis. This review has briefly summarized the outcomes of those processes, resulting from the huge combined effort of the theory and experimental particle physics communities during the last 5 years, in the context of direct and indirect searches for new physics. Well-motivated BSM scenarios have been considered to provide quantitative comparisons between the reach of different proposed projects. The emerging picture shows that, while there are excellent chances for fundamental discoveries at the HL-LHC, the datasets might not be sufficient to fully characterize new physics if deviations from the SM are found, and future colliders will make this possible. Each future collider offers exciting prospects to enable the exploration of the unknown beyond the HL-LHC, and the realization of one or more of the proposed accelerator projects in the next decades should be strongly pursued by the scientific community to guarantee unique advancements in the understanding of the laws of nature.

FIGURE 1 Left: Exclusion reach of different colliders on the Y-Universal Z′ model parameters [31]. Right: Muon collider sensitivity contours at 95% CL for scalar (upper panel) and vector (lower panel) LQ models via the process μ⁺μ⁻ → bs, where y_QLij indicates the coupling between the i-generation lepton and the j-generation quark. For the various setups considered, see Ref. [32].

FIGURE 3 Exclusion reach obtained from the precision measurements of the Higgs boson couplings and from direct searches for new states. Left: Sensitivity in the plane sin²γ − m_S, where γ and m_S are the mixing angle and the mass of the singlet mixing with the SM Higgs boson, respectively. Right: Sensitivity in the plane tan β − m_A, where tan β is the ratio of the vacuum expectation values of the two Higgs doublets and m_A is the mass of the CP-odd scalar Higgs boson, in a type-II two-Higgs-doublet model [31].
FIGURE 5 Exclusion reach at 95% CL for Higgsino-like charginos and next-to-lightest neutralinos with equal mass m(NLSP) as a function of the mass difference Δm between the NLSP and the LSP. Exclusion reaches using monojet searches at pp and ep colliders are also superimposed [31].

FIGURE 7 Left: Exclusion reach at 95% CL for axial-vector simplified models at future colliders assuming a DM mass of M_DM = 1 GeV. Right: Results from searches for invisible decays of the Higgs boson, assuming a Majorana-type DM, compared to constraints of current and future direct detection experiments on the spin-independent WIMP-nucleon scattering cross section [31]. The region where the irreducible background from neutrinos may hide a DM signal, usually labeled as the "neutrino floor," is not reported. For further reading, a recent paper on the subject is given in Ref. [50].

FIGURE 9 Exclusion reach in the m_S − sin²θ plane at various experiments. The symbols S and ϕ are used interchangeably to identify the singlet. Exclusion limits are computed at 95% CL in the case of FCC-ee, FCC-hh, ILC, CLIC, and CepC, while all other limits are computed at 90% CL [31].

FIGURE 10 Exclusion reach for axion-like particle portal models in the m_a − g_aγγ plane at various experiments. Exclusion limits are computed at 90% CL, except for the FCC-eh, computed at 95% CL, the FCC-ee, reporting the observation of four signal events, and the FCC-hh, reporting that of 100 signal events [31].

FIGURE 11 Left: Exclusion reach for minimal heavy lepton portal models assuming electron-coupling dominance in the |Θ|² − m_N plane at various experiments. Exclusion limits are computed at 90% CL [31]. Right: Comparison of exclusion reaches at 90% CL from searches at the HL-LHC, FCC-hh, and FCC-eh and precision measurements at the FCC-ee [57].
v3-fos-license
2018-04-03T01:02:00.278Z
2017-11-07T00:00:00.000
25067636
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/ecam/2017/7601486.pdf", "pdf_hash": "a364b27eb7d902fc2704b5162d7fcdac4b974bb4", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42629", "s2fieldsofstudy": [ "Medicine" ], "sha1": "0e777befb4308724bd8823fb8a625a3998c4f5a1", "year": 2017 }
pes2o/s2orc
Optimized-SopungSunkiwon, a Herbal Formula, Attenuates Aβ Oligomer-Induced Neurotoxicity in Alzheimer's Disease Models

Alzheimer's disease (AD), the most common form of dementia, is an age-related neurodegenerative disease that is characterized by memory dysfunction, neuronal cell damage, and neuroinflammation. It is believed that AD-related pathology is mostly due to the overproduction of Aβ, especially the oligomeric form (AβO), in the brain. Evidence of the effects of multifunctional medicinal herbs in the treatment of AD has been steadily increasing. Optimized-SopungSunkiwon (OSS), a multiherbal formulation that is composed of six medicinal herbs derived from SopungSunkiwon, is a traditional medicine that is prescribed for neurodegenerative disorders in elderly patients. We previously reported that OSS showed an antiamnesic and memory-enhancing effect in mice, but it is unknown whether OSS has a protective effect against AβO neurotoxicity. In this study, we investigated the effects of OSS in AD models induced by AβO in vitro and in vivo. We found that OSS protected neuronal cells and inhibited the generation of nitric oxide and reactive oxygen species against AβO toxicity in vitro. These results were confirmed by in vivo data showing that oral administration of OSS for 14 days attenuated memory impairments and neuronal cell death by modulating gliosis, glutathione depletion, and synaptic damage induced by AβO in the mouse hippocampus.

Introduction

Alzheimer's disease (AD) is characterized by progressive memory and learning disorders coupled with severe neuronal degeneration [1]. Although the exact mechanisms of AD pathogenesis remain to be established, it is widely known that amyloid-β (Aβ) deposits play a key role in the disease [2]. Among the different forms of Aβ, the oligomeric form (AβO) is thought to be primarily related to the pathogenesis of AD because of its neurotoxicity, which impairs functional synaptic plasticity and induces memory loss by inhibiting hippocampal long-term potentiation (LTP) [3][4][5]. AβO has also been implicated in triggering neuronal cell death by activating glial cells and generating reactive oxygen species (ROS) in AD brains [6][7][8]. These characteristics indicate that AβO-induced experimental models, which show various pathological features of AD, may be useful.

The paradigm of drug discovery for neurodegenerative diseases is currently diverging from a single-target to a multitarget approach, because the effects of single-target drugs are too limited to allow for effective treatment of complex neurodegenerative diseases such as AD [9]. Recent studies have provided considerable evidence showing that the multimodal effects of several herbal extracts or herbal formulations are highly effective in the treatment of AD [10,11]. For example, EGb761, a standardized extract of Ginkgo biloba leaves, inhibits Aβ-induced ROS accumulation, neuronal damage, and formation of Aβ fibrils [12][13][14]. B401, a herbal formulation that is famous in traditional Chinese medicine and is widely used to treat brain diseases, attenuates glutamate-induced neuronal cell death in SH-SY5Y cells and cognitive dysfunction in triple transgenic AD mice by reducing AD-related pathological proteins including Aβ and tau [15]. Therefore, traditional herbal medicines, which have multitarget and multipotent effects, are emerging as potential treatment options for AD.
Optimized-SopungSunkiwon (OSS) is traditionally prescribed to treat senile constipation, and it has been reported that it also works effectively against hyperglycemia, hyperlipidemia, and diabetic nephropathy [16,17]. OSS consists of the following six medicinal herbs: Bombyx mori L., Plantago asiatica L., Rheum palmatum L., Poria cocos Wolf, Gardenia jasminoides Ellis, and Cuscuta chinensis Lam. A previous study showed that Bombycis excrementum, the herb that is present in the largest proportion in the composition of OSS, protects hippocampal neurons and ameliorates memory impairment in mice in which AD-like pathological features are induced by intrahippocampal injection of AβO1-42 [18]. Moreover, we previously confirmed that OSS treatment results in memory-enhancing activity as well as recovery from scopolamine-induced memory loss via the facilitation of acetylcholine release and regulation of synaptic proteins in mice [19]. However, the effect of OSS against Aβ neurotoxicity is yet to be investigated. In this study, we examined whether OSS displays neuroprotective effects against cognitive deficits, neuronal cell death, neuroinflammation, and synaptic loss in AβO1-42-induced AD models in vitro and in vivo.

Preparation of OSS Extract. OSS was prepared as previously described [19]. Briefly, OSS was made from a mixture of the following six herbs: Bombycis excrementum, Plantaginis Semen, Rhei Rhizoma, Gardenia Fructus, Poria, and Cuscutae Semen (1.5 : 1.5 : 0.5 : 1 : 1 : 1) obtained from the Kyongdong local market (Seoul, Korea). Each herb mixture (400 g) was extracted three times with sonication in distilled water for 2 h. Following filtration, the solution was evaporated in a vacuum and lyophilized (yield: 1.925%). The powder was kept at 4°C before use. This extract was previously standardized by analysis of its sennoside A, crocin, and geniposide contents [19].

DPPH Radical Scavenging Activity Assay. Various concentrations of OSS were mixed with 0.20 mM DPPH ethanolic solution (1 : 1). After incubation in the dark at room temperature for 30 min, the absorbance of the mixture was determined at 517 nm using a spectrophotometer. The antioxidant activity of OSS was also expressed as the half-maximal inhibitory concentration (IC50), defined as the concentration of OSS required to scavenge 50% of DPPH radicals. IC50 values were estimated by nonlinear regression. DPPH radical scavenging activity (%) = {control − (sample − blank)}/control × 100.

ABTS Cation Scavenging Activity Assay. 7.40 mM ABTS solution was added to 2.60 mM potassium phosphate 1 day before starting the experiment, in the dark. Various concentrations of OSS were mixed with the 7.40 mM ABTS solution and 2.60 mM potassium phosphate. After incubation at room temperature for 5 min, the absorbance of the mixture was determined at 732 nm using a spectrophotometer. The antioxidant activity of OSS was also expressed as IC50 values, which were estimated by nonlinear regression. ABTS cation scavenging activity (%) = (control − sample)/control × 100.
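For readers who want to reproduce the IC50 estimation described in the two assay sections above, the short sketch below shows one way to compute the scavenging percentages and fit IC50 by nonlinear regression. It is illustrative only: the concentrations and absorbance values are placeholders, and the Hill-type dose-response model is an assumption, since the paper does not specify the regression function used.

```python
import numpy as np
from scipy.optimize import curve_fit

def scavenging_pct(control, sample, blank=0.0):
    """DPPH-style scavenging (%): {control - (sample - blank)} / control * 100."""
    return (control - (sample - blank)) / control * 100.0

def hill_curve(conc, top, ic50, hill):
    """Hill-type dose-response curve (assumed model); equals top/2 at conc = ic50."""
    return top * conc**hill / (ic50**hill + conc**hill)

# Placeholder data: OSS concentrations (ug/ml) and absorbances measured at 517 nm
conc = np.array([1, 5, 10, 50, 100, 500], dtype=float)
abs_control = 0.80                                   # DPPH + vehicle
abs_blank = 0.05                                     # sample without DPPH
abs_sample = np.array([0.78, 0.70, 0.60, 0.40, 0.25, 0.12])

activity = scavenging_pct(abs_control, abs_sample, abs_blank)

# Fit the dose-response curve and read off IC50 (concentration giving 50% scavenging)
popt, _ = curve_fit(hill_curve, conc, activity, p0=[100.0, 50.0, 1.0], maxfev=10000)
print(f"Estimated IC50: {popt[1]:.1f} ug/ml")
```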
2.6. Cell Culture and Treatment. Rat pheochromocytoma PC12 cells were maintained in RPMI supplemented with 5% heat-inactivated FBS, 10% HS, and 1% P/S in an atmosphere of 5% CO2 at 37°C. Mouse BV-2 microglial cells were maintained in DMEM supplemented with 10% heat-inactivated FBS and 1% P/S under the same conditions. All experiments were carried out 12 h after PC12 and BV-2 cells were seeded in 96-well plates at a density of 2.0 × 10⁵ cells/ml. After the cells were about 70% confluent, various concentrations (0.1-100 μg/ml) of OSS in FBS-free media were added to the cells for 24 h at 37°C, with or without 1 μM AβO1-42. An equal volume of vehicle was administered to each of the control and toxin groups.

Measurement of Cell Viability. PC12 cells were seeded on 96-well plates and were treated with OSS at doses of 0.1-100 μg/ml for 24 h, or pretreated with OSS for 1 h and then stimulated with 1 μM AβO1-42 for 23 additional hours (pretreatment), or 1 μM AβO1-42 was added for 1 h before treatment with OSS for 23 additional hours (posttreatment). After the treatment, supernatants were removed, and 1 mg/ml of tetrazolium dye (MTT) was added to the cells for 3 h. The MTT medium was carefully removed from the wells, and the MTT formazan dye was eluted using dimethyl sulfoxide (DMSO). Absorbance was measured at a wavelength of 570 nm using a spectrophotometer (Versamax microplate reader, Molecular Device; Sunnyvale, CA, USA). Data were expressed as percentages of the values obtained for the controls.

Measurement of Extracellular NO. The accumulated level of NO in culture supernatants was measured using a colorimetric reaction with Griess reagent, following a slightly modified variant of previously described methods [20]. The supernatants (100 μl) were transferred to a separate plate and added to 100 μl of Griess reagent in the dark for 10 min at room temperature. Absorbance at 550 nm was measured. For each experiment, freshly prepared, serially diluted sodium nitrite was used as a standard, in parallel with the culture supernatants.

2.9. Measurement of Intracellular ROS. Intracellular ROS generation was measured with DCFH-DA fluorescence dye, using a slightly modified version of previously described methods [20]. DCFH-DA enters cells passively and is converted into nonfluorescent DCFH, which reacts with ROS to form the fluorescent product dichlorofluorescein (DCF). Cells were seeded onto coverslips in 24-well plates and treated with OSS at 0.1, 1, and 10 μg/ml for 1 h. Then, they were stimulated with 1 μM AβO1-42 and incubated for an additional 30 min. The cells were incubated with 25 μM DCFH-DA for 30 min. The fluorescence intensity was determined at 485 nm excitation and 535 nm emission, using a fluorescence microplate reader (SpectraMax Gemini EM; Molecular Device, Sunnyvale, CA, USA). Representative images were obtained using a fluorescence microscope (Olympus Microscope System BX51; Olympus, Tokyo, Japan).

Measurement of Total Glutathione. The levels of total glutathione (GSH) were measured using the Total Glutathione Quantification kit (Dojindo Molecular Tech., Tokyo, Japan) according to the instruction manual and a previously described method [21]. Briefly, hippocampal tissues were lysed and treated with 5% 5-sulfosalicylic acid. A coenzyme working solution, buffer solution, and enzyme working solution were added to each well at 37°C for 5 min. Then, a GSH standard solution, sample solution, and substrate working solution were added for 10 min each. Absorbance was measured using a spectrophotometer at a wavelength of 405 nm, and concentrations of GSH were determined in the sample solution using a GSH standard curve.

Animals and Surgery Procedure. Male ICR mice (8 weeks, 27-30 g) were purchased from Daehan Biolink Co. Ltd. (Eumseong, Korea).
Animals were housed in cages of 5 or 6, had free access to water and food, and were maintained under a constant temperature (23 ± 1°C), humidity (60 ± 10%), and a 12 h light/dark cycle. Animal treatment and maintenance were carried out in accordance with the Principle of Laboratory Animal Care (NIH publication number 85-23, revised 1985) and the Animal Care and Use Guidelines of Kyung Hee University, Seoul, Korea. Stereotaxic injections of AβO1-42 into mouse hippocampi were performed as previously described [18,22]. In brief, mice were anesthetized and mounted in a stereotaxic apparatus (myNeuroLab, St. Louis, MO, USA). Each mouse was unilaterally injected (at a rate of 0.5 μl/min) with 3 μl of AβO1-42 (10 μM) into the granule cell layer (GCL) of the hippocampus (coordinates with respect to bregma in mm: AP −2.0, ML 1.5, DV 2.0), according to a stereotaxic atlas of the mouse brain [23]. Sham-operated mice were injected with the same volume of saline alone. The accuracy of stereotaxic injection to the targeted region was monitored in all animals by examination of the needle tract within brain sections. In all groups, saline and OSS solutions were administered intraorally. OSS dissolved in saline was administered once per day for 14 days (5 days before surgery and for 9 days after surgery).

2.13. Step-through Passive Avoidance Test. The step-through passive avoidance test (PAT) was performed according to a method described previously [18]. A learning and memory test was performed using a two-compartment step-through passive avoidance test apparatus. The box was divided into bright and dark compartments (21 × 21 × 21 cm³ each) by a guillotine door. The bright compartment contained a 50 W electric lamp, and the floor of the dark compartment was composed of 2 mm stainless steel rods spaced 1 cm apart. Mice were treated with either OSS or vehicle 1 h before the acquisition trial and were initially placed in the bright compartment for the acquisition trial. The door between the two compartments was opened 10 s later. When the hind legs of the mice entered the dark chamber, the guillotine door was closed and an electrical foot shock (0.6 mA) was delivered through the grid floor for 3 s. The mice were again placed in the bright chamber for the retention trial, which was conducted 24 h after the acquisition trial. The time taken for a mouse to enter the dark chamber after the door was opened was defined as the latency time. This was recorded for latencies of up to 300 s.

Novel Object Recognition Test. The novel object recognition test (NORT) was performed according to a method described previously [18]. The experiments were carried out in a grey open-field box (45 × 45 × 50 cm³). Prior to the test, mice were habituated to the test box for 5 min without the presence of objects. After the habituation period, mice were placed into the test box containing two identical objects and were allowed to explore for 3 min. The objects used in this study were wooden blocks of the same size but different shape. The time spent by the animal exploring each object was measured (defined as the training session). Twenty-four hours after the training session, mice were allowed to explore the objects in the test box for 3 min, during which the familiar object used in the previous training session was placed with a novel object. The time that the animals spent exploring the novel and the familiar objects was recorded (defined as the test session).
Brain Tissue Preparation. At 24 h after the memory tests, hippocampal tissue was dissected from the brains of 3 mice from each group in order to measure total glutathione levels. The remaining mice were transcardially perfused with 0.05 M phosphate-buffered saline (PBS) and then fixed with cold 4% PFA in 0.1 M phosphate buffer for cresyl violet staining and immunohistochemistry (n = 5 per group). The perfused brains were removed whole and postfixed overnight at 4°C in 0.1 M phosphate buffer containing 4% PFA. The brains were then immersed in a solution containing 30% sucrose in 0.05 M PBS for cryoprotection. Coronal sections (30 μm) were serially cut using a freezing microtome (Leica, Nussloch, Germany) and stored in cryoprotectant (25% ethylene glycol, 25% glycerol, 0.05 M phosphate buffer) at 4°C until use in immunohistochemistry.

Cresyl Violet Staining and Immunohistochemistry. For the histological assessment of cell loss, free-floating sections of mouse brains were processed for cresyl violet staining and immunohistochemistry as described above, following a previously used method [18]. For cresyl violet staining, the sections were stained with 0.5% cresyl violet, mounted onto gelatin-coated slides, dehydrated through graded alcohols (70%, 80%, 90%, and 100%), placed in xylene, and coverslipped with Histomount medium. For immunohistochemistry, brain sections were briefly rinsed in PBS and treated with 1% hydrogen peroxide for 15 min. The sections were incubated with a rabbit anti-GFAP antibody (1:3000 dilution), a rat anti-Mac-1 antibody (1:1000 dilution), a mouse anti-SYN antibody (1:200 dilution), or a rabbit anti-PSD-95 antibody (1:500 dilution) overnight at 4°C in the presence of 0.3% Triton X-100 and NGS or NRS. After rinsing in PBS, the sections were incubated with biotinylated anti-rabbit IgG, anti-rat IgG, or anti-goat IgG (1:200 dilution) for 90 min and then with ABC reagent (1:100 dilution) for 1 h at room temperature. Peroxidase activity was visualized by incubating the sections with DAB in 0.05 M Tris-buffered saline (pH 7.6). After several rinses with PBS, the sections were mounted onto gelatin-coated slides, dehydrated, and coverslipped with Histomount medium. The optical densities of cresyl violet-, GFAP-, Mac-1-, SYN-, and PSD-95-positive cells in the dentate gyrus (DG) or CA3 region of the hippocampus were analyzed using ImageJ software (Bethesda, MD, USA). Images were taken at 400x magnification using an optical light microscope (Olympus Microscope System BX51; Olympus, Tokyo, Japan) equipped with a 20x objective lens. Data are presented as percentages of the values obtained for the sham group.

2.17. Statistical Analysis. All statistical parameters were calculated using GraphPad Prism 5.0 software. Values are expressed as the mean ± standard error of the mean (SEM). Results were analyzed by one-way analysis of variance (ANOVA) followed by the Newman-Keuls multiple comparison post hoc test. Differences with a p value lower than 0.05 were considered statistically significant.
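The analysis above is carried out in GraphPad Prism. As a rough, hypothetical illustration of the same workflow in open tooling (this is not the authors' code), the sketch below runs a one-way ANOVA with SciPy and, because the Newman-Keuls test is not readily available there, uses Tukey's HSD from statsmodels as a stand-in post hoc comparison. Group names and values are invented.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical viability values (% of control) for three groups
sham = np.array([99.1, 101.3, 98.7, 100.9])
abeta = np.array([66.2, 70.1, 68.4, 67.9])           # AbetaO(1-42) only
abeta_oss = np.array([81.5, 84.0, 79.8, 83.2])       # AbetaO(1-42) + OSS

# One-way ANOVA across the three groups
f_stat, p_val = stats.f_oneway(sham, abeta, abeta_oss)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

# Post hoc pairwise comparisons (Tukey HSD used here in place of Newman-Keuls)
values = np.concatenate([sham, abeta, abeta_oss])
groups = (["sham"] * len(sham) + ["AbetaO"] * len(abeta)
          + ["AbetaO+OSS"] * len(abeta_oss))
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```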
Effect of OSS against AβO1-42-Induced Neurotoxicity In Vitro. It has been reported that AβO1-42 induces PC12 cell death by inducing apoptosis [24]. In this study, we investigated whether OSS protects against AβO1-42-induced cell death in vitro. Treatment with OSS alone at 0.1–100 μg/ml for 24 h produced no significant difference in cell viability compared to the control group (Figure 1(a)). Pretreatment with OSS at 10 and 100 μg/ml significantly inhibited the reduction of cell viability (82.40 ± 3.02% and 88.40 ± 3.60%, resp.) compared with that of the 1 μM AβO1-42-only treatment group (68.20 ± 2.16%; Figure 1(b)). Posttreatment with OSS at 10 μg/ml also significantly ameliorated cell viability (67.73 ± 2.59%) compared with that of the AβO1-42-only treatment group (55.10 ± 0.96%; Figure 1(c)).

Effect of OSS on AβO1-42-Induced NO Generation In Vitro. NO plays a key role in a variety of inflammatory conditions, being released in response to pathological stimuli [25]. Excessive concentrations of NO also lead to the formation of oxidative stress cascades, thereby contributing to neurotoxic cascades such as Aβ-mediated neurodegeneration [26]. To examine the anti-inflammatory effects of OSS against AβO1-42, we evaluated whether OSS inhibits NO production in microglial cells activated by AβO1-42. Incubation with 1 μM AβO1-42 increased NO production to approximately 10 μM. Compared to the group treated with AβO1-42 only, pretreatment with OSS at 1 and 10 μg/ml significantly inhibited NO generation (6.18 ± 0.46 μM and 5.98 ± 0.64 μM, resp.) (Figure 2(a)). After posttreatment with OSS, NO generation was lower than in the AβO1-42-only treatment group, but the difference was not significant (Figure 2(b)).

Antioxidant Effects of OSS In Vitro and In Vivo. To evaluate the antioxidant potential of OSS, we performed DPPH free radical and ABTS cation scavenging assays. OSS showed higher scavenging activity than an extract of Scutellaria baicalensis Georgi (SBE), used as a positive control, in both the DPPH and ABTS assays (Table 1). This trend is in accordance with the inhibitory effects of OSS against AβO1-42-induced ROS generation. In this study, pretreatment with OSS at 10 μg/ml significantly inhibited ROS generation (144.53 ± 11.44%) when compared to the values obtained with the AβO1-42-only treatment group (168.77 ± 14.53%; Figure 3(a)). Posttreatment with OSS at 10 μg/ml also led to significantly lower ROS generation (109.61 ± 8.25%) after AβO1-42 insult compared to the AβO1-42-only treatment group (139.74 ± 7.20%; Figure 3(b)). Moreover, we investigated the effects of OSS on the induction of GSH as an antioxidant in the mouse hippocampus. The levels of GSH, the most prevalent antioxidant in the brain, consistently decrease with increasing oxidative stress in AD [27,28].

Effect of OSS on Memory Impairment Induced by Intrahippocampal AβO1-42 Injection in Mice. To investigate whether OSS ameliorated memory impairment in mice receiving an intrahippocampal injection of AβO1-42, the NORT and PAT were performed. In the NORT, the AβO1-42-injected mice spent similar amounts of time (50.90 ± 2.29%) exploring the novel object and the familiar object during the test session. In contrast, sham-operated mice spent more time exploring the novel object (70.28 ± 2.30%). Treatment with OSS at 50, 100, and 200 mg/kg/day significantly improved the AβO1-42-induced cognitive deficits in this test (61.76 ± 0.81%, 63.75 ± 1.62%, and 58.34 ± 0.60%, resp.; Figure 4(a)).
No significant differences in novel object recognition time were found between any of the tested groups during the training session. In the PAT, the mean latency time of the AβO1-42-injected, vehicle-treated group (85.13 ± 6.04 s) was significantly shorter than that of the sham-operated group (229.42 ± 9.72 s). OSS administered at 50, 100, and 200 mg/kg/day significantly reversed the effect of the AβO1-42 injection in this test (140.27 ± 11.16 s, 186.08 ± 14.08 s, and 140.51 ± 7.48 s, resp.; Figure 4(b)). No differences in latency time were observed between any of the tested groups during the acquisition trial.

Effect of OSS on AβO1-42-Triggered Neuronal Atrophy in the Mouse Hippocampus. Brain atrophy caused by neuronal death is a pathological hallmark of AD in humans, and hippocampal atrophy in particular is closely related to memory dysfunction [29,30]. To further understand the mechanisms underlying the recovery of memory function, the inhibition of AβO1-42-triggered hippocampal neuronal death by OSS was investigated using cresyl violet staining. The AβO1-42-induced reductions in neuronal density in the granule cell layer (GCL) of the DG (89.23 ± 1.47%) and in the CA3 region (60.25 ± 2.45%) were significant compared to the sham-operated group. This loss was significantly inhibited by OSS treatment at 50, 100, and 200 mg/kg/day in the CA3 region of the mouse hippocampus (Figure 5).

Effects of OSS on AβO1-42-Induced Astrocyte and Microglia Activation in the Mouse Hippocampus. It is known that the activation of astrocytes and microglia under neuroinflammatory conditions plays an important role in the destruction of neurons and leads to synaptic dysfunction, thereby resulting in memory deficits [31]. The intensity of GFAP, a specific marker for astrocytes, in the hilus region of the DG was significantly increased in the AβO1-42-injected group (190.37 ± 5.10%) compared with the sham-operated group. This intensity was significantly reduced after OSS treatment at 100 and 200 mg/kg/day (160.89 ± 4.62% and 160.27 ± 6.30%, resp.; Figure 6(a)). The intensity of Mac-1, a specific marker for microglia, in the hilus region of the DG was also nearly doubled in the AβO1-42-injected group (188.56 ± 9.92%) compared with the sham-operated group. In contrast, the Mac-1-positive intensity of AβO1-42-injected mice treated with OSS at 100 and 200 mg/kg/day was significantly decreased (159.25 ± 2.76% and 156.56 ± 4.44%, resp.; Figure 6(b)). Taken together, these data show that OSS treatment effectively inhibits the hyperactivation of astrocytes and microglia triggered by AβO1-42 toxicity.

Effects of OSS on Synaptic Protein Depletion in the Mouse Hippocampus. Growing evidence shows that AβO induce the depletion of hippocampal synaptic proteins such as SYN and PSD-95, resulting in memory dysfunction in AD [32,33]. As shown in Figure 7, the immunoreactivity of both SYN and PSD-95 in the hippocampal CA3 region was markedly decreased in the AβO1-42-injected group (75.21 ± 2.66% and 75.95 ± 1.27%, resp.) compared with the sham-operated group. This trend was significantly reversed by OSS treatment at 50, 100, and 200 mg/kg/day for both SYN (85.10 ± 2.27%, 88.10 ± 1.96%, and 92.65 ± 2.70%, resp.) and PSD-95 (87.18 ± 2.12%, 92.32 ± 0.93%, and 92.74 ± 1.38%, resp.). These data suggest that OSS restores AβO1-42-induced synaptic disruption, which is linked to the amelioration of memory impairment.
Discussion

AβO, the most toxic form of Aβ, is considered to play a central role in AD pathogenesis, rather than Aβ monomers or fibrils [34,35]. He et al. demonstrated that memory impairment and hippocampal CA1 neuronal damage were more marked in AβO1-42-infused rats than in rats in which AD-like pathology was induced by Aβ1-42 fibrils, consistent with the observation that AβO1-42 stimulated proinflammatory factors more strongly than Aβ1-42 fibrils [36]. Our present data show that systemic treatment with OSS ameliorates memory dysfunction by blocking AβO1-42-induced hippocampal cell damage, hippocampal GSH depletion, glial hyperactivation, and synaptic disruption in a mouse model of AD. We also confirmed that OSS directly inhibited AβO1-42-induced cell degeneration, as well as the overproduction of NO and ROS, in vitro.

Oxidative stress is an important pathological factor in AD [37]. Several studies indicate that the Aβ1-42 peptide is at the center of oxidative damage, as it is an indicator of ROS generation in AD brains [38]. Additionally, increased Aβ-mediated ROS generation can damage the endogenous antioxidant GSH and enzymes such as superoxide dismutase, GSH peroxidase, and catalase, thus promoting the formation of Aβ deposits in the brain [39]. Aβ deposits stimulate the activation of nearby microglia and astrocytes, generating an inflammatory response through the release of proinflammatory mediators [40]. It has been suggested that activated glia are involved in neuronal degeneration because they produce potent toxic molecules including NO and cytokines [41,42]. The present study demonstrates that OSS treatment inhibits ROS generation in PC12 cells and restores the GSH content depleted by AβO1-42 in hippocampal tissue. The normalization of intracellular ROS levels and of the endogenous antioxidant system by OSS also contributes to the deactivation of glial cells in the hippocampus and to the reduction of NO production in BV-2 microglial cells.

The hippocampal synapse network originates from axons of the CA3 pyramidal region, which connect to almost all regions of the hippocampus [43,44]. Furthermore, CA3 synapses modulate homeostatic plasticity connected to hippocampal neurons [45]. Thus, a marked decrease of synaptic density in the hippocampal CA3 region is highly relevant to synaptic disruption, which is closely linked to memory decline in the pathogenesis of AD [46]. Soluble AβO impair hippocampal LTP and can also induce memory dysfunction [3,32]. In this study, we demonstrated that OSS treatment rescues synaptic damage in the hippocampal CA3 region, based on the results obtained using the SYN and PSD-95 markers, which are specific pre- and postsynaptic proteins, respectively [47,48]. In this context, it can be hypothesized that the restoration of memory function after OSS treatment is mediated by facilitated hippocampal synapses.

Other approaches to investigating potential therapeutics for AD indicate that the design of multitarget drugs is increasingly necessary, because most single-target candidates have been unsuccessful in the treatment of AD given its complex and multifaceted pathogenesis [49,50]. This paradigm of drug discovery for AD is in accordance with the multifunctional actions of medicinal herbs. The neuroprotective effects of OSS observed in this study may be attributable to the individual herbs that compose OSS.
Water extract from silkworm feces (Bombycis excrementum), for example, was shown in our previous report to protect hippocampal neurons and prevent memory impairment induced by AβO1-42 [18]. Rhaponticin and rhapontigenin, isolated from rhubarb roots (Rhei Rhizoma), significantly inhibit Aβ1-42-induced apoptotic mechanisms by regulating the Bax/Bcl-2 proapoptotic genes in human neuroblastoma cells [51]. Additionally, protective effects of Poria water extract against Aβ1-42-mediated cell death in PC12 cells have also been reported [52]. Furthermore, geniposide, one of the active compounds of Gardeniae fructus, has been shown to exhibit multifunctional neuroprotective effects by blocking receptor for advanced glycation end products-mediated signaling in APP/PS1 transgenic mice and BV-2 microglial cells [53,54]. These constituents of OSS may have contributed to its neuroprotective effects against AβO1-42 neurotoxicity.

Conclusion

In summary, OSS treatment alleviates AβO1-42-induced damage to memory function and hippocampal neurons. This effect is likely mediated by the inhibition of oxidative stress, neuroinflammation, and the decline in hippocampal synaptic density. Further detailed investigation is required to reveal the underlying mechanisms by which OSS treatment regulates neuroinflammation and hippocampal neuronal and synaptic damage. Taken together, our data suggest that OSS may be a potential multitargeted candidate for AD treatment.
Regulation of freezing tolerance and flowering in temperate cereals: the VRN-1 connection

In winter wheat (Triticum spp.) and barley (Hordeum vulgare) varieties, long exposures to nonfreezing cold temperatures accelerate flowering time (vernalization) and improve freezing tolerance (cold acclimation). However, when plants initiate their reproductive development, freezing tolerance decreases, suggesting a connection between the two processes. To better understand this connection, we used two diploid wheat (Triticum monococcum) mutants, maintained vegetative phase (mvp), that carry deletions encompassing VRN-1, the major vernalization gene in temperate cereals. Homozygous mvp/mvp plants never flower, whereas plants carrying at least one functional VRN-1 copy (Mvp/−) exhibit normal flowering and high transcript levels of VRN-1 under long days. The Mvp/− plants showed reduced freezing tolerance and reduced transcript levels of several cold-induced C-REPEAT BINDING FACTOR transcription factors and COLD REGULATED (COR) genes relative to the mvp/mvp plants. Diploid wheat accessions with mutations in the VRN-1 promoter, resulting in high transcript levels under both long and short days, showed a significant down-regulation of COR14b under long days but not under short days. Taken together, these studies suggest that VRN-1 is required for the initiation of the regulatory cascade that down-regulates the cold acclimation pathway but that additional genes regulated by long days are required for the down-regulation of the COR genes. In addition, our results show that allelic variation in VRN-1 is sufficient to determine differences in freezing tolerance, suggesting that quantitative trait loci for freezing tolerance previously mapped on this chromosome region are likely a pleiotropic effect of VRN-1 rather than the effect of a separate closely linked locus (FROST RESISTANCE-1), as proposed in early freezing tolerance studies.

Exposure to low nonfreezing temperatures, a process known as cold acclimation, increases a plant's freezing tolerance (Thomashow, 1990, 1999). Freezing-tolerant plants that have not been cold acclimated are generally killed at approximately −3°C to −5°C, while cold-acclimated plants can survive much colder freezing temperatures. In addition, increasing the length of the cold acclimation period can, up to a point, also increase freezing tolerance. These two observations suggest that cold acclimation is an active process.

Freezing tolerance is essential for fall-planted temperate cereals (wheat [Triticum spp.], barley [Hordeum vulgare], and rye [Secale cereale]) to survive freezing temperatures during the winter. In contrast, spring-sown genotypes do not require high levels of freezing tolerance, since they are not exposed to the freezing temperatures of winter. One feature that distinguishes winter and spring genotypes is the requirement of the former for a long period (several weeks) at cold temperature to accelerate the transition from the vegetative growth phase to the reproductive growth phase, a process called vernalization. Spring genotypes do not have a vernalization requirement and flower in the absence of the extended low-temperature treatment (for review, see Trevaskis et al., 2007; Distelfeld et al., 2009). The requirement for exposures to nonfreezing cold temperatures is common to both cold acclimation and vernalization, suggesting a potential connection between these two processes.
Winter genotypes maintained under continuous cold, after an initial increase in freezing tolerance, exhibit a progressive decrease in their cold acclimation ability (Fowler et al., 1996a(Fowler et al., , 1996bFowler and Limin, 2004). This progressive decrease inversely parallels the fulfillment of the vernalization requirement. A clear decrease in freezing tolerance occurs after the shoot apical meristem advances to the double ridge stage (Fowler et al., 1996a(Fowler et al., , 1999Limin and Fowler, 2006). These studies suggest that a regulatory component of freezing tolerance is linked to a developmental shift between the vegetative and reproductive stages. Limin and Fowler (2006) suggested that the main vernalization gene, VRN-1, which is induced during vernalization, plays an important role in the decrease of the ability to cold acclimate with development. Early genetic studies also revealed a correlation between growth habit and freezing tolerance; wheat genotypes having a spring growth habit were less freezing tolerant than genotypes having a winter growth habit (Hayes and Aamodt, 1927). Subsequent studies carried out using wheat chromosome substitution lines revealed that homeologous group 5 chromosomes, where VRN-1 is located, have the largest effect (Roberts, 1986). The first major locus affecting freezing tolerance and winter hardiness on homeologous group 5 was designated FROST RESISTANCE-1 (FR-1; Sutka and Snape, 1989). However, since FR-1 cosegregates with VRN-1 in most genetic studies, it is still not clear if FR-1 is an independent gene or just a pleiotropic effect of VRN-1 (Brule-Babel and Fowler, 1988;Sutka and Snape, 1989;Roberts, 1990;Hayes et al., 1993;Francia et al., 2004;Galiba et al., 2009). More recently, a second locus associated with natural variation in freezing tolerance in wheat and barley was mapped on the long arm of homeologous group 5. This locus, designated FR-2, is approximately 30 centimorgans proximal to VRN-1 and includes a cluster of 11 (or more) C-REPEAT BINDING FACTOR (CBF) genes (Vágú jfalvi et al., 2003;Francia et al., 2004Francia et al., , 2007Miller et al., 2006;Skinner et al., 2006;Knox et al., 2010). The FR-2 CBF gene cluster has surfaced as a major quantitative trait locus (QTL) affecting freezing tolerance in a number of wheat and barley mapping populations (Vágú jfalvi et al., 2003;Francia et al., 2004Francia et al., , 2007Båga et al., 2007). The role of the CBF genes in freezing tolerance has been studied in detail in Arabidopsis (Arabidopsis thaliana). The CBFs are transcriptional activators that promote the expression of genes whose upstream regulatory sequences harbor the CRT/DRE low-temperature cis-acting DNA regulatory element (Stockinger et al., 1997). Approximately 20% of the Arabidopsis genes whose expression is altered during cold acclimation are directly or indirectly controlled by the CBF transcription factors (Vogel et al., 2005). Direct targets of the CBFs in Arabidopsis include the robustly induced COLD REGULATED (COR) genes (Jaglo-Ottosen et al., 1998). Similar candidate CBF target genes in the cereals, which also harbor CRT/DRE motifs in their upstream regulatory region, include COR14b, DHN5, and DHN8 (Choi et al., 1999;Dal Bosco et al., 2003). 
Many of these COR genes are induced to higher levels in genotypes exhibiting greater freezing tolerance than in those having lesser freezing tolerance (Houde et al., 1992;Danyluk et al., 1994Danyluk et al., , 1998Crosatti et al., 1996;Fowler et al., 1996b;Limin et al., 1997;Grossi et al., 1998;NDong et al., 2002). The use of COR14b as an expression QTL to map loci affecting COR expression levels revealed two major loci, one of which is coincident with VRN-1 and the second one with FR-2 (Vágú jfalvi et al., 2000;Francia et al., 2004). Notably, genotypes carrying the vrn-1 allele for winter growth habit express certain CBF genes at higher levels than genotypes carrying the Vrn-1 allele for spring growth habit . Moreover, once the winter genotypes carrying the vrn-1 allele are vernalized, CBF transcript levels are dampened relative to levels detected in nonvernalized plants . This suggests that VRN-1 somehow acts to repress expression of the CBFs at FR-2 and in turn decrease freezing tolerance. The molecular isolation of VRN-1 revealed that this gene encodes a MADS box protein similar to the Arabidopsis meristem identity gene APETALA1 (AP1; Danyluk et al., 2003;Trevaskis et al., 2003;Yan et al., 2003). The characterization of VRN-1 alleles associated with winter and spring genotypes showed that the primary differences were insertions and deletions in regulatory regions located in the promoter and first intron (Yan et al., 2004;Fu et al., 2005;Pidal et al., 2009). Deletions in the VRN-1 promoter affecting a small region tentatively designated the "VRN box" (Pidal et al., 2009) or large deletions/insertions in the VRN-1 first intron are both associated with spring growth habit (Fu et al., 2005;Pidal et al., 2009). Genotypes with a winter growth habit (vrn-1 allele) show very low VRN-1 transcript levels until plants are vernalized. In contrast, spring genotypes (Vrn-1 allele) constitutively express VRN-1 to high levels. Flowering is initiated once VRN-1 transcripts reach a critical threshold level (Loukoianov et al., 2005). In addition to vernalization, photoperiod also plays a role in VRN-1 regulation. In photoperiod-sensitive genetic backgrounds, long-day photoperiods enhance VRN-1 transcript accumulation while short-day photoperiods delay transcript accumulation. In both wheat and barley, the delay in the transition to floral initiation in plants grown under short-day photoperiods is associated with increased freezing tolerance (Fowler et al., 2001;Limin and Fowler, 2006). One of the cis-elements responsible for the lack of VRN-1 expression under short days in diploid wheat (Triticum monococcum) is thought to reside within the VRN-1 promoter CArG motif, a binding site for MADS box transcription factors located downstream of the VRN box Pidal et al., 2009). Under short days, diploid wheat (T. monococcum) plants carrying deletions in this CArG motif (Vrn-1f and Vrn-1g alleles) show accumulation of VRN-1 transcripts and a slow transition of the shoot apical meristem to the reproductive stage, whereas plants with intact CArG motifs (e.g. Vrn-1h and vrn-1) show no VRN-1 expression in short days and remain in the vegetative phase . Under long days, all accessions with VRN-1 alleles for spring growth habit show accumulation of VRN-1 transcripts and a rapid initiation of the transition to the reproductive stage . T. monococcum mutants with deletions of the VRN-1 gene fail to flower, indicating that this gene is indispensable for the transition to the reproductive phase (Shitsukawa et al., 2007b). 
Two independently induced nitrogen-ion-beam mutants, designated maintained vegetative phase1 (mvp-1) and mvp-2, were generated in different T. monococcum genetic backgrounds. The deletions in these two mutants encompass the complete VRN-1 gene and several closely linked genes (Distelfeld and Dubcovsky, 2010). To investigate the role of VRN-1 in freezing tolerance, we made use of the mvp mutants and of natural T. monococcum accessions that differ in their ability to express VRN-1 under short days. We found that freezing tolerance and transcript levels of several CBF and COR genes were higher in the mvp mutants than in plants carrying at least one functional VRN-1 copy. However, the expression of VRN-1 under short days was not as effective as under long days in down-regulating COR14b transcription. Taken together, these results suggest that VRN-1 transcription is necessary but not sufficient to down-regulate the COR genes.

Effect of the mvp Mutations on Freezing Tolerance

The homozygous mvp-2 mutants (mvp-2/mvp-2) and the plants carrying at least one functional copy of VRN-1 (Mvp-2/−) were identified using a dominant molecular marker for VRN-1, as described in Supplemental Figure S1. Just before the freezing experiments, there were clear differences in apical development between plants from each group. The apices of the mvp-2/mvp-2 mutants were at the vegetative stage, whereas those of the Mvp-2/− plants were already at the double ridge stage (Supplemental Fig. S2). Significant differences in survival rates were detected between the mvp-2/mvp-2 and Mvp-2/− plants in controlled freezing experiments. Differences between the two genotypic classes were detected at both −9°C and −11°C freezing temperatures (Table I). In the group frozen to −9°C, none of the Mvp-2/− plants survived, whereas 87% of the mvp-2/mvp-2 mutants survived (Table I). In the group frozen to −11°C, none of the Mvp-2/− plants survived, whereas approximately half (46%) of the mvp-2/mvp-2 mutants survived (Table I). In a second freezing experiment performed under slightly different acclimation and freezing conditions (Supplemental Fig. S3), approximately 70% of the mvp-2/mvp-2 mutants survived −12°C freezing temperatures, whereas only 40% of the Mvp-2/− plants survived the same treatment (P = 0.009; Supplemental Fig. S3). In this second experiment, all plants from both genotypic classes were killed at −13°C (Supplemental Fig. S2).

To evaluate the effect of freezing on the functionality of PSII, the maximum quantum yield of PSII photochemistry was measured as the ratio of variable (Fv) to maximal (Fm) fluorescence in the dark-adapted state, Fv/Fm (Butler and Kitajima, 1975). Fv/Fm ratios taken 2 to 4 h after returning the plants to 20°C paralleled the survival results (high Fv/Fm values indicate low freezing damage). Homozygous mvp-2/mvp-2 mutant plants showed significantly higher Fv/Fm values than the Mvp-2/− plants (Table I; P < 0.0001), in agreement with the greater freezing tolerance of the mvp-2/mvp-2 plants. Along with the Fv/Fm measurements, samples of the crown and adjoining tissues were collected to measure relative conductivity. This measurement estimates the cellular electrolytes leached from freeze-damaged tissue as a proportion of the total cellular electrolytes; it is based on the principle that the greater the damage to cells from freezing injury, the greater the exosmosis of cellular electrolytes into a water solvent (Dexter, 1956).
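The two damage indices used here (relative conductivity and Fv/Fm) are simple ratios. The sketch below is not from the paper; it only illustrates, with hypothetical meter readings, how each index would typically be computed.

```python
def relative_conductivity(cond_after_freeze_uS, cond_after_kill_uS):
    """Electrolyte leakage index: conductivity of the bathing solution after the
    freeze treatment, expressed as a percentage of the total conductivity after
    the tissue is fully killed (e.g., autoclaved). Higher values indicate more
    membrane damage."""
    return 100.0 * cond_after_freeze_uS / cond_after_kill_uS

def fv_over_fm(f0, fm):
    """Maximum quantum yield of PSII from dark-adapted fluorescence:
    Fv/Fm = (Fm - F0) / Fm. Values near 0.8 indicate healthy leaf tissue."""
    return (fm - f0) / fm

# Hypothetical readings for one crown sample and one leaf
print(relative_conductivity(85.0, 410.0))  # about 20.7% leakage
print(fv_over_fm(300.0, 1500.0))           # 0.8
```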
The Mvp-2/− plants showed higher relative conductivity values than the homozygous mvp-2/mvp-2 mutants, both at −9°C (19% increase) and at −11°C (45% increase). However, the differences between genotypes were only marginally significant (Table I; P = 0.05), likely because of the limited number of mutant plants sacrificed for relative conductivity measurements in this first experiment (three mvp-2/mvp-2 and six Mvp-2/− plants per temperature). (Note to Table I: all Mvp-2/− plants failed to regrow, so there was no within-class variance for an ANOVA; the differences between mvp-2/mvp-2 and Mvp-2/− were nonetheless obvious.) A second experiment using leaves from nine plants per genotype-temperature combination confirmed the higher relative conductivity of the Mvp-2/− plants compared with the mvp-2/mvp-2 plants (76% average increase over the three temperatures; P = 0.003; Supplemental Fig. S3). Taken together, these three sets of data indicate that the presence of the VRN-1 gene in the Mvp-2/− lines is associated with a decrease in freezing tolerance relative to the mvp mutants.

Effect of the mvp Mutations on CBF Transcript Levels

Quantitative reverse transcription (qRT)-PCR was used to compare transcript levels of 11 CBF genes in 4-week-old Mvp-2/− and mvp-2/mvp-2 plants both before (20°C) and after 8 h of cold treatment at 4°C (Fig. 1). With the exception of CBF2, which showed very low levels of expression at both 20°C and 4°C, the other 10 CBF genes showed very low transcript levels at 20°C and were significantly up-regulated after 8 h of cold treatment (P < 0.01). Five CBF genes (CBF2, CBF4, CBF9, CBF12, and CBF17) showed significantly higher expression levels (P < 0.01) in the mvp-2/mvp-2 homozygous mutant plants than in the Mvp-2/− plants (Fig. 1). The same difference was marginally significant for CBF14 (P = 0.03) and not significant for the other CBF genes.

A second experiment was carried out using 8-week-old Mvp-1/− and mvp-1/mvp-1 plants and, as before, CBF transcript levels were measured after 8 h at 4°C (Supplemental Fig. S4). In this second experiment, no samples were collected at 20°C because of the negligible CBF transcript levels observed at this temperature in the first experiment. In the Mvp-1/− plants, the shoot apical meristems were between the double ridge and terminal spikelet stages, whereas in the mvp-1/mvp-1 mutants the shoot apical meristems were at the vegetative stage. As in the 4-week-old Mvp-2 plants used in the previous experiment, the older 8-week-old Mvp-1 plants showed significantly lower transcript levels of CBF2, CBF4, CBF9, CBF12, and CBF17 in the Mvp-1/− plants relative to the homozygous mvp-1/mvp-1 mutants after the cold treatment. Whereas CBF14 transcript level differences between Mvp-2/− and mvp-2/mvp-2 genotypes were marginally significant in the first experiment, in this second experiment the differences in CBF14 transcript levels between Mvp-1/− and mvp-1/mvp-1 were not

Figure 1. qRT-PCR analysis of transcript levels of the CBF genes present at the FR-2 locus relative to the ACTIN endogenous control. Samples were collected from leaves of 4-week-old mvp-2/mvp-2 and Mvp-2/− plants (20°C) and again 1 d later at the same time following an 8-h cold treatment at 4°C. Values on the y axis were normalized and calibrated using the 2^−ΔΔCT method (Livak and Schmittgen, 2001). The same calibrator was used for all genes, so scales are comparable across genes. Values are averages of eight biological replications ± SE.
The inset shows CBF14 transcript levels, which were significantly higher than the other genes at this locus. P values for the differences between mvp/mvp and Mvp/2 after the cold treatment were calculated using ANOVA and are indicated by asterisks: * P , 0.05, ** P , 0.01. Interactions between Freezing Tolerance and Flowering Plant Physiol. Vol. 153, 2010 significant. Curiously, transcript levels of CBF12 and CBF16 were much lower relative to the other genes in the 8-week-old Mvp-1 plants relative to the 4-week-old Mvp-2 plants. Quantitative PCR measurements of VRN-1 in both experiments revealed high levels of VRN-1 transcripts in the Mvp-1/2 and Mvp-2/2 plants, which were even higher than those of the highly expressed TRANSLA-TION ELONGATION FACTOR1 (TEF1) endogenous control gene in both cases. As expected, no VRN-1 transcripts were detected in the homozygous mvp/mvp mutants. Taken together, these results suggest that the presence of VRN-1 (or genes regulated by VRN-1) modulates the response of several CBF genes to cold. Effect of the mvp Mutations on COR Gene Transcript Levels Eight hours after transferring 8-week-old plants from 20°C to 4°C, COR14b transcripts were 2-fold higher (P , 0.05) in the homozygous mvp-1/mvp-1 plants than in the Mvp-1/2 plants (Supplemental Fig. S4). Two additional experiments, in which plants were exposed to longer periods of cold temperatures, were performed to further characterize the differences in COR14b transcript levels between mutants and nonmutants. In the first experiment, 4-week-old mvp-2/mvp-2 and Mvp-2/2 plants were transferred from room temperature to 4°C and kept at that temperature for 12 d. Leaf samples for RNA analysis were collected on the day prior to the cold treatment and 8 h, 32 h, 4 d, and 12 d after transferring the plants to 4°C. All samples were collected at 2 PM to avoid differences that might be caused by circadian effects. In this qRT-PCR experiment, transcript levels from both genotypes peaked at 32 h and then decayed slowly during the next 11 d. At each of the last three sampling points, the COR14b transcript levels were significantly higher (P , 0.001) in the mvp-2/mvp-2 homozygous plants than in Mvp-2/2 plants (Fig. 2). In the second experiment, steady-state transcript levels of COR14b and two additional COR genes (DHN5 and DHN8) were evaluated by RNA-blot analysis using a more gradual decrease in temperatures and longer exposure times to the inductive temperatures (19 d at 10°C followed by 12 d at 4°C, all under long days; Fig. 3). At the beginning of the cold induction, the Mvp-2/2 plants were already induced to flower and showed high levels of VRN-1 transcripts, whereas the mvp-2/mvp-2 plants were in the vegetative stage and, as expected from the homozygous deletion, showed no VRN-1 transcripts (Fig. 3). At all time points after the cold induction, the transcript levels of COR14b and DHN5 were higher in the mvp-2/mvp-2 plants (no VRN-1 transcripts) than in the Mvp-2/2 plants (high VRN-1 transcripts). The DHN8 gene did not show this alternate pattern between genotypes (Fig. 3). In mvp-2/mvp-2 homozygous mutants, COR14b and DHN5 levels remained high throughout the sampling time course (Fig. 3). Taken together, the expression data from these experiments showed that in plants with high VRN-1 transcript levels, several CBF and COR genes are down-regulated. 
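The qRT-PCR values in these figures are normalized with the 2^−ΔΔCT relative-quantification method of Livak and Schmittgen (2001). The snippet below is only an illustrative sketch of that calculation (it is not the authors' code), using invented Ct values for COR14b against the ACTIN endogenous control and a warm-grown calibrator sample.

```python
def rel_expression_ddct(ct_target, ct_reference, calib_ct_target, calib_ct_reference):
    """Relative expression by the 2^-(delta-delta-Ct) method
    (Livak and Schmittgen, 2001).
    delta-Ct = Ct(target) - Ct(reference gene);
    delta-delta-Ct = delta-Ct(sample) - delta-Ct(calibrator)."""
    d_ct_sample = ct_target - ct_reference
    d_ct_calib = calib_ct_target - calib_ct_reference
    return 2.0 ** -(d_ct_sample - d_ct_calib)

# Hypothetical Ct values: COR14b vs. ACTIN in a cold-treated sample,
# calibrated against a warm control sample
print(rel_expression_ddct(ct_target=22.1, ct_reference=18.0,
                          calib_ct_target=28.5, calib_ct_reference=18.2))
# = 2^-(4.1 - 10.3), roughly 73-fold higher than the calibrator
```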
To test if the down-regulation of these genes was a direct effect of the increase in VRN-1 transcript levels or a result of the transition to the reproductive phase, the relationship between VRN-1 and COR14b transcript levels was studied in T. monococcum lines that, under short days, differ in the expression of VRN-1 but show similar delays in the progression to the reproductive phase. Effect of VRN-1 Transcription on COR14b Transcript Levels under Short Days T. monococcum lines carrying a "wild-type" vrn-A1 allele and recessive vrn-A2 alleles have a spring growth habit and show no expression of VRN-1 under short days . However, T. monococcum lines carrying a 1-bp deletion in the VRN-1 promoter CArG box and an insertion in intron 1 (Vrn-1f allele), or a 34-bp deletion encompassing the complete CArG box (Vrn-1g allele), show high VRN-1 transcript levels under short days . These plants show a transition of the shoot apical meristem to the double ridge stage under short days, but further development of the spike is delayed until the plants are transferred to long days . Expression profiling of three independent accessions of T. monococcum lines carrying each of the three genotypes confirmed previously published results . Under short days, lines having the Vrn-1f or Vrn-1g allele showed high VRN-1 transcript levels, whereas those having the wild-type Figure 2. qRT-PCR transcript levels of COR14b relative to TEF1 endogenous control. Plants were 4 weeks old at the beginning of the experiment and were exposed to 4°C for 12 d. Values on the y axis were normalized and calibrated using the 2 2DDCT method (Livak and Schmittgen, 2001). Homozygous mvp-2/mvp-2 plants (null VRN-1) are indicated by black squares and lines, and Mvp-2/2 plants (one or two VRN-1 copies) are indicated by gray triangles and lines. Values are averages of eight biological replications in the untransformed scale 6 SE. P values were calculated using ANOVA of log (n + 1) -transformed values for each time point: ** P , 0.01. vrn-1 allele showed no VRN-1 transcripts (Fig. 4A). As expected, transcript levels of VRN-1 were higher at 6 weeks than at 2 weeks, and COR14b transcripts were high after cold temperatures and nearly absent in plants maintained at warm temperature (Fig. 4A). In the 6-week-old Vrn-1f and Vrn-1g lines grown under short days, COR14b was highly responsive to cold temperatures, despite relatively high VRN-1 transcript levels in these lines (Fig. 4A, arrows). In contrast, when 6-week-old plants of these same genotypes were grown under long-day conditions, the same cold treatment failed to induce COR14b to high levels ( Fig. 4A). This result indicates that, under short days, the expression of VRN-1 was not sufficient to downregulate COR14b. Analyses of the 2-week-old plants (Fig. 4A) further confirmed the inverse correlation between VRN-1 and COR14b. Under short days, the cold treatments resulted in a strong up-regulation of COR14b (low VRN-1 transcript levels in all genotypes), but under long days, the down-regulation of COR14b was not as strong as in the 6-week-old plants, likely because the VRN-1 transcript levels in 2-week-old plants was not as high as in the 6-week-old plants (Fig. 4A). To confirm these results, this experiment was repeated using qRT-PCR and the same genotypes at the same developmental stage (Fig. 4B). 
In plants grown under short days and exposed for 32 h to 4°C, COR14b transcript levels showed no significant differences (P = 0.95) between the wild type and Vrn-1 mutant lines (Vrn-1f and Vrn-1g), despite significant differences in VRN-1 transcript levels. In contrast, highly significant differences in COR14b transcript levels (P = 0.006) were detected under long days, in which COR14b transcript levels in the Vrn-1f and Vrn-1g mutant lines were lower than in the wild type (Fig. 4B). An additional experiment was carried out to compare COR14b expression levels in T. monococcum lines carrying the Vrn-1f and Vrn-1g alleles (early spring growth habit) with those in a T. monococcum accession carrying the Vrn-1h allele (late spring growth habit). The Vrn-1h allele has an almost identical repetitive element insertion in the first intron as Vrn-1f but lacks the CArG box mutation . As reported previously, the line with the Vrn-1h allele showed low VRN-1 transcripts under short days and maintained low VRN-1 transcript levels even when plants were grown under long days for 6 weeks. The results presented in Figure 4C confirmed that high VRN-1 transcript levels under short days (Vrn-1f and Vrn-1g) were not sufficient to suppress the induction of COR14b. In contrast, when plants were grown under long days, the Vrn-1h line (low VRN-1 transcript levels) showed a significantly stronger (P = 0.006) induction of COR14b than the Vrn-1f and Vrn-1g mutants (Fig. 4C). As in the previous experiments, both high VRN-1 transcript levels and long days were necessary for the down-regulation of COR14b. Taken together, the results from this and previous experiments suggest that VRN-1 expression is required to initiate the developmental processes that reduce the ability of COR14b to respond to cold temperatures but that VRN-1 transcription alone is not sufficient to produce this effect. DISCUSSION Homozygous mvp/mvp Mutants Show More Freezing Tolerance Than Mvp/2 Plants When the shoot apical meristem of the Mvp/2 plants transitions to the reproductive phase, it stops producing new leaves. Therefore, Mvp/2 plants are expected to exhibit a reduced ability to generate new leaves after freezing as they transition to the reproductive phase. In contrast, the shoot apical meristem of the mvp/mvp homozygous mutants never transitions to the reproductive phase; thus, their ability to generate new leaves after freezing should not be drastically altered with time. While changes in the susceptibility of the shoot apical meristem to freezing damage may account for the observed differences in regrowth after freezing, the mvp/mvp homozygous mutants also exhibited greater freezing tolerance in the existing leaves after acclimation than the Mvp/2 plants, as suggested by higher F v /F m values and lower relative conductivity after the freezing treatment. This suggests that the presence of VRN-1 transcript levels and the concomitant dampening of COR14b induction by cold contributed to a reduction of freezing tolerance in the existing leaves of the Mvp/2 plants. Barley COR14b and the related Arabidopsis COR15 are hydrophilic proteins targeted to the chloroplast stromal compartment (Crosatti et al., 1995). In Arabidopsis, increasing the levels of COR15 results in increased freezing tolerance (Artus et al., 1996). COR15 appears to stabilize membranes from freeze-induced injury, which would account for the reduced electro- lyte leakage and higher F v /F m values (Artus et al., 1996). 
In addition, the higher transcript levels of dehydrin genes such as DHN5 (Fig. 3) in the leaves of the mvp/mvp homozygous mutants relative to the Mvp/2 plants likely contribute to their improved freezing tolerance. Dehydrins have a highly conserved 15-amino acid segment (the "K segment") that interacts with acidic phospholipids in lipid vesicles. This interaction results in a conformational change of the protein structure that is hypothesized to stabilize membrane integrity (Koag et al., 2003(Koag et al., , 2009 The MADS box meristem identity gene VRN-1 plays an essential role in the regulation of the transition between vegetative and reproductive phases, and its deletion results in plants that fail to flower. In Arabidopsis, gene duplications of the VRN-1 homolog that occurred after the monocot-dicot divergence resulted in three paralogous genes, AP1, CAULIFLOWER (CAL), and FRUITFULL (FRU), that have retained partial ability to promote the transition of the vegetative shoot apical Abbreviations are as follows: W, warm conditions; C, decrease from 18°C to 6°C (occurring at daybreak); 2W, 2 weeks old; 6W, 6 weeks old; SD, short day; LD, long day; vrn-1, wild type; Vrn-1f, allele with a 1-bp deletion in the CArG box coupled with the VRN-1 intron 1 insertion; Vrn-1g, allele with a 34-bp deletion encompassing the CArG box; Vrn-1h, allele with an insertion in VRN-1 intron 1. The three accessions with the Vrn-1f allele carry the dominant Vrn-2 allele, whereas all the other accessions carry nonfunctional vrn-2 alleles. All the accessions have a spring growth habit. A, mRNA-blot analyses of three genotypes per promoter class (indicated by different PI numbers). Arrows and arrowheads identify the presence and absence, respectively, of COR14b transcripts in 6-week-old plants in the Vrn-1f and Vrn-1g natural mutants under short days and long days. B and C, qRT-PCR validation of VRN-1 and COR14b transcript levels relative to the ACTIN endogenous control at 4°C. Values on the y axes were normalized and calibrated using the 2 2DDCT method (Livak and Schmittgen, 2001). Lines carrying the wild-type allele (vrn-1) or Vrn-1h (spring, not induced in short days) are indicated by black bars, and lines carrying the Vrn-1f and Vrn-1g alleles (spring, induced in short days) are indicated by gray bars. Values are averages of five biological replications 6 SE. P values were calculated using contrasts between either vrn-1 or Vrn-1h and the average of the lines carrying the Vrn-1f and Vrn-1g alleles. Samples were collected when the plants were 2 weeks old (B) and 6 weeks old (C). meristem to the reproductive phase. Simultaneous deletions of all three genes are required to generate nonflowering Arabidopsis plants (Ferrandiz et al., 2000). An additional difference between the temperate cereals and Arabidopsis meristem identity genes is their spatial expression profile. VRN-1 transcripts are detected at high levels in the leaves of wheat (Danyluk et al., 2003;Yan et al., 2003;Li and Dubcovsky, 2008), barley (Schmitz et al., 2000;Trevaskis et al., 2003), Lolium (Petersen et al., 2004), and oat (Avena sativa; Preston and Kellogg, 2008), suggesting a similar profile among the temperate cereals. In contrast, the Arabidopsis meristem identity homologs are expressed primarily in the apical meristem and reproductive tissues. 
AP1 and CAL transcripts are abundant in the induced shoot apical meristem and floral primordia in Arabidopsis but are undetectable or present at much lower levels in some vegetative tissues such as the vascular tissues of cotyledons (Abe et al., 2005). FUL is also expressed primarily in the meristem and floral tissue, but it is also detected in cauline leaves (Teper-Bamnolker and Samach, 2005). In the winter cereals, the expression of VRN-1 in the leaves (and apices) occurs only after vernalization, providing a potentially useful regulatory signal to down-regulate the cold acclimation pathway in this tissue in the spring. Winter wheat lines exposed to continuous cold temperatures improve their freezing tolerance during the first 3 to 4 weeks of the treatment but then gradually start losing those gains. The inflection point in this freezing tolerance curve coincides with the transition of the shoot apical meristem to the double ridge stage and high VRN-1 transcript levels in the leaves (Danyluk et al., 2003). Limin and Fowler (2006) found that wheat near-isogenic lines for VRN-1 carrying the allele for winter growth habit tolerate freezing temperatures 11°C lower than lines carrying the VRN-1 allele for spring growth habit. The authors also showed that when the same near-isogenic spring lines were grown under short days, which are less promotive of VRN-1 expression than long days, the plants tolerated temperatures 8.5°C colder than the same lines grown under long days (Limin and Fowler, 2006). Based on these results, these authors hypothesized that the expression of VRN-1 might be an important signal to regulate the freezing tolerance pathway. Previous studies using near-isogenic lines (Limin and Fowler, 2006), and others using QTL mapping for freezing tolerance, all point to the VRN-1 region as important in the regulation of freezing tolerance (Sutka and Snape, 1989;Roberts, 1990;Hayes et al., 1993;Francia et al., 2004;Galiba et al., 2009). However, the recombination points flanking the VRN-1 gene in the lines used in these studies are not known and can encompass large chromosome regions including a large number of genes. The use of deletion mutants in this study provides a more precise delimitation of the chromosome region responsible for the differences in freezing tolerance. Nonetheless, some caution is still required in the interpretation of the mvp results, since the deletions present in these lines include other genes flanking VRN-1. A screening with probes for multiple genes in the VRN-1 region showed that the mvp deletions include the AGLG1, CYS, and PHYC genes but exclude flanking genes CYB5 and ADA2 (Distelfeld and Dubcovsky, 2010). The exclusion of ADA2 from the deleted regions is relevant because this gene is critical for CBF-mediated transactivation (Stockinger et al., 2001). Based on colinearity with Brachypodium, the mvp deletion is also predicted to include two additional genes, an oligopeptide transporter (Bradi1g08420) and a proteinase inhibitor I9 (Bradi1g08450; Distelfeld and Dubcovsky, 2010). Most of these additional genes are unlikely candidates for the improved freezing tolerance observed in the mvp mutants, with the exception of phytochrome PHYC, since phytochromes B and D have been shown to affect the CBF regulon in Arabidopsis (Franklin and Whitelam, 2007). Thus, we cannot rule out the possibility of the presence of additional genes with an effect on freezing tolerance in the mvp deleted region until it is completely sequenced. 
However, the expression studies provided an independent source of evidence pointing to VRN-1 as the best candidate gene for the down-regulation of the cold acclimation response. In all the T. monococcum accessions carrying the VRN-1f or VRN-1g allele, the higher transcript levels of VRN-1 were always associated with a significant down-regulation of COR14b when plants were grown in long-day conditions (Fig. 4, B and C). In addition, larger differences in VRN-1 transcript levels during development were correlated with larger differences in COR14b transcript levels (Fig. 4, B and C). All the expression results presented here support the hypothesis that VRN-1 is the best candidate for the dampening of the cold acclimation response among the genes present within the mvp deletions. We are currently developing TILLING mutants of the VRN-A1 and VRN-B1 genes in tetraploid wheat (Uauy et al., 2009) to provide an independent validation of this hypothesis. Homozygous mvp Mutants Exhibit Higher Transcript Levels of Several CBF and COR Genes after a Short Cold Treatment A negative association between VRN-1 and COR genes COR14b and DHN5 (=WCS120) transcript levels has also been reported in previous wheat and barley studies (Vágú jfalvi et al., 2000;Danyluk et al., 2003;Knox et al., 2008). In a doubled-haploid barley population segregating for VRN-H1, the lines carrying the recessive vrn-H1 allele showed higher transcript levels of CBF and COR genes than those carrying the dominant Vrn-H1 allele . In addition, lines grown under short days (reduced VRN-H1 levels) showed higher CBF and COR transcript levels than lines grown under long days when plants were transferred to the cold . The reductions in the transcript levels of multiple CBF genes and their downstream COR gene targets in plants with high VRN-1 transcript levels provide a simple explanation for the gradual decrease in freezing tolerance observed after the initiation of the reproductive phase. The mvp mutants characterized in this study exhibited a similar negative association between VRN-1 and both CBF and COR transcription profiles. Five of the 11 CBF genes tested by qRT-PCR and the COR14b gene showed significantly higher transcript levels 8 h after moving the plants to 4°C in the mvp deletion homozygotes than in those carrying at least one functional copy of VRN-1 ( Fig. 1; Supplemental Fig. S4). In plants maintained at 4°C for 12 d, the COR14b transcript levels were still 10-fold higher in the plants homozygous for the mvp deletion (Fig. 2). However, analyses of two additional COR genes, DHN5 and DHN8, showed that not all COR genes respond in the same way. Whereas DHN5 showed the same negative correlation with VRN-1 as COR14b, the DHN8 gene was not significantly affected by the change in the level of VRN-1 transcripts, which indicates that not all COR genes are down-regulated by VRN-1. Similarly, for half of the 11 CBF genes present in the FR-2 cluster, no significant differences in transcript levels were detected between Mvp/2 and mvp/mvp lines ( Fig. 1; Supplemental Fig. S4), suggesting that cold activation of these genes is not regulated by VRN-1. Thus, it appears that this VRN-1-mediated mechanism may play a role in the regulation of a specific subset of coldresponsive genes. 
Allelic Differences in VRN-1 Are Likely Sufficient to Explain Differences in Freezing Tolerances Previously Assigned to a Separate FR-1 Locus In earlier studies, differences in freezing tolerance mapped to the VRN-1 region in wheat were considered to be the result of a closely linked gene designated FR-1. However, only two studies have reported recombination between FR-1 and VRN-1, and they differ in the relative positions of these two genes, with FR-1 distal to VRN-1 in the initial mapping studies (Galiba et al., 1995(Galiba et al., , 1997) and proximal to VRN-1 in a later mapping study using deletion lines (Sutka et al., 1999). Although the differences in freezing tolerance across the deletion lines used in the latter study were clear, it is still possible that the reduced freezing tolerance observed in the larger deletion used to map FR-1 to a proximal deletion bin than VRN-1 was the result of the loss of a larger number of genes and an overall reduction in plant vigor rather than the effect of a single FR-1 gene. It is also possible that simultaneous segregation at the linked FR-2 locus, which was not known at the time of these two studies, affected the mapping results. The improved freezing tolerance and higher transcript levels of CBF and COR genes in the mvp mutants suggest that VRN-1 allelic differences are likely sufficient to explain differences in freezing tolerance previously considered to be the result of a separate FR-1 gene. Therefore, our results support the hypothesis that FR-1 is a pleiotropic effect of VRN-1 rather than a separate gene. This hypothesis is also supported by experiments showing that the repression of VRN-1 by short days in spring wheat genotypes is associated with increased freezing tolerance (Limin and Fowler, 2006) and that VRN-1 transcript levels in the different Triple Dirk near-isogenic lines are inversely correlated with freezing tolerance (Koemel et al., 2004). VRN-1 Transcription Is Not Sufficient to Promote the Down-Regulation of COR Genes The experiment using the T. monococcum lines carrying the VRN-1f and VRN-1g alleles (Fig. 4) shows that up-regulation of VRN-1 transcript levels under short days is insufficient to produce a significant down-regulation of COR14b as that observed under long days. Under short days, 6-week-old plants carrying these alleles show high transcript levels of VRN-1 and a transition of the vegetative apex to the double ridge stage. However, under continuous short days, spike development proceeds slowly and stems fail to elongate. Once plants are transferred to long days, genotypes having the VRN-1f and VRN-1g alleles complete their spike development faster and head earlier than genotypes with the wild-type VRN-1 allele due to their more advanced developmental state . When grown under long days, plants carrying the VRN-1f and VRN-1g alleles showed significantly lower levels of COR14b than lines carrying the vrn-1 or Vrn-1h allele. These results were consistent across three independent accessions for each of the VRN-1 alleles, supporting the hypothesis that the differences in COR14b were associated with the differences in the VRN-1 alleles. However, no differences in COR14b were observed among the same genotypes under short days, despite large differences in VRN-1 transcript levels. These results suggest that the down-regulation of the COR14b requires the presence of additional factors that are activated under long days and that require the expression of VRN-1. 
Taken together, the mvp mutant and VRN-1f and VRN-1g experiments suggest that VRN-1 expression is necessary but not sufficient to down-regulate several COR genes and reduce freezing tolerance in the leaves of wheat plants. A similar phenomenon has recently been described in Arabidopsis, where the floral activator MADS box gene SOC1 functions as a negative regulator of the cold response pathway through the direct repression of the CBF genes (Seo et al., 2009). In the Columbia wild type, SOC1 was expressed most strongly in leaves but was also detected in vegetative apices, inflorescences, stems of flowering plants, and roots (Lee et al., 2000). A microarray experiment comparing 7-d-old seedlings of a soc1 knockout mutant and a SOC1-overexpressing line with wild-type plants revealed that six COR genes were among the 20 genes most negatively regulated in the SOC1-overexpressing line. In addition, the expression level of the three Arabidopsis CBF genes increased in the soc1 mutants and decreased in SOC1-overexpressing lines, without affecting the transcript levels of the CBF regulatory genes ICE1, HOS1, or ZAT12. A chromatin immunoprecipitation experiment using a SOC1 antibody revealed that the CArG box regions in the CBF promoters were enriched in the SOC1-overexpressing line relative to the soc1 knockout, which suggests that SOC1 negatively regulates cold response through direct repression of the transcription of the CBF genes (Seo et al., 2009). It is interesting that the expression of the wheat homolog of Arabidopsis SOC1, WSOC1, is not affected by vernalization or photoperiod, suggesting different functions in these two species (Shitsukawa et al., 2007a). Although both SOC1 in Arabidopsis and VRN-1 in the temperate cereals seem to be associated with the down-regulation of the CBF and COR genes in the leaves, the effect of VRN-1 on the CBF genes does not seem to be as direct as the effect of SOC1 in Arabidopsis. The results from the experiments using T. monococcum accessions with differential expression of VRN-1 under short days suggest that additional genes operating downstream of VRN-1 and that are regulated by long days are required to mediate the negative effect of VRN-1 on freezing tolerance. The identification of these downstream genes and the understanding of their regulatory mechanisms could potentially lead to novel strategies to prevent the premature dampening of the cold acclimation pathway in environments where the premature activation of VRN-1 may increase the risk of freezing damage. Do Temperate Cereals Respond Differently to the Same Cool Temperatures in the Fall and the Spring? The system described above provides the temperate cereals with the ability to differentiate the same cool temperature in the fall and the spring. A cool temperature in the fall, when plants have low VRN-1 transcript levels, results in the induction of the CBF and the downstream COR genes, initiating the acclimation of the plants to cold temperatures. This is essential in the fall, when cool temperatures are an indication of the approaching freezing temperatures of the winter. The same cool temperature in the spring, when VRN-1 transcript levels in the leaves increase significantly in response to lengthening photoperiod, would result in a significantly lower up-regulation of several CBF and COR genes. 
Since cool temperatures in spring are generally not a prelude to coming freezing temperatures, a robust up-regulation of the CBF pathway response in the spring would likely not be advantageous for plant survival. A similar system seems to be operating in Arabidopsis. Arabidopsis soc1 null mutants show increased responsiveness of the CBF genes to cold and improved freezing tolerance, suggesting that low levels of SOC1 transcripts during the fall may favor plant acclimation to cold temperatures (Seo et al., 2009). SOC1 transcript levels increase significantly by the time of the initiation of Arabidopsis flowering (Lee et al., 2000), indicating that high SOC1 transcript levels will be present in the leaves in the spring, down-regulating the CBF genes and their downstream COR targets. The activation of the CBF regulon has a potentially high energetic cost to plants, since numerous COR genes are up-regulated in the leaves by these transcription factors (Fowler and Thomashow, 2002). In addition, CBF genes have also been shown to repress plant growth (Achard et al., 2008). Therefore, the down-regulation of the CBF genes during the spring has a potential adaptive value, ensuring the plant's rapid development under optimal conditions, and may explain the presence of related systems in Arabidopsis and the temperate cereals. Plant Materials mvp Mutants Two independent Triticum monococcum mutants (mvp-1 and mvp-2) that remain indefinitely in the vegetative state (Shitsukawa et al., 2007b) were used in this study. Since the two mutants carry similar deletions (Distelfeld and Dubcovsky, 2010), they were alternated among experiments depending on seed supply. Seeds from these mutants were kindly provided by K. Murai. These mutants were generated by ion-beam radiation, and both have large deletions that include VRN-1 (Shitsukawa et al., 2007b) and several flanking genes (Distelfeld and Dubcovsky, 2010). The mvp-1 mutation was generated in the KU104-2 background and the mvp-2 mutation in the KU104-1 background (Shitsukawa et al., 2007b). When grown in the greenhouse under long-day conditions (16-h photoperiods), KU104-2 flowered 10 weeks after planting while KU104-1 flowered 3 weeks later. Homozygous mvp individuals do not flower and therefore must be maintained in a heterozygous state. Genotyping was carried out using a dominant VRN-1 molecular marker based on a set of three primers that are described in Supplemental Figure S1. Using this assay, the lines carrying one or two functional VRN-1 copies are detected as a single genotypic class, referred to throughout the text as Mvp/2. VRN-1f, VRN-1g, and VRN-1h Alleles T. monococcum lines with four different VRN-1 alleles were used to test the effect of their differential regulation under short days on COR14b transcript levels. T. monococcum lines PI355546, PI427927, and PI237659 carry the "wild-type" vrn-1 allele and a recessive vrn-2 allele that confers spring growth habit. Lines with the vrn-1 allele showed no expression under short days in previous studies. T. monococcum lines PI191097, PI192063, and PI393496 carry the Vrn-1f allele, which has a 1-bp deletion in a CArG box located in the promoter plus an insertion of a repetitive element in the first intron. T. monococcum lines PI326317, PI418582, and PI349049 carry the Vrn-1g allele, which has a 34-bp deletion including the promoter CArG box. Both the Vrn-1f and Vrn-1g alleles confer spring growth habit and show high levels of VRN-1 transcripts under short days. T. 
monococcum accession PI306540 has the VRN-1h allele, which has the same intron 1 insertion as Vrn-1f but lacks the CArG box mutation. This allele is not expressed under short days and confers a spring phenotype but with later flowering than the Vrn-1f and Vrn-1g alleles. Although the shoot apical meristem transitions to the double ridge stage, spike development progresses slowly and spikes fail to elongate if these plants are left under short days. Growth, Cold Acclimation, and Freezing Assays Freezing experiments with the mvp mutants were all carried out using long-day conditions (16 h of light/8 h of dark). Experiments with the T. monococcum lines with different VRN-1 alleles were carried out using both long-day and short-day (8 h of light/16 h of dark) photoperiod cycles. The RNA-blot analysis and the freezing experiment of the mvp-2 mutants were done in parallel. Seeds collected from Mvp-2/2 heterozygotes were grown under cool-white fluorescent lamps in the laboratory at room temperature for 13 d using a light intensity of 50 μmol m⁻² s⁻¹. Genotyped seedlings were transplanted to wooden boxes having internal dimensions measuring 42 cm (length) × 22 cm (width) × 14 cm (height) and having 9.5-cm soil depth. The boxes were placed into a Conviron growth chamber (model PGW36; Controlled Environments) for an additional 47 d under cool-white fluorescent and incandescent bulbs using a light intensity of 130 μmol m⁻² s⁻¹ and a constant temperature of 23°C. The growth chamber temperature was decreased to 10°C and held at this temperature for 19 d. Wooden boxes were then transferred to Percival growth chambers (model CU-36L2X; Geneva Scientific) where the seedlings were held at 4°C for 12 d (the PGW36 growth chamber can only cool to +10°C). After cold acclimation, the CU-36L2X growth chamber temperatures were decreased to −2°C and held for 12 h. Ice nucleation was induced by spraying the leaves with ice water. Following 12 h at −2°C, the temperature was then decreased at 1°C h⁻¹ to the target temperatures of −9°C and −11°C and held at these temperatures for 24 h. Afterward, the temperature was returned to +2°C for 12 h. During this time, the plants were kept in the dark. The chamber temperature was then raised to 20°C and returned to a 16-h photoperiod. Chlorophyll Fluorescence (Fv/Fm) Chlorophyll fluorescence measurements were made using a hand-held portable pulse amplitude-modulated fluorometer (model OS-30p; Opti Sciences). Fv/Fm measurements were taken 2 to 4 h after the plants were returned to normal growth conditions. Leaves were dark adapted for approximately 10 min prior to taking the measurements. Relative Conductivity Conductivity measurements were taken on crown tissue consisting of a 1- to 1.5-cm segment of the white, nonphotosynthetic tissue between the upper photosynthetic green shoot and the primary root. Because the use of the crowns required the destruction of the plant, the number of replications in the first experiment was limited to three mvp/mvp homozygotes and six Mvp/2 plants. The second experiment used leaves instead of crown regions and nine plants from each temperature/genotype combination. The electrolyte leakage assay methods are described in detail in the methods used for Supplemental Figure S3. Tubes were shaken for 1 h at 300 rpm before reading the conductivity with an Accumet Basic AB30 electrical conductivity meter (Fisher Scientific). 
Tubes were then autoclaved for 20 min, cooled to room temperature, and shaken for 1 h at 300 rpm before measuring the total potential conductivity. Values were adjusted by subtracting the conductivity of the deionized water. Relative conductivity represents the adjusted mean ion leakage as a percentage of the total adjusted leakage from frozen-killed samples (for formula, see methods used for Supplemental Fig. S3). CBF qRT-PCR Experiments Eight mvp-2/mvp-2 and Mvp-2/2 plants and 10 mvp-1/mvp-1 and Mvp-1/2 plants were selected using the VRN-1 molecular marker and were grown in the greenhouse for 4 weeks and 8 weeks, respectively (20°C-25°C, long days). Plants were then transferred to a growth chamber at 4°C for 8 h. RNA samples were collected from leaves from eight mvp-2 and eight Mvp-2/2 plants before (20°C) and after (4°C) the cold treatment in the first experiment and from 10 mvp-1 and 10 Mvp-1/2 plants only after the cold treatment in the second experiment. COR14b qRT-PCR Time-Course Experiment Eight mvp-2 and eight Mvp-2/2 plants were selected using the VRN-1 molecular markers and grown in the greenhouse under the same conditions described above. After 4 weeks, when the Mvp-2/2 plants were still at the vegetative stage, plants were transferred to 4°C and were kept at this temperature for 12 d at the same light intensity indicated above (long days). Leaf samples for RNA analysis were collected 1 d before the cold treatment and after 8 h, 32 h, 4 d, and 12 d at 4°C. Samples were always collected at 2:00 PM (8 h after the subjective daybreak) to avoid potential differences at different times of the day. VRN-1/COR14b qRT-PCR Experiment Five plants with each VRN-1 allele were grown for either 2 or 6 weeks under short- or long-day conditions before transferring to 4°C. RNA samples were collected from leaves after 32 h of cold treatment. For all qRT-PCR experiments, RNA was extracted using the Spectrum Plant Total RNA Kit (Sigma-Aldrich). First-strand cDNA was synthesized from 1 μg of total RNA using the QuantiTect Reverse Transcription Kit (Qiagen). Primers for qRT-PCR expression analyses are presented in Supplemental Table S1. The TEF1 and ACTIN genes were used as endogenous controls (Distelfeld and Dubcovsky, 2010). RNA-Blot Analyses Tissue samples were collected 13 to 14 h after the subjective daybreak from both cold-acclimating plants and nonacclimated plants. Total RNA was isolated using RNeasy Plant Mini kits (Qiagen). Seven micrograms of total RNA was loaded per lane. RNA samples consisted of RNAs pooled from the crown tissue of 10 plants (Mvp-2/2 experiment) or five plants (mvp-1 and VRN-1 promoter deletion mutant experiments). Supplemental Data The following materials are available in the online version of this article. Supplemental Figure S4. qRT-PCR analysis of transcript levels of the CBF genes. ACKNOWLEDGMENT We thank Dr. Koji Murai for providing the seeds for the mvp-1 and mvp-2 mutants. LITERATURE CITED Achard et al. (2008) The cold-inducible CBF1 factor-dependent signaling pathway modulates the accumulation of the growth-repressing DELLA proteins via its effect on gibberellin metabolism. Plant Cell 20: 2117-2129. Artus NN, Uemura M, Steponkus PL, Gilmour SJ, Lin C, Thomashow MF (1996) Constitutive expression of the cold-regulated Arabidopsis thaliana COR15a gene affects both chloroplast and protoplast freezing tolerance. Proc Natl Acad Sci USA 93: 13404-13409.
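As a brief illustration of the relative-conductivity calculation described in the electrolyte-leakage methods above, the following Python sketch expresses the water-adjusted sample leakage as a percentage of the water-adjusted leakage after autoclaving. The readings and variable names are hypothetical; the study's exact formula is given in its Supplemental Figure S3 methods.

def relative_conductivity(initial_uS, total_uS, water_uS):
    """Water-adjusted ion leakage as a percentage of the total (autoclaved) leakage.

    initial_uS : conductivity read after 1 h of shaking (before autoclaving)
    total_uS   : conductivity read after autoclaving (frozen-killed, total leakage)
    water_uS   : conductivity of the deionized water blank
    All values in microsiemens; names are illustrative, not taken from the paper.
    """
    return 100.0 * (initial_uS - water_uS) / (total_uS - water_uS)

# Hypothetical readings for one crown sample
print(round(relative_conductivity(initial_uS=55.0, total_uS=210.0, water_uS=5.0), 1))  # 24.4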
v3-fos-license
2021-07-27T13:49:31.242Z
2021-07-27T00:00:00.000
236438628
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journalimplantdent.springeropen.com/track/pdf/10.1186/s40729-021-00352-9", "pdf_hash": "e22266c1f7b650926dc5ce77cde8bb22db3deb39", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42632", "s2fieldsofstudy": [ "Medicine" ], "sha1": "1fb56f5c375622e2aee6bf1b5dd1b98a5f748667", "year": 2021 }
pes2o/s2orc
Trueness and precision of digital implant impressions by intraoral scanners: a literature review Background With the development of intraoral scanners, their trueness and precision have been evaluated in various studies. Through these studies, the amount of accuracy that can be expected from intraoral scanners has gradually been disclosed, at the same time, it was difficult to integrate the results of individual studies due to differences in evaluation methods between studies. The purpose of this article was to review the currently available evidence, summarise what is currently known about IOS, analyse the evaluation methods of each study, and list points to note when interpreting the results. Main text Most of the studies were conducted in vitro. The accuracy is evaluated in situations such as single missing teeth, partially edentulous ridges with multiple missing teeth, and fully edentulous jaws. To evaluate the accuracy, direct measurement of distance or angle by coordinate measuring machines and calculation of surface deviation by superimposing surface data were predominantly performed. The influence of parameters such as the number of implants, distance between implants, angle between implants, and experience of the operator was evaluated. Many studies have shown that trueness tends to decrease as the distance between the implants and the scan range increases. It was agreed that the implant angle did not affect either trueness or precision. Regarding other factors, the results varied among studies. Therefore, the effects of these parameters are not clear. Conclusions Heterogeneity in the research methodology was prevalent among the studies considered in this review. Therefore, we cannot make a decisive statement regarding the trueness and precision of digital implant impressions by IOSs. So far, the comparison of the numerical values of error between studies has yet to elucidate any clear answers, despite small methodological differences. Background One of the most significant developments in dentistry during this century was the introduction of digital technology into dental treatment, denoted as digital dentistry. Digital impressions made with intraoral optical scanners (IOSs) have played a significant role in the facilitation of digital dentistry, dramatically changing the workflow of prosthetic treatment [1]. The advantages of digital impression techniques have already been well documented in several studies, with reports on simple data communication and storage [2], comfort for patients during the impression-making procedure [3], and options for an immediate evaluation of tooth preparations. Conventional impression procedures that use silicone impression materials and stone models are prone to dimensional changes, often because silicone impression materials shrink as a result of ongoing chemical reactions. Dental stones also expand owing to secondary reactions during setting. However, direct digital scanning of teeth is theoretically not associated with such changes. Consequently, digital impressions are expected to be more accurate than conventional impression methods, as demonstrated in several studies [4]. However, the use of this technique remains controversial. This is because the impression accuracy is affected by a variety of factors, such as the condition under which the impression is made [5]. 
In implant-supported prostheses, especially in screw-retained cases, a high impression accuracy is required because the passive fit of prostheses for implant platforms is crucial for the long-term stability of patients' clinical outcomes [6]. Therefore, many studies have evaluated the accuracy of digital implant impressions using intraoral scanners under a variety of conditions. This article aimed to review the published evidence on the accuracy of digital implant impressions by IOSs and then summarise the results. Additionally, any methodological issues of note will be mentioned, particularly when reviewing the literature regarding the accuracy of IOSs; otherwise, the results of the studies may be misinterpreted. Terminology When discussing impression accuracy, the terms "accuracy", "trueness", and "precision" should be distinguished from each other. According to the definition by the International Standard Organization (ISO) in 1994, "accuracy" indicates the combination of "trueness" and "precision" [7], where "trueness" is defined as the "closeness of agreement between the arithmetic mean of a large number of test results and the true or accepted reference value". Meanwhile, "precision" is defined as "the closeness of agreement between different test results" (Fig. 1). Although "accuracy" is used as a synonym for "trueness" in some studies [8][9][10], this review follows the above-mentioned definition by the ISO. Methodological issues in evaluating trueness and precision Establishment of gold standard data In order to evaluate "trueness", the gold standard data to be used as the "true value" needs to be identified by the methods listed below. Coordinate measuring machine A coordinate measuring machine (CMM) is a device used to measure the geometry of an object. It has been used as a benchmark for accuracy in measuring solid objects for over five decades in the industrial field. Therefore, CMMs have been utilised in many studies to evaluate the accuracy of digital impression data [10][11][12][13][14][15][16][17][18][19][20]. CMMs typically specify a probe's position in terms of its displacement from a reference position in a three-dimensional Cartesian coordinate system (i.e., with XYZ axes). Various types of probes are used in CMMs, including mechanical, optical, laser, and white light. The disadvantages of a CMM are its low scanning speed and the limited number of points acquired from the model surface compared to industrial 3D scanners. Additionally, to acquire a precise measurement of a complicated shape using a CMM, surface shape information is necessary before scanning can be performed. In addition, a CMM with a mechanical probe cannot detect small morphological structures such as fissure lines and gingival margins because the tip of the tactile probe has a certain diameter that limits its sensitivity. Industrial 3D scanner Industrial 3D scanners have been introduced in the industry over the last two decades. Data scanned with industrial 3D scanners are reported to be sufficiently accurate for use as a reference [21]. Industrial 3D scanners are also smaller and cost less than CMMs. Unlike CMMs, industrial 3D scanners can capture millions of points on an object's surface simultaneously, even if the shape of the surface is complex. Currently available industrial 3D scanners display maximum deviations within a few micrometres [8]. Dental laboratory scanner Several studies have used dental laboratory scanners instead of industrial 3D scanners to acquire reference data [9,22]. 
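To make the ISO distinction between trueness and precision introduced under Terminology concrete, the sketch below computes both for a set of repeated measurements of a single inter-implant distance. The values are hypothetical and in millimetres; they are not taken from any of the reviewed studies.

import statistics

# Hypothetical repeated measurements of one inter-implant distance (mm)
# and the reference ("true") value measured, for example, with a CMM.
measurements = [20.012, 20.025, 19.998, 20.019, 20.007]
reference = 20.000

# Trueness: closeness of the mean of the test results to the reference value.
trueness_error = abs(statistics.mean(measurements) - reference)

# Precision: closeness of agreement among the repeated test results,
# summarised here as their standard deviation.
precision = statistics.stdev(measurements)

print(f"trueness error: {trueness_error * 1000:.1f} um")  # deviation of the mean
print(f"precision (SD): {precision * 1000:.1f} um")       # spread of the repeats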
Dental laboratory scanners are utilised to scan cast models produced from a conventional impression and create surface 3D data, which are then exported to CAD software to design the restorations. As listed in Table 1, the accuracy of industrial scanners ranges from 1 to 10 μm, whereas a laboratory scanner's accuracy ranges from 2 to 10 μm, suggesting that the accuracy of digital impressions obtained by dental laboratory scanners is comparable to that of the industrial 3D scanner [14]. (Fig. 1 Conceptual image of the relationship between trueness, precision, and accuracy, as defined by ISO (1994). The centre of the target represents the "true value" provided by the reference data. The black dots represent test data obtained by repeated measurements. ISO, International Standard Organization.) The industrial 3D scanners and laboratory scanners used in the studies are listed in Table 1, along with their trueness and precision. Data acquisition and evaluation For the evaluation of trueness and precision, the parameters to be compared must be determined and calculated, a process also known as data reduction. To evaluate their trueness, some studies compared the given distance and angulation measured by the IOS to those acquired by the gold standard method. In other studies, the 3D surface image data captured by the IOS and the gold standard method were superimposed, and their discrepancies were then calculated. To evaluate their precision, these parameters were compared between repeated measurements by IOSs. Measurement of distance and angle error The linear error (or distortion) was measured as the deviation of certain positions between the reference and test data. The angle error was calculated by comparing the long axis angle of each scan body of the test data and the reference data with respect to the XYZ coordinate axes (Fig. 2). To determine the inter-implant distances, the midpoints on the upper surface of the scan body were measured (Fig. 3). For the inter-implant angle, the angle between the long axes of the scan body was measured (Fig. 3). The discrepancy in the inter-implant distances and inter-implant angles between the reference data and test data are termed the inter-implant distance error and inter-implant angle error, respectively. Superimposition of surface data To find discrepancies using a method other than distance and angle measurements, the STL data obtained by the IOS were superimposed on the reference data obtained with an industrial 3D scanner, laboratory scanner, or optical CMM in order to evaluate the discrepancy between them [19]. The superimposition of the digital surface data is implemented using a "best-fit algorithm" [9,22,23,25,27,32]. A best-fit algorithm is a method of alignment that transforms a set of measured points or feature centroids so that it matches its counterpart as closely as possible. The least-squares variant of the best-fit algorithm aligns the two point sets by transforming one of the sets such that the sum of the squared distances between matching points in the two sets is minimal. The advantage of the best-fit algorithm is that it can automatically calculate discrepancies between images. In addition, it is easy to intuitively understand the results by visualising the discrepancies between the images by colour. The disadvantage of the best-fit algorithm is that the deviation calculated using the best-fit algorithm may not be identical to the actual deviation that occurs during the scan. 
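The inter-implant distance and angle errors described under "Measurement of distance and angle error" above can be illustrated with a short sketch. The scan-body midpoints and long-axis vectors below are hypothetical stand-ins for values that would normally be extracted from the reference (CMM or reference-scanner) data and the test (IOS) data.

import math

def distance(p, q):
    """Euclidean distance between two 3D points (e.g., scan-body midpoints)."""
    return math.dist(p, q)

def angle_between(u, v):
    """Angle in degrees between two 3D direction vectors (e.g., scan-body long axes)."""
    dot = sum(a * b for a, b in zip(u, v))
    cos = dot / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# Hypothetical scan-body midpoints (mm) and long-axis vectors for two implants.
ref_mid = [(0.0, 0.0, 0.0), (20.000, 0.0, 0.0)]
test_mid = [(0.0, 0.0, 0.0), (20.042, 0.015, -0.010)]
ref_axis = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
test_axis = [(0.0, 0.0, 1.0), (0.02, 0.0, 1.0)]

# Inter-implant distance error (test minus reference) and inter-implant angle error.
dist_error = distance(*test_mid) - distance(*ref_mid)
angle_error = angle_between(*test_axis) - angle_between(*ref_axis)
print(f"distance error: {dist_error * 1000:.0f} um, angle error: {angle_error:.2f} deg")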
Owing to its calculation methodology, the best-fit algorithm aligns the test data with the reference data as closely as possible to its theoretical counterpart. Therefore, the actual positional relationship between the reference data and test data may deviate significantly, and the deviation between the images may be underestimated (Fig. 4). For scans of up to one quadrant, the best-fit algorithm seems to be suitable because the error caused by the superimposition itself between the test and reference data is within an acceptable range [33,34]. However, the larger and more different the data, the greater the influence of the error, owing to the superimposition process [35]. Calculation of the magnitude of the error When measuring using a CMM, the three-dimensional position is defined by the values of the x-, y-, and z-axes of the Cartesian system. Therefore, the three-dimensional linear error and angle error require mathematical integration of the x, y, and z values. In terms of linear error, the integration method differed for each study. For example, some studies use the root sum square (RSS) formula, √(x² + y² + z²) [11,12,14,16,19,31,36,37], while others use different formulas, such as the root mean square (RMS), √((x² + y² + z²)/3) [9,22,23,28,30,38], or the sum of the absolute values, |x| + |y| + |z| [26,30]. Hence, the difference in the calculation methods should be noted when interpreting these results. Some suggest that the measurements should not be broken down into x-, y-, and z-components; rather, they should be directly carried out using engineering software. This is because the coordinate system defined for the measured data is not identical to the true coordinate system [13]. Therefore, different models can only be registered in a virtual common coordinate system. As the registration is based on the surface of the models and as these show minor errors, the positions of the model differ slightly. This introduces an error in their relative positions, making it unreliable to compare measurements broken down into x-, y-, and z-components. Studies that used RSS, RMS, or other specific formulas are listed in Tables 2, 3, 4, 5, and 6. Search strategy Online electronic databases, including the MEDLINE database and the Cochrane Central Register of Controlled Trials, were searched by a reviewer (MS) without any language filters for articles published between 2010 and 1 May 2020. The search terms included "intraoral scanner", "accuracy", "trueness", "precision", "digital impression", "Dental Impression Technique" [MeSH]. In addition, reference lists of relevant articles were manually searched to identify eligible studies. The two authors (MS and KM) independently screened the titles and abstracts of the retrieved articles to identify studies that fulfilled the predetermined eligibility criteria. They also reviewed the full texts of the shortlisted articles to arrive at the final selection of studies for inclusion in this narrative review. In addition, previous review articles on the subject were searched, as well as the reference lists of the articles already identified for further potentially relevant publications. Although there was no language restriction, the minimum requirement was access to an English version of the title and the abstract. Trueness evaluation Due to the limited access of CMMs, industrial 3D scanners, or dental laboratory scanners into the oral cavity, it is generally impossible to establish reference data in real patients. 
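To illustrate the least-squares best-fit superimposition and the error metrics discussed above, the sketch below aligns a set of test points onto matching reference points using the Kabsch (SVD-based) rigid alignment and then reports the per-point deviations as RSS and RMS values. This is an illustrative implementation that assumes known point correspondences; it is not the procedure used by any particular inspection software in the reviewed studies.

import numpy as np

def best_fit_transform(test: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Rigid (rotation + translation) least-squares alignment of test onto ref.

    Both arrays are N x 3 with corresponding rows; returns the aligned test points.
    """
    test_c, ref_c = test.mean(axis=0), ref.mean(axis=0)
    H = (test - test_c).T @ (ref - ref_c)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return (test - test_c) @ R.T + ref_c

# Hypothetical corresponding points (mm), e.g., scan-body centroids.
ref = np.array([[0.0, 0.0, 0.0], [20.0, 0.0, 0.0], [20.0, 10.0, 0.0], [0.0, 10.0, 0.0]])
test = ref + np.array([[0.01, -0.02, 0.0], [0.04, 0.01, -0.01],
                       [-0.02, 0.03, 0.02], [0.0, -0.01, 0.01]])

aligned = best_fit_transform(test, ref)
d = aligned - ref                                    # per-point x, y, z deviations
rss = np.sqrt((d ** 2).sum(axis=1))                  # sqrt(x^2 + y^2 + z^2) per point
rms = np.sqrt((d ** 2).sum(axis=1) / 3.0)            # sqrt((x^2 + y^2 + z^2) / 3) per point
print(np.round(rss * 1000, 1), np.round(rms * 1000, 1))  # deviations in micrometres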
Indeed, no in vivo study has investigated the trueness of digital implant impressions, and all of the studies reviewed below are laboratory-based in vitro studies. Linear and angle error evaluation using CMM (Table 2) Studies that evaluated digital impression compared to conventional methods Gintaute et al. evaluated the trueness of digital impressions and conventional impressions using four types of reference models with different inter-implant distances and inter-implant angles: (1) two straight, (2) four straight, (3) two straight and two tilted, and (4) six straight dental implants [29]. The inter-implant distances and inter-implant angles of the reference models were measured as reference data using CMM. As test groups, digital impressions of the reference models were acquired using TDS, and the STL data were analysed using 3D evaluation software. Polyether and vinyl polysiloxane impressions were utilised for the conventional impressions, and stone casts were made from the impressions and subsequently measured using CMM. Regarding implant orientations (1), (3), and (4), digital impressions showed significantly lower inter-implant distance errors than conventional impressions. In terms of inter-implant angle error, the digital impressions exhibited significantly higher trueness than the conventional impression in all four reference models. However, the inter-implant distance and inter-implant angle errors were within 100 μm and 0.5°, respectively, which the authors judged to be clinically acceptable (Table 2). Ajioka et al. evaluated the trueness of the digital impression by COS and the influence of the height of the abutments on the angle error [14]. A partially edentulous reference model with two implants (#35 and #36) was prepared. Conventional models made of plaster were fabricated from the reference model using a silicone impression. For the distance measurements, two ball abutments were connected to the implants, and the distance between the centres of the balls of the abutments was measured. For the angle measurements, pairs of healing abutments that were 5 mm or 7 mm tall were connected, and the angulation between the healing abutments was measured. The reference model and conventional models were measured using CMM. The distance errors of the digital impressions were slightly greater than those of the conventional impressions. The angulation error was also greater for the 5-mm digital impressions but was not significantly different from the conventional method when 7-mm abutments were connected, suggesting that longer abutments or scan bodies may improve the trueness of digital impressions (Table 2). Chia et al. evaluated the trueness of digital impressions for a three-unit bridge supported by two implants with three different inter-implant angles [31]. Three reference models with buccolingual inter-implant angulations of 0°, 10°, and 20° were fabricated. The scan bodies connected to the reference models were scanned using an IOS. 
The conventional impressions of each reference model were made using polyether impression materials, and conventional plaster models were fabricated. The reference and conventional models were measured using CMM. The impression technique (p = 0.012) and implant angulations (p = 0.007) had a significant effect on the linear error. In terms of the angle effect, the digital impression group showed consistent linear and angle errors, irrespective of inter-implant angulation. In addition, digital impressions tended to replicate the implant position more apically than the actual position ( Table 2). Menini et al. compared the trueness of digital impressions and conventional impressions using a full-arch edentulous reference model with four implants [39]. CMM was used to measure the implant angulation and inter-implant distances in the reference model as well as on the conventionally fabricated casts. Conventional impression data and digital impression data were compared with the reference data measured using the reference model. The trueness of the conventional group, as evaluated by the linear error, was inferior to that of the digital impression data (Table 2). Tan et al. compared the trueness of digital impressions using two IOSs (Trios and TDS) to conventional impressions [10]. They used two reference models with edentulous maxillary arches with six or eight implants. The inter-implant distances were approximately 20 mm in the six implant models and 13 mm in the eight implant models. The centre positions at the implant platform level on the reference models were detected using the CMM. The results of this study showed that narrower inter-implant distances might decrease IOS linear errors. In addition, TDS showed a greater linear error than Trios (Table 2). Alikhasi et al. investigated the trueness of digital impressions by Trios using two maxillary edentulous reference models with different internal or external implant connections, with two anterior straight and two posterior angulated implants [19]. Conventional plaster models were fabricated from silicone impressions using an open tray or closed tray. The conventional and reference models were measured using an optical CMM. STL datasets from the digital impression were superimposed on the reference data to assess the angle and linear errors. Digital impressions demonstrated superior outcomes compared to conventional methods. While the trueness of digital impressions was not affected by the type of connection and angulation, conventional impressions were significantly affected by these factors ( Table 2). Studies that exclusively evaluated digital impressions Giménez et al. conducted two studies evaluating the trueness of a digital impression by COS using a reference model with six implants (#27, #25, #22, #12, #15, and #17). The implant at #25 was mesially inclined by 30°, the implant at #15 was distally inclined by 30°, and the implants at #22 and #12 were placed 2 mm and 4 mm subgingivally, respectively [18,40]. Two experienced and two inexperienced operators performed the scans. The CMM was used to measure the reference model, and the linear error was calculated. The angulation (p = .195) and depth of the implant (p = .399) measured by digital impression did not deviate significantly from the true values. Additionally, the experience of the operator significantly influenced the trueness of digital impressions (Table 3). Sami et al. 
evaluated the trueness of digital impressions from four IOSs (TDS, TRIOS, Omnicam, and Emerald Scanner) [30]. An edentulous reference mandible model with six implants was fabricated and measured using the four IOSs and an optical CMM. Data from the four IOSs were superimposed on the reference data, and the discrepancy between them was evaluated. The direction and magnitude of deviation differed among jaw regions and IOSs. Another study evaluated the trueness of digital impressions from 8 IOSs (TDS, Trios, Omnicam, 3D progress, CS3500, CS3600, Planmeca Emelard, and Dental Wings) in a full-arch implant-supported FPD [42]. An acrylic model of an edentulous mandible with six implants was used as the reference model. They evaluated the 3D position of the scan bodies and inter-implant distances captured by the IOSs in comparison to those captured by the CMM. The deviations of the 3D positions of the scan bodies were calculated using the best-fit algorithm. The distances between all combinations of the six scan bodies (15 pairs) were calculated from the STL data using analysis software and were compared to the reference data measured by CMM. The 3D position results of the implants, as measured by each IOS, showed that the TDS and Trios showed the best trueness among the IOSs, followed by Omnicam and CS3600 with average performance; CS3500 and Planmeca Emelard presented a middle-low performance, while the 3D progress and Dental Wings showed the lowest performance. The inter-implant distance analysis showed that shorter inter-implant distances corresponded to better trueness when using the True Definition and CS3600 devices (Table 3). Summary of the results of studies that utilised CMM for trueness evaluation Except for one study, digital impressions showed superior trueness to conventional impressions. A longer inter-implant distance tended to deteriorate trueness. Three studies found a difference in trueness among manufacturers of IOS, while one study did not. The experience of operators in digital impressions positively affected the trueness of digital impressions. A longer scan body seemed to contribute to better trueness. The inter-implant angle and the difference in platform configuration (internal or external) did not affect the trueness of digital impressions. Linear and angle errors by industrial 3D scanners Studies that evaluated digital impression compared to conventional methods Amin et al. evaluated the trueness of digital impressions from two IOSs (Omnicam and TDS) using a full mandibular edentulous reference model with five implants [23]. The three median implants were parallel to each other. The far-left and far-right implants were inclined by 10° and 15° distally, respectively. A splinted open-tray technique was used for conventional polyether impressions to fabricate conventional models. The reference and conventional models were scanned using an industrial 3D scanner. The digital impression data from the reference model captured by the IOSs and the data from the conventional model captured by the industrial 3D scanner were superimposed with the reference data and evaluated using the best-fit algorithm. 
The full-arch digital impression using TDS and Omnicam showed significantly higher trueness than the conventional impressions using the splinted open-tray method (Table 4). Studies that exclusively evaluated digital impression Van der Meer et al. evaluated the trueness of three IOSs using dentate reference models with three implant analogues (#36, #41, #46) [13]. They measured the inter-implant distances and inter-implant angles of #36-41 and #36-46. An industrial 3D scanner and engineering software were used to obtain the reference data. The inter-implant distances and inter-implant angles captured by the IOSs were compared with the reference data, and the trueness of each scanner was evaluated. The distance discrepancies between the IOS data and reference data varied depending on the IOS and scanning range. Increases in distance and/or angle errors were associated with a larger scanning range, but this trend was not statistically significant (Table 4). Imburgia et al. compared the trueness of four IOSs (CS3600, Trios3, Omnicam, TDS) using a partially edentulous model with three implants and a fully edentulous model with six implants. The reference data were acquired using an industrial 3D scanner and were superimposed with the scanned data from each IOS [27]. Trueness differed among IOSs. For all scanners, the trueness values obtained from the partially edentulous model were significantly better than those obtained from the fully edentulous model (Table 4). Arcuri et al. evaluated the influence of implant scan body materials on digital impressions using an IOS (Trios3) [26]. An edentulous maxillary model with six internal connection implants was scanned using an industrial 3D scanner to acquire the reference data. Scan bodies made of three different materials (polyetheretherketone (PEEK), titanium, and polyetheretherketone with a titanium base (PEEK-titanium)) were scanned by three operators using the IOS. These data were superimposed on the reference data using a best-fit algorithm. Linear and angle errors were assessed, and a significant influence of the type of material was identified (p < 0.0001), where PEEK showed the best results in terms of both linear and angular measurements, followed by titanium and PEEK-titanium (Table 4). Kim et al. evaluated the trueness of digital impressions by five IOSs using a partially edentulous model [28]. A 3D printed partially edentulous mandible model made of Co-Cr, with six bilaterally positioned implants in the canine, second premolar, and second molar areas, served as the reference model. Reference data were acquired with an industrial 3D scanner, and the test data were obtained from five IOSs (Omnicam, CS3600, i500, iTero Element, and TRIOS3). For data from each IOS, the XYZ coordinates of the implants were obtained, and the deviations from the reference data were calculated. The linear and angle errors differed depending on the implant position and the IOS. Regardless of the IOS type, the implant positioned at the left second molar, nearest to the scanning start point, showed the smallest linear error. The error generally increased further away from the scanning start point towards the right second molar (Table 4). 
Summary of results from studies that utilised an industrial 3D scanner for trueness evaluation Results from the studies that used an industrial 3D scanner for the acquisition of reference data showed that digital impressions by IOSs showed superior trueness compared to open-tray silicone impression in both edentulous and dentate models. A larger impression range tends to deteriorate the trueness of digital impressions. Linear and angle errors by laboratory scanners Studies that evaluated digital impression compared to conventional methods Papaspyridakos et al. evaluated the trueness of digital impressions using Trios and the conventional impression of completely edentulous mandibles [37]. A reference model of an edentulous mandible with five implants was fabricated. Four conventional models were fabricated through conventional polyether impressions using both splinted and non-splinted techniques for both implant-and abutment-level impressions. The reference model and conventional models were scanned using a dental laboratory scanner as the reference and control data, respectively. The STL data from the digital impression and the four conventional impressions were superimposed with the STL data from the reference model to assess the 3D deviations. The trueness of the digital impression did not differ from the following conventional impressions: splinted implant level, splinted abutment level, and non-splinted abutment level models. On the other hand, the trueness of the non-splinted implant-level impressions was inferior to that of digital impressions. Additionally, an implant angulation of up to 15°did not affect the trueness of the digital impression and conventional impressions (Table 5). Roig et al. evaluated the trueness of digital impressions using a reference model of a partially edentulous maxilla, which accommodated two parallel implants at #14 and #16 [38]. The reference model was scanned using four IOSs (Omnicam, TDS, TRIOS3, and CS 3600) as test data and a dental laboratory scanner (D810) as the reference data. Three types of conventional impressions (closed tray, open tray non-splinted, and open tray splinted) were created, and the stone models were scanned using a dental laboratory scanner. The STL data acquired by the IOSs and dental laboratory scanners were superimposed using a best-fit algorithm to measure the linear and angle errors between the reference and test data. TRIOS3 and CS3600 showed significantly better trueness than the conventional impression with a closed tray and digital impression with Omnicam and TDS ( Table 5). Studies that exclusively evaluated digital impression Mangano et al. compared the trueness of four IOSs using partially and fully edentulous maxilla models [25]. They used a partially edentulous model with missing teeth #21, 24, 25, and 26, with three implants in #21, 24, and 26. The fully edentulous model had six implants in #16, 14, 11, 21, 24, and 26. Reference data were acquired using a dental laboratory scanner. The trueness evaluations were implemented by superimposing the digital impression data obtained using the IOS with reference data. There were no differences in trueness between the partially and completely edentulous models, whereas significant differences were found between the IOSs (Table 5). They also conducted the same type of study with five different IOSs using a model with a single missing tooth, a model with a partially edentulous space with multiple missing teeth in a row, and a model with a fully edentulous jaw [24]. 
The reference models were scanned using five IOSs and a dental laboratory scanner. Unlike the previous study, statistically significant differences were found between the different edentulous types. The different IOSs significantly influenced trueness, as shown in a previous study ( Table 5). Summary of results from studies that utilised dental laboratory scanner for trueness evaluation Most studies showed that trueness was affected by the IOS manufacturer. Differences in the extent of edentulous space had a significant effect on trueness in some studies, but not in other studies. Precision evaluation by distance and angulation in scanned data Flügge et al. evaluated the precision of digital impressions using three IOSs (iTero, Trios, and TDS) and a dental laboratory scanner by measuring different interimplant distances and inter-implant angles [15]. They used two different reference models of the mandible: one had an intermediate edentulous space in the lower left and contained two neighbouring implants in #35 and #36, and the other model represented a Kennedy Class I edentulous mandible, with implants in #37, #36, #34, #45, and #47. These models were scanned by the IOSs and a dental laboratory scanner (D250, 3 shapes). The distance and angle between the respective scan bodies were measured on the STL data using the analysis software. The standard deviation of the repeated distance measurements by TDS tended to increase with longer inter-implant distances, whereas iTero and Trios did not show the same tendencies. On the other hand, the angle measurement did not show any deterioration in precision for longer inter-implant distances in any of the scanners (Table 6). Precision evaluation by superimposing repeated scanned data by IOSs Mangano et al. compared the precision of four IOSs (Trios2, CS 3500, Zfx Intrascan, and Planscan) [25]. Two reference models were prepared, representing a partially edentulous model with three implants and a fully edentulous maxilla with six implants. These reference models were scanned by the four IOSs, and the data acquired by the same scanner were superimposed using a best-fit algorithm to evaluate the precision of each IOS. Trios2 and CS 3500 showed significantly better precision than Zfx Intrascan and Planscan, and Zfx Intrascan was significantly better than Planscan (Table 6). They also conducted the same comparisons for five IOSs (CS 3600, Trios3, Omnicam, DWIO, and Emerald) in another study that investigated the impressions of single missing teeth models as well as in partially edentulous and fully edentulous models [24]. In the single missing tooth situation, CS3600 had the best precision, followed by Trios3, DWIO, Omnicam, and Emerald. In the partially edentulous model, CS 3600 had the best precision, followed by Trios3, Emerald, DWIO, and Omnicam. For the full arch, Trios3 had the best precision, followed by CS3600, Emerald, Omnicam, and DWIO. Significant differences in precision were found between the IOSs and the magnitude of missing teeth (Table 6). Imburgia et al. compared the precision of four IOSs (CS3600, Trios3, Omnicam, and TDS) in a partially edentulous model with three implants and a fully edentulous model with six implants [27]. The reference models were scanned by each IOS, and the data acquired by the same scanner were superimposed using a best-fit algorithm to evaluate precision. In both the partially and fully edentulous models, they found no statistically significant differences among the different IOSs. 
For CS 3600, Omnicam, and TDS, the values obtained from the partially edentulous model were significantly better than those obtained from the fully edentulous model. However, no significant differences were found for Trios3 (Table 6). Miyoshi et al. evaluated the effect of the scanning range on precision [41]. A reference model of an edentulous maxilla with six implants was scanned using four IOSs and a dental laboratory scanner. Conventional silicone impressions were also made, and the stone models were scanned using a dental laboratory scanner. Nine scanning ranges were defined based on the length and number of implants included. In each scanning range, impressions were obtained using each impression method. The data from the repeated scans were superimposed on each other using a best-fit algorithm, and the discrepancies were evaluated. The enlargement of the scanning range deteriorated the precision of the IOSs and conventional impressions. In comparison, the precision of the dental laboratory scanner remained stable irrespective of the size of the scanning range. They concluded that digital impressions by IOSs may show clinically acceptable precision as long as the scanning range is limited, such as within a 3-unit superstructure supported by two implants (Table 6). Roig et al. evaluated the precision of digital impressions using a reference model of a partially edentulous maxilla, accommodating two parallel implants at #14 and #16 [38]. The reference models were scanned using four IOSs (Omnicam, TDS, TRIOS3, and CS 3600) as test data. Three types of conventional impressions (closed tray, open tray non-splinted, and open tray splinted) were created, and the stone models were scanned with a dental laboratory scanner. The STL data from each repeated measurement for each technique were superimposed using a best-fit algorithm to measure the linear and angle errors between the scans. Digital impressions showed significantly better precision than conventional impression methods ( Table 6). Summary of the results of the precision evaluation Similar to the studies that evaluated trueness, the majority of the studies that evaluated the precision of digital impressions showed deterioration of precision as the inter-implant distance or scanning range expanded (Tables 7 and 8). The scanner manufacturer affected the precision of the digital impression. In comparison with conventional impressions, the precision of the digital impression showed comparable or superior results. Effects of manufacturers Although several articles compared different kinds of IOSs in terms of trueness and precision, the results are inconsistent among studies. Therefore, the available evidence does not provide decisive data regarding the type of IOS with the best trueness or precision [9, 12, 23-25, 27, 41, 42]. Effects of the orientation of implants on the accuracy of the digital impression Inter-implant distance Studies that examined the effects of inter-implant distance on the accuracy of digital impressions consistently suggest that a shorter inter-implant distance allows for better accuracy [15,42]. Some articles specifically recommend that the indications of digital impressions should be limited to short-span cases, such as 3-unit fixed partial dentures. On the other hand, the precision of many IOSs did not always deteriorate with longer inter-implant distances [15,24,25]. Angulation of the implants Digital impressions of mesially or distally tilted implants have been well documented [9,16,18,26,28,29,40,43,44]. 
One study reported that the angulation of the implant seems to have no detrimental effect on the digital impression accuracy by IOSs [31]. Another study reported high trueness with angulated implants in terms of distance and angle evaluation [43]. On the other hand, conventional impressions of angulated implants have been reported to compromise trueness and precision, probably because the impressions might be deformed when removed [31]. Effects of scan range Although some studies report consistent digital impression accuracy irrespective of the scan range [25], the majority of the studies report a gradual distortion of digital impression accuracy as the scan range expands [24,27,41]. This can be attributed to the accumulative error of the stitching process. Digital impressions covering large spans are inevitably associated with a larger amount of stitching, thereby making the scan procedure more prone to errors. Effects from operators Two studies evaluated the effects of the operator's IOS experience on the accuracy of the scanned images [26,40]. One study that evaluated trueness reported significant effects from the experience of the operators, while the other study that studied precision did not. Since the number of studies is limited, no conclusions can be drawn from the currently available literature. Discussion There has been much debate about the amount of inaccuracy that is acceptable for implant-supported prostheses. Generally, implant-supported prostheses require higher levels of accuracy than tooth-supported prostheses [45]. Therefore, clinicians and dental technicians must strive to make the prosthesis as accurate as possible. However, it has been reported that some degree of inaccuracy does not cause prosthetic or biological problems. The range of error that does not cause clinical problems is called "biological tolerance" [46] In animal experiments, prosthetic inaccuracy is compensated by the migration of osseointegrated implants to adapt to the prosthesis, which is called "bone adaptation" [47,48]. Some researchers have proposed a threshold for acceptable error. Andriessen et al. evaluated the accuracy of implants supporting bar attachments for overdentures [20]. They assumed that the threshold of acceptable linear error between two implants was 100 μm and that of the angle error was 0.2°. These are based on the 50-μm lateral movements of the implants when loaded. Therefore, the distance error between the two implants can be up to 100 μm (Kim 2005). The 0.2°angle error threshold is due to the fact that when the tip of a 15-mm implant used in this study was displaced by 50 μm, it tilted by 0.194°. Gintaute et al. adopted 100 μm as the linear error of the inter-implant distance and 0.5°as the angle error without any evidence or references [29]. To evaluate linear errors, the following two methods were used to evaluate linear errors in digital impressions. The first method compares particular inter-implant distances in the reference and test data. The difference in the corresponding inter-implant distance was reported as a linear error. The second method compares the three-dimensional scan body position for the reference and test data. In cases where two implants are used as abutments, the inter-implant distance is more critical for the fit of the prosthesis than the three-dimensional deviation because the prosthesis is rotated in order to minimise the error. Therefore, research evaluating inter-implant distances is useful. 
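As a quick check of the 0.2° threshold reasoning above, the tilt produced when the tip of a 15-mm implant is displaced laterally by 50 μm can be computed directly. This is a simple geometric illustration of the cited assumption, not a clinical recommendation.

import math

tip_displacement_mm = 0.050   # 50 micrometres of lateral movement at the implant tip
implant_length_mm = 15.0      # implant length used in the cited study

tilt_deg = math.degrees(math.atan(tip_displacement_mm / implant_length_mm))
print(f"{tilt_deg:.3f} deg")  # ~0.19 deg, i.e., close to the 0.2 deg threshold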
However, in studies with more than three implants, clinicians should refer to data that incorporate three-dimensional positional deviations, such as data with XYZ coordinates or cumulative 3D deviations aligned by best-fit algorithms. Superimposing the test data on the reference data using a best-fit algorithm makes the error between the data as small as possible. Therefore, the actual deviation of the test data is altered. Guth et al. attempted to solve this problem by placing a straight metal bar on the reference model and using it as a reference point for the superimposition in an in vitro study in which they scanned a full arch of natural teeth [35]. Using this method, they found that the deviation of the first quadrant was smaller than that of the second quadrant, a difference that cannot be detected by superimposition using a best-fit algorithm. As proposed in this article, it is possible to evaluate the actual deviation of the digital impression with an object or fixed reference point instead of a best-fit algorithm. When the implant position is defined by XYZ coordinates, the three-dimensional distance is calculated by the root sum square (RSS = √(x² + y² + z²)) in most studies. However, in some studies, other parameters, such as the root mean square (RMS = √((x² + y² + z²)/3)) and, in some cases, the sum of the absolute values of x, y, and z (|x| + |y| + |z|), were calculated and compared. Therefore, readers should be aware of the parameters used when referring to data on digital impression errors. Another method is to measure the three-dimensional distance directly with software, instead of decomposing it along the coordinate axes, as described above. Van der Meer et al. argued that measuring using the XYZ coordinate system causes inaccuracies in measurements. Impression accuracy errors often result in very small values. Therefore, the results may change significantly owing to slight deviations of the coordinate axes. However, no study has validated measurements made using software alone, without using the coordinate system. Conclusion Heterogeneity in the research methodology is prevalent among the studies considered here. Therefore, we cannot make a decisive statement regarding the trueness and precision of digital implant impressions by IOSs. So far, the comparison of the numerical values of error between the studies has yet to elucidate any clear answers, despite small methodological differences. Definitions of the terms relating to impression accuracy, as well as a standardised and validated methodology for measuring accuracy, should be established in order to gather evidence regarding digital impression accuracy.
v3-fos-license
2020-06-16T13:11:36.703Z
2020-06-16T00:00:00.000
219689239
{ "extfieldsofstudy": [ "Psychology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2020.01334/pdf", "pdf_hash": "ea69a413fffb97e7990e6517178eb27b6d283238", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42633", "s2fieldsofstudy": [ "Education", "Psychology" ], "sha1": "ea69a413fffb97e7990e6517178eb27b6d283238", "year": 2020 }
pes2o/s2orc
Music Performance Anxiety: Can Expressive Writing Intervention Help? Performance is an essential part of music education; however, many music professionals and students suffer from music performance anxiety (MPA). The purpose of this study was to investigate whether a 10-min expressive writing intervention (EWI) can effectively reduce performance anxiety and improve overall performance outcomes in college-level piano students. Two groups of music students (16 piano major students and 19 group/secondary piano students) participated in the study. Piano major students performed a solo work from memory, while group/secondary piano students took a sight-reading exam of an eight-measure piano musical selection. All students performed twice, at baseline and post-EWI, with 2 or 3 days between performances. During the EWI phase, students were randomly divided into two groups: an expressive writing group and a control group. Students in the expressive writing group wrote down feelings and thoughts about their upcoming performances, while students in the control group wrote about a topic unrelated to performing. Each student’s pulse was recorded immediately before performing, and each performance was videotaped. Three independent judges evaluated the recordings using a modified version of the Observational Scale for Piano Practicing (OSPP) by Gruson (1988). The results revealed that, by simply writing out their thoughts and feelings right before performing, students who had high MPA improved their performance quality significantly and reduced their MPA significantly. Our findings suggest that EWI may be a viable tool to alleviate music performance anxiety. INTRODUCTION Do classical music lovers ever notice that their presence at concert venues tends to raise heart rates and trigger anxiety in performing musicians? Performance anxiety is a complex phenomenon and has been studied in a variety of contexts, including sports competitions (Patel et al., 2010), public speaking (Beatty and Behnke, 1991), test-taking (Ramirez and Beilock, 2011), job interviews (McCarthy and Goffin, 2004), and performing arts (Williamon, 2004). For musicians, being able to perform high-quality concerts consistently under pressure is critical to their career's success and length (Kenny, 2011;Williamon et al., 2013). Coping with performance anxiety is a constant battle even for the most accomplished musicians and performers (McGinnis and Milling, 2005). For student-musicians, anxiety about performing their best can be extremely high, particularly in performance-evaluative situations. Music performance anxiety (MPA) has been defined as "the experience of marked and persistent anxious apprehension related to musical performance" (Kenny, 2010, p. 433). Several interactive factors comprise this form of anxiety, including genetics; environmental stimuli; and an individual's experience, emotions, cognition, and behaviors, may cause musicians to experience MPA to varying degrees (Kenny, 2011). Indeed, for musicians, performing in public is a demanding activity and can cause considerable stress (Williamon et al., 2013). Even for highly skilled and experienced performers, performing a concert in public is stressful (Eysenck et al., 2007;Nieuwenhuys and Oudejans, 2012). Research showed that an expert concertpianist who performed regularly around the world demonstrated an intensive autonomic arousal by loss of complexity in cardiovascular response temporarily when performing in front of a large crowd (Williamon et al., 2013). 
Although MPA symptoms vary widely, including elevated heart rate, sweaty palms, disrupted concentration, diminished confidence, and negative thoughts (Wan and Huon, 2005;Fehm and Schmidt, 2006); MPA manifests through three distinct avenues: cognition, autonomic arousal, and behavior (Kenny, 2005). The hypothesis of optimal performance involves a moderate level of arousal (Yerkes and Dodson, 1908), however, persistent fear and high levels of anxiety can damage performers' mental health and overall wellbeing (Fishbein et al., 1988;Kenny et al., 2012). Moreover, musicians can suffer from mental illness (Brodsky, 1996), which adds another level of psychological vulnerability. According to a survey of 2212 professional musicians from American orchestras, 24% reported MPA, 13% reported acute anxiety, and 17% suffered from depression (Fishbein et al., 1988). In addition, females and young musicians (<30 years) are particularly vulnerable to MPA and other mental illness based on a survey of Australian musicians (Kenny et al., 2012). Music psychologists have devoted to exploring coping strategies to reduce MPA in higher education (Clark and Williamon, 2011;Osborne et al., 2014). According to two systematic reviews of interventions aimed at reducing MPA, a combination of two or more types of therapy produced the most effective result (Brugués, 2011;Goren, 2014). However, research involving MPA interventions has often been conducted using methods that require multiple psychological sessions or special technology and materials to be successful. Therefore, it is important to explore methods that are simple and that potentially produce immediate effects in terms of decreasing the impact of MPA in real-world music-performance contexts. Expressive writing has been used as a therapeutic intervention for reducing personal stress and anxiety, and for encouraging healing from a traumatic experience for decades (Pennebaker and Beall, 1986;Pennebaker et al., 1988;Baikie and Wilhelm, 2005;Pennebaker and Smyth, 2016). Extensive research has indicated that expressive writing produces substantial immediate and longterm benefits in physical and mental health (King, 2002;Beckwith et al., 2005;Frattaroli, 2006). To date, although many of these studies have been conducted with college student populations, the research has focused on alleviating test anxiety across nonmusic disciplines (Frattaroli et al., 2011;Ramirez and Beilock, 2011;Park et al., 2014). Similar to MPA, test anxiety refers to a negative response to an exam situation which manifests both physically (e.g., increased heart rate, nausea) and cognitively (e.g., self-doubt, feeling unprepared or not expert enough; Cassady and Johnson, 2012). Research has found that anxious thoughts take up valuable working memory and interfere with concentration (Moran, 2016). In line with this finding, music psychology research have found that during a music performance, students who performed poorly reported more disturbing thoughts and worries, especially immediately before they froze in the middle of their performances (Oudejans et al., 2017). To alleviate test anxiety, Ramirez and Beilock (2011) developed an intervention based on expressive writing that is particularly effective in decreasing the impact of test anxiety on a math test. Students wrote about their thoughts and feelings regarding the upcoming math exam immediately before the exam took place. Their results showed that students with high-math test anxiety improved their grades from B− to B+. 
Another study reported a similarly positive effect of an expressive writing intervention on applicants taking medical and law school entrance exams (Frattaroli et al., 2011). Participants had significantly higher test scores and significantly lower preexam depressive symptoms than the control group. Although the mechanism underlying the EWI's therapeutic effect is unclear (Danoff-Burg et al., 2010), the hypothesis is that EWI helps to regulate the fear and anxiety by giving students an opportunity to express their anxious thoughts, and re-evaluate the situation before an impending test (Ramirez and Beilock, 2011). The aim of the current study was to investigate whether an expressive writing intervention (EWI) effectively reduces MPA and measurably improves performance outcomes in college-level piano students. We also explored the factors that determine who would benefit from EWI the most. Participants College students majoring in music (N = 35) participated in this study. Participants were either piano performance majors (n = 16) or group/secondary piano students (n = 19) 1 . The age of participants ranged from 18 to 61, (M = 20.90, SD = 4.32). For piano major students, the average years of playing piano were 14.6 years and the average practice time was 23.1 h per week. Piano students' average lesson time was 1 h (individual session) a week. For group/secondary piano students, the average years of playing piano and their primary instrument were 2.47 and 8.91 years, respectively. The average piano practice time (individual session) was 3.2 h a week. Secondary piano students' average piano lesson time was 2-3 h per week (group class). Study Design To induce the proper MPA, two performing tasks were designed for each group according to their piano curriculum and exam scope. Sixteen advanced piano students were asked to perform a solo piano work of their own choice from memory and nineteen secondary piano students were asked to sight-read a piano music selection without prior practice. Piano major students performed the same piano piece in both performances (baseline and post-EWI) and the pieces ranged from 4 to 18 min long. Baseline performances took place in students' weekly studio performance class where they regularly play in front of peers and are critiqued by a piano professor. The post-EWI performance took place under various performance settings, including a studio performance class, a piano area noon recital, and a piano jury. For group/secondary piano students, their piano instructors chose two different pieces at the appropriate reading level for them to sight-read. During the baseline sight-reading performance, each student was given an eight-measure piano musical selection to play by sight. The baseline sight-reading performance was conducted in a regular classroom and students were not graded on this performance. During the post-EWI performances, students played a new sight-reading selection at their mid-term piano tests; therefore, the students knew that they would receive a grade on the performance. The author and the secondary piano class instructor were presented in the room. Procedure Prior to the experiment, a questionnaire of students' music educational experience and performance habits that adapted from Roland (1993) was administered. The experiment consisted of two phases: a baseline and post-EWI performance, occurring 2 to 3 days apart. Students were given a piece of paper to complete the writing exercises during the post-EWI performance. 
We videotaped each student's two performances (baseline and the post-EWI) and obtained pulse rates using a pulse oximeter immediately before each performance. At the end of each performance, participants were immediately asked to fill out a post-performance self-report questionnaire (Figure 1). Writing Exercises During the post-EWI performance, students were randomly assigned to one of two groups: the expressive writing group and the control group. The expressive writing group wrote about their current feelings and thoughts regarding the upcoming performance; the control group wrote about an event that happened recently. They were given 10 min to write. Their performances began right after they finished the writing exercises. Performance Quality Students' performance recordings were analyzed acoustically using a form adapted from Gruson's (1988) Observational Scale for Piano Practicing (OSPP). The OSPP is used to analyze playing errors such as repetition of notes or sections, pauses, and/or wrong notes (Table 1). Table 2 contains the explanation of each term used for analysis. Three independent judges graded each performance recording according to OSPP. The judges were not told that they were grading the recordings of the same subjects or the performance order (whether pre-or post-EWI). We took the average grades from three judges to produce a single performance quality score for each student. Pre-performance Pulse-Rate Each student's pulse rate was recorded immediately before baseline and post-EWI performances. Two of their preperformance pulse rates were compared to measure levels of arousal such as anxiety. Questionnaire Only three questions that were relevant to the current study were included in the results analysis: 1. Have you received any training or information on how to deal with performance anxiety during your studies? 2. In which performance settings are you most likely to feel very anxious? 3. In general, do you talk to yourself right before and during your performance? If yes, please specify the content of your talk. Anxiety and Satisfaction Performance anxiety and satisfaction were reported by each participant immediately following their performance. Each item was rated on a 5-point scale ranging from 1 ("not anxious/ satisfied at all") to 5 ("very anxious /satisfied"). Post-performance Questions Three questions were used to evaluate the influence of the EWI on student anxiety level and performance quality. 1. What made you nervous or not nervous for today's performance? 2. Did you talk to yourself as you were performing, if so, how many times and what did you say to yourself? 3. In your opinion, how could you improve your performance? Pre-intervention Analysis A series of independent samples t-tests were conducted to compare the group differences across the expressive writing and control group at baseline. There were no statistically significant differences between the two groups of advanced piano players in age, t(14) = 0.93, p < 0.36, years of piano study, t(14) = 0.76, p < 0.46, or weekly practice times, t(14) = 1.35, p < 0.19. For secondary piano students, similarly, there were no statistically significant differences between the two groups in age, t(17) = 1.23, p < 0.23, years of piano FIGURE 1 | Post-repertoire/sight reading performance self-report. 
For this study, all 35 participants completed both baseline and post-EWI performances, however, we were not able to record three participants' pulse rates during the baseline test; therefore, only 32 out of 35 participants' pulse rates were included in the analysis of results. Performance Errors of Both Performance Tasks The overall performance errors were analyzed using a mixed-design ANOVA to compare the within-subject factor time (pre, post), and two between-subject factors: the intervention group (expressive writing, control), and performance type (solo piano, sight-reading). A significant interaction between time and the intervention group, F(1,31) = 9.91, p < 0.004, indicated that the expressive writing group demonstrated a significant decrease in errors (mean decrease = 7.3), whereas the control group performances remained the same before and after the intervention (Figure 2). Expertise Level and Effectiveness of EWI Importantly, while the level of piano-playing experience was different between the two groups (piano major students = 14.6 years, secondary piano students = 2.47 years), the impact of the intervention on errors was similar for both groups, F(1,31) = 1.63, ns, suggesting that the impact of the intervention on errors did not depend on the level of experience. For both expressive writing groups (solo and sight-reading), performance errors decreased by approximately 50% in the post-EWI performance when compared to baseline performances ( Table 3). Other Measurements In contrast to performance errors, no significant statistical differences in pulse rate, self-reported satisfaction, or selfreported performance anxiety were observed before and after the intervention across performance types. Comparing High Versus Low Self-Talk Groups In order to identify students who are most likely to benefit from EWI, we further analyzed our data. Within the experimental group, we found large differences in the frequency of organic self-talk during performance. In addition, the content of these self-conversations was different. According to students' selfreports, high self-talkers (HSTs) were more involved in emotional conversations (examples: "I had this inner monolog running to express my anger at my mistakes when they really happened, " "how could I have made the same mistake in the same place again?"). In contrast, low self-talkers (LSTs) reported that they were either not aware of any self-talk, or only had a few internal instructional conversations during the performance (examples: "before chord changes, I talked to myself about where I needed to move my hands, " "I am taking a breath and not being fluid"). Based on the theoretical assertion that "self-talk lies at the core of anxiety" (Conroy and Metzler, 2004, p. 69), and the assumption that excessively frequent self-talk may interrupt the performer's concentration and disrupt execution (Zinsser et al., 1998), we hypothesized that the HST group would gain the most benefits from the expressive writing intervention due to high levels of performance anxiety. Two subgroups collapsed across performance type were created: participants (n = 8) reported frequent self-talk during performance, and participants (n = 8) who reported little or no self-talk during performance. These sixteen participants' data were studied further 2 . The baseline performance data between the HSTs and LSTs were compared by the t-test in the four following measures: performance errors, pulse rate, self-reported anxiety levels, and performance satisfaction. 
A preliminary t-test indicated significant differences in self-reported performance anxiety levels between the HSTs and the LSTs, t(16) = 2.31, p < 0.03. This confirmed that the HSTs felt more anxious in the baseline performance; however, the other three measures (performance errors, pulse rate, and performance satisfaction) did not show statistically significant differences. It is worth noting that in the baseline performance, the HSTs demonstrated approximately 10% higher pulse rates, 46% more performance errors, and reported 26% less self-satisfaction on average when compared with the LSTs in the raw data (Table 4; for pulse rate, one participant in the HST group was excluded, n = 7, because no record was available for the baseline performance). There was a clear gap between the HSTs and LSTs in the baseline performance. To examine the impact of EWI on these two groups, we compared baseline and post-EWI performance data on the following measures: performance errors, pulse rates, self-reported anxiety levels, and performance satisfaction. Measures were analyzed separately using a 2 × 2 mixed-design ANOVA to compare the within-subject factor time (pre, post) and the between-subjects factor self-talk (HSTs vs. LSTs) within the expressive writing group. Performance Errors A significant interaction was seen between performance time (pre, post) and group (HST or LST), F(1,14) = 5.82, p < 0.03, indicating that the HST group demonstrated a significant decrease in performance errors (mean decrease = 12.5, SD = 8.74), whereas the LST group did not demonstrate a statistically significant difference in performance errors before and after the EWI (mean decrease = 3.75, SD = 7.02) (Figure 3). Other Measures For pulse rate, there was a significant interaction between performance time and group (HST or LST), F(1,14) = 14.65, p < 0.002, indicating that the HST group had a significant decrease in pulse rate (mean decrease = 9.88), whereas the pulse rate of the LST group remained the same. In self-reported performance anxiety levels, there was a significant interaction between performance time and group, F(1,14) = 4.99, p < 0.04, indicating that the HST group demonstrated a significant decrease in pre-performance anxiety levels (mean decrease = 1.18), whereas the LST group did not show a change (mean decrease = 0.06). In contrast, no significant interaction was found for self-satisfaction, although there was a main effect of time on self-satisfaction, F(1,14) = 15.6, p < 0.001, indicating that both groups showed increased satisfaction in the post-EWI performance. Self-Talk The frequency of self-talk was significantly reduced in the HST group. Only three out of eight participants reported that they still engaged in self-talk during the post-EWI performance. Among them, two participants reported that they self-talked only a few times when they made a mistake. The LST group results remained unchanged; only one participant reported that he coached himself through the music. Support for EWI as an Effective Intervention for Music Performance Anxiety This is the first empirical study that we know of to use an expressive writing intervention to reduce music performance anxiety and improve performance in college-level piano students. Our data support the use of expressive writing intervention for college-level piano students as a means of reducing performance anxiety, resulting in improved performance outcomes.
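For readers who want to reproduce this kind of comparison, the sketch below shows one minimal way to test a 2 × 2 (time × group) interaction. The error counts are invented for illustration and are not the study's data, and the shortcut it uses (in a two-level mixed design, the interaction F equals the square of an independent-samples t statistic computed on the pre-to-post change scores) is a standard statistical identity rather than the authors' stated procedure.

```python
import numpy as np
from scipy import stats

# Invented performance-error totals for illustration only (not the study's data):
# each array holds one group's error counts at baseline and at the post-EWI performance.
hst_pre  = np.array([28, 24, 31, 26, 22, 27, 30, 25])
hst_post = np.array([14, 12, 17, 13, 11, 15, 18, 12])
lst_pre  = np.array([18, 15, 20, 17, 16, 19, 21, 14])
lst_post = np.array([15, 13, 18, 14, 13, 16, 18, 12])

# In a 2 (group) x 2 (time) mixed design, the group-by-time interaction can be
# tested as an independent-samples t-test on the pre-to-post change scores;
# the mixed-ANOVA interaction F(1, N-2) equals the square of this t statistic.
hst_change = hst_post - hst_pre
lst_change = lst_post - lst_pre
t, p = stats.ttest_ind(hst_change, lst_change)

print(f"mean decrease, HST: {-hst_change.mean():.2f}, LST: {-lst_change.mean():.2f}")
print(f"interaction: t({len(hst_change) + len(lst_change) - 2}) = {t:.2f}, "
      f"p = {p:.4f}, equivalent F = {t**2:.2f}")
```

A full mixed-design ANOVA run in dedicated statistics software would additionally return the main effects of time and group reported above.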
In our study, college-level piano students (both solo players and sight readers) who received an expressive writing intervention demonstrated significant overall performance improvement. In post-expressive writing intervention performances, a marked reduction in performance errors occurred in the experimental condition for 15 of the 23 participants. Results showed an approximately 50% reduction in performance errors for both performing tasks (performing a solo piece and performing a sight-reading test) compared to the baseline performance. The control group experienced no major changes between the two performances. We also found that the length and detail of the writing may affect the impact of an expressive writing intervention. A previous study suggested that emotional expression and narrative structure are the two key factors that determine the efficacy of expressive writing intervention (Danoff-Burg et al., 2010). In the present study, two students in the expressive writing group wrote only two sentences in the writing exercise; no major changes were found in their performance outcomes or other measurements between the two performances. Previous studies suggest that performance outcomes may improve organically through the accumulation of practice sessions (Ericsson et al., 1993; Ericsson and Anders, 2006); however, we found that within a short period of 1 to 3 days, students in the control group (comprising both solo piano players and sight-readers) did not improve performance quality significantly with regular practice. Instead of making fewer errors, 6 of the 12 students in the control group performed slightly less well during their second performance; 2 students' performance outcomes remained the same, and 4 performed better. The implication of these results is that practicing the night before a recital or test may not significantly improve performance quality the following day. In piano performance, enhancing technique and improving performance quality is a relatively long process. Although the piano major and group piano students differed in their level of piano-playing experience, both groups benefited from writing out their thoughts and feelings prior to a performance. This suggests that the student's level of experience does not influence the efficacy of expressive writing intervention. Furthermore, we identified performance anxiety as a critical variable that predicted the positive impact of expressive writing intervention. Compared with their less anxious peers, the students (including both solo performers and sight-readers) who experienced the highest anxiety at the baseline performance benefited the most from the expressive writing, as evidenced by greater reductions in performance errors, self-reported anxiety levels, and pulse rates in the post-expressive writing intervention performance. Exploring the Relationship Between Self-Talk and Performance Anxiety Another finding of our study was a close association between performance anxiety and the frequency of self-talk during the baseline performance. Piano students who engaged in frequent self-talk during the baseline performance reported 30% higher performance anxiety levels than students who reported little or no self-talk. This finding is consistent with Conroy and Metzler's (2004) cognitive theories on anxiety, suggesting that "self-talk lies at the core of anxiety" (p. 69). Organic self-talk is often portrayed as "the primary pathology or dysfunction" of anxiety (Beck et al., 1985, p. 85).
The intrusive thoughts included in organic self-talk cause distraction, which may threaten performance outcomes. Furthermore, we found that frequent self-talkers had faster pulse rates, consistent with Rushall et al. (1988) study of cross-country ski performance. The authors reported that pulse rates were higher in self-talk conditions than in the control condition that did not include self-talk. Tod et al. (2011) suggest that the impact of self-talk on performance may be mediated by physiological changes in arousal, as reflected in variations in pulse rates. Our study shows the impact of negative self-talk specifically associated with music performance anxiety is consistent with previous research (McKellar et al., 1996;Treadwell and Kendall, 1996;Ronan and Kendall, 1997;Hiebert et al., 1998;Muris et al., 1998), which has demonstrated that negative self-talk is positively associated with anxiety between normal and clinically anxious children, as well as with college students. Academic performance anxiety has also been strongly associated with negative self-talk during exams (Hunsley, 1987;Fernandez and Allen, 1989). Similarly, in our study, piano students who were frequent self-talkers reported that they were occupied by negative self-talk, such as expressing anger or guilt over performance mistakes. Expressive writing intervention led to a measurable reduction in self-talk for 5 of 8 frequent self-talkers, who reported no incidence of self-talk during the post-expressive writing intervention performance. Among three participants who reported ongoing self-talk during expressive writing intervention performances, there were changes in the focus of their self-talk from emotional, harsh self-criticism to instructional self-talk that presented strategies for getting back "on track." In addition, these frequent self-talkers reported lower levels of anxiety during post-intervention performances. Our study supports Kendall and Treadwell's (2007) findings that reducing anxious selftalk results in less anxious states and positively influences performance outcomes. Self-Talk's Content Self-talk is often considered an effective strategy for improving athletic performance (Tod et al., 2011;Webb et al., 2012), however, the content of self-talk must be planned and manipulated systematically in advance. Besides differentiating self-talk as positive versus negative, previous research has categorized self-talk in athletes as motivational (e.g., "give it all") versus instructional (e.g., "push the ball"). These types of self-talk may have different effects on specific task performances (Theodorakis et al., 2000;Hatzigeorgiadis et al., 2004). Instructional self-talk may be more effective for tasks requiring fine motor skills, such as dexterity, handeye coordination, precision, and accuracy in sports (e.g., dart throwing, golf-putting, and shooting a basketball), and by extension, perhaps piano performance. In contrast, motivational self-talk may be more effective for tasks requiring strength and endurance like weightlifting and long-distance running. In this study, we did not manipulate the content or the type of students' self-talk. Nevertheless, when we compared the content of self-talk between baseline and post-expressive writing intervention performances, we found significant changes in content and focus. After taking the expressive writing intervention, self-talk became less subjective and more instructional oriented. 
For example, in baseline performances, students stated, "shock at the level of my anxiety" or "I was telling myself how uneven it was." These students either engaged in negative self-evaluation or focused on negative emotional thoughts while performing. After expressive writing intervention, examples of their reported self-talk content included comments about the instrument: "this piano has a firm touch" or "(to) get back somewhere near where I fell off." These talks became less judgmental and more objective. We hypothesize that writing feelings and thoughts before a performance can regulate emotions, and redirect attention to music-related or more objective thoughts. Instead of training people to remember and use pre-planned self-talk cues and strategies during a performance, expressive writing intervention may produce similar results in more organic and potentially effective ways. This possibility will require further investigation. We also noticed that all piano students reported that they usually talk to themselves before a performance in positive ways such as "everything will be fine" or even reciting a psalm from the Bible. Perhaps these habits demonstrate that everyone experiences emotional arousal prior to performing. During a performance, however, only the students with the high levels of music performance anxiety reported using frequent self-talk, engaging mostly in the negative commentary: "That was bad!" or "how could I made the same mistake in the same place again?" However, the less anxious students reported little or no selftalk. In our study, expressive writing intervention appears to provide piano students with a mechanism for staying focused and eliminating unnecessary negative or self-critical conversations while performing. Future Research on Music Performance Anxiety Our findings regarding the connection between various forms of self-talk and performance anxiety, and the impact of expressive writing intervention on the amount and quality of self-talk, is intriguing and promising. However, it requires further study. Neuroimaging studies have found that math anxiety is associated with hyperactivity in right amygdala regions that are responsible for processing negative emotions, and reduced activity in posterior parietal and dorsolateral prefrontal cortex regions are responsible for working memory processes (Young et al., 2012). We expect that piano students may show similar patterns of brain activation when experiencing music performance anxiety. In addition, we found that most of our participants reported moments when their minds went blank, or they had memory slips that lasted only a few seconds while they were performing. Since the amygdala is also linked to the hippocampus, which is known to be critical for long-term memory (Bird and Burgess, 2008;Gazzaniga et al., 2009, 378-423), one may hypothesize that hippocampal activity may be decreased when a piano student's mind "goes blank" on stage. In order to provide further insight into this process, future research might include functional magnetic resonance imaging (MRI) to identify which parts of the brain are associated with memory slips during music performance or the changes in patterns of activation associated with post-expressive writing intervention performance. Limitations The length of solo piano repertoires was varied, though there is no correlation between the pieces' lengths and the students' performance quality. 
Future studies may consider the length of the solo piano repertoires as a factor when recruiting the participants. Also, the number of participants in the sight-reading's control group was limited. A larger sample would have provided greater statistical evidence. Additionally, the performance settings in our study did not necessarily induce stress for all participants. The most stressful performance settings in this study, as ranked by our participants (N = 35) were solo/small ensemble recitals (81%) and public masterclasses (75%). The fear of being criticized by prestigious musicians, teachers, or a public audience may increase performance anxiety. Since the expressive writing intervention appeared to be most effective for students with high performance anxiety, it would be meaningful to test this expressive writing intervention in more high-stakes performance settings. We also did not officially ask our participants to give feedback about the expressive writing intervention. Participants' feedback will be valuable; although the participants may not be able to accurately evaluate the effects of the expressive writing intervention or may not be aware of what caused performance quality changes. CONCLUSION Unlike professional athletes, musicians are often left alone without resources or strategies to help them to cope with performance anxiety. Most piano students are unaware of either their need for psychological support or strategies to help combat performance anxiety. Our findings address an urgent need in current music pedagogy to address performance anxiety. Seventeen of 35 students in our study stated they had never received any education about how to identify and cope with performance anxiety. When we asked students how they could improve their performances, most of them answered that they needed to spend more time practicing. Only a few participants indicated that they might need to practice coping with their performance anxiety. Our study suggests a better understanding of performance anxiety, and the use of expressive writing intervention may help establish appropriate pedagogical support for music students. In conclusion, this study largely confirms our hypothesis that an expressive writing intervention could effectively help piano students cope with performance anxiety. Students who were highly anxious about performing received the most benefit from this intervention. This small but innovative tool was helpful in reducing the performance gap between the most anxious students and least anxious students and can be seen as a potentially useful procedure for all music performers. DATA AVAILABILITY STATEMENT All datasets generated for this study are included in the article/supplementary material. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the University of Arizona, Fred Fox School of Music. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS YT and LR contributed the design of the study and performed the statistical analysis. YT organized the database and wrote the first draft of the manuscript. Both authors contributed to manuscript revision, read, and approved the submitted version.
v3-fos-license
2014-10-01T00:00:00.000Z
2005-08-02T00:00:00.000
16709984
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1016/j.physletb.2005.08.128", "pdf_hash": "54d91d33c9e3e4f464ff6f8c8859eb9fa2ea105b", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42634", "s2fieldsofstudy": [ "Physics" ], "sha1": "a723bba23b5ffa4c1d567287469180b10f273c4c", "year": 2005 }
pes2o/s2orc
Model independent analysis of a class of $\bar{B}_s^0 $ decay modes The widths of a class of two-body $\bar{B}_s^0 $ decays induced by $b \to c \bar u d$ and $b \to c \bar u s$ transitions are determined in a model-independent way, using $SU(3)_F$ symmetry and existing information on $\bar B \to D_{(s)} P$ and $\bar B \to D_{(s)} V$ decays, with $P$ and $V$ a light pseudoscalar or vector meson. The results are relevant for the $B_s$ physics programmes at the hadron colliders and at the $e^+ e^-$ factories running at the peak of $\Upsilon(5S)$. In the next few years an intense B s physics programme will be pursued at the hadron colliders, the Fermilab Tevatron and the CERN LHC, and at the e + e − factories running at Υ(5S). The programme includes precise determination of the B s − B s mixing parameters and search for CP violating asymmetries in B s decays, with the aim of providing new tests of the Standard Model (SM) and searching for physics beyond SM. The analysis of rare B s transitions is another aspect of the research programme, with the same aim of looking for deviations from SM expectations. The knowledge of non leptonic B s decay rates is of prime importance for working out the research programme. For example, B s − B s mixing can be studied using B s twobody hadronic decay modes in addition to semileptonic modes. It is noticeable that the widths of a set of two-body transitions can be predicted in a model independent way, using the symmetries of QCD and available information on B decays. We are referring in particular to a class of decay modes induced by the quark transitions b → cūd and b → cūs, for example those collected in Table 1. The key observation is that the various Table 1: SU(3) decay amplitudes for B 0 s → D (s) P decays, with P a light pseudoscalar meson. In the last column the corresponding branching fractions predicted using the method described in the text are reported. decay modes are governed, in the SU(3) F limit, by few independent amplitudes that can be constrained, both in moduli and in phase differences, from corresponding B decay processes. Considering transitions with a light pseudoscalar meson belonging to the octet in the final state, the scheme where the correspondence can be established involves the three different topologies in B 0 s decays induced by b → cūd(s), namely the color allowed topology T , the color suppressed topology C and the W -exchange topology E. The transition in the SU(3) singlet η 0 involves another amplitude D in principle not related to the previous ones. Notice that the identification of the different amplitudes is not graphical, it is based on SU(3) [1]. Since B → DP decays induced by the quark processes b → cuq (q = d or s) involve a weak Hamiltonian transforming as a flavor octet, using de Swart's notation T (µ) ν for the ν = (Y, I, I 3 ) component of an irreducible tensor operator of rank (µ) [2], one can write: Table 1, i.e. the color suppressed, color enhanced and W-exchange diagrams, respectively. The SU(3) representation for B decays is reported in Table 2. Considering Table 2 one realizes that the threeB → DK experimental rates could allow to obtain |T |, |C| and the phase difference δ C − δ T . 
This was already observed in [4], and can be recast in the determination of the two independent isospin amplitudes A 1 and A 0 for I = 1 and I = 0 isospin DK final states: the difference of the B − andB 0 lifetimes: τ B − = 1.671 ± 0.018 ps and τ B 0 = 1.537 ± 0.014 ps, but neglecting the tiny phase space correction due to the difference between p D 0 K − = p D 0K 0 = 2280 MeV and p D +K − = 2279 MeV, with p the modulus of the three-momentum of one of the two final mesons in the B rest frame, one would obtain allowed region for A 0 /A 1 at various confidence levels by minimizing the χ 2 function for the three branching ratios and plotting the χ 2 contours that correspond to a given confidence level, as done in fig.1. Due to the quality of the experimental data and to the correlation between |A 0 /A 1 | and δ 0 −δ 1 , the allowed region is not tightly constrained, in particular the phase difference could be zero. We pause here, since we can elaborate once more about factorization approximations sometimes adopted for computing non leptonic decays, in this case for B mesons [5]. In fig.1 we have shown the predictions by, e.g., naive factorization, where the decay amplitudes are written in terms of K and D meson leptonic constants f K and f D , and the B → D and B → K form factors F 0 : . The result of this approach corresponds to vanishing phase difference; using a 1 = c 1 + c 2 /3 and a 2 = c 2 +c 1 /3, with c 1 and c 2 the Wilson coefficients appearing in the effective hamiltonian inducing the decays (for their numerical values we quote a 1 = (1.036, 1.017, 1.025) and a 2 = (0.073, 0.175, 0.140) at LO and at NLO (in NDR and HV renormalization schemes) accuracy, respectively [6]) we obtain results corresponding to the dots along the horizontal axis in fig. 1, which do not belong to the region permitted by experimental data at 95% CL. In generalized factorization, where a 1 and a 2 are considered as parameters, the phase difference is constrained to be zero, too. This is allowed by the experimental data on these three channels, but excluded if one considers all channels, as we shall see below. Coming to bounding the decay amplitudes, the fourB → Dπ andB → D s K decay rates cannot determine C, T , E and their phase differences [7].B → D s K only fixes the modulus of E, which is not small at odds with the expectations by factorization, where Wexchange processes are suppressed by ratios of decay constants and form factors and are usually considered to be negligible. Moreover, the presence of E does not allow to directly relate color favoured T or color suppressed C decay amplitudes in Dπ and DK final states. What can be done, however, is to use all the information onB → Dπ, D s K and DK (7 experimental data) to determine T , C and E (5 parameters). A similar attitude has been recently adopted in [8]. Noticeably, the combined experimental information is enough accurate to tightly determine the ranges of variation for all these quantities. In fig. 2 we have depicted the allowed regions in the C/T and E/T planes, obtained fixing the other variables to their fitted values, with the corresponding confidence levels. It is worth noticing that the phase differences between the various amplitudes are close to be maximal; this signals again deviation from naive (or generalized) factorization, provides contraints to QCD-based approaches proposed to evaluate non leptonic B decay amplitudes [9,10,11] and points towards sizeable long-distance effects in C and E [12,13]. 
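To make the fitting strategy sketched above more tangible, here is a schematic Python example of a χ² scan over an amplitude ratio and a strong-phase difference. The branching fractions, uncertainties, and the simple topological assignment used (D⁺K⁻ = T, D⁰K̄⁰ = C, D⁰K⁻ = T + C, with nearly equal phase-space factors absorbed into the overall normalisation) are illustrative placeholders consistent with the parameter counting described in the text, not the actual inputs, conventions, or numerical results of this paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative branching fractions and uncertainties (placeholders, not the measured
# values used in the paper) for B0bar -> D+ K-, B0bar -> D0 K0bar, B- -> D0 K-.
br  = np.array([2.0e-4, 5.0e-5, 3.7e-4])
err = np.array([0.3e-4, 1.0e-5, 0.4e-4])
tau = np.array([1.537, 1.537, 1.671])  # B0 and B- lifetimes in ps, as quoted in the text

def predicted_br(T, r, delta):
    """Toy topological parametrisation: D+K- = T, D0K0bar = C, D0K- = T + C,
    with C = r * exp(i*delta) * T and phase-space factors absorbed into T."""
    C = r * np.exp(1j * delta) * T
    amps = np.array([T, C, T + C])
    return tau * np.abs(amps) ** 2

def chi2(r, delta):
    # Profile over the overall normalisation |T| with a bounded 1-D minimisation.
    f = lambda T: np.sum(((predicted_br(T, r, delta) - br) / err) ** 2)
    return minimize_scalar(f, bounds=(0.0, 0.1), method="bounded").fun

# Scan the (|C/T|, delta_C - delta_T) plane; contours of chi2 - chi2_min at
# 2.30 and 5.99 correspond to 68% and 95% CL regions for two parameters.
rs     = np.linspace(0.1, 1.2, 56)
deltas = np.linspace(0.0, np.pi, 61)
grid = np.array([[chi2(r, d) for d in deltas] for r in rs])
i, j = np.unravel_index(np.argmin(grid), grid.shape)
print(f"best fit: |C/T| ~ {rs[i]:.2f}, "
      f"delta_C - delta_T ~ {np.degrees(deltas[j]):.0f} deg, chi2_min = {grid[i, j]:.2f}")
```

The same machinery extends directly to the larger fit over T, C, and E described in the text; only the amplitude table and the data vector change.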
To obtain the amplitudes we have fixed the ratio |V us /V ud | to the experimental result, |V us /V ud | = 0.226 ± 0.003, and we have taken into account the phase space correction due to p DK , p Dπ = 2306 MeV and p DsK − = 2242 MeV. We obtain |C/T| = 0.53 ± 0.10, |E/T| = 0.115 ± 0.020, δ C − δ T = (76 ± 12)° and δ E − δ T = (112 ± 46)°. We have to mention that the accuracy of the fit is not particularly high, since χ 2 /dof = 2.3, i.e. a fit probability of 10%. This is entirely due to a single entry in Table 2. The corresponding predictions are reported in Table 1. The uncertainties in the predicted rates are small; in particular, the W-exchange induced processes B 0 s → D + π − , D 0 π 0 are precisely estimated [14]. The decays with η or η ′ in the final state involve the amplitude D, corresponding to the transition into an SU(3) singlet η 0 , and the η − η ′ mixing angle θ (in a one-angle mixing scheme). If we use the value θ = −15.4° for the mixing angle [15], we obtain |D/T| = 0.41 ± 0.11 without sensibly constraining the D − T phase difference, δ D − δ T = −(25 ± 51)°. The corresponding B 0 s decay rates are predicted accordingly. The key to the success of the programme of predicting B s decay rates is the small number of amplitudes in comparison to the available data, a feature which is not common to all processes. Considering b → cūd(s) induced transitions, one could look at the case of one light vector meson in the final state, with the same SU(3) decomposition reported in Tables 1, 2 (we denote by a prime the amplitudes involved in this case). B decay data are collected in Table 3. The difference with respect to the previous case is that the W-exchange mode B̄ 0 → D + s K * − has not been observed yet; therefore the E ′ amplitude is poorly determined by the other modes alone. Taking into account phase space corrections due to p Dρ = 2235 MeV and p DK * = 2211 MeV, we obtain |C ′ /T ′ | = 0.36 ± 0.10. Predictions for B 0 s decay rates are collected in Table 3: as anticipated, the accuracy is not high for W-exchange induced decays. On the other hand, the predicted rate is compatible with the upper bound in Table 3. Considering other decay modes induced by the same quark transitions, namely B̄ → D * (s) P and B̄ → D * (s) V decays, the present experimental data are not precise enough to sensibly constrain the independent amplitudes and to provide stringent predictions for B s . As soon as the experimental accuracy improves, a similar analysis will become possible for B 0 s → D * (s) P modes, while the three helicity B̄ → D * (s) V amplitudes will be needed to determine the corresponding B s decays. Let us finally comment on the possible role of SU(3) F breaking terms that can modify our predictions. Those effects are not universal and in general cannot be reduced to well-defined and predictable patterns without new assumptions. Their parametrization would introduce additional quantities [16] that at present cannot be sensibly bounded, since their effects seem to be smaller than the experimental uncertainties. Therefore they can be neglected as long as the experimental errors remain at the present level. It will be interesting to investigate their role when the B s decay rates are measured and more precise B branching fractions become available. Acknowledgments We thank F. De Fazio for discussions. We acknowledge partial support from the EC Contract No. HPRN-CT-2002-00311 (EURIDICE).
v3-fos-license
2024-04-06T15:37:27.195Z
2024-04-01T00:00:00.000
268946122
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://assets.cureus.com/uploads/review_article/pdf/244133/20240404-28457-o43wdi.pdf", "pdf_hash": "af2acbd9eaf74c6b0b1cf70f17e49c4e892b6fb8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42636", "s2fieldsofstudy": [ "Medicine" ], "sha1": "603195924af467f37217d67c8613aba99625aa14", "year": 2024 }
pes2o/s2orc
Mobilizing Progress: A Comprehensive Review of the Efficacy of Early Mobilization Therapy in the Intensive Care Unit Early mobilization therapy has emerged as a crucial aspect of intensive care unit (ICU) management, aiming to counteract the detrimental effects of prolonged immobility in critically ill patients. This comprehensive review examines the efficacy of early mobilization therapy in the ICU setting, synthesizing evidence from clinical trials, meta-analyses, and guidelines. Key findings indicate that early mobilization is associated with numerous benefits, including reduced muscle weakness, a shorter duration of mechanical ventilation, decreased ICU and hospital length of stay, and improved functional outcomes. However, safety concerns, staffing limitations, and patient-specific considerations pose significant barriers to widespread adoption. Despite these challenges, early mobilization is important for improving ICU patient outcomes. This review underscores the critical need for continued research and implementation efforts to optimize early mobilization protocols, address remaining challenges, and expand access to this beneficial therapy. By working collaboratively to overcome barriers and prioritize early mobilization, healthcare providers can enhance the quality of care and improve outcomes for critically ill patients in the ICU. Introduction And Background Early mobilization therapy is a systematic approach to initiating physical activity and movement in critically ill patients in the intensive care unit (ICU) [1]. Traditionally, ICU patients are often kept sedated and immobilized to prevent complications and facilitate medical management. However, research over the past few decades has highlighted the detrimental effects of prolonged immobility, such as muscle weakness, ventilator-associated complications, and psychological distress. Early mobilization therapy aims to counteract these negative consequences by promoting early and progressive physical activity tailored to the patient's condition and capabilities [2]. The significance of early mobilization for ICU patients cannot be overstated. Prolonged immobility in the ICU has been associated with a myriad of adverse outcomes, including muscle atrophy, weakness, ventilator-associated pneumonia (VAP), pressure ulcers, and an increased risk of thromboembolic events [3]. Moreover, immobility can exacerbate psychological distress, leading to anxiety, depression, and post-traumatic stress disorder (PTSD) in ICU survivors. By contrast, early mobilization has been shown to mitigate these risks, improve functional outcomes, shorten ICU and hospital lengths of stay, and enhance survivors' overall quality of life [4]. This review aims to comprehensively examine the efficacy of early mobilization therapy in the ICU setting. By synthesizing the existing literature, we aim to elucidate the benefits, challenges, and implications of early mobilization for critically ill patients. Furthermore, we seek to identify gaps in knowledge, areas for future research, and strategies for successfully implementing early mobilization protocols in the ICU. Ultimately, this review aims to contribute to optimizing ICU care and improving outcomes for critically ill patients by promoting early mobilization therapy.
Historical Perspective on ICU Care and Immobility The historical perspective on ICU care and immobility delineates the progression of critical care medicine and nursing practices over time.In the 19th century, professional nurses delivered physical care in hospitals near nursing stations, marking the nascent stages of intensive therapy.The advent of life-support devices for ventilation and renal function characterized early 20th-century intensive therapy.Over the past five decades, critical care has evolved into comprehensive monitoring and automated laboratory measurements guided by critical care physicians, nurse specialists, pharmacists, and respiratory therapists utilizing diverse life-support methodologies [5].During the 1990s, a significant paradigm shift unfolded in ICU critical care medicine toward managing more critically ill patients with conditions like acute respiratory distress syndrome.During this period, many breakthroughs occurred in sustaining patients on ventilators for extended durations and exploring novel treatment modalities.However, practices involving deep sedation and paralysis were prevalent, resulting in prolonged immobility and potential long-term ramifications for patients [6].The ICU Liberation initiative by the Society of Critical Care Medicine is geared toward saving patients from pain, oversedation, delirium, immobility, and sleep disturbances in the ICU.Implementing strategies such as ventilator weaning protocols, maintaining light sedation levels, preventing delirium, initiating early mobilization, and fostering family engagement aims to enhance patient outcomes and reduce the risk of post-intensive care syndrome (PICS) [7].In recent years, critical care nursing has witnessed advancements in multidisciplinary teamwork, protocol-driven care for weaning from mechanical ventilation and sedation, early patient mobilization to prevent complications like VAP and deep vein thrombosis, as well as a heightened emphasis on humanizing ICU environments through open visiting policies and ethical end-of-life care approaches [8]. 
Evolution of Early Mobilization Therapy Over the past decade, the evolution of early mobilization therapy in the ICU has marked significant progress.Numerous randomized trials have been conducted to assess the efficacy of early mobilization and rehabilitation in ICU settings, reduce the incidence of ICU-acquired weakness (ICUAW), and enhance longterm physical functioning and quality of life for patients.Despite the availability of supportive evidence and guidelines advocating for early mobilization, its implementation in ICUs exhibits considerable variability [9].Recent studies have shed light on various strategies to optimize ICU early mobilization and rehabilitation practices.These strategies encompass the establishment of multidisciplinary teams (MDTs) with assigned champions, the utilization of structured quality improvement methodologies, the identification of barriers and facilitators, the assessment of optimal timing, type, and dosage of interventions, the evaluation of outcomes and performance metrics, and the integration of mobility-related measures into clinical care to establish patient-centric goals and monitor progress [10].The effectiveness of early mobilization in the ICU has been corroborated by research demonstrating a reduction in the incidence of ICUAW, enhancement of functional capacity, reduction in mechanical ventilation duration, improvement in patient's ability to stand, increased rates of ICU discharge, and overall improvement in patient outcomes.Emerging techniques such as electrical muscle stimulation, cycling, hydrotherapy, and devices like the Sara Combilizer have exhibited favorable outcomes and safety profiles in facilitating early mobilization [3]. Muscle Atrophy and Weakness Muscle atrophy and weakness represent significant challenges encountered by critically ill patients in the ICU.ICUAW is a prevalent condition among critically ill individuals, affecting up to 80% of patients.It can result in prolonged disability extending far beyond the ICU stay.ICUAW manifests as muscle wasting, compromised contractility, neuropathy, and dysregulation of pathways involved in muscle protein degradation, such as the ubiquitin-proteasome system [11].Notably, the preferential loss of myosin is a distinguishing characteristic of this condition.Risk factors contributing to ICUAW include inflammation, the administration of steroids, and immobilization through paralysis, underscoring the critical importance of early rehabilitation interventions in mitigating these declines in muscle function [11].Research indicates that daily rates of muscle atrophy in key muscle groups such as the rectus femoris and vastus intermedius can be alarmingly high, reaching up to 0.84% and 0.98%, respectively.Furthermore, gender disparities do exist, with women experiencing approximately three times higher rates of muscle atrophy compared to men [12].A comprehensive understanding of the mechanisms underlying muscle atrophy and weakness in critically ill patients is imperative for developing effective interventions to prevent and manage these conditions.Ultimately, such interventions promise to improve patient outcomes and significantly enhance overall quality of life. 
Ventilator-Associated Complications Ventilator-associated complications, including VAP, ventilator-associated events (VAEs), and infectionrelated ventilator-associated complications (IVACs), pose significant challenges for patients in the ICU.Research indicates that VAP is a prevalent ICU-acquired infection, often correlated with prolonged mechanical ventilation, extended stays in the ICU and hospital, and elevated mortality rates.Reported incidences vary widely, from 5% to 40%, depending on the setting and diagnostic criteria.VAP is associated with prolonged mechanical ventilation and ICU stays [13,14].VAEs, encompassing both VAC and IVAC, have been identified as contributors to adverse patient outcomes, with IVAC linked explicitly to increased hospital mortality among critically ill patients necessitating prolonged mechanical ventilation [15,16].These complications underscore the critical need for vigilant surveillance and the implementation of preventive strategies to reduce ventilator-associated complications' impact on patient outcomes within the ICU. Psychological Impact of Prolonged Immobility Prolonged immobility in ICU patients can take a profound toll on their psychological well-being, precipitating conditions such as depression, anxiety, cognitive impairments, and PTSD.Research indicates that as many as 80% of critically ill patients in the ICU develop neuromuscular dysfunction or ICU delirium, both of which are closely linked to unfavorable outcomes [17].Moreover, survivors of critical illness often grapple with moderate to severe depression and anxiety persisting for up to two years post-discharge, with depression rates surpassing those observed in the general population [17].Cognitive impairments, although frequently overlooked, can persist over the long term, significantly impacting the quality of life for ICU survivors [17].Furthermore, the psychological ramifications following critical illness, collectively referred to as PICS, encompass a spectrum of physical, mental, and cognitive challenges.Depression and symptoms of PTSD constitute pivotal components of psychological PICS and are pivotal factors associated with patientreported unacceptable outcomes [18].These psychological sequelae have been documented to endure for up to five years post-ICU discharge, underscoring the imperative of addressing and managing these psychological burdens to bolster the quality of life for ICU survivors [18]. 
Bedside Exercises

The components of early mobilization therapy, particularly bedside exercises, encompass a spectrum of interventions meticulously tailored to the patient's individual tolerance levels and requirements. These interventions range from straightforward activities, such as transitioning from a supine to a seated position or performing grooming tasks while seated at the bedside, to more advanced maneuvers, like ambulating to the bathroom. Intervention selection should be driven by a patient-centered approach that considers the patient's capacity to withstand and engage in the activity [19]. In instances where patients exhibit diminished endurance or strength, co-treatment involving collaboration with physical therapy can prove advantageous. This collaborative approach facilitates patient participation in early mobilization activities with appropriate assistance, thereby fostering favorable outcomes such as reduced length of stay and enhanced functional recovery [19]. Crucially, vigilant monitoring of patients' vital signs throughout the mobilization session is imperative to ensure that the intervention is tolerated and to promptly document any observed changes for subsequent treatments [19]. While certain patients may not be deemed suitable candidates for early mobilization due to medical instability or other complicating factors, a significant portion of patients stand to benefit from these interventions with minimal adverse effects, underscoring the pivotal role of early mobilization in improving patient outcomes in both ICU and acute care settings [19,20].

Sitting on the Edge of the Bed

Early mobilization therapy encompasses a variety of components designed to safely mobilize critically ill patients within the ICU. One crucial element involves transitioning patients from a lying position to sitting on the edge of the bed or in a chair [19,21]. Additionally, early mobilization interventions may encompass basic grooming tasks performed while the patient is seated at the edge of the bed, transfers to a bedside commode, ambulation to the bathroom, or activities of daily living (ADLs) retraining tailored to the patient's tolerance and requirements [19]. When patients exhibit diminished endurance or strength, concurrent treatment with physical therapy can facilitate early mobilization activities and ultimately improve outcomes [19]. It is essential to monitor patients' vital signs before, during, and after each mobilization session to ensure they are tolerating the intervention well and to track their progress effectively [19]. While not all patients may be deemed suitable candidates for early mobilization due to medical instability or other complicating factors, integrating these components into therapy protocols can help reduce the risk of functional decline and enhance outcomes for many critically ill patients in the ICU.
Ambulation and Walking The components of early mobilization therapy, with a particular focus on ambulation and walking, encompass various techniques to enhance functional outcomes for critically ill patients in the ICU.These components comprise a spectrum of activities, including passive and active range of motion exercises, active side-to-side turning, bed exercises, bedside sitting, transfers between bed and chair, ambulation, hoist therapy, tilt table exercises, resistance exercises, and electrical stimulation [1].Early mobilization entails initiating physical activity as soon as the second to fifth day following the onset of a critical illness, underscoring the critical importance of mobilizing patients promptly to reduce complications associated with prolonged immobility [21].Research indicates that early rehabilitation and mobilization in the ICU yield notable improvements in functional capacity, muscle strength, duration of mechanical ventilation, walking ability upon discharge, and health-related quality of life [21].These interventions play a pivotal role in reducing the incidence of ICUAW, increasing ventilator-free days, and heightening the likelihood of patients being discharged home [21].Passive and active-assisted range of motion exercises involve manipulating the patient's joints through their full range of motion, either with assistance (active assisted) or without requiring the patient's effort (passive).These exercises are instrumental in averting joint stiffness and preserving flexibility [22].Functional mobility retraining is a targeted technique to enhance a patient's capacity to engage in daily activities and move autonomously.This approach incorporates exercises that replicate functional tasks such as sitting, standing, and walking, thereby facilitating the restoration of functional independence [23].Positioning is paramount in the ICU setting to forestall complications such as pressure ulcers, contractures, and respiratory issues.Physical therapists are pivotal in ensuring optimal patient positioning to promote comfort and safety [22].Breathing exercises are relevant for ICU patients, particularly those reliant on mechanical ventilation.These exercises facilitate lung function, reduce respiratory complications, and bolster overall respiratory health [22].Therapeutic exercises encompass a diverse range of movements customized to meet the patient's specific needs, targeting areas such as strength, endurance, balance, and coordination.These exercises enhance physical function and overall well-being and contribute to the patient's rehabilitation [23].Neuromuscular Electrical Stimulation involves the application of electrical stimulation to activate muscles, aiding in muscle strengthening and averting muscle atrophy in critically ill patients [24].Manual techniques employed by physiotherapists encompass methods such as percussion, vibration, and manual hyperventilation to assist patients in clearing secretions, improving lung function, and enhancing overall respiratory health [25].Figure 1 shows physical therapy techniques. 
FIGURE 1: Physical therapy techniques
Image credit: Amol Singam

Clinical Trials and Studies Demonstrating Benefits

Clinical trials and studies have unequivocally demonstrated the advantages of early mobilization within the ICU. Seminal studies have conclusively shown that initiating early mobilization can reduce ICU and hospital stay durations, hasten return to independence, enhance ambulation capacity, augment muscle strength, reduce ICUAW, and amplify functional recovery [26,27]. Research further underscores that early active mobilization during mechanical ventilation in the ICU contributes to shortened ICU stays and heightened functional mobility [27]. Moreover, systematic reviews have illuminated that early physical therapy and ICU mobilization represent feasible, safe, and productive interventions. These interventions facilitate the attainment of mobility milestones and yield tangible improvements in functional outcomes for critically ill patients within the ICU [28]. Meta-analyses and systematic reviews have examined the effects of early mobilization therapy on critically ill patients within the ICU. These comprehensive studies scrutinize the influence of early mobilization on diverse outcomes, including muscle strength, physical function, duration of ICU and hospital stays, and the occurrence of adverse events among patients undergoing cardiac surgery or requiring mechanical ventilation [10,29-32]. The cumulative evidence suggests that early mobilization holds promise in enhancing physical function, mitigating the risk of ICUAW, expediting the weaning process from mechanical ventilation, and potentially forestalling ICUAW, a condition associated with diminished quality of life and heightened mortality risk [10]. Nevertheless, the certainty of the evidence regarding the benefits of systematic early mobilization remains inconclusive. Some studies have reported conflicting findings, underscoring the imperative for further research to elucidate the effectiveness of early mobilization interventions in the ICU [30,31]. Despite these uncertainties, meta-analyses and systematic reviews underscore early mobilization's significance as a prospective intervention to improve outcomes for critically ill patients. However, additional research is warranted to grasp its impact comprehensively.
Guidelines and Recommendations from Professional Organizations Guidelines and recommendations from professional organizations underscore the critical importance of early mobilization in the ICU to enhance patient outcomes.These guidelines advocate for implementing early mobility protocols to foster patient mobility and facilitate functional recovery within the ICU setting [33,34].Extensive research has demonstrated that early mobilization is feasible, safe, and effective in reducing the length of ICU stays, improving functional outcomes, and mitigating complications associated with prolonged immobilization [33,35].The evidence supports the utilization of interdisciplinary collaboration, education, and targeted interventions to optimize the feasibility and effectiveness of early mobilization programs in the ICU [33,35].Additionally, the literature underscores the manifold benefits of early ambulation as an integral component of a comprehensive care bundle within the ICU, leading to tangible improvements such as decreased mechanical ventilation duration, abbreviated ICU and hospital stays, and enhanced functional recovery for critically ill patients [33].Despite potential barriers such as resource allocation constraints and apprehensions regarding patient safety, protocols and guideline recommendations have been developed to address these concerns and ensure the safety and appropriateness of early mobilization care in the ICU [33,35].In summary, professional guidelines and recommendations underscore early mobilization's pivotal role in augmenting outcomes and mitigating complications for critically ill patients within the ICU setting. Safety Concerns and Risk Assessment Safety concerns and risk assessment represent crucial considerations when contemplating early mobilization therapy within the ICU.Studies underscore the importance of establishing safety criteria for initiating early mobilization, particularly for patients receiving mechanical ventilation, necessitating adequate monitoring and the implementation of safety measures during mobilization sessions [36].Patientrelated safety events during active mobilization, especially among intubated patients in the ICU, are subjects of significant research focus aimed at comprehending the associated risk factors [37].While early mobilization and rehabilitation within the ICU have generally demonstrated safety, potential safety events such as hemodynamic fluctuations and desaturation may arise, underscoring the imperative of vigilant monitoring and risk assessment throughout mobilization sessions [38].Notably, barriers to early mobilization encompass apprehensions regarding patient safety, including physiological alterations such as hypotension and hypoxemia, as well as the necessity for the removal of medical devices.These concerns mandate meticulous consideration and management to safeguard patient well-being during mobilization activities [38].Addressing these safety concerns entails the implementation of structured protocols, interprofessional training initiatives, and fostering a culture that prioritizes patient safety.Such measures are indispensable for mitigating risks and optimizing the benefits of early mobilization therapy within the ICU setting [39]. 
Staffing and Resource Limitations

Implementation of early mobilization therapy in the ICU is hampered by staffing and resource limitations, which present formidable challenges. Studies have pinpointed restricted staffing levels, time constraints, and inadequate equipment availability as primary barriers to early mobilization within the ICU [1,33,39]. Physiotherapists and healthcare professionals have voiced concerns regarding the impediments posed by constrained timeframes, heightened workloads, and staffing shortages, all of which impede their capacity to initiate early mobilization for ICU patients [1]. Moreover, the scarcity of personnel available to facilitate patient mobilization, compounded by issues such as excessive sedation, delirium, the risk of musculoskeletal injuries, and elevated workplace stress levels, exacerbates the hurdles associated with implementing early mobilization practices in the ICU [1,33,40]. Effectively surmounting these staffing and resource limitations necessitates adopting strategic measures, including workflow optimization, interprofessional training initiatives, establishing MDTs, and implementing streamlined communication processes to facilitate planning and ensure equitable distribution of mobility sessions [33]. Addressing these barriers is paramount to the seamless integration of early mobilization therapy into routine clinical care within the ICU, ultimately fostering improved outcomes for critically ill patients.

Patient-Specific Considerations

Patient-specific considerations are pivotal in successfully implementing early mobilization therapy within the ICU, ensuring safety and effectiveness for critically ill patients. Factors such as physiological instability, the presence of medical devices, and the patient's readiness are paramount in determining the appropriateness of early mobilization [1,3]. Patient safety and stability emerge as prominent concerns when addressing barriers to early mobilization in the ICU. Studies underscore the critical importance of assessing patient readiness and physiological status before mobilizing [1]. Factors such as hyperglycemia, the risk of pressure ulcers, and psychological effects necessitate vigilant monitoring during early mobilization to preempt potential complications [1]. Moreover, patient-specific barriers, including time constraints, staffing limitations, and inadequate training, can impede the initiation of early mobilization. This underscores the imperative of adopting tailored approaches based on individual patient needs and conditions [1,3]. By carefully considering these patient-specific factors, healthcare professionals can optimize the safety and efficacy of early mobilization therapy in the ICU, promoting improved patient outcomes.
Family Involvement and Support Family involvement and support emerge as integral components of early mobilization therapy within the ICU, a notion underscored by various studies.Engaging families in mobilization endeavors has been proven safe and feasible, yielding benefits for patients and their loved ones [41][42][43].Research indicates that a significant proportion of patients desire to undergo early mobilization following admission to the ICU, with the majority endorsing the view that early mobilization should be standard practice.Furthermore, many patients perceive mobilization as instrumental in facilitating recovery [42].Moreover, family members express a keen interest in participating in mobilization activities, underscoring their willingness to actively contribute to the care of their loved ones during their ICU stay [42].Studies further underscore the significance of family engagement, particularly within pediatric ICUs, where a positive attitude and support for early mobilization from all staff members have been shown to reassure families and foster their active participation in the rehabilitation process [43].By incorporating families into the mobilization process, healthcare professionals can enhance patient care and contribute to a supportive environment conducive to patient recovery within the ICU. Multidisciplinary Approach The successful integration of early rehabilitation into routine practice within the ICU hinges on the importance of multidisciplinary collaboration.A team-based approach involving various healthcare professionals is essential to ensure patient availability, sufficient staffing, and the coordination necessary to achieve early mobilization amid the demanding ICU environment [44].Mobility champions are pivotal in fostering a culture prioritizing early mobilization and rehabilitation within the ICU.These individuals provide leadership, communication skills, education, training, coordination, and advocacy for patient mobilization, facilitating integration into routine ICU care [9]. 
Structured quality improvement processes are crucial in enhancing the successful implementation of early mobilization and rehabilitation initiatives.This approach encompasses summarizing evidence, identifying barriers, establishing performance measures, and ensuring all eligible patients receive the intervention through comprehensive engagement, education, execution, and evaluation strategies [9].Recognizing and addressing barriers to early mobilization is paramount for its effective implementation.These barriers encompass patient-related factors, structural challenges, procedural issues, and cultural considerations.Overcoming these hurdles entails implementing safety guidelines, utilizing mobility protocols, providing interprofessional training and education, and involving physician champions [9].Evaluating interventions' optimal timing, type, and dosage is imperative for effective early mobilization.Initiating rehabilitation shortly after ICU admission and considering various interventions such as active functional mobilization, inbed cycle ergometry, electrical muscle stimulation, tilt tables, and other equipment can significantly enhance outcomes [9].Developing mobility-related measures integrated into clinical care facilitates the establishment of patient goals, tracking of progress, effective allocation of resources, and evaluation of structured quality improvement programs.Understanding patients' functioning before a critical illness is vital for tailoring interventions to meet their needs [9]. Figure 2 shows a multidisciplinary approach. Protocol Development and Standardization Advances in ICU care have been propelled by the development of clinical protocols and the establishment of MDTs, resulting in notable enhancements in patient management and consequent reductions in morbidity and mortality among critically ill patients [45].Protocols play a pivotal role in the ICU, providing a structured framework for care delivery.While they do not ensure immediate improvements in care quality, protocols offer essential guidelines for attaining higher standards of care through standardization and reliance on evidence-based practices [45].Evidence substantiates the effectiveness of protocols in ameliorating outcomes in critical care settings.Areas such as anemia management, sedation, ventilator weaning, and ventilation strategies have demonstrated improved outcomes with the implementation of protocols [45]. Despite their benefits, concerns linger regarding the potential for protocols to supplant clinical judgment, raising apprehension about compromised care quality, fostering complacency, and impeding professional learning.Critics argue that protocols may erode the decision-making process of healthcare professionals in an era marked by high technological reliance [46].Quality improvement tools, such as checklists and structured care plans, are indispensable for standardizing care delivery within the ICU.These tools aim to enhance quality, safety, and patient satisfaction and reduce ICU length of stay by structuring care processes and ensuring adherence to best practices [47].Successfully implementing protocols and quality improvement tools necessitates addressing local barriers, garnering clinician buy-in, and implementing ongoing strategies to maintain utilization.A comprehensive understanding of the factors conducive to successful implementation is imperative for devising effective tools to augment care delivery within the ICU [48]. 
Staff Education and Training Various educational strategies are employed to train ICU staff on the benefits and safety of early mobilization, encompassing communication methods such as email correspondence, staff meetings, poster displays, face-to-face education sessions, and in-person training sessions tailored for an interdisciplinary cohort of volunteer champions.The educational initiatives focus on elucidating the detrimental effects of prolonged bed rest, extolling the benefits of early mobilization for ICU patients, elucidating safety protocols, and imparting knowledge on utilizing mobility protocols [33].Staff education encompasses a range of topics, including the significance of rehabilitation in mitigating ICUAW and delirium, the safety considerations inherent in early mobility programs, and the favorable impact on patient outcomes, such as shortened ventilator duration, reduced ICU and hospital length of stay, and diminished complications.Training endeavors emphasize that critically ill patients can safely engage in movement activities [49].The teach-back method is employed to validate staff learning during training sessions, ensuring that staff members effectively comprehend the protocols and procedures for early mobilization [33].Following implementation, on-site support is extended to ICU departments for three weeks, facilitated by volunteers who conduct rounds through each unit, responding to inquiries, delivering real-time education to staff members, and aiding in patient mobility.This hands-on support reinforces training efforts and addresses any immediate concerns or queries from staff members [33].Staff perceptions regarding mobility are evaluated through surveys incorporating questions on a Likert scale and soliciting free-text responses targeting existing barriers to mobility.Understanding staff perceptions is essential for addressing challenges and refining the implementation of early mobilization programs [33].Figure 3 shows staff education and training. 
FIGURE 3: Staff education and training
Image credit: Amol Singam

Use of Technology and Assistive Devices

The myICUvoice app, developed by an intensivist and trialed at Addenbrooke's Hospital in Cambridge, utilizes touchscreen technology on an iPad to facilitate communication between patients and healthcare staff. This innovative tool empowers patients to express symptoms and convey their needs effectively, providing a vital communication channel that ICU staff might otherwise miss. By enabling patients to articulate their feelings and requirements, the app enhances patient-centered care within the ICU setting [50,51]. The EyeControl device, developed by an Israeli company, offers assistive communication solutions for "locked-in" patients and individuals requiring remote communication, which proved especially valuable during the COVID-19 pandemic. This wearable, screenless, lightweight device utilizes an infrared camera to track eye movements, translating them into audio communication. Ventilated patients can effectively communicate with medical staff and loved ones from isolated units, mitigating the risk of contagion and enhancing communication accessibility [50]. Another impactful technology in healthcare is the EASE app, which facilitates communication between healthcare providers and the families of hospitalized patients. With a simple tap, this app allows doctors or nurses to send updates, texts, photos, and videos regarding the patient's progress. By providing real-time information and fostering seamless communication, the EASE app strengthens connections between healthcare teams and families during challenging times, enhancing the patient experience [50]. Figure 4 shows the use of technology and assistive devices.

Functional Outcomes and Quality of Life

Functional outcomes and quality of life represent critical parameters evaluated in studies concerning early mobilization therapy within the ICU. Research findings indicate a positive correlation between early mobilization and physical therapy interventions and improved functional outcomes, encompassing factors such as muscle strength, ambulation capacity, ADLs, and mobility [27,28,52]. These interventions are tailored to address the physical dysfunction commonly observed in ICU patients, enhance functional mobility, and foster independence in daily activities [52]. Moreover, studies have underscored the beneficial impact of early mobilization on the quality of life of ICU patients, highlighting the necessity of considering not only physical outcomes but also the overall well-being and satisfaction of individuals undergoing critical care treatment [27,52]. Despite the challenges and barriers encountered in implementing early mobilization protocols, the emphasis on functional outcomes and quality of life underscores the holistic approach imperative for optimizing patient recovery and long-term well-being within the ICU.
Healthcare Cost Implications

Hospitals implementing early mobility programs have reported a noteworthy reduction in direct care costs for patients, surpassing 29% and translating into substantial cost savings [53]. These programs have demonstrated their financial benefits through various avenues, including decreased ICU days, shorter hospital stays, and reduced readmission rates, contributing to overall financial advantages for healthcare institutions [53,54]. Studies have delved deeper into the financial implications, revealing that the total net present value of an early mobility program for a hospital with 1,000 yearly ICU admissions can exceed $2.3 million over seven years, underscoring the significant financial value of such initiatives [54]. An in-depth financial impact analysis highlights the annual cost-of-care savings associated with early mobility programs, encompassing reductions in ICU and non-ICU days for ventilated and non-ventilated patients and fewer days on ventilation, thereby heralding substantial financial benefits for hospitals [55]. Moreover, these initiatives have been linked to notable reductions in hospital readmission rates, resulting in additional annual savings for healthcare facilities [55]. Beyond the scope of hospitals, early mobility programs extend their value proposition to third-party payers and capitated health systems, manifesting in improved patient outcomes and reduced costs, thereby generating considerable value for these stakeholders [54]. Even in scenarios where the clinical effectiveness of an early mobility program is moderately reduced by 20%, the financial impact remains positive, thus accentuating the economic viability of investing in such programs [54].

Long-Term Effects on Morbidity and Mortality

Prolonged hospitalization and immobility in critically ill patients pose significant risks for long-term physical and cognitive impairments. Early mobilization therapy has emerged as a promising intervention for mitigating morbidity associated with prolonged immobilization, including muscle weakness, myopathy, and muscular atrophy [29]. While early mobilization demonstrates short-term benefits, such as reduced mechanical ventilation days and hospital length of stay, its implications for long-term mortality remain under scrutiny. Studies suggest that active mobilization and rehabilitation in the ICU do not increase short- or long-term mortality rates. Instead, they may contribute to improvements in muscle strength, mobility status, and participation restriction outcomes [56]. Implementing early mobilization interventions in the ICU may not always result in direct cost savings because of the intricacies involved. The resource and labor burdens associated with early mobilization programs necessitate careful evaluation against potential benefits to ensure efficient resource allocation [29]. Safety concerns, particularly in cardiac surgery patients, have been linked to adverse events during early mobilization, including significant hemodynamic alterations. Addressing these safety concerns and optimizing protocols for safe mobilization practices is imperative to maximize the benefits of early mobilization while minimizing associated risks [29]. Figure 5 shows the long-term effects on morbidity and mortality.

FIGURE 5: Long-term effects on morbidity and mortality
Image credit: Amol Singam
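Returning to the figures under Healthcare Cost Implications above, the seven-year net-present-value estimate can be illustrated with a basic discounted-cash-flow calculation. The sketch below is not the model used in the cited analyses; the upfront cost, annual savings, and 5% discount rate are hypothetical placeholders chosen only to show how an NPV of the reported order of magnitude could arise.

```python
# Hedged illustration: discounted cash flow for an early mobility program (all inputs hypothetical).
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of cash flows, where cash_flows[0] occurs at time 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

upfront_cost = -800_000.0        # hypothetical year-0 investment (staff, equipment, training)
annual_savings = 600_000.0       # hypothetical yearly cost-of-care savings
flows = [upfront_cost] + [annual_savings] * 7   # seven years of savings
print(f"Illustrative 7-year NPV: ${npv(0.05, flows):,.0f}")
```

With these invented inputs the calculation lands in the low millions of dollars, comparable in scale to the figure reported above, but the numbers themselves carry no evidential weight.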
Areas for Further Investigation

Further research is imperative to identify and address barriers hindering the implementation of early mobilization in the ICU. These barriers, including deep sedation, lack of coordination with rehabilitation professionals, and limited understanding of the benefits of early rehabilitation, need thorough investigation to facilitate effective strategies for overcoming them [52]. Investigating the optimal timing and methods for early mobilization in critically ill patients is paramount. Understanding the appropriate initiation time for mobilization and determining the most effective techniques can significantly improve patient outcomes and prevent delays in achieving early mobilization milestones [52]. Conducting multicenter studies is essential to reduce bias and explore day-to-day changes in barriers to implementing early mobilization. These studies offer valuable insights into the challenges encountered across diverse ICU settings, thereby informing the development of comprehensive rehabilitation strategies [52]. Research endeavors should prioritize assessing the impact of early mobilization on various patient outcomes. These outcomes encompass independence in ADLs, quality of life, duration of mechanical ventilation, ICU and hospital length of stay, and overall physical function post-discharge. A thorough evaluation of these metrics can provide comprehensive insights into the efficacy of early mobilization interventions [52]. Continuous evaluation and refinement of clinical practice guidelines for early mobilization in the ICU are indispensable. Addressing issues such as the optimal dose of mobilization, criteria for patient selection, and timing of early mobilization can further optimize the implementation of early mobilization protocols, ensuring standardized and effective care delivery [57]. Figure 6 shows areas for further investigation.

FIGURE 6: Areas for further investigation
Image credit: Amol Singam

Innovations in Early Mobilization Techniques

Newer techniques, such as electrical muscle stimulation, have emerged as safe and effective modalities for facilitating early mobilization and improving outcomes for critically ill patients in the ICU [3]. Innovative approaches like cycling and hydrotherapy have demonstrated positive outcomes and safety in early mobilization endeavors, contributing significantly to the evolution of rehabilitation practices within the ICU [3]. The Sara Combilizer, a device highlighted in the sources, represents a notable advancement in aiding patient mobility in the ICU while maintaining safety standards [3]. Integrating multidisciplinary protocols incorporating early mobility alongside interventions like awakening/breathing coordination and delirium monitoring has been acknowledged as a forward-thinking strategy to bolster patient outcomes and foster early mobilization in the ICU [58]. Several studies have delineated safety criteria and termination guidelines for both in-bed and out-of-bed mobilization, offering a structured approach to uphold patient safety during early mobilization initiatives [3]. Figure 7 shows innovations in early mobilization techniques.
FIGURE 7: Innovations in early mobilization techniques
Image credit: Amol Singam

Integration of Early Mobilization Into Standard ICU Care

Studies emphasize the safety and feasibility of early physical therapy and ICU mobilization for critically ill patients, demonstrating their potential to enhance the attainment of mobility milestones within the ICU setting [1]. Interdisciplinary collaboration plays a pivotal role in successfully integrating early mobility protocols into ICU care, necessitating the involvement of various healthcare professionals, such as physicians, nursing staff, respiratory therapists, physical therapists, and occupational therapists [33]. Implementing early mobility protocols has been shown to improve patient mobility within the ICU, as evidenced by decreased time from admission to ambulation and an increased proportion of patients engaging in ambulation while in the ICU [33]. Developing and implementing structured protocols for early mobility are crucial components for successful integration into standard ICU care. These protocols should prioritize interdisciplinary collaboration, streamline workflow, and optimize coordination of tasks to ensure effectiveness [33]. Utilizing structured quality improvement processes is essential for enhancing the successful implementation of early mobilization and rehabilitation in the ICU. These processes ensure that all eligible patients receive the intervention and address barriers such as sedation, a lack of equipment, and coordination issues [9]. Figure 8 shows the integration of early mobilization into standard ICU care.

Conclusions

This review highlights the critical role of early mobilization therapy in improving outcomes for ICU patients, emphasizing benefits like reduced muscle weakness and shorter stays in both the ICU and hospital settings. Despite its advantages, implementing early mobilization requires a multidisciplinary effort, standardized protocols for safety and efficacy, and overcoming challenges such as staffing limitations and patient safety concerns. To maximize patient recovery and quality of life, ongoing research and the development of clear, standardized mobilization guidelines are essential for widespread adoption in critical care.

FIGURE 4: Use of technology and assistive devices
Image credit: Amol Singam

FIGURE 8: Integration of early mobilization into standard ICU care
Image credit: Amol Singam
Screening of different extracts from Artemisia species for their potential antimalarial activity. The formation of hemozoin (malaria pigment) has been proposed as an ideal drug target for antimalarial screening programs. In this study, we used an improved, cost-effective and high-throughput spectrophotometric assay to screen plant extracts for finding novel antimalarial plant sources. Fifteen extracts with different polarity from three Iranian Artemisia species, A. ciniformis, A. biennis and A. turanica, were assessed for their antimalarial activity by in-vitro β-hematin formation assay. The most potent effect was observed in dichloromethane (DCM) extract of A. ciniformis with IC50 and IC90 values of 0.92 ± 0.01 and 1.29 ± 0.02 mg/mL, respectively. Ethyl acetate (EtOAC) extracts of A. biennis and A. turanica also showed significant antimalarial activities with IC50 values of 1.11 ± 0.02 and 1.35 ± 0.08 mg/mL and IC90 values of 1.22 ± 0.04 and 2.81 ± 0.21 mg/mL, respectively. Based on these results, it is possible to conclude that the components with strong antimalarial activity have been concentrated in the medium-polar extracts.

Introduction

Malaria continues to be a life-threatening disease in tropical and subtropical regions, where it causes the greatest mortality (1). It is transmitted by protozoa of the genus Plasmodium and is responsible for hundreds of millions of infections that kill between one and three million people annually (2). This situation has been complicated by the emergence of parasite strains resistant to the existing inexpensive drugs such as chloroquine (3); therefore, there is an urgent need to find alternative drugs, especially traditional and herbal remedies, for the treatment of the disease. Members of the genus Artemisia (Asteraceae) are important medicinal plants, with about 400 species widely distributed in the northern hemisphere (especially in Europe, North America, Asia and South Africa) and represented in the Iranian flora by 34 species (4,5). This genus has been gaining increasing attention since the discovery of artemisinin, a promising and potent antimalarial drug derived from the plant A. annua (6). Experiments suggested that artemisinin and its derivatives kill Plasmodium protozoa by interacting with heme to produce free radicals that alkylate specific malarial proteins and damage membranes of the parasite. Moreover, artemisinin could inhibit heme biocrystallization and interact with hemozoin formation, leading to splitting of the malaria pigment (7,8). Recently, the DCM extracts of A. scoparia and A. spicigera were shown to significantly inhibit heme biocrystallization in the β-hematin formation assay (9). In continuation of our studies on Iranian Artemisia species, we have now evaluated the antimalarial effect of different extracts from three Artemisia species, A. ciniformis, A. biennis and A. turanica. Recently, the total extract of A. turanica was reported to have an antimalarial effect against Plasmodium berghei (10). In other studies, the ethanol extract of A. turanica has shown anticancer activity against human Caucasian hepatocyte carcinoma (HepG-2) and human Caucasian larynx carcinoma (Hep-2) cell lines (11). Moreover, the methanol extract of this plant was reported to have antimicrobial activity (12). DCM extracts of A. biennis and A. ciniformis have been shown to inhibit cancer cell growth (13); likewise, different extracts of A. ciniformis have been reported to possess antiproliferative effects on malignant cell lines (14,15). It was recently reported that the ethanol extracts of these three species have inhibitory effects against Leishmania major parasites (16), and the hydroethanolic extract of A. biennis showed potent antioxidant activity in different assays (17). In the current study, the antimalarial activity of different extracts from these three Artemisia species was examined by the in-vitro β-hematin formation assay.

Extract Preparation

The plant materials were air-dried at room temperature, finely ground and extracted by the maceration method (18). 100 g of each plant was extracted successively with petroleum ether (PE), DCM, EtOAC, ethanol and ethanol-water (1:1 v/v) at room temperature (sequential maceration with ca. 3×1 L of each solvent). All the extracts were separately concentrated using a rotary evaporator at a maximum temperature of 45 °C.

In-vitro β-hematin formation assay

The antimalarial activity of plant extracts was evaluated by the in-vitro β-hematin formation assay described by Afshar et al. (9) with some modifications. Briefly, varying concentrations (0.4-2 mg/mL in DMSO) of each extract were mixed with 3 mM hematin, 10 mM oleic acid and 1 M HCl. The final volume was adjusted to 1 mL using sodium acetate buffer, pH 5. Chloroquine diphosphate was used as a positive control. The reaction mixtures were incubated overnight at 37 °C with constant gentle shaking. Incubation was terminated by centrifugation (14,000 rpm, 10 min, at 21 °C) to collect the β-hematin pellets. The pellets were repeatedly washed by incubation (15 min at 37 °C with regular shaking) in 2.5% (w/v) SDS in phosphate-buffered saline, followed by a final wash in 0.1 M sodium bicarbonate, until the supernatant was colorless. To determine the amount of heme crystallized into β-hematin, the pellets were dissolved in 0.1 M NaOH and the absorbance was measured at 400 nm (Beckman DU640 spectrophotometer). The results were recorded as percent inhibition (I%) of heme crystallization compared to the negative control (DMSO).

Chemicals

Hematin (porcine), chloroquine diphosphate, sodium dodecyl sulfate (SDS), sodium acetate, magnesium sulfate, sodium hydrogen phosphate, sodium chloride, potassium chloride, sodium hydroxide, glucose and sodium bicarbonate were purchased from Sigma-Aldrich Chemical Company, oleic acid from Fluka, dimethyl sulfoxide and hydrochloric acid from Merck, and all the solvents used for extraction from Caledon and Scharlau.

Statistical analyses

All experiments were conducted in triplicate measurements and presented as the mean ± standard deviation. Data were analyzed by using SPSS, version 16.0.0 software. The IC50 and IC90 values were calculated from non-linear regression analysis.

Results and Discussion

During the intra-erythrocytic cycle, the malaria parasite digests the host hemoglobin within the food vacuoles of infected erythrocytes as the main source of nutrition for its development and maturation (19,20). Massive degradation of hemoglobin is accompanied by the release of toxic free heme, which affects cellular metabolism and causes parasite death (21,22). To get rid of the excess heme, the malaria parasites have evolved a detoxification pathway that converts heme into an inert and insoluble crystal known as hemozoin or malaria pigment (23). Hemozoin biocrystallization is an essential process for the malaria parasite and is a validated target for antimalarial chemotherapy as well as drug screening programs (24).
Several in-vitro bioassays based on differential solubility and spectral characteristics of monomeric heme and β -hematin (synthetic analogue of hemozoin) have been defined and exerted for searching of novel synthetic and natural antimalarial compounds (19,24,25). In the present investigation, the antimalarial activity was evaluated by the in-vitro β-hematin formation assay developed by Afshar et al. (9). The results from the antimalarial testing of fifteen extracts of A. ciniformis, A. turanica and A. biennis as well as the extraction yields are presented in Table 1. The IC 50 and IC 90 values for each active extract were calculated graphically by plotting concentrations against percentage of inhibition (I%) and defined as the concentration of extract causing 50% and 90% inhibition of β-hematin formation, respectively. As illustrated in Table 1 and Figure 1, DCM extract of A. ciniformis (IC 50 = 0.92 ± 0.01 mg/mL, IC 90 = 1.29 ± 0.02 mg/mL), followed by EtOAC extracts of A. biennis (IC 50 = 1.11 ± 0.02 mg/mL, IC 90 = 1.22 ± 0.04 mg/mL) and A. turanica (IC 50 = 1.35 ± 0.08 mg/mL, IC 90 = 2.81 ± 0.21 mg/mL).Using box and whisker plots for IC 50 and IC 90 values revealed the presence of an outlier that was related to EtOAC extract of A. ciniformis. In other words, the rest of active samples could be remained as candidates for further study and comparison. Chloroquine was tested as a reference drug with IC 50 value of 0.04 ± 0.01 mg/mL and IC 90 value of 0.35 ± 0.01 mg/mL. It was demonstrated that compounds with potent antimalarial activity in these active extracts have medium polarity. Previous researches on natural compounds showed that terpenes, steroids (26), saponins (27), methoxylated flavonoids (28) and methylated coumarins (29) exhibited antimalarial effects in various tests. Also, according to the screening study on terpenoid content of ten Iranian Artemisia species carried out by Iranshahi et al. (30), A. cinifomis showed high content of sesquiterpenoid lactons while A. biennis and A. turanica have low amount of terpenes. Therefore, it seems that the potent antimalarial activity of DCM extract from A. ciniformis might be due to the high content of sesquiterpenoid lactones. In the case of A. turanica and A. biennis, the antimalarial activity of EtOAC extracts was superior to the corresponding DCM extracts. These results might have been derived from the high concentration of antimalarial component with higher polarity than sesquiterpenoids like methoxylated flavonoids or methylated coumarins and removing as much the lipid like compounds from these extracts. As represented in Figure 2, at lower concentrations of the potent extracts and at all concentrations (0.4-2 mg/mL) of weak extracts (PE and EtOAC extracts of A. ciniformis), the percent inhibition values were negative, because the observed absorbences were higher than the negative control. These data are in agreement with our previous study (9) that showed that the presence of lipids and other fatty acids in the mixture of semi-polar extracts cause synergistic effect with oleic acid in the assay. It was indicated that the IC 50 and IC 90 values could be decreased by entirely removing the lipids and purification of the active antimalarial compounds. Conclusion The plant extracts in this investigation are less active antimalarials than the reference drug, chloroquine, but these extracts contain a heterogeneous mixture of various compounds and the active components might display more potent activity in their pure form. 
Among the fifteen tested extracts, the DCM extract of A. ciniformis was considered the most promising for further studies to isolate and identify the active antimalarial principles.
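The percent-inhibition readout and the IC50/IC90 estimates described in the assay and statistical sections above can be reproduced with a standard dose-response fit. The sketch below is a generic illustration rather than the authors' exact analysis (which was performed in SPSS): it assumes the conventional percent-inhibition formula relative to the DMSO control, fixes the maximal inhibition at 100% for simplicity, and uses invented absorbance values.

```python
# Hedged sketch: percent inhibition of beta-hematin formation and IC50/IC90 from a logistic fit.
import numpy as np
from scipy.optimize import curve_fit

def percent_inhibition(a_sample: np.ndarray, a_control: float) -> np.ndarray:
    """Assumed conventional formula: I% = 100 * (A_control - A_sample) / A_control."""
    return 100.0 * (a_control - a_sample) / a_control

def hill(conc, ic50, slope):
    """Hill-type inhibition curve rising from 0 to 100% (top fixed at 100 for simplicity)."""
    return 100.0 / (1.0 + (ic50 / conc) ** slope)

conc = np.array([0.4, 0.8, 1.2, 1.6, 2.0])             # mg/mL, matching the assay range
a400 = np.array([0.95, 0.80, 0.45, 0.20, 0.08])        # invented absorbances at 400 nm
inhibition = percent_inhibition(a400, a_control=1.00)  # invented DMSO-control absorbance

(ic50, slope), _ = curve_fit(hill, conc, inhibition, p0=[1.0, 2.0],
                             bounds=([0.01, 0.1], [10.0, 10.0]))
ic90 = ic50 * 9.0 ** (1.0 / slope)   # concentration giving 90% inhibition on the fitted curve
print(f"IC50 ~ {ic50:.2f} mg/mL, IC90 ~ {ic90:.2f} mg/mL")
```

With the invented data above the fit lands in the low milligram-per-millilitre range, similar in scale to the values reported in Table 1, but the numbers themselves carry no experimental meaning.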
A Role of STAT3 in Rho GTPase-regulated Cell Migration and Proliferation* Rho family GTPases and STAT3 act as mediators of cytokine and growth factor signaling in a variety of cellular functions involved in inflammation, tumorigenesis, and development. In the course of searching for their functional connections, we found by using STAT3 knock-out mouse embryonic fibroblasts that RhoA, Rac1, and Cdc42 could cause nonspecific activation of STAT3 promoter-driven luciferase reporter in the absence of STAT3, raising concerns to a body of literature where STAT3 was associated with Rho GTPases based on the reporter system. We also found that although active RhoA, Rac1, and Cdc42 could all mediate Ser-727 and Tyr-705 phosphorylation and nuclear translocation of STAT3, the Rho GTPases were able to induce STAT3 activation independently of the interleukin-6 autocrine pathway, and active RhoA, Rac1, or Cdc42 could not form a stable complex with STAT3 as previously suggested, indicating an unappreciated mechanism of STAT3 activation by the Rho GTPases. The RhoA-induced STAT3 activation partly depended on Rho-associated kinase (ROK) and involved multiple effector signals as revealed by the examination of effector domain mutants of RhoA. Genetic deletion of STAT3 led to a loss of response to RhoA in myosin light chain phosphorylation and actin stress fiber induction but sensitized the cells to RhoA or ROK-stimulated cell migration. STAT3 was required for the RhoA-induced NF-κB and cyclin D1 transcription and was involved in NF-κB nuclear translocation. Furthermore, loss of STAT3 expression inhibited RhoA-promoted cell proliferation and blocked RhoA or ROK induced anchorage-independent growth. These phenotypic changes in STAT3–/– cells could be rescued by reconstituting STAT3 gene. Our studies carried out in STAT3 null cells demonstrate unambiguously that STAT3 represents an essential effector pathway of Rho GTPases in regulating multiple cellular functions including actin cytoskeleton reorganization, cell migration, gene activation, and proliferation. morphology and motility, Rho GTPases have emerged as important signal transducers to control actin cytoskeleton structure, focal adhesion complex, cell polarity as well as cell-cell communication (2,5,9). Studies of the past decade have also shown that these Rho GTPases and their signaling components are intimately associated with human disease development; they have been suggested to serve as candidate targets for future pharmacological interventions (10,11). Signal transducers and activators of transcription (STATs) 1 were discovered as latent cytoplasmic transcription factors that are activated by many cytokines and growth factors (12). Among seven mammalian STAT genes identified, STAT3 is ubiquitously expressed and appears to have important and unique functions since STAT3 gene targeting leads to early embryonic lethality (13). Cell stimulation by cytokine and growth factors such as interleukin-6 (IL-6), platelet-derived growth factor, and granulocyte colony-stimulating factor can activate STAT3 by tyrosine and serine residue phosphorylation to induce STAT3 dimerization, and the activated STAT3 translocates from cytosol to cell nucleus to mediate transcription of a number of STAT3-responsive genes. It is well established that STAT3 is involved in the control of essential cellular processes such as cell differentiation and proliferation, and growing evidence draws a correlation between abnormal STAT3 regulation and oncogenic transformation (14). 
Constitutively active STAT3 occurs with a high incidence in a number of human tumor types (15,16) and displays by itself a weak oncogenic activity in fibroblasts (14). A few recent studies have suggested a functional link between Rho GTPases and STAT3 (17)(18)(19). In one study it was reported that active Rac1 can directly interact with STAT3 in a yeast two-hybrid system and in co-expression/co-immunoprecipitation assays, leading to STAT3 activation (17). In another work it was shown that Rac1 could indirectly activate STAT3 activity through autocrine induction of IL-6, which in turn caused STAT3 activation (18). A third study suggested that constitutively active RhoA, but not Rac1, could stimulate STAT3 transcription activity in a transient expression system (19). Although these results raise interesting possibilities of potential signaling cross-talk and interdependence between the Rho GTPases-controlled signaling cascades and the STAT3-mediated transcriptional events, the controversial nature of the reported observations has stalled the Rho GTPase research field in incorporating STAT3 to the Rho GTPasesignaling paradigm. To clarify the functional relationship between Rho GTPases and STAT3, we have attempted to assess the involvement of STAT3 in Rho GTPase-regulated cell functions and pathways by using the STAT3 knock-out mouse embryonic fibroblast cells in the present studies. We found unexpectedly that the widely used STAT3-luciferase reporter gene constructs could produce a strong reporter signal in the presence or absence of STAT3, raising concerns to a body of literature where STAT3 was associated with Rho GTPases based on the reporter system. In addition, we found that although active RhoA, Rac1, and Cdc42 could all mediate Ser-727 and Tyr-705 phosphorylation and nuclear translocation of STAT3, the Rho GTPases were able to induce STAT3 activation independently of the IL-6 autocrine pathway, and active Rac1, RhoA, or Cdc42 could not form a stable complex with STAT3 as previously suggested. We show that the RhoA-induced STAT3 activation involves multiple effector signals including ROK. Moreover, we demonstrate that STAT3 is required for the RhoA-induced NF-B and cyclin D1 transcription, cell proliferation, and transformation as well as actin stress fiber formation and migration. These studies carried out in STAT3 null cells unambiguously establish that STAT3 represents an essential effector pathway of Rho GT-Pases in regulating multiple cellular functions. EXPERIMENTAL PROCEDURES cDNA Constructs-The constitutively active mutants of RhoA, Rac1, and Cdc42 (L63RhoA, V14RhoA, L61Rac1, V12Rac1, L61Cdc42, and V12Cdc42) and the effector domain mutants of RhoA in the constitutively active backbone (L63RhoA-F39V, L63RhoA-E40T, L63RhoA-E40L, and L63RhoA-Y42C) were generated by site-directed mutagenesis based on oligonucleotide-mediated PCR as described (20 -22). For transient expression, the mutants were cloned into the pKH3 mammalian expression vector in-frame with a triple-hemagglutinin (HA 3 ) tag at the N termini (23). For retroviral expression, cDNAs encoding the respective mutants and ROK were ligated into the BamHI and EcoRI sites in-frame with the HA 3 tag at the 5Ј end of the retroviral vector MIEG3 that expresses enhanced green fluorescent protein (EGFP) bicistronically (22). Cell Culture and Retroviral Transduction-Wild type and STAT3 deficient MEFs were generated as previously described (24). 
The STAT3 floxed/floxed (wild type) MEFs were derived from 14-day-old STAT3 floxed/floxed mouse embryos and grown in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% heat-inactivated fetal calf serum, 2 mM L-glutamine, 50 units/ml penicillin, 50 g/ml streptomycin (Invitrogen) and immortalized according to Todaro and Green (25). To generate the STAT3 knock-out MEFs, the STAT3 floxed/ floxed MEFs were infected with a recombinant adenovirus expressing the Cre recombinase (26). Individual clones were isolated from the infected pool by limited dilution and were genotyped by PCR (27). To generate the STAT3-reconstituted STAT3 Ϫ/Ϫ cells, the STAT3 Ϫ/Ϫ MEFs were stably transfected with pZeo-STAT3 and selected with zeocin (400 g/ml, Invitrogen). Individual clones were characterized for comparable STAT3 expression to wild type MEFs and were cultured in DMEM with 10% calf serum, 4.5 mg/ml D-glucose, 4.5 mg/ml L-glutamine, and 10 g of gentamicin/ml. Recombinant retroviruses were produced using the Phoenix cell packaging system by transient expression of relevant cDNAs in the MIEG3 retroviral vector containing bicistronically expressed EGFP (22). The MEF cells were infected with the respective retroviruses and harvested 48 -72 h post-infection. The EGFP-positive cells were isolated by fluorescence-activated cell sorting. Luciferase Reporter Assay-Two luciferase reporter constructs for STAT3 used in the studies were as described before (APRE-Luc (28 -30)) or were obtained from BD Biosciences (pSTAT3-TA-Luc). The NF-B reporter construct was obtained from Stratagene, whereas the cyclin D1 reporter construct was described previously (8,22). To probe STAT3, NF-B, or cyclin D1 gene induction, the luciferase reporter constructs fused with the promoter sequences of the respective genes were transiently co-expressed with the respective small GTPases and cDNA encoding ␤-galactosidase. Transient transfection of these reporter plasmids was carried out by using FuGENE 6 Transfection Reagent (Roche Applied Science) according to the manufacturer's protocols. Twenty-four hours before harvesting, the cells were switched to a medium containing 0.5% serum. Analysis of luciferase and ␤-galactosidase activities of the transfected cells was performed by using a luciferase assay kit (Promega). Transfection efficiencies were routinely corrected by obtaining the ratio of the luciferase and the ␤-galactosidase activities observed in the same sample as previously described (31). Immunofluorescence-Cells grown on cover glasses were fixed with 3.7% formaldehyde in phosphate-buffered saline for 15 min and washed with phosphate-buffered saline once followed by permeabilization with 0.1% Triton X-100 for 20 min. The cells were then blocked with 2% bovine serum albumin for 20 min. For actin staining, the cells were incubated with rhodamine-conjugated phalloidin. For STAT3 or NF-B staining, the cells were labeled with anti-STAT3 polyclonal antibody (Cell Signaling) followed by incubation with a TRITC-conjugated antirabbit secondary antibody or with anti-p65 NF-B monoclonal antibody (Sigma) followed by a TRITC-conjugated anti-mouse antibody. Cell nuclei were labeled with DAPI for 10 min. The stained cells were mounted onto slides in Aqua-mount and viewed with a Zeiss LSM510 confocal microscopy or a Leica fluorescence microscopy equipped with deconvolution software (Improvision, Inc.). 
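The transfection-efficiency correction described in the Luciferase Reporter Assay paragraph above amounts to a per-sample ratio of luciferase to β-galactosidase activity. The sketch below uses invented readings; expressing the ratios as fold-change over the empty-vector control is the usual convention and an assumption here, not a detail stated in the text.

```python
# Hedged sketch: normalize reporter luciferase activity by co-transfected beta-galactosidase.
samples = {
    # invented raw readings: (luciferase RLU, beta-galactosidase activity)
    "vector":   (12_000, 950),
    "V14RhoA":  (31_000, 1_020),
    "V12Rac1":  (29_500, 880),
    "V12Cdc42": (38_000, 1_050),
}

normalized = {name: luc / bgal for name, (luc, bgal) in samples.items()}
control = normalized["vector"]
for name, value in normalized.items():
    print(f"{name}: {value / control:.2f}-fold over empty vector")
```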
Immunoblotting-Whole cell lysates were prepared by extraction of the cells with a lysis buffer containing 20 mM Tris-HCl (pH 7.6), 100 mM NaCl, 10 mM MgCl 2 , 1% Triton X-100, 0.2% sodium deoxycholate, 2 mM phenylmethylsulfonyl fluoride, 10 g of leupeptin/ml, 10 g of aprotinin/ml, and 0.5 mM dithiothreitol for 30 min. The nuclear proteins were purified by the method described before (8). Briefly, cells were washed in a hypotonic buffer (25 mM Tris-HCl (pH 7.6), 1 mM MgCl 2 , 5 mM KCl) and lysed in hypotonic buffer containing 0.25% Nonidet P-40, 2 mM phenylmethylsulfonyl fluoride, 10 g of leupeptin/ml, and 10 g of aprotinin/ml for 30 min. The lysates were centrifuged at 500 ϫ g for 5 min. The nuclear pellet was washed with hypotonic buffer containing 2 mM phenylmethylsulfonyl fluoride, 10 g of leupeptin/ml, and 10 g of aprotinin/ml, resuspended in a solution containing 20 mM Tris-HCl (pH 8.0), 0.42 M NaCl, 1.5 mM MgCl 2 , and 25% glycerol, vortexed, and incubated at 4°C for 30 min. The extracts were centrifuged at 900 ϫ g for 5 min, and the supernatants were taken as the nuclear protein lysates. Protein contents in the whole-cell lysates and nuclear lysates were normalized by the Bradford method. The lysates were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis. The ectopic expression of the constitutively active forms of Rac1, RhoA, and Cdc42 were probed by using an anti-HA antibody (Roche Applied Science). Phosphotyrosine 705 and phosphoserine 727 of STAT3 and total STAT3 from the cell extracts were probed by using the anti-phospho-Tyr-705, anti-phospho-Ser-727 and anti-STAT3 antibodies, respectively (Cell Signaling). Wound Healing and Transwell Migration Assays-For wound healing assays cells were plated at 2 ϫ 10 6 /dish density in 60-mm diameter dishes. A plastic pipette tip was drawn across the center of the plate to produce an ϳ1-mm-wide wound area after the cells have reached confluency. After 12 h in DMEM supplemented with 0.5% calf serum (CS), cell movement into the wound area was examined under a phase contrast microscope. The distances between the leading edge of the migrating cells and the edge of the wound were measured (21). Cell migration was also assayed by using a Transwell plate inserted with a 6.5-mm polycarbonate membrane (8.0-m pore size; Costar Inc.) (21). Briefly, 5 ϫ 10 4 cells were suspended in 0.2 ml of culture medium and were added to the upper chamber. 10% fetal calf serum in DMEM was used as chemoattractant in the lower chamber. The cells were incubated for 16 h in a humidified CO 2 incubator at 37°C. Cells that traversed the 8.0-m membrane pores and spread to the lower surface of the membrane were stained with 5% Giemsa solution and were counted in six different fields. Each experiment was carried out in triplicate, and error bars represent the mean S.E. Cell Proliferation Assay-Cell growth was measured by tracing [ 3 H]thymidine incorporation. Cells were cultured in a medium containing 2% CS for the assays. The cell cultures were assayed at 0, 1, 2, and 3 days by the addition of 1 Ci of [ 3 H]thymidine/ml to the medium followed by an incubation for 4 h at 37°C. The radioisotope chased cells were harvested by trypsinization, and [ 3 H]thymidine incorporated into the cells was quantified by liquid scintillation counting. 
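The Transwell readout described above (migrated cells counted in six fields, experiments in triplicate, reported as mean ± S.E.) reduces to a small summary-statistics step. The sketch below uses invented counts and generic condition labels purely to illustrate the calculation.

```python
# Hedged sketch: summarizing Transwell migration counts per condition (invented data).
import statistics

field_counts = {
    "condition A": [42, 38, 45, 40, 37, 44],   # cells per field, six fields per membrane
    "condition B": [61, 58, 66, 70, 63, 59],
}

for condition, counts in field_counts.items():
    mean = statistics.mean(counts)
    sem = statistics.stdev(counts) / len(counts) ** 0.5   # standard error of the mean
    print(f"{condition}: {mean:.1f} +/- {sem:.1f} cells per field")
```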
Cell Transformation Assay-To determine the transforming potential of the STAT3-deficient or wild type (WT) cells transduced with active RhoA mutant, ROK, or EGFP, 10,000 cells were suspended in 10% CS-supplemented DMEM containing 0.3% agarose and were plated on top of a solidified medium containing 0.6% agarose. The cells were fed weekly by the addition of 1 ml of DMEM supplemented with 10% CS. Three weeks after plating, the colony numbers were scored, and the foci morphologies were recorded under a phase microscope (22).

RESULTS

Active RhoA, Rac1, and Cdc42 All Can Mediate STAT3 Activation in MEFs-Previously, by using a STAT3-responsive luciferase reporter system, it has been shown that a constitutively active Rac1 mutant was capable of activating STAT3 transcriptional activity in Cos-1 and Rat-1 cells (17, 18). However, in transient transfection experiments carried out in HEK 293T cells, constitutively active RhoA, but not Rac1 or Cdc42, was shown to activate a STAT3-responsive reporter (19). To clarify whether and which Rho GTPases could mediate STAT3 activation, we have utilized STAT3 knock-out MEF cells (KO), in which the STAT3 gene product has been genetically deleted (Fig. 1A), in the present studies to determine the functional relationship between the active Rho GTPases, RhoA, Rac1, and Cdc42, and STAT3. Unexpectedly, upon co-transfection of a STAT3-responsive luciferase reporter construct (APRE-Luc; Refs. 28–30) with the constitutively active mutant of RhoA, Rac1, or Cdc42 (V14RhoA, V12Rac1, or V12Cdc42, respectively), we observed a ~2.5-fold induction of the STAT3 reporter activity by V14RhoA or V12Rac1 and a ~3-fold induction by V12Cdc42 over the respective controls in the STAT3 knock-out cells, similar to that observed in the wild type (WT) cells (Fig. 1B). Co-expression of the wild type Rho GTPases did not further enhance the reporter activity (Fig. 1B). To verify whether the apparent lack of specificity of the reporter system was due to the particular reporter construct tested, we examined another STAT3 reporter system containing four copies of the STAT3 enhancer element that has been widely used (pSTAT3-TA-Luc, BD Biosciences) and obtained similar results (data not shown). These data indicate that the STAT3-responsive reporter systems that have been believed to be specific to STAT3 activity could have nonspecific effects caused by other STAT3-related transcription factors, raising concerns about a body of literature in which STAT3 was associated with Rho GTPase activities based on the use of such reporters. STAT3 in quiescent cells resides in the cytoplasm in a latent state. Upon activation it translocates to the nucleus after phosphorylation-dependent dimerization (12, 13). To circumvent the potential problems caused by using the nonspecific STAT3 transcription reporter systems, as alternative approaches we have adopted immunofluorescence and anti-phospho-STAT3 Western blotting to track the STAT3 activation state by its intracellular localization and by its phosphotyrosine 705 (Tyr(P)-705) and phosphoserine 727 (Ser(P)-727) statuses, events that are known to be associated with STAT3 activation. We used recombinant retrovirus transduction to introduce various constitutively active Rho GTPases (L63RhoA, L61Rac1, or L61Cdc42) together with an EGFP marker, or EGFP alone, to wild type cells. As shown in Fig.
2A, in the serum-starved cells expressing the active RhoA, Rac1, or Cdc42 mutant, STAT3 was found to be co-localized with the nucleus as revealed by anti-STAT3 and DAPI staining, similar to that observed in the CS-stimulated cells, whereas in EGFP-expressing cells STAT3 appeared almost exclusively localized in the cytosol under serum-free conditions. A quantification of the nuclear translocation of STAT3 indicated that the RhoA-, Rac1-, or Cdc42-stimulated effect was comparable with that induced by serum (>85%, Fig. 2A, lower panel). Anti-Tyr(P)-705 or anti-Ser(P)-727 STAT3 Western blotting revealed that in cells expressing constitutively active RhoA, Rac1, or Cdc42, the Tyr-705 and Ser-727 residues were phosphorylated similarly as by serum stimulation (Fig. 2B; data not shown). The expression of the HA-tagged RhoA, Rac1, or Cdc42 mutant was revealed by anti-HA Western blotting (data not shown). Immunofluorescent staining of the active RhoA-, Rac1-, or Cdc42-expressing cells by anti-Tyr(P)-705 or anti-Ser(P)-727 STAT3 also showed that the Rho GTPases potently stimulated Tyr-705 and Ser-727 phosphorylation that accompanied STAT3 nuclear translocation (Fig. 2C). Both events of phosphorylation at Tyr-705 and Ser-727 were closely associated with the STAT3 nuclear localization pattern induced by the active Rho GTPase expressions (Fig. 2D; data not shown). Together these results demonstrate unambiguously that RhoA, Rac1, and Cdc42 can all mediate STAT3 activation by means of Tyr-705 and Ser-727 phosphorylation and nuclear translocation.

FIG. 1. The STAT3 promoter-driven luciferase reporter system can be nonspecific. A, lysates containing 50 μg of protein from WT or STAT3−/− cells were probed by anti-STAT3 antibody. Anti-β-actin blotting was carried out in parallel. B, WT and STAT3−/− cells were transiently transfected with the STAT3-luciferase reporter plasmid APRE-Luc with pKH3 alone or pKH3 containing V14RhoA, V12Rac1, V12Cdc42, RhoA, Rac1, or Cdc42 cDNAs in the presence of β-galactosidase-expressing vector. In the presence or absence of 10% CS, cell lysates were prepared 48 h after transfection for the measurement of luciferase activity, β-galactosidase activity, and protein concentration. The data were normalized by β-galactosidase expression and are means ± S.E. They are representative of three independent experiments.

Rho GTPases Can Induce STAT3 Nuclear Translocation Independently from Autocrine IL-6-So far the available data linking Rho GTPase pathways to STAT3 activation came mostly from two studies. By using yeast two-hybrid binding and co-expression/co-immunoprecipitation approaches, one study suggested that active Rac1 could directly interact with STAT3 and caused its activation (17). Despite repeated attempts using Cos-7, HEK 293T, or NIH 3T3 as host cells, we have failed to detect stable binding interaction of exogenously expressed STAT3 with constitutively active L61Rac1, L63RhoA, or L61Cdc42 in the respective cell immunoprecipitates even when the exogenous STAT3 and respective Rho GTPase mutants were at least 5-fold (>10-fold in Cos-7 or HEK 293T cells) over that of the endogenous STAT3 or Rho proteins (data not shown). Another study showed that in rat-1 fibroblasts Rac1-mediated STAT3 activation was through an indirect autocrine induction of IL-6, which in turn stimulated STAT3 activation (18). In wild type MEFs, as shown in Fig.
3, we observed that STAT3 was efficiently translocated to the nucleus upon IL-6 stimulation, whereas it stayed mostly in the cytosol without IL-6 stimulation. The addition of neutralizing IL-6 receptor antibody (IL-6 R Ab) to the cells prevented STAT3 activation by IL-6. However, the neutralizing IL-6 receptor antibody did not affect L61Rac1- or other active Rho GTPase-induced STAT3 nuclear translocation (Fig. 3; data not shown), suggesting that the Rho GTPase-mediated STAT3 activation can take an alternative route from that of the Rho to IL-6 to STAT3 autocrine loop. These data point to a previously unappreciated mechanism of STAT3 activation by the Rho GTPases.

FIG. 2. RhoA, Rac1, and Cdc42 all can activate STAT3 in cells. A, RhoA, Rac1, and Cdc42 can all stimulate STAT3 nuclear translocation. WT MEFs infected with retrovirus expressing EGFP together with L63RhoA, L61Rac1, or L61Cdc42, or EGFP alone, were fluorescence-activated cell sorting-isolated for EGFP expression and were serum-starved for 24 h. The cells were then fixed and stained with anti-STAT3 antibody and TRITC-labeled secondary antibody as well as with DAPI. The percentage of cells displaying STAT3 staining co-localized with DAPI was quantified. B, RhoA, Rac1, and Cdc42 stimulate Tyr-705 phosphorylation of STAT3. Lysates from the respective Rho GTPase mutant-expressing cells were probed by anti-STAT3 and anti-phospho-Tyr-705 STAT3 antibodies. C, WT MEFs expressing EGFP or L63RhoA were analyzed by immunofluorescence for STAT3 Tyr(P)-705 or Ser(P)-727 after overnight serum starvation. Cell nuclei were stained with DAPI. D, the number of cells displaying Tyr(P)-705 or Ser(P)-727 colocalization with nuclei was quantified under a fluorescence microscope. The data are representative of two independent experiments.

Multiple Pathways Regulated by RhoA Contribute to STAT3 Activation-To begin to understand how individual Rho GTPases activate STAT3, we focused on the RhoA downstream signals that might impact on STAT3 nuclear translocation. In L63RhoA-expressing WT cells, STAT3 was constitutively localized in the nucleus compared with EGFP-expressing cells, in which STAT3 was localized in the cytosol in the absence of serum (Fig. 4A). Pharmacological inhibition of one of the RhoA effectors, ROK, by Y27632 led to a partial inhibition of STAT3 nuclear localization (Fig. 4A), whereas active ROK expression itself caused a partial nuclear translocation of STAT3 that was reversible by the treatment with Y27632 (Fig. 4B). Blockade of two of the downstream components of ROK, c-Jun N-terminal kinase (JNK) and myosin light chain kinase (MLCK) (33, 34), by their respective inhibitors, SP600125 and ML-7, at a dose effective in inhibiting JNK and MLCK caused no detectable effect on the STAT3 localization pattern (Fig. 4, A-C), suggesting that STAT3 regulation by RhoA-ROK involves additional effectors. To confirm the contribution of ROK and to assess the involvement of additional effectors of RhoA in STAT3 regulation by RhoA, we next examined a set of effector domain mutants of RhoA for their ability to induce STAT3 nuclear translocation. As previously described (22, 35) and illustrated in Fig.
5A, the E40L mutant is defective for ROK recognition but retains strong protein kinase N and mDia binding; F39V is active for binding to ROK and mDia but fails to interact with protein kinase N; E40T is capable of binding to protein kinase N and mDia strongly but only weakly interacts with ROK; Y42C retains strong ROK binding but is weakened for mDia coupling and defective in protein kinase N binding. Fig. 5B shows the STAT3 nuclear localization induction profiles by the respective effector domain mutants of RhoA made in the constitutively active L63RhoA backbone. Although the E40T mutant that retains binding activity to the three effectors displayed the strongest STAT3 induction activity, which was indistinguishable from that of L63RhoA, F39V, Y42C, and E40L were partially active in inducing STAT3 nuclear translocation, in a decreasing order. Interestingly, the RhoA mutants that have lost protein kinase N binding activity (F39V and Y42C) remained active, thus excluding a protein kinase N contribution to RhoA-mediated STAT3 activation. The E40L mutant that lacks the ability of ROK binding could still partially mediate STAT3 nuclear translocation, suggesting that ROK-independent pathways downstream of RhoA are involved. Furthermore, mDia or other RhoA effectors sharing a similar effector mutant binding pattern as mDia may play an important role in mediating STAT3 nuclear translocation, since Y42C, whose effector binding profile is similar to F39V except that its ability to bind mDia is reduced, appeared to be less active than F39V. From these data we conclude that RhoA activates STAT3 by multiple effector pathways such as ROK and mDia but not protein kinase N.

STAT3 Is Required for RhoA- and ROK-mediated Actin Stress Fiber Formation and Cell Migration-Rho GTPases are key regulators of the actin cytoskeleton and cell morphology (1, 2). In contrast, little information is available for the role of STAT3 in actin-based cell biology. We next asked if STAT3 could be a part of the RhoA-regulated cell machinery of actin reorganization by examining the STAT3−/− cells. Comparison of the actin structures of the L63RhoA-expressing WT, STAT3−/− (KO), and STAT3-reconstituted MEFs led to the observation that, whereas active RhoA was able to induce abundant actin stress fibers in WT and reconstituted cells under serum-free conditions, it failed to stimulate stress fiber formation in the KO cells (Fig. 6A). Constitutive expression of ROK, which was active in actin stress fiber induction by itself, also failed to induce actin stress fiber formation in the KO cells (data not shown). One of the critical links from RhoA and ROK to actin bundling activity has been proposed to be myosin light chain kinase (2). Western blotting of WT cells expressing constitutively active RhoA mutants showed a significant increase in phospho-myosin light chain 2 compared with WT cells expressing only EGFP (Fig. 6B). However, STAT3−/− cells were unresponsive to L63RhoA in myosin light chain phosphorylation, similar to the MLCK inhibitor ML-7-treated WT cells (Fig. 6B). These results suggest that STAT3 is required for RhoA-ROK-mediated actin cytoskeleton reorganization by mediating myosin light chain kinase regulation by RhoA and ROK. It is well established that increased RhoA activity could reduce cell motility in part due to actin stress fiber induction and the associated focal adhesion complex formation in fibroblasts (2, 36).
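The nuclear-translocation scoring used for Fig. 2A and Fig. 5B comes down to counting, for each condition, the fraction of cells whose STAT3 (or phospho-STAT3) staining co-localizes with the DAPI-stained nucleus. The sketch below illustrates that proportion calculation; the per-cell scores and condition labels are hypothetical, not the study's quantification data or software.

```python
# Hypothetical per-condition scoring of STAT3 nuclear localization.
# Each cell is scored True if STAT3 staining co-localizes with DAPI.

def percent_nuclear(scores: list[bool]) -> float:
    """Percentage of scored cells showing nuclear STAT3."""
    return 100.0 * sum(scores) / len(scores)

if __name__ == "__main__":
    # Illustrative scores for two conditions (e.g., EGFP control vs. L63RhoA).
    conditions = {
        "EGFP, serum-free": [False] * 46 + [True] * 4,
        "L63RhoA, serum-free": [True] * 44 + [False] * 6,
    }
    for name, scores in conditions.items():
        print(f"{name}: {percent_nuclear(scores):.0f}% nuclear STAT3")
```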
Given the fact that the STAT3-deficient MEFs were unable to form stress fibers in response to active RhoA, we reasoned that the cell migration rate could be affected in the KO cells. In a wound healing assay, in which a scratch wound was introduced to the cell monolayer by a pipette tip and the cells were allowed to migrate into the wound gap, we observed that KO cells were able to move faster to the open wound than WT cells when only EGFP was expressed (Fig. 6C). Overexpression of L63RhoA or ROK in WT cells did not increase the migration rate, whereas expression of either L63RhoA or ROK was able to significantly accelerate KO cell migration in a 12 h period (Fig. 6C), suggesting that loss of STAT3 sensitizes the cells for the induction of migration by RhoA or ROK. To further determine the role of STAT3 in the regulation of directional cell migration, we measured the cell migration rates using Transwell chambers. As shown in Fig. 6D, WT cells expressing EGFP only, active RhoA mutant, or ROK did not display a detectable difference in movement toward a serum gradient. However, the STAT3-deficient cells expressing either active RhoA or ROK were able to migrate 10–12-fold faster than WT cells. We also noticed an ~1-fold increase in the basal migration rate of the STAT3-deficient cells compared with WT cells (Fig. 6D). These results strongly suggest that in fibroblasts STAT3 functions as an important mediator of RhoA-regulated cell migration, serving in a negative manner to inhibit cell movement induced by RhoA. It is also of interest to note that in different genetic contexts (e.g. STAT3 gain or loss of function) RhoA may either stimulate or inhibit fibroblast migration.

FIG. 6. STAT3 is essential for RhoA- and ROK-mediated stress fiber formation and cell migration. A, STAT3 is required for RhoA-induced actin stress fiber formation. WT-, STAT3 KO-, or STAT3-reconstituted MEFs expressing EGFP or EGFP together with L63RhoA were stained for F-actin by using rhodamine-conjugated phalloidin under serum-free conditions. B, STAT3 knock-out causes a loss of myosin light chain phosphorylation. WT or STAT3−/− cells with or without ML-7 treatment were subjected to anti-phospho-myosin light chain (PMLC2) and anti-β-actin blotting. C, STAT3 deficiency sensitizes the cells for RhoA- and ROK-mediated wound healing migration. WT and KO cells expressing EGFP or EGFP with the indicated RhoA/ROK construct were cultured to confluency before a scratch was made on the cell surface using a pipette tip. Black arrows indicate the time when the wound was introduced, and cell migration into the wound is shown after 12 h of culturing in 0.5% CS. D, STAT3 deficiency sensitizes cells for RhoA- and ROK-mediated Transwell migration. WT and KO cells expressing EGFP or EGFP together with RhoA or ROK were subjected to a 10% fetal calf serum-induced Transwell migration assay. Cells that had migrated toward the serum were quantified 24 h after plating. Cell numbers were normalized to that of WT cells. Data are representative of three independent experiments.

STAT3 Is Involved in NF-κB and Cyclin D1 Regulation by Rho GTPases-In addition to the regulation of the actin cytoskeleton, Rho GTPases have been implicated as important regulators of gene transcription (2, 5). Next we examined if STAT3, as a transcription factor itself, might be involved in mediating Rho GTPase signals to the transcriptional machinery. As shown in Fig. 7A, STAT3 KO cells were unresponsive to the active RhoA-, Rac1-, or Cdc42-stimulated cyclin D1 transcription in a luciferase-based reporter assay, whereas in WT- or STAT3-reconstituted cells, the cyclin D1 luciferase activity was significantly up-regulated by all three GTPases. Similarly, although all three Rho GTPases were able to activate NF-κB transcription activity in the WT- or STAT3-reconstituted KO cells, they were ineffective in inducing a luciferase-based NF-κB transcription reporter construct in the KO cells (Fig. 7B). Immunofluorescent tracking of one of the major subunits of NF-κB, p65, further provided evidence that although active RhoA, Rac1, or Cdc42 could effectively induce NF-κB nuclear translocation in WT cells, the active Rho proteins could not stimulate NF-κB relocation from the cytosol to the nucleus in KO cells (Fig. 7C; data not shown). These results clearly indicate that the Rho GTPases require STAT3 for cyclin D1 and NF-κB transcriptional activation as well as for NF-κB nuclear translocation. A recent report showed that ROK was able to positively modulate the transcriptional activity of NF-κB (37). Surprisingly, in WT cells we observed that the active RhoA-induced NF-κB nuclear translocation was not significantly affected by treatment with the ROK inhibitor, Y27632 (Fig. 7D). Furthermore, ROK expression alone in the cells appeared to inhibit basal NF-κB nuclear translocation (Fig. 7D). It is, therefore, possible that the RhoA-ROK branch of RhoA signals may actually produce a negative effect on the net NF-κB regulation by RhoA. This interpretation was confirmed by using the panel of RhoA effector mutants (data not shown).

STAT3 Is Required for RhoA-mediated Cell Proliferation and Transformation-Both RhoA and STAT3 are known to have a crucial role in the control of cell proliferation, and each can confer weak transforming activity in fibroblasts upon activation (14, 22). To determine whether STAT3 is involved in RhoA-mediated cell growth, we compared the cell proliferation and transformation properties of WT and STAT3 KO MEFs in the presence or absence of the constitutively active RhoA mutant. As shown in Fig. 8A, WT cells were able to proliferate three times faster than the KO cells, consistent with a role of STAT3 in cell proliferation. WT cells expressing the active L63RhoA mutant appeared to proliferate twice as fast as WT cells expressing EGFP, but L63RhoA could only marginally stimulate KO cell proliferation. When the STAT3 gene was reintroduced into the KO cells, active L63RhoA regained the ability to stimulate the cell proliferation to an extent comparable with WT cells (Fig. 8A). These results indicate that active RhoA depends on the presence of STAT3 in promoting cell proliferation. To examine the consequences of STAT3 deficiency on RhoA-induced cell transformation, WT, KO, and STAT3-reconstituted KO cells expressing active RhoA mutant or EGFP alone were plated on a soft-agar surface, and their ability to grow into anchorage-independent colonies was evaluated. As shown in Fig. 8B, WT and STAT3-reconstituted KO cells expressing L63RhoA were able to form a significant number of colonies, although the KO cells displayed an ~10-fold reduction of colony-forming activity under L63RhoA induction. Similar observations were made for ROK-induced colony formation (data not shown).
Interestingly, although the colony numbers of WT and reconstituted cells induced by L63RhoA were similar to those induced by ROK, the morphology of the active RhoA-induced colonies was distinct from that of the ROK-induced colonies, with a rugged, blebbing surface of cell clusters compared with the smooth, round clusters of ROK-expressing cells (Fig. 8C). These results demonstrate that RhoA-mediated cell transformation is dependent on STAT3 signals.

DISCUSSION

Both Rho family GTPases and STAT3 regulate cell proliferation and gene induction, and their activating mutants are known to be oncogenic (5, 14). The intense search for their functional connection has led to a number of reports suggesting that STAT3 mediates certain aspects of Rho GTPase signaling (17-19). However, this issue remains controversial due to the apparent differences in the observations that active Rac1 could directly bind to and activate STAT3 (17) or indirectly activate STAT3 through autocrine induction of IL-6 (18) and that active RhoA but not Rac1 could stimulate STAT3-responsive gene inductions (19). By utilizing the recently available STAT3 knock-out cells, which enable us to examine the functional contribution by STAT3 to Rho GTPase signaling events in the STAT3 null genetic background, we started the present work with an open mind, trying to confirm some of the previous findings and/or to sort out the apparent conflicts among the related reports on this subject. Unexpectedly, our studies in the STAT3−/− MEF cells strongly indicate that the experimental approaches adopted by a large body of literature using the STAT3-promoter reporter systems were flawed due to the nonspecific nature of the reporter systems, possibly attributable to the similarity of the STAT3-promoter sequences with those of other closely related STAT family members or transcription factors. Because the STAT3-reporter system is widely used in STAT3 functional assays and has provided crucial support for previous studies of the Rho-STAT3 connection (17-19), this observation added further to the controversy and prompted us to re-examine the interdependence of RhoA, Rac1, and Cdc42 on STAT3 in order to clearly define their relationships. It is well established that quiescent STAT3 exists mostly in the cytosol, whereas activated STAT3 is phosphorylated at Tyr-705 and Ser-727, forms homo- or heterodimers, and translocates to the nucleus to induce gene transcription (12). Taking advantage of the STAT3 phosphorylation and intracellular localization status associated with its activation state, we demonstrate that active RhoA, Rac1, and Cdc42 can indeed activate STAT3 in fibroblast cells by virtue of induction of Tyr-705 and Ser-727 phosphorylation as well as nuclear translocation of STAT3. However, upon examination of the mechanism involved, we were unable to reproduce the previously reported direct physical interaction between constitutively active Rac1 and STAT3 (17), or between active RhoA or Cdc42 and STAT3, consistent with the observations of another group (18). One other unexpected finding is that the RhoA-, Rac1-, or Cdc42-induced STAT3 nuclear translocation appeared to be independent of the IL-6 receptor pathway in MEFs, a result differing from the reported Rac1-IL-6-STAT3 autocrine loop in rat-1 cells (18). These results suggest that RhoA, Rac1, and Cdc42 can each activate STAT3 by previously unappreciated mechanisms.
To begin to dissect the mechanism by which the Rho GTPases modulate STAT3 activity, we have examined the involvement of a few candidate effector pathways of RhoA signaling. Both pharmacological inhibitor administration and RhoA effector domain mutant expression led to a consistent conclusion that although the ROK pathway by itself can partially activate STAT3, a participation of other effectors such as mDia is required to fully promote the RhoA-mediated STAT3 activation. Along this line, we have excluded the possible involvement of downstream components of RhoA signaling including myosin light chain kinase and c-Jun N-terminal kinase. Although we have not attempted to dissect the pathways controlled by Rac1 or Cdc42 leading to STAT3 activation, given the complexity of the immediate effector networks controlled by each (4), it is likely that these Rho proteins also engage multiple effectors to regulate STAT3. By examining the STAT3−/− MEF cell behaviors, we have demonstrated that STAT3 is an essential effector for a variety of Rho-mediated cell functions (Fig. 9). In particular, we show that genetic deletion of STAT3 leads to a loss of response to RhoA-induced actin stress fiber formation and sensitizes the cells to RhoA- or ROK-stimulated cell migration. This is somewhat surprising since STAT3 is not known for its role in actin cytoskeleton regulation, and in keratinocytes STAT3 deficiency appeared to inhibit epithelial cell migration (38). It suggests the importance of examining the functional contributions of STAT3 in the context of Rho GTPase signaling and may also reflect the differences of Rho-STAT3 signals in regulating cell migration in fibroblasts and keratinocytes. Another important finding is that STAT3 is required for the RhoA-induced NF-κB and cyclin D1 transcription and is involved in NF-κB nuclear translocation. Furthermore, loss of STAT3 expression inhibits RhoA-promoted cell proliferation and blocks RhoA- or ROK-induced anchorage-independent growth. By adding back the STAT3 gene to the STAT3−/− cells, these phenotypic changes were readily rescued. These observations provide an important link of Rho GTPase signaling to the nucleus, since although the relationship between Rho GTPase activation and transcription/growth regulation has been extensively studied in the last decade, the events involved in the regulation of some of the transcription factors, such as NF-κB, remain unclear (39). By placing STAT3 activation as an intermediate step before Rho-mediated NF-κB and cyclin D1 activation and cell proliferation, we have progressed a step closer to an appreciation of the interrelationship of the Rho GTPase-regulated transcription events that are key for the cell cycle and/or survival signals. Paradoxically, our observations that NF-κB nuclear translocation and cyclin D1 transcription are dependent on the presence of the STAT3 gene, while filling a gap in the Rho GTPase signaling cascades (Fig. 9), raise more questions on how these transcriptional machineries are interconnected and how they are coordinately involved in cell growth control. More detailed mechanisms of STAT3 activation by each Rho GTPase, as well as the molecular contribution of STAT3 to the proliferation and migration regulation, need to be further determined.
Because RhoA, Rac1, and Cdc42 can all stimulate STAT3 Tyr-705 and Ser-727 phosphorylation, it is likely that downstream effectors of each Rho GTPase could engage both tyrosine kinases such as JAK2 and Src (18) and Ser/Thr kinases such as PAK and ROK to elicit the phosphorylation events. By using the c-Jun N-terminal kinase and extracellular signal-regulated kinase (ERK)-specific pharmacological inhibitors, we were able to exclude the potential contribution by c-Jun N-terminal kinase, ERK1/2, or p38 mitogen-activated protein kinase (Fig. 4; data not shown). Whether the Ser-727 phosphorylation is essential to STAT3 activation remains controversial, but it is well established that phosphorylation of Tyr-705 is required for STAT3 nuclear translocation and activation (12). From our data it is expected that the activated STAT3 would turn on specific transcriptional events and target genes that are integral parts of the cell migration and proliferation machinery of the Rho GTPase signaling paradigms. One way to further map out the involvement of specific STAT3 functions in the context of Rho-mediated migration and/or proliferation could come by reconstituting point mutants of STAT3 with well-defined functions combined with examining the STAT3-responsive target genes by a microarray approach (32). Identification of such STAT3 functions and targets may help define the essential components of Rho GTPase-signaling cascades and reveal the critical link of the signaling networks controlled by these two important classes of intracellular signal molecules.

FIG. 9 (legend, fragment). ...and/or Rac1 to elicit effects on actin organization, cell migration, gene transcription, and cell proliferation. Each of these Rho GTPases may rely on STAT3 and/or STAT3-regulated genes to achieve these cellular responses.
v3-fos-license
2022-07-11T13:09:57.262Z
2022-07-11T00:00:00.000
250410466
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11930-022-00330-1.pdf", "pdf_hash": "4411624f6ff5003e7186d90f89abe41bb6733662", "pdf_src": "Springer", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42641", "s2fieldsofstudy": [ "Medicine", "Education", "Philosophy" ], "sha1": "cad7e687c1229c24feb313b3a40747a1f131af18", "year": 2022 }
pes2o/s2orc
The Ethics of Medical Sexual Health Education and Its Provision

The purpose of this paper is to open a discussion of the ethics of medical sexual health education (SHE) and its provision in medical education. The paper utilizes a qualitative analysis of currently available literature on medical SHE and a medical ethics framework of the four prima facie principles of (1) respect for autonomy, (2) beneficence, (3) non-maleficence, and (4) justice, together with expert opinion. The result is a review of the ethics of medical SHE as well as the ethics of the decision to provide, or not to provide, comprehensive SHE. Recent literature has underscored the many ways in which comprehensive medical SHE supports trainees' ability to provide sexual health care and improve their delivery of general health care, as well as the many ways sexual health is correlated with systemic health. The literature also provides evidence that the provision of comprehensive SHE is limited in undergraduate and graduate medical education. There is a dearth of literature specifically examining the ethics of medical SHE provision. This analysis demonstrates the ways in which comprehensive medical SHE and its provision conform with the principles of the ethical practice of medicine. The analysis also supports that a lack of inclusion of SHE in medical education programs may be a violation of these principles and increases the risk of future unethical practice by medical professionals.

MESH Headings: Ethics, Medical; Social justice; Sexual health; Sexuality, Human; Education, Medical, Undergraduate; Education, Medical, Graduate

Introduction

The history of sexual health education (SHE) in medical education (med-ed) originates in the twentieth century. In the 1950s, a scant handful of medical schools provided a limited amount of instruction (1–4 h) on the physiology of human reproduction [1]. After the sexual revolution, by 1973, 98% of medical schools offered courses in human sexuality, and 70% of those provided SHE as part of the core curriculum [1]. Medical SHE originally incorporated some biopsychosocial focus on sexuality as well as instruction in reproductive physiology, with a dual purpose of providing the basics of the education and shifting undergraduate medical students' personal attitudes towards sexuality. Now medical SHE often emphasizes sexual engagement risk reduction and the biological and physiological elements of reproductive biology and sexual engagement rather than a comprehensive biopsychosocial focus incorporating sexual medicine [1-15]. Unfortunately, since the 1980s, the amount of medical SHE has decreased [5-9, 11, 15]. A 2008 study found that only 51 of 92 schools responding to a telephone survey reported having a sexual health curriculum, and the median number of hours provided was 8 [15]. More recently, while a few universities and programs are expanding their sexual health programs (the University of Minnesota, for example), many SHE providers note their universities are further decreasing SHE. Reasons cited for this reduction include the lack of detailed accreditation standards for SHE by supervising medical educational committees, financial difficulties, and a lack of time and/or faculty [5, 6, 9, 16•]. However, stigma attached to sexuality likely contributes as well [6, 16•]. The reduction of SHE and the lack of support for SHE by educational committees are problematic.
Sexual health must be supported as it, itself, is fundamental to human health, and it is positively correlated with other health states, including financial, educational, physical, mental, and relational [16•, 17-33, 34•, 35]. More recently, this influence is increasingly shown to be bidirectional [18, 19, 25]. Without SHE, medical practitioners are ill prepared to support sexual health. Also, without SHE, medical practitioners may unintentionally contribute to poor sexual and general health by providing medications and other treatments with sexual side effects and by not regularly addressing sexual health as a component of all other health [7, 8, 10, 11, 13-15, 30, 36-41]. The potential for physicians to engage in these negative impacts on health is heightened by lack of education and mitigated by SHE [6-9, 14, 15, 16•, 21, 36, 43-46, 47•, 48•]. Even while the amount of SHE has decreased in med-ed, sexual medicine and health have become flourishing academic subspecialties, and there is a growing body of research and literature that adds to our understanding of this facet of human health. Sexual health specialists have also examined SHE creation, current and future needs, and effects in medicine. The literature reflects an increasing focus on this education from the mid-twentieth century through the mid-1970s before declining through the end of the twentieth century. In recent years, the focus on SHE in med-ed has been renewed. The results have been multifold: careful, considered recommendations for a structure of SHE that would enable medical trainees to competently and consistently address sexual health as a biopsychosocial phenomenon and with an understanding of the basics of sexual medicine; sexual health societies and programs that provide expanded sexual health education materials for professionals; and the creation, provision, and study of discrete SHE curricula [4-6, 8-11, 14, 16•, 36, 43, 47•]. Unfortunately, there are no studies to date that have demonstrated the information provided in these SHE curricula is used or even retained over the long term. And there is a paucity of data examining whether medical students vs residents vs practicing physicians use this information when it has been provided. At least partially, the lack of data is likely due to the same strictures that have limited the provision of this education. But the lack of data is doubtlessly also due to the limited population pool available for study, given the limited amounts and non-standardization of the SHE currently being provided. In addition, even if SHE is provided and then tested in the future, the degree to which the information is retained and used will likely depend on the degree of importance assigned to the knowledge by the students. The perceived importance of any knowledge is influenced by both overt factors (such as whether the course is mandatory and included in evaluations) and covert influences (including the hidden curriculum and the value or stigma attached to the topic). More recently, as resources for SHE have accumulated, proponents of SHE have proposed means for increasing SHE within undergraduate and graduate medical education (UME and GME, respectively), despite the multiple pressures that led to the reduction [5, 6, 8-10, 16•].
With increased provision of research-supported SHE, further studies of the retention and use of this information will be possible and will likely demonstrate that this education, similar to other important, well thought out, institutionally supported core health curricula, is both retained and used. But, to date, the ethics of current SHE and its inclusion in education has not been an area of focus in published literature. Reasons behind the lack of discussion and study of the ethics of SHE provision are likely varied and may include that (1) given the progressive shift to a more bio/medical approach, SHE may be considered simply more scientific data, not particularly worthy of ethical debate, and (2) stigma surrounding sexuality may have contributed to the marginalization of sexual medicine and to a reluctance to consider and address the ethics of sexual health and SHE [5, 6, 8, 9, 16•]. Despite, or perhaps because of, these reasons, evaluation and discussion of the ethics of SHE and its provision are important. To open the discussion of the ethics of SHE, a qualitative analysis of the ethics of the fundamental form and purpose, application, and potential results of SHE will be performed. This analysis will utilize expert opinion, currently available literature, and an ethical framework of the "four principles plus scope" approach to biomedical ethics laid out in "Principles of Biomedical Ethics" by Beauchamp and Childress and summarized in the 1994 paper "Medical ethics: four principles plus attention to scope" by Gillon [49••, 50]. The four prima facie principles are (1) respect for autonomy, (2) beneficence (or the medical injunction of providing benefit to the patient), (3) non-maleficence (or the medical injunction to "do no harm"), and (4) justice (which may be considered to be divided into three: distributive justice, rights-based justice, and legal justice) [49••].

The Ethics of Comprehensive SHE Provision

As noted in a 2021 paper, "stigma and secrecy around sexuality are common and contribute to adverse health outcomes. Sexual stigma is created by social, cultural, and institutional norms and perpetuated through (a lack of or) inaccurate information dissemination or secrecy and by the lack of scientifically supported education [21, 32, 37, 43, 51-54]. While stigma is countered by education and normalization" [16•]. SHE, when scientifically valid and provided without bias or misinformation, is an effective means of mitigating stigma both for the public and within medicine. Reducing stigma, in turn, can contribute to improvements in both sexual and other health measures [11, 30-32, 43, 55-57]. Scientifically valid medical and public SHE, when free of bias and misinformation, thus conforms with the principle of beneficence. The core tenet of scientifically valid, comprehensive medical SHE is to counter sexual stigma and ensure trainees have the capacity to provide sexual health care. One of the benefits of comprehensive medical SHE is that it teaches trainees how to address the stigmatized topic of sexual health and the need to do so consistently. These skills, taught through sexual health, are applicable across specialties [16•]. SHE also provides trainees with the ability to educate patients, as well as diagnose and treat patients' sexual health problems (or at least know when and where to refer) to restore sexual health [6-11, 14, 15, 16•, 32, 36, 43-46, 47•].
Medical SHE therefore doubly decreases stigma (1) by educating about and normalizing sexual health for medical trainees, leading to decreased stigma around sexual health in the medical population, and (2) by supporting that physicians may decrease stigma in the patient population by education and normalization through consistently addressing sexual health as simply another component of systemic health. The provision of SHE thus doubly fulfills the principle of beneficence: benefitting trainees as an educational construct and, in turn, allowing trained practitioners to benefit patients as a professional obligation. The role SHE plays in combatting sexual stigma must also be considered with regard to the principle of justice. Minority and marginalized populations, including gender and sexual minorities (GSM), are disproportionately affected by stigma. These populations have been found to suffer from poor health at disproportionate rates, and these health disparities occur in the context of stigma leading to social marginalization, resulting in increased mental and physical health issues and multiple factors that limit access to care [52, 53, 58•, 59•, 60]. For GSM populations, it is clear that sexual stigma is the primary contributor to these health disparities [58•, 59•, 61]. However, many explanatory models of minority differences encompass sexual stigmas as well [53, 60]. One manner in which sexual stigma can impact health is through perceived medical professional comfort, an important aspect of perceived support, and one that is positively correlated with mental and physical health in GSM and likely in all populations [61]. Comprehensive SHE contains components designed to improve trainees' comfort with addressing sexual health and related topics in all populations, including minorities and GSM, thereby upholding and supporting the principle of justice. Sexual and other stigma also affects general medical care, as health care providers may be personally uncomfortable or uncertain of how to interact with various minority groups, including GSM [58•, 59•, 60, 61]. This "constellation of inequalities can systematically magnify and reinforce initial conditions of ill health, creating ripple effects that impact other dimensions of well-being… The result is a mixture of interactive and cascading effects that require urgent attention from the point of view of justice. The job of (medical) justice is to correct these defects by making the six core elements of well-being (1-Health, 2-Personal Security, 3-Knowledge and Understanding, 4-Equal Respect, 5-Personal Attachments, and 6-Self-Determination) embedded values in social policy" [62]. This includes enabling physicians to be better prepared to treat minority and marginalized populations and GSM, both generally and around sexual health. Comprehensive SHE encompasses this education, and thereby, its inclusion supports the principles of justice, as guidelines for SHE include objectives of knowledge of GSM populations, of personal recognition and mitigation of transference and bias with all populations, and of increasing effective communication [5, 8-11, 14, 16•].
In summary, the fair opportunity rule of social justice "demands that (disadvantaged health populations) receive help to reduce or overcome the unfortunate effects of life's lottery" of health, and the rules of redress require that compensation (in the form of increased attention to the needs of this population) be provided, and the inclusion of comprehensive SHE in medical education provides the means to this end [63].

The Ethics of the Lack of Comprehensive SHE Provision

It has been noted by some medical educators that the lack of inclusion of SHE is not an active choice, but rather a passive one secondary to the scope of medical education pressures to include too much education in too little time [16•, 64•]. Educators also note the choice to limit SHE is supported by limited guidelines for SHE inclusion from educational committees and limited inclusion of sexual health-specific knowledge in evaluations [6-9, 14, 16•, 64•]. With this argument, educators suggest that within the scope of medical education they are conforming with the principle of justice (acting on the basis of fair adjudication of competing claims) to best support the principle of beneficence. This is a weak ethical argument, though, as the scope considered is self-referential and narrow. As there is now a body of work expounding the necessary components of comprehensive SHE and offering suggestions for how to achieve increased amounts of SHE despite curricular pressures, as well as multiple sources for stand-alone SHE classes, there are indisputably multiple ways to ensure SHE can be added to medical education programs [4-6, 8-11, 14, 16•, 36, 43, 47•]. Furthermore, the literature supports that sexual health is an important facet of health, therefore worthy of education, and suggests SHE improves trainees' capacity to provide sexual and general health care [16•, 38, 65]. Therefore, in addition to the ways SHE meets the criteria for the principle of beneficence, the lack of SHE can be considered to be supportive of physicians violating the principles of beneficence and other medical ethical principles. To abstain from utilizing these ready resources to at least enhance the SHE of a program can be conceptualized as an active choice to limit trainees' ability to provide adequate sexual health care and education for their patients. To choose not to include SHE results in medical practitioners who are less able to benefit their patients in this crucial area of health, therefore less able to comply with the principle of beneficence. In addition, the well-being theory of social justice suggests the "job of justice" is to "secure the core elements of well-being… in all six dimensions for each person in every society, including the global society" [62]. A lack of SHE decreases trainees' ability to secure patients' sexual "health" (core element #1) and their sexual health "knowledge and understanding" (core element #3). It could be argued that consequently this abstention goes against the broad principles of beneficence and of justice. The lack of inclusion of comprehensive SHE in a program can also be considered to conflict with the principle of distributive justice. Numerous public health organizations have espoused the need for increased focus on and support of sexual health through both public health efforts and the efforts of individual health care professionals [29-32]. Comprehensive medical SHE is the means of enabling individual medical trainees to achieve this contribution to sexual health care.
For UME and GME programs to not at least make a concerted effort to improve the quality and amount of SHE can be regarded as both a shirking of professional duty and a conscious contribution to inequitable and limited accessibility of sexual health care. This is at odds with the principle of distributive justice. Even if an argument is made that individual medical professionals are only able to contribute to sexual health care marginally, due to constraints of practice, they still require the knowledge and communication skills necessary to determine when a sexual health problem warrants further investigation or treatment and to be able to refer. Comprehensive SHE provides this education and thus supports distributive justice. The lack of provision of SHE may also be considered to create potential for active and passive violations of the principle of non-maleficence. As previously noted, a majority of physicians do not address sexual health on a regular basis, and SHE is a proven method to increase this capacity to address and treat sexual health [2, 3, 5, 7, 9, 15, 16•, 21, 31, 38, 40, 44, 45, 66-70]. Accordingly, to limit the provision of SHE, which directly contributes to physicians continuing to not address sexual health, may be considered to be tantamount to purposefully ignoring and not treating sexual health, a contradiction of the principle of non-maleficence. And, as noted, there is ample evidence that sexual health impacts multiple other facets of health. When we consider the causal sequence of deliberately deciding to not include SHE, we must recognize that one of its consequences is the potential to allow poor sexual health to detrimentally impact systemic health, another contradiction of the principle of non-maleficence. Moreover, when physicians do not address sexual health on a regular basis, they may easily be seen as avoiding a stigmatized topic, thereby perpetuating or even contributing to the stigma surrounding sexuality and sexual health, an active violation of non-maleficence. The lack of SHE, and the correlated lack of physician communication around sexual health, can also be considered oblique neglect of the principle of autonomy (core element #6). There is evidence that patients would like their physicians to initiate conversations about sexual health [2, 41, 42•, 70, 71]. If evidence supports that many patients would like their physicians to address this component of health, and if physicians must at least address the topic to determine if an individual patient wants to engage further, for physicians to not have the capacity to assess or respond to their patients' health desires can be considered an infringement against the principle of patient autonomy. This same lack of ability to communicate engendered by a decision to limit SHE provision can also be considered a neglect of the principle of non-maleficence in another way. When physicians do not initiate the conversation, they place the burden on patients. Evidence supports that patients are reluctant to initiate these conversations as they (1) do not want to make their physicians uncomfortable and (2) doubt their physicians' competence [2, 40, 41, 70]. To make patients shoulder the burden of initiating a difficult, stigma-laden conversation can be considered to be a cause of at least momentary harm. And the ethics of seeming to perpetuate the stigma have already been addressed.
As well, if patients doubt their physicians' competence in this area and physicians support their patients' suspicions through their lack of communication, the patients' concerns can lead to a decreased willingness to engage with the individual physician or with health practitioners in general, indirectly causing greater harm to patients. There is another way a decision to not provide comprehensive SHE violates ethical principles. Sexual medicine research has provided a clear understanding of the impact of treatments, especially medications, on sexual health. This research has also shown that sexual side effects are a common reason for treatment noncompliance. In addition, the sexual medicine and sexual health communities have providers to diagnose and treat all varieties of sexual health dysfunction, whether from medical treatment or not. Comprehensive SHE teaches trainees about the sexual side effects of common medications, to be aware of this potential with all treatments, and provides practitioners with resources so they may refer for sexual health matters. Without comprehensive SHE, many trainees are not aware of this sexual health knowledge. The result is practitioners providing treatments that cause sexual dysfunctions and disorders and not disclosing these potential side effects as part of the discussion of the risks, benefits, and alternatives [72]. This sequence of behaviors is not only a violation of the principle of non-maleficence by potentially causing sexual health harm; it may be considered a further violation of this principle as it can contribute to patients eschewing treatment they need to improve other health conditions because of the sexual health impact. This lack of disclosure of the full sexual health impact may also be considered to flout the principle of autonomy, as it undermines the basic premise of informed consent. The last consideration in this brief initial analysis is that, given the intertwined nature of sexual health and systemic health, many of the behaviors and health interventions that preserve sexual health are the same as those that preserve systemic health [12, 20, 23, 25, 65]. And given that sexual health, or at least sexual activity and sexual satisfaction, is desired by many patients, the preservation of sexual health can be used as an extremely important motivation for patient engagement in systemic health preservation [12, 65, 73]. SHE provides the understanding of this motivation and encourages trainees to use sexual health as a component of a multifaceted health motivation approach. The lack of inclusion of SHE therefore can also be seen as a lack of attention to the principle of beneficence as it pertains to systemic health, not just sexual health.

Conclusion

In conclusion, this analysis demonstrates that the fundamental form and purpose, provision, and potential results of comprehensive SHE in medical education conform with the four principles of medical ethics and support the ethical practice of medicine. This analysis also supports that the lack of provision of SHE must be considered to be a potential violation of the principles of medical ethics and increases the risk of trainees engaging in unethical practice. Implications of this initial analysis include that medical educators, educational committees, and evaluation creators must consider incorporating increased amounts of, if not comprehensive, SHE to ensure ethical education and practice.
Readers of Current Sexual Health Reports can consider this analysis when advocating for greater inclusion of comprehensive SHE in UME and GME generally, in educational guidelines, and in testing. Going forward, continued analysis and discussion of the ethics of SHE and its provision should be undertaken and offered in the literature for further understanding of this topic.

Conflict of Interest: The author declares no conflict of interest. The author discloses she is an author and a medical advisor for Volonte.

Human and Animal Rights Statement: This article does not contain any studies with human or animal subjects performed by any of the authors.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
v3-fos-license
2023-10-06T06:18:01.635Z
2023-10-05T00:00:00.000
263670262
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.jmir.org/2023/1/e46747/PDF", "pdf_hash": "87bf78ca232d15ad1856226fb0419f69b0c5f3a8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42642", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "f3b309573f648e552c4af733bed5d9bc58ef1d4e", "year": 2023 }
pes2o/s2orc
Initiation Patterns and Transitions Among Adults Using Stimulant Drugs: Latent Transition Analysis

Background: The fourth wave of the drug overdose epidemic in the United States includes increasing rates of stimulant-involved overdose. Recent studies of transitions leading to stimulant misuse have shown complex patterns that are not universally applicable because they have isolated individual populations or individual behaviors. A comprehensive analysis of transitions between behaviors and the associations with present-day problematic drug use has not been conducted.

Objective: This study aims to determine whether adults from the general population who use stimulants initiate use through a heterogeneous combination of behaviors and quantify the association between these typologies with present-day problematic drug use.

Methods: Individuals who have reported use of any stimulant in their lifetime were recruited from the 2021 Survey of Nonmedical Use of Prescription Drugs Program, a nationally representative web-based survey on drug use, to participate in a rapid follow-up survey about their past stimulant use. Individuals were asked which stimulants they used, the reasons for use, the routes of administration, and the sources of the stimulant. For each stimulant-related behavior, they were asked at what age, between 6 and 30 years, they initiated each behavior in a 6-year time window. A latent transition analysis was used to characterize heterogeneity in initiation typologies. Mutually exclusive pathways of initiation were identified manually by the researchers. The association of these pathways with present-day problematic drug use was calculated using logistic regression adjusted by the current age of the respondent.

Results: From a total of 1329 participants, 740 (55.7%) reported lifetime prescription stimulant use and 1077

Introduction

The drug overdose epidemic in the United States is constantly evolving, with new drugs emerging and old drugs resurfacing [1]. Recently, a resurgence of stimulant-involved overdose (eg, methamphetamine, amphetamine, and cocaine) [2], both with and without opioids [3], indicates the latest overdose wave involves concomitant use with stimulants, in addition to opioid-specific overdoses [4]. In 2020, nearly 489,000 people aged 12 years and older initiated nonpharmaceutical stimulant use, and 734,000 initiated prescription stimulant misuse [5]. Meanwhile, the dispensing of prescription stimulants is rising [6]. Physiologically, transitions from occasional stimulant use to addiction have been characterized by structural changes to the brain [7], while moderate to high doses of stimulants lead to euphoria, cognitive impairment, and potentially, psychosis [8]. Continued use of stimulants can lead to detrimental changes to neurological structures involved in impulse control, attention, disinhibition in social settings, and habit creation, potentially predicting transitions into use disorder through behavioral change [9]. Understanding behavioral changes would present opportunities for treatment and other interventions, and understanding drug-related behavior prior to progression to substance use disorder is a critical area that needs further study [10].
Recent assessments into nonmedical stimulant initiation have shown complex patterns that are not universally applicable to all individuals who nonmedically use stimulants.Adolescents who initiate stimulant misuse earlier in life are more likely to have future substance-related problems than those who initiate it later in life [11].Initiation among college students peaks in certain months of the year [12] and it is correlated to high academic demand [13].Among individuals entering treatment for opioid use disorder, the number of years between the first use of opioids and stimulants (or vice versa) has steadily decreased from 1991 to 2020 [14], indicating a faster progression through drug use trajectories.In a survey of individuals recruited from Reddit and reporting nonoral prescription stimulant misuse, misuse of prescription stimulants and marijuana preceded the first use of many nonpharmaceutical drugs such as cocaine, methamphetamine, and heroin [15].In a different study, prescription stimulant misuse was not seen as an initiating factor, as it likely occurred after other drug use [16].These studies have isolated individual populations (eg, college students) or individual behaviors (eg, initiation of any misuse) when examining transitions.A comprehensive analysis of transitions between behaviors and the associations with present-day problematic drug use has not been done. Our goal is to determine whether adults from the general population who use stimulants follow a single behavioral pattern of initiation or whether initiation occurs through a heterogeneous combination of behaviors.Using a latent transition methodology, we categorized initiation into typologies, which represent different progressions of stimulant initiation.We also quantify the association between these typologies with present-day problematic drug use.We hypothesized that more than 1 initiation typology would be detected. Data Source A retrospective cross-sectional custom stimulant survey was conducted among adults in the United States who have reported any lifetime stimulant use.This custom stimulant survey was deployed in conjunction with the Researched Abuse, Diversion, and Addiction-Related Surveillance System routine web-based drug survey of the general adult population, the Survey of Non-Medical Use of Prescription Drugs (NMURx) Program, which has been shown to be valid [17] and reliable [18] against 3 national benchmark surveys on drug use and health. 
Parent Survey Description The parent NMURx Program survey is a national sample selected from a web-based panel, which is a group of individuals willing to take surveys for modest compensation [17].A survey administration company recruits panelists and administers the survey.Panelists were recruited through advertising, peer recruitment, and sponsored recruitment events, and panelist recruitment was conducted independently from the parent NMURx Program survey.Within the panelist group, participants for the parent NMURx Program survey were selected to be representative of all regions and with even distribution between male participants and female participants.Selected participants had the NMURx Program survey appear in their list of available surveys on the web-based portal hosted by the survey administration company.The parent NMURx Program survey asks about drug use for prescription and nonpharmaceutical stimulants (among other drug classes).Participant demographics, treatment history, and the Modified Drug Abuse Screening Test (DAST-10) [19] are also collected.The DAST-10 is a continuous score from a self-administered instrument for problematic drug use, and a score of 3 or larger (herein described as "severe problematic drug use") is a suitable indicator for risk of substance use disorder [20].Data were collected from August 27 to October 10, 2021, with an overall completion rate of 70% (42,616 initiated surveys and 30,006 completed surveys).The study was conducted in accordance with the CHERRIES (Checklist for Reporting Results of Internet E-Surveys) checklist [21] (Multimedia Appendix 1). Custom Stimulant Initiation Survey Description The parent NMURx Program survey was used as a case-finding tool.Participants who reported any lifetime stimulant use, prescription or nonpharmaceutical, received a follow-up survey within 2 weeks of their responses to the parent NMURx Program survey.Data collection for the follow-up survey occurred from September 13 to November 1, 2021.The follow-up survey appeared in the eligible participants' list of available surveys, where they could optionally fill the questionnaire for additional XSL • FO RenderX compensation.Eligible panelists were not specifically targeted with recruitment emails or other communication, beyond nonspecific reminders from the survey administration company that surveys were available for them.A total of 8812 respondents (29.4% of the NMURx Program parent survey) reported lifetime use of prescription or nonpharmaceutical stimulants making them eligible to be recontacted.A total of 1919 were recruited into the follow-up stimulant survey and 1329 (69.3% completion rate) completed the survey.It was assumed that nonrecruitment into the follow-up survey was not associated with initiation typologies. 
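The recruitment funnel and the DAST-10 severity threshold described above are easy to sanity-check. The short Python sketch below reproduces the quoted completion and eligibility percentages and encodes the >=3 cutoff for severe problematic drug use; the counts come from the text, while the example scores and function name are invented for illustration.

```python
# Sketch: sanity-check the recruitment funnel and DAST-10 cutoff quoted above.
# Counts come from the text; the example DAST-10 scores are invented.
initiated, completed = 42_616, 30_006                    # parent NMURx Program survey
eligible, recruited, followed_up = 8_812, 1_919, 1_329   # stimulant follow-up survey

print(f"Parent survey completion rate: {completed / initiated:.1%}")    # ~70%
print(f"Eligible for recontact:        {eligible / completed:.1%}")     # ~29.4%
print(f"Follow-up completion rate:     {followed_up / recruited:.1%}")  # ~69.3%

SEVERE_CUTOFF = 3  # DAST-10 >= 3 is treated as "severe problematic drug use"

def is_severe(dast10_score: int) -> bool:
    """Flag a respondent whose DAST-10 score meets the severity threshold."""
    return dast10_score >= SEVERE_CUTOFF

print([is_severe(score) for score in (0, 2, 3, 7)])  # [False, False, True, True]
```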
The follow-up stimulant survey asked additional questions about the initiation of stimulant-related behavior. Participants were first required to reconfirm whether they had used any stimulant in their lifetime to continue. Questions included which stimulant drugs had been used and at what age the first use occurred. In addition, for each stimulant reported, questions included a series of 17 behavioral questions about the route of administration (eg, snorting), the reason for use (eg, used to get high), the source of the drug (eg, obtained from a friend or family member), and the age they first engaged in such behaviors. The earliest reported initiation age for each behavior was used for analysis. A list of drugs and behavioral questions is provided (Multimedia Appendix 2). The quality of retrospective data is at risk from recall bias and could be incomplete [22]; however, calendar instruments have been shown to mitigate this bias [23,24]. A calendar instrument was used that primed participants with life events. Participants were asked to provide 5 life events (eg, purchasing their first car) and the age those events occurred. When asked about drug-related ages of initiation, the responsive design elements of the survey used the life events and ages to aid recall by keeping them visible to the respondent while the drug behavior questions were asked. A screenshot of the calendar instrument used is included (Multimedia Appendix 2). A final question asking whether individuals' answers should be trusted was used as an exclusion criterion (if answered "No") to reduce residual measurement bias; this has been shown to parsimoniously remove "careless" responses [25]. It was assumed that all responses were accurate without residual recall bias after exclusions were applied. Ethical Considerations The NMURx Program study protocol was reviewed and approved by the Colorado Multiple Institutional Review Board prior to data being collected (#16-0922). The follow-up survey was approved by the Office of Management and Budget Paperwork Reduction Act review on July 13, 2021 (#0910-0847). Participants consented to be surveyed for both the parent survey and the custom stimulant survey. Data privacy is protected by a National Institutes of Health Certificate of Confidentiality. Latent Transition Analysis A latent transition analysis (LTA) was conducted to identify distinct typologies and transitions between typologies [26]. Briefly, this exploratory approach assumes that participants transition between latent statuses, where a status represents a subgroup of participants with a similar set of initiation behaviors. The reported age of initiation was used to construct a retrospective longitudinal trajectory of behaviors for each person. Six-year age windows (6-11, 12-17, 18-23, 24-29, and 30+ years) were used to establish categorical transition windows. An initial latent class analysis was conducted to identify important indicators, and subsequent models collapsed or removed indicators to improve model performance and interpretability, as has been done previously [27]. Item-response probabilities were defined by the ρ parameters, which indicate the probability that a person in a latent status would engage in a behavior. It was assumed that all initiation statuses were possible at all ages and that there were no time-varying changes in how statuses were defined.
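Because the age-window construction above is the backbone of the LTA data set, a small worked example may help. The sketch below (Python with pandas; the published analysis was run in SAS 9.4, and every column name, behavior label, and age shown is hypothetical) bins per-behavior initiation ages into the five windows and pivots them into the person-by-window indicator matrix that a latent transition model would consume.

```python
import pandas as pd

# Sketch: bin per-behavior initiation ages into the five age windows and pivot
# into a person-by-window indicator matrix. All names and ages are hypothetical.
records = pd.DataFrame({
    "person_id": [1, 1, 2, 2, 3],
    "behavior":  ["cocaine_use", "snorting", "rx_amphetamine_own_rx",
                  "use_to_get_high", "cocaine_use"],
    "init_age":  [16, 16, 9, 19, 27],
})

bins = [5, 11, 17, 23, 29, 200]                       # 6-11, 12-17, 18-23, 24-29, 30+
labels = ["6-11", "12-17", "18-23", "24-29", "30+"]
records["age_window"] = pd.cut(records["init_age"], bins=bins, labels=labels)

indicators = (records
              .assign(initiated=1)
              .pivot_table(index=["person_id", "age_window"],
                           columns="behavior", values="initiated",
                           fill_value=0, observed=False))
print(indicators)  # 0/1 indicators of which behaviors were initiated in each window
```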
A series of LTA models were fit to determine the optimal number of latent statuses.Models with 1 through 6 statuses were considered; the most parsimonious, based on the smallest Bayesian Information Criterion, and an interpretable model was selected.When 2 models were similarly interpreted, the most parsimonious model with fewer statuses was preferred.Finally, once the model was identified, mutually exclusive pathways of initiation were identified manually by the researchers.The association of these pathways with present-day severe DAST-10 scores (≥3) was calculated using logistic regression adjusted by the current age of the respondent.All analyses were conducted in SAS (version 9.4; SAS Institute). Descriptive Statistics Within the follow-up sample of 1329, 740 (55.7%) participants confirmed lifetime use of a prescription stimulant and 1077 (81%) participants confirmed lifetime use of a nonpharmaceutical stimulant.Table 1 outlines participant characteristics by age of stimulant initiation.A total of 463 (34.8% of the entire sample) initiated before age 18 years of age, with 111 (8.4% of the entire sample) initiating from 6 to 11 years.Regardless of initiation ages, most initiated illicit stimulant use first or only used illicit stimulants, although the use of prescription drugs first was much higher in those who initiated after 22 years of age.Individuals initiating stimulant use before 23 years of age had slightly higher DAST-10 scores and were more likely to have used other drugs in their lifetime. Figure 1 shows the timing of prescription stimulant initiation stratified by the age of initiation of nonpharmaceutical stimulant use (n=1077, 81%).Regardless of what age nonpharmaceutical stimulant use was initiated, less than a quarter of participants indicated prescription stimulant use came before nonpharmaceutical stimulant use.The later in life an individual initiated nonpharmaceutical stimulant use, the more likely they were to have ever used a prescription stimulant. Model Overview The selected latent transition model based on patterns of initiation had 4 statuses which were held constant across the 5 age windows, as shown in Figure 2. Item-response probabilities (ρ parameters) are visually provided in Figure 2, while the values are provided in Multimedia Appendix 3. Based on an analysis of the behaviors associated with each status, the following names were selected: No initiation, illicit experimentation, conservative initiation, and nondiscriminatory experimentation.Brief descriptions are provided as follows. No Initiation This status was primarily defined by no initiation of new behaviors, and it was the predominant latent status over time. Participants in this status were not starting new behaviors (ρ near 0), although it is possible they could have continued behaviors they had initiated previously. Illicit Experimentation This status was characterized primarily by illicit stimulant initiation (usually cocaine, ρ=0.55) with a small probability of prescription amphetamine initiation (ρ=0.09).However, prescription amphetamine initiated in this status had a near zero probability of being acquired from their own prescription (ρ<0.01),indicating that any prescription amphetamine initiation that occurs in this status is nonmedical use.This status was also strongly characterized by initiating snorting stimulants (ρ=0.71) or oral use (ρ=0.51),new acquisition from friends or family members (ρ=0.79) or a dealer (ρ=0.54), and initiation of use to get high (ρ=0.94). 
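The model-selection rule described above (fit 1 through 6 statuses, prefer the smallest Bayesian Information Criterion, then check interpretability) can be made concrete in a few lines. This is only a sketch: the log-likelihoods and parameter counts are invented placeholders rather than the fitted values, and the real models were estimated in SAS.

```python
import math

# Sketch: choose the number of latent statuses by smallest BIC, then check
# interpretability. Log-likelihoods and parameter counts are invented placeholders.
n_respondents = 1329

candidates = {
    # statuses: (log_likelihood, number_of_free_parameters)
    1: (-21500.0, 20),
    2: (-20400.0, 45),
    3: (-19900.0, 74),
    4: (-19650.0, 107),
    5: (-19600.0, 144),
    6: (-19580.0, 185),
}

def bic(log_lik, k, n):
    """Bayesian Information Criterion: smaller is more parsimonious."""
    return -2.0 * log_lik + k * math.log(n)

scores = {s: bic(ll, k, n_respondents) for s, (ll, k) in candidates.items()}
for s in sorted(scores):
    print(f"{s} statuses: BIC = {scores[s]:,.1f}")

best = min(scores, key=scores.get)
print(f"Smallest BIC at {best} statuses (final choice still requires interpretability)")
```

With these illustrative numbers the minimum happens to fall at 4 statuses, the same model size ultimately retained in the paper.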
Conservative Initiation This status was defined by low but approximately equal probabilities of initiating any use across all stimulant drugs (ρ values approximately 0.10).Furthermore, participants in this status were likely to initiate 1 or 2 new behaviors such as snorting (ρ=0.15) or obtaining the drug from a friend or family member (ρ=0.13).Notably, no singular set of reasons for use, routes of administration, or source of the drug strongly characterized this status.Rather, this status was characterized by participants selecting a small number of new behaviors to try during each age window. Nondiscriminatory Experimentation This status was characterized by a modest to high probability of engaging in multiple new behaviors across drugs, reasons, routes, and sources.Initiation of nonoral routes (ρ>0.50);initiation of use to get high (ρ=0.76),for cognitive performance (ρ=0.54),athletic performance (ρ=0.36); and polydrug use (ρ=0.68)were very likely in this status. Latent Status Transitions Figure 3 presents the movement of respondents between latent statuses across age windows as a Sankey plot.The width of the flow represents the percentage of individuals progressing between initiation statuses.No initiation latent status was the highest prevalence across all age windows, indicating that in most age windows, participants were not initiating new drug use behaviors.During the 6-11-year age window, the percentage of the No initiation status was the largest, with a small prevalence of conservative initiation, likely from initiation of medical use of stimulant drugs.Entering the 12-17-year age window, participants who were not initiating had a 14% probability of transitioning to the conservative initiation status and an 11% probability of transitioning into illicit experimentation.Entering the 18-23-year age window, the highest probability of progressing into the 2 Experimentation status was seen from the No initiation status.Entering the 24-29-year age window, those within the illicit experimentation had a 51% probability of transitioning into conservative initiation and a 48% probability of transitioning into No initiation.In the 24-29 and the 30 years and older age windows, transitions were primarily between conservative initiation and No initiation.During these 24 years and older age windows, a large percentage of respondents are characterized by no new initiation, indicating they were not trying new behaviors.All LTA item-response probabilities and transition probabilities are provided in Multimedia Appendix 3. Pathways of Stimulant Use Progression Four common initiation pathways across the ages were identified to parsimoniously define how individuals progressed through different behaviors related to stimulant use.Respondents were classified as (1) only transitioning to the conservative initiation status (n=481, 36%), (2) only transitioning to the illicit experimentation status (n=264, 21%), (3) transitioning between illicit experimentation and conservative initiation (n=250, 19%), and (4) transitioning to the nondiscriminatory experimentation status, regardless of any other status (n=288, 22%).Notably, it was extremely rare (n=46, 3%) for participants to transition from conservative initiation to illicit experimentation (seen in Figure 3 as very little flow from green to blue). 
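Because the four pathways are defined by which initiation statuses a respondent ever occupies across the five windows, the manual classification can be written down as a simple rule. The helper below is a sketch with shortened, hypothetical status labels; it mirrors the verbal description above rather than reproducing the authors' procedure.

```python
# Sketch: collapse a respondent's modal status sequence (one status per age window)
# into the four mutually exclusive pathways described above.

def classify_pathway(status_sequence):
    """status_sequence: iterable of modal statuses across the five age windows."""
    initiating = set(status_sequence) - {"no_initiation"}
    if "nondiscriminatory" in initiating:
        return "any nondiscriminatory experimentation"   # pathway 4
    if initiating == {"conservative"}:
        return "conservative initiation only"            # pathway 1
    if initiating == {"illicit"}:
        return "illicit experimentation only"            # pathway 2
    if initiating == {"conservative", "illicit"}:
        return "illicit experimentation and conservative initiation"  # pathway 3
    return "no initiation in any window"

example = ["no_initiation", "illicit", "conservative", "no_initiation", "no_initiation"]
print(classify_pathway(example))  # -> illicit experimentation and conservative initiation
```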
The odds of having present-day severe DAST-10 scores significantly differed by the initiation pathway participants followed through time, even after adjusting for birth cohort (Table 2).Those with an initiation pathway involving nondiscriminatory initiation had 5.45 (95% CI 3.39-8.77)times the odds of a severe present-day DAST-10 score compared to those within the only illicit experimentation pathway.Those who progressed from illicit experimentation to conservative initiation had 3.50 (95% CI 2.13-5.74)times the odds of a severe DAST-10 score compared to those within the only illicit experimentation pathway.Those with only conservative initiation had 1.84 (95% CI 1.14-2.94)times the odds of a severe DAST-10 score as compared to those with only illicit experimentation. The amount of time spent initiating new stimulant use differed between pathways.Among participants who only transitioned into the conservative initiation status, on average, they spent 2 age windows in this status.Among participants who only transitioned into the illicit experimentation status, on average, spent 1 age window in this initiation status.Therefore, those who only initiated illicit experimentation tended to do so in only 1 period of their life, while those who initiated more conservatively continued to initiate new behaviors across multiple periods in their lives. Principal Findings Stimulant initiation is not homogeneous.This study uniquely identified subtypes of initiation in use rather than static classes of how stimulants are used.There were three subtypes of initiation characterizing (1) a slower, more conservative initiation pattern engaging in few behaviors in the age window, (2) a distinct initiation of illicit drugs to get high with oral or snorting use, and (3) a more varied, less discriminating initiation pattern engaging in many behaviors in the age window.None of the initiation subtypes were predominantly defined by the drug used, although preferences such as the preference of illicit experimentation to initiate cocaine use were observed.The number and variety in behaviors initiated, including reason for use, route of administration, and sourcing the drug, was a stronger differentiating factor than the choice of drug.Findings are consistent with other literature showing those who initiate stimulant use earlier in life have more substance-related problems [11]. 
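The adjusted odds ratios in Table 2 come from a logistic regression of severe DAST-10 status on initiation pathway with the respondent's current age as a covariate. A minimal sketch of such a model is shown below using statsmodels on simulated placeholder data; the actual analysis was performed in SAS, and the odds ratios quoted in the final comment are the published values, not what the placeholder data would yield.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch: age-adjusted odds of a severe DAST-10 score (>= 3) by initiation pathway.
# Simulated placeholder data; reference level is "illicit only", as in the paper.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "severe_dast": rng.binomial(1, 0.3, size=500),
    "pathway": rng.choice(["illicit only", "conservative only",
                           "illicit+conservative", "nondiscriminatory"], size=500),
    "age": rng.integers(18, 80, size=500),
})

model = smf.logit(
    "severe_dast ~ C(pathway, Treatment(reference='illicit only')) + age",
    data=df).fit(disp=False)

summary = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(summary.round(2))
# With the real data these contrasts gave ORs of 1.84, 3.50 and 5.45 vs illicit-only.
```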
Implications for Intervention The progression of individuals through stimulant use behaviors is heterogeneous, with potential consequences for problematic drug use later in life.With the recognition that earlier stages of drug use need to be targeted for clinical intervention [10], screening for and discussing patients' behavioral patterns could be a way for clinicians to address drug use before patients develop more severe substance use disorders.The results presented here show that individuals who protract their initiation of nonmedical use behaviors, including potentially nonmedical use of prescription stimulants, across many years are at higher risk of present-day problematic drug use than those who had a single initiation period involving snorting or oral use of nonpharmaceutical stimulants to get high.It is commonly known that initiation of stimulant nonmedical use often occurs during early adulthood [11], and these results confirm that.However, these results also show that initiation continued for many into the second half of their 20s and beyond.The conservative initiation status, often occurring during at least 2 time windows, was predictive of present-day problematic drug use, and therefore could benefit from intervention at multiple time points in a patient's life if these behaviors are detected by health care professionals.Screening for unhealthy drug use is recommended for all adults [28], and tools such as the Screening, Brief Intervention, and Referral to Treatment approach [29] could offer prompts for health care professionals to begin discussions with their patients. Another high-risk pattern identified that signals potential problematic use was nondiscriminatory experimentation.Study participants with this initiation pattern, consisting of one-fifth of all those studied, had 5.45 times greater odds of a severe DAST-10 score in the present day.Major features of polysubstance use and use by nonoral routes suggest high risk for other diagnoses including infectious disease exposures and use disorders beyond stimulants alone.These individuals may benefit from early intervention if this pattern is screened for and identified, both through harm reduction measures and other medical and psychiatric evaluation and treatment options. Notably, prescription stimulant nonmedical use did not emerge as a separate latent status nor was it common for individuals to use prescription stimulants before nonpharmaceutical stimulants, indicating that behavior is not a differentiating factor in determining lifetime stimulant use progression.This could be an important distinction for the progression of stimulant nonmedical use from what has been observed historically for opioid nonmedical use.Although an LTA analysis was not conducted on opioids, past work has shown prescription opioid use often precedes nonpharmaceutical opioid use [30,31].Despite the fact that those who use stimulants are approximately twice as likely to misuse them than those who use opioids [5], the results presented here show that initiation of prescription stimulant use first is uncommon, even as the likelihood of using a prescription stimulant has increased in younger birth cohorts. 
Strengths and Limitations The primary strength of this study is that participants were sourced from a large general population survey, and not from specific subpopulations (eg, college students).This allowed a more comprehensive understanding of what subtypes of behaviors are present and how they changed over time.The study also proactively mitigated measurement bias using previously established methods to exclude careless respondents. The study has 3 primary limitations.The first is recall bias.For some, participants were asked to recall behaviors from decades ago, which is likely incomplete or misremembered.However, the inclusion of a calendar tool mitigated recall bias.Second, the responding sample was skewed toward males, older individuals, White individuals, individuals from the western and southern regions, and higher-income individuals relative to the larger survey.However, the sample analyzed had similar overall problematic drug use scores to the larger survey, which could mitigate demographic-related bias.Third, while the study was recruited from a large diverse sample, the results were unweighted.Biases inherent in the self-selection, such as the relative poorer health of panelists [17], are uncorrected.Taken together, the low recruitment rate, demographic differences, and panel self-selection mean the results may not identify all possible initiation statuses.For example, unique pathways experienced by non-White participants or those from younger birth cohorts may be undetectable in this study.Statistical methods that correct for misrepresentation in follow-up surveys, such as the application of nested case-control designs [32], may enhance the detection of underrepresented pathways. Conclusions Public health implications of this work include the importance of repeated screening for both nonmedical and nonpharmaceutical stimulant use and clarity on how different progressions might lead to future problematic behavior.In future work, models evaluating polysubstance initiation and longitudinal studies of trajectories will be crucial to understanding the role of stimulants within a broader model of substance use.While stimulant use is studied as a single phenomenon here, it is part of a much broader pattern of use of multiple substances. Figure 3 . Figure 3. Transition probabilities and prevalence of latent statuses over time. Table 1 . Sample characteristics stratified by age of stimulant initiation. Table 2 . The odds of having present-day severe Modified Drug Abuse Screening Test (DAST-10) scores by initiation pathway.
v3-fos-license
2022-01-16T05:08:20.166Z
2022-01-14T00:00:00.000
245963453
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "6a65dc0bde604f9124bc7634f3df4b1f5eefa490", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42643", "s2fieldsofstudy": [ "Medicine" ], "sha1": "6a65dc0bde604f9124bc7634f3df4b1f5eefa490", "year": 2022 }
pes2o/s2orc
PET metabolic tumor volume as a new prognostic factor in childhood rhabdomyosarcoma Purpose Childhood RMS is a rare malignant disease in which evaluation of tumour spread at diagnosis is essential for therapeutic management. F-18 FDG-PET imaging is currently used for initial RMS disease staging. Materials and methods This multicentre retrospective study in six French university hospitals was designed to analyse the prognostic accuracy of MTV at diagnosis for patients with RMS between 1 January 2007 and 31 October 2017, for overall (OS) and progression-free survival (PFS). MTV was defined as the sum of the primitive tumour and the largest metastasis, where relevant, with a 40% threshold of the primary tumour SUVmax. Additional aims were to define the prognostic value of SUVmax, SUVpeak, and bone lysis at diagnosis. Results Participants were 101 patients with a median age of 7.4 years (IQR [4.0-12.5], 62 boys), with localized disease (35 cases), regional nodal spread (43 cases), or distant metastases (23). 44 patients had alveolar subtypes. In a univariate analysis, a MTV greater than 200 cm3 was associated with OS (HR = 3.47 [1.79;6.74], p<0.001) and PFS (HR = 3.03 [1.51;6.07], p = 0.002). SUVmax, SUVpeak, and bone lysis also influenced OS (respectively p = 0.005, p = 0.004 and p = 0.007) and PFS (p = 0.029, p = 0.019 and p = 0.015). In a multivariate analysis, a MTV greater than 200 cm3 was associated with OS (HR = 2.642 [1.272;5.486], p = 0.009) and PFS (HR = 2.707 [1.322;5.547], p = 0.006) after adjustment for confounding factors, including SUVmax, SUVpeak, and bone lysis. Conclusion A metabolic tumor volume greater than 200 cm3, SUVmax, SUVpeak, and bone lysis in the pre-treatment assessment were unfavourable for outcome. Introduction 101 patients have been previously reported [24,25,27]. This prior article dealt with prognosis factors especially the MTV whereas previous studies compared the efficacy of different treatment exposure. The monitoring methods after the end of the treatment were also defined by the protocols. All children and adolescents aged 0-20 years who had been diagnosed with localized or metastatic RMS were prospectively included. Histological diagnosis was carried out by the local pathologist and reviewed by the EpSSG Pathology Panel. Alveolar subtype was based on histology, as fusion status was not mandatory. Each tumour was classified according to its site of origin. Evaluation of lymph node involvement was primarily based on MRI or CT, but was verified by sampling on suspicion. Regional lymph nodes were defined as those appropriate to the site of the primary tumour. Induction chemotherapy was administered according to the EpSSG risk group, with liver-bile duct RMS considered an unfavourable other site. Patients within high-risk group were first randomized to receive either ifosfamide, vincristine and actinomycin (IVA regimen) or IVA + doxorubicin (IVADo regimen). Those who achieved complete remission (CR) after nine courses cycles of induction chemotherapy and local therapy entered a second randomization phase, receiving either no further chemotherapy, or 6 months of maintenance chemotherapy using vinorelbine and cyclophosphamide (VC). Local treatment was recommended after four cycles of induction chemotherapy, and was decided by the local multidisciplinary team, using delayed surgery, radiotherapy, or both. 
Patients with metastatic disease received induction chemotherapy (four cycles of IVADo + five cycles of IVA, ± bevacizumab), surgery and/or radiotherapy, followed by maintenance chemotherapy (12 cycles of low-dose cyclophosphamide + vinorelbine). Local therapy (surgery + radiotherapy) was planned after six courses. PET protocol All patients fasted for at least 5 hours prior to injection and had a blood sugar level below 120mg/dl. The activity of the FDG administered intravenously 60 minutes before examination was adapted to the patients' weight and age, in accordance with the EANM paediatric calculators (https://www.eanm.org/publications/dosage-calculator/). The following PET equipment was used in the six sites: in Clermont-Ferrand, a General Electric Discovery ST and a General Electric Discovery ST710, in Lyon, a Philips Gemini Allegro Body then, from 2012 onwards, a Philips Gemini Big Bore; in Marseille, a GE discovery until 2010, then a Siemens biograph 16; in Montpellier, a Siemens Biograph until 2015, then a Siemens mCT20 flow; in Toulouse, a Siemens biograph 6.0 Truepoint Hirez; and in Paris, a Philips Gemini XLI until 2016, then a General Electric Discovery ST710. The PET-CT scans were interpreted by two nuclear medicine physicians, of 2 and 16 years of experience, blinded to the clinical data except for the pathological diagnosis. MTVs were measured using the same imaging analysis software (TrueD-Siemens SyngoCT2006A). Studies were read independently, and in case of disagreements the exams were interpreted once again with the two physician to find a consensus agreement. An initial visual analysis was performed to determine the sites of abnormal FDG uptake. Any uptake greater than the adjacent background activity and which could not be explained by physiological or inflammatory phenomena was considered to be pathological. SUVmax, SUVpeak and MTV were calculated by placing a spheroid-shaped volume of interest at the site of the primary tumour lesion. From the physics side, after phantom studies [28, 29], a threshold value of 40% to define the tumor boundary on PET images was used in many clinical studies [30][31][32][33]. This 40% threshold is the most common index in clinical practice for evaluating tumor prognosis [34,35]. Nowadays PET imaging softwares offer an automatic 40% SUV approach to delineate tumor contours. Thereby a threshold of 40% of the SUVmax was applied in our study. Tumour volume was delineated by all the voxels within the initial volume of interest, with SUV values equal to or above this threshold (Fig 2). In the event of nonlocalized disease, we deemed that exhaustive measurement of all the lesions was impossible in daily clinical practice. We therefore adopted an approach whereby the total tumour volume was measured as the sum of the primary tumour volume and the volume of the largest distant lesion (lymph node or distant metastasis). MTV therefore referred either to the metabolic volume of the primary tumour, or to the sum of the primary tumour and the largest distant lesion. The presence of bone lysis induced by tumour contiguity on the PET-CT scan was also noted. Statistical analysis Chi 2 or Fisher tests were used to compare categorical variables, and Student or Wilcoxon-Mann-Whitney tests to compare quantitative variables. PFS was defined as the time interval between diagnosis and progression or death, whichever occurred first. OS was defined as the time from diagnosis to death. 
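Because the 40% SUVmax thresholding step is central to the MTV measurements, a short sketch of how such a volume could be computed from a segmented volume of interest may be useful. This is not the TrueD workflow used by the authors: it assumes a NumPy array of SUV values per lesion and a hypothetical voxel size, and it applies the 40% threshold to each lesion's own SUVmax, which is one possible reading of the definition given above.

```python
import numpy as np

def metabolic_tumor_volume(suv_voi, voxel_dims_mm=(4.0, 4.0, 4.0), threshold=0.40):
    """MTV (cm^3) of one lesion: voxels in the VOI with SUV >= 40% of that lesion's SUVmax."""
    suv_voi = np.asarray(suv_voi, dtype=float)
    mask = suv_voi >= threshold * suv_voi.max()
    voxel_volume_cm3 = np.prod(voxel_dims_mm) / 1000.0   # mm^3 -> cm^3
    return float(mask.sum()) * voxel_volume_cm3

# Total MTV as used in the study: primary tumour plus the largest distant lesion.
rng = np.random.default_rng(0)
primary_voi = rng.gamma(2.0, 2.0, size=(40, 40, 40))        # placeholder SUV values
largest_distant_voi = rng.gamma(2.0, 1.5, size=(20, 20, 20))

mtv_total = metabolic_tumor_volume(primary_voi) + metabolic_tumor_volume(largest_distant_voi)
print(f"MTV = {mtv_total:.1f} cm^3 (dichotomised at 200 cm^3 in the study)")
```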
PFS and OS were analysed using the Kaplan-Meier method, the logrank test, and the Cox proportional hazards model to estimate hazard ratios (HRs) along with 95% confidence intervals (95% CIs). Follow-up was estimated using the reverse Kaplan-Meier method [36]. To test the prognostic value of MTV, a minimum p value approach was used to determine the cut-off value [37]. In a multivariate analysis, the first model included all the possible confounding factors with a p value < 0.2 in the univariate analysis. The model was reduced using backward elimination until only significant effects remained. First-order interactions were explored and proportionality was checked. Tests were two-sided, and p values below 0.05 were considered significant. Analyses were performed using STATA version 14.2 software (StataCorp). Results Of the 326 patients with histologically confirmed RMS, 101 were eligible according to the prespecified exclusion criteria (Fig 1). Patients' characteristics are shown in Table 1. Median age at diagnosis was 7.4 years (IQR [4.0-12.5]); children with alveolar RMS were older (p = 0.001), and 61% were male. The most frequent histology subtype was nonalveolar RMS (56%). Thirty-seven percent of patients had a distant site with lymph node involvement, and 24 had bone lysis induced by tumour contiguity, as in the 4-year-old child with a left mandibular embryonal RMS shown in Fig 3. There were more patients with advanced stages in alveolar RMS (p = 0.007). The PET characteristics are shown in Table 2. The median MTV of the primary tumour was 26 cm3 (IQR [10.2-103]), and the median MTV of the primary tumour plus largest distant lesion, where relevant, was 31 cm3 (IQR [13.1-172]). MTV was assessed as a prognostic factor for both OS and PFS, and because cut-off values are easier for clinicians to grasp and apply (e.g., RMS staging in four steps), we calculated the best MTV cut-off using the minimum p value approach. An MTV of 200 cm3 was the optimum cut-off point in the OS and PFS analyses, and 22% of patients had an MTV above 200 cm3. Overall Survival (OS) Median OS was 72.5 months (IQR [20.5-not reached]), and the probability of surviving for 3 years after diagnosis was 62% (Fig 4A). Age at diagnosis, Stage 4, bone lysis induced by primary tumour contiguity, SUVpeak, SUVmax, and an MTV above 200 cm3 were prognostic factors for OS in the univariate analysis. Progression-Free Survival (PFS) Median PFS was 70.5 months (IQR [13.5-not reached]), and the probability of PFS for 3 years after diagnosis was 53% (Fig 4B). Age at diagnosis, Stage 4, primary tumour excision, bone lysis induced by primary tumour contiguity, SUVpeak, SUVmax, and MTV above 200 cm3 were prognostic factors for PFS in the univariate analysis (Fig 6). In the multivariate analysis, an MTV above 200 cm3 remained an independent prognostic factor for both OS and PFS after adjustment for confounding factors. Discussion The objective of this 10-year multicentre cohort study was to assess the prognostic value of metabolic tumor volume, measured on the PET imaging carried out as part of patients' RMS extension assessment. Thus, after adjusting for confounding factors in the multivariate analysis, the risks of death and recurrence were approximately 2.6 (p = 0.009) and 2.7 (p = 0.006) times higher for an MTV ≥ 200 cm3. Although patient selection relied on a PET scan being performed at diagnosis, our population was representative of the clinical reality, especially for the proportion of boys/girls and the distribution in each risk group [26,38], contrary to previously published PET studies [11,14,39]. The age distribution was as previously described, with bimodal peaks in childhood [40].
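The minimum p value approach used to derive the 200 cm3 cutoff amounts to scanning candidate thresholds and keeping the one with the smallest log-rank p value. The sketch below illustrates the idea with the lifelines package on simulated data; the real analysis was done in Stata 14.2, and because this procedure is prone to optimism, any cutoff chosen this way still needs independent validation.

```python
import numpy as np
import pandas as pd
from lifelines.statistics import logrank_test

# Sketch of the minimum p value approach: scan candidate MTV cutoffs and keep the
# one that minimises the log-rank p value. Data are simulated placeholders
# (mtv in cm^3, OS in months, death indicator).
rng = np.random.default_rng(1)
n = 101
df = pd.DataFrame({
    "mtv": rng.lognormal(mean=3.5, sigma=1.3, size=n),
    "os_months": rng.exponential(scale=60, size=n),
    "death": rng.integers(0, 2, size=n),
})

candidate_cutoffs = np.quantile(df["mtv"], np.linspace(0.10, 0.90, 17))
results = []
for cutoff in candidate_cutoffs:
    high = df["mtv"] > cutoff
    test = logrank_test(df.loc[high, "os_months"], df.loc[~high, "os_months"],
                        event_observed_A=df.loc[high, "death"],
                        event_observed_B=df.loc[~high, "death"])
    results.append((cutoff, test.p_value))

best_cutoff, best_p = min(results, key=lambda item: item[1])
print(f"Cutoff minimising the log-rank p value: {best_cutoff:.0f} cm^3 (p = {best_p:.3f})")
# In the study this procedure selected 200 cm^3 for both OS and PFS.
```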
Nevertheless, the proportions of alveolar RMS and metastatic RMS were greater in our cohort than in previously reported ones [3,41,42]. This could be explained by the exclusion of 16 patients with embryonal RMS at the localized stage, whose primary tumour could not be measured because they had undergone excision surgery before the PET examination was performed. It took some time for PET to become systematically performed at RMS diagnosis, meaning that very few children in the early years of the study underwent PET imaging as part of their extension assessment; as a result, 200 patients were also excluded (Fig 1). The best way of measuring tumour dimensions is a widely debated subject. Although a tumour size of more than 5 cm is historically considered to be a prognostic factor and is used for staging, it seems more relevant to measure the tumour in 3D, especially for oblong lesions. We did not find any publication concerning MTV in RMS in either children or adults, even though this parameter has been reported as a prognostic factor for several other malignant diseases, such as Hodgkin's lymphoma, advanced ovarian squamous cell carcinoma, nonsmall cell lung cancer, and metastatic colorectal cancer [18][19][20][21][22]. In a previous study of 108 patients with rhabdomyosarcoma combining MRI and CT scans, Ferrari et al. demonstrated a correlation of tumour size and volume with risk of death, with the risk increasing proportionally to tumour size and volume until a plateau was reached for lesions >12 cm (major axis) or >194 cm3 [43]. However, a correlation between a factor and death does not by itself establish that factor as a prognostic factor for death. Similarly, Baum et al. showed a correlation between the intensity of primary tumour uptake and OS, but failed to demonstrate that it was an independent predictor of survival [39]. In addition, the use of morphological CT and/or MRI measurements in Ferrari et al.'s study did not take the aggressiveness of the tumour into account, contrary to PET, which provides information about its metabolic activity. These findings are consistent with our results concerning the increased risk of progression and death linked to increasing MTV. In our population, a tumour size > 5 cm was taken into account through RMS stage. Thus, after adjusting for confounding factors in the multivariate analysis, including RMS stage, the risks of death and recurrence were approximately 2.6 (p = 0.009) and 2.7 (p = 0.006) times higher for an MTV ≥ 200 cm3 (p = 0.024). The advantage of using MTV over tumour size is the possibility of reclassifying small but metabolically active lesions as higher-risk lesions. We did not measure the tumour volume from the CT scan, mainly because a large number of children had a chest CT and abdominopelvic ultrasound rather than a thoraco-abdominopelvic scan. Some tumours may also have poorly defined contours and are therefore difficult to measure on CT, as previously illustrated in Fig 2. Finally, the CT combined with the PET did not include injection of an iodinated contrast medium, so the tumour/anatomical structure contrast was too low to allow automatic contouring in routine clinical practice. Instead of exhaustively measuring the volume of each metastasis at diagnosis in the case of polymetastatic disease, we considered a maximum of two lesion volumes (i.e., primary tumour and largest metastasis).
This point may be subject to discussion, but we considered that it is not feasible to exhaustively measure the MTV of all metastases in the event of multifocal involvement in daily clinical practice. This type of time-consuming measurement may be appropriate in research protocols, but it is less common in daily use and may lead to interobserver variation. In the univariate analysis, MTV was associated with a poorer prognosis (both for OS and PFS), even if the volume of the distant lesion was not considered in the MTV calculation. Conceptually speaking, measuring the primary tumour and the largest metastasis appears to be a better approach to gauging the actual tumour burden, and the resulting HR may be closer to reality. This type of approach is already used with the RECIST 1.1 criteria for CT evaluation of the therapeutic response in oncology, involving a maximum of five target lesions, with a maximum of two per organ [44,45]. Most PET studies use SUV, and especially SUVmax, to approximate the tumour's aggressiveness, but the measure shows a certain variability [15,16]. Thus, Brendle et al. showed that SUV calculated from the same PET acquisition varied according to the different reconstruction algorithms, and that SUVmax was the least reproducible measurement compared to SUVmean and SUVpeak [46]. The meta-analysis by Ghooshkhanei et al. in endometrial cancer illustrates this issue: three studies reported an association between preoperative SUVmax and disease-free and/or overall survival, yet the HR in each study was calculated according to a different SUVmax cut-off (12.7, 17.7, and 8.35), highlighting its variability, all the more so when the PET machines are different [47]. The prognostic value of the SUVmax of the primary tumour is still equivocal in rhabdomyosarcoma. Baum et al.'s study of 41 patients failed to prove that either primary tumour intensity or SUVmax/SUVliver was an independent predictor of OS and PFS [39]. Neither did El-Kholy et al.'s more recent study, despite a larger population of 98 patients [14]. By contrast, Casey et al.'s study involving 107 patients showed that a SUVmax threshold of 9.5 for the primary tumour was an independent predictor of OS and EFS [11]. We demonstrated that SUVpeak had the same prognostic value for OS and PFS, with an HR identical to that of SUVmax but a lower p value. As SUVpeak is calculated by averaging the SUV values of the voxels adjacent to the SUVmax voxel within a 1 cm3 volume, it is less exposed to the sources of variation that affect SUVmax, particularly background activity, variations in equipment, imaging acquisition and reconstruction protocols, and the time between injection and PET acquisition. It therefore seems to be more precise and robust than SUVmax, as has already been shown in numerous studies [21,48,49]. In our study, the SUVmax and SUVpeak values did not appear to be more predictive when adjusted for MTV in the multivariate analysis. This suggests that they act as confounding factors in the prediction of death or progression, and that it is preferable to consider only MTV. Among the other factors we evaluated, bone lysis induced by primary tumour contiguity was an independent prognostic factor (Fig 3).
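SUVpeak, as described above, averages the SUV over a small neighbourhood around the hottest voxel, which is what makes it less volatile than SUVmax. The sketch below computes one common variant, the mean SUV within a 1 cm3 sphere centred on the SUVmax voxel, on a synthetic volume; vendor implementations differ in how the sphere is positioned, so treat this as illustrative only.

```python
import numpy as np

def suv_peak(suv, voxel_dims_mm=(4.0, 4.0, 4.0)):
    """Mean SUV within a ~1 cm^3 sphere centred on the SUVmax voxel."""
    suv = np.asarray(suv, dtype=float)
    centre = np.unravel_index(np.argmax(suv), suv.shape)
    radius_mm = (3.0 * 1000.0 / (4.0 * np.pi)) ** (1.0 / 3.0)   # 1 cm^3 sphere, ~6.2 mm

    zz, yy, xx = np.indices(suv.shape)
    dist_mm = np.sqrt(((zz - centre[0]) * voxel_dims_mm[0]) ** 2 +
                      ((yy - centre[1]) * voxel_dims_mm[1]) ** 2 +
                      ((xx - centre[2]) * voxel_dims_mm[2]) ** 2)
    return float(suv[dist_mm <= radius_mm].mean())

volume = np.random.default_rng(2).gamma(2.0, 2.0, size=(30, 30, 30))  # placeholder SUVs
print(f"SUVmax  = {volume.max():.2f}")
print(f"SUVpeak = {suv_peak(volume):.2f}  # averaged, so less sensitive to single-voxel noise")
```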
Even if the possibility of complete surgical excision of the primary tumour is a known prognostic factor, partly dependent on its locoregional spread and therefore on bone invasion [26], we did not find any study that specifically assessed this parameter in childhood RMS. In a clinical data review of 874 adults treated for soft-tissue sarcoma, bone invasion was a prognostic factor [50]. However, none of the 48 patients with bone disease had RMS. In other pathologies such as Hodgkin's or non-Hodgkin's lymphoma, bone invasion from a lymph node in localized disease is not considered to carry a worse prognosis and does not lead to any change in disease staging [51]. It would be worthwhile to confirm our result in a further prospective study. We did not find any difference in OS and PFS for histological type or Stages 1, 2 or 3. Only Stage 4 was linked to prognosis. Nowadays, the therapeutic escalation in the RMS 2005 protocol according to the prognostic factors at onset may improve patients' survival. It is only in patients with advanced disease that treatment has not been sufficiently effective, as reported by the Children's Oncology Group Soft-Tissue Sarcoma Committee [52]. The main limiting factor of our study was the population size, despite the recruitment of patients from six major French hospitals and cancer centres. Nationwide collection of patient data would allow us to validate our results in the future. Given the therapeutic challenges in a paediatric population, with the risk of developing secondary toxicities either immediately following treatment (vincristine-induced neuropathies, doxorubicin-induced heart failure) or later on (secondary blood diseases, neoplasia, or post-radiotherapy morphological sequelae), the inclusion of PET parameters such as metabolic volume and SUV in the decision-making trees for the management of RMS could make it possible to adapt patients' therapeutic management. Conclusion This multicentre study, a collaboration of six French university hospitals, confirmed the prognostic value of pretreatment PET in childhood RMS. Moreover, to our knowledge, this is the first time that MTV has emerged as a prognostic parameter in this disease. By considering the MTV of the primary tumour and the largest distant lesion, where relevant, to gauge the actual metabolic tumour burden, we showed that an MTV > 200 cm3 is prognostic for survival, with the risk of death or progression multiplied by approximately 2.5. These results should be prospectively validated in a larger patient population. Given the therapeutic challenges in a paediatric population and the risk of developing secondary toxicities, our study provides an additional argument for including metabolic PET parameters in the decision-making trees for the management of RMS, and it could help to adapt patients' therapeutic management. Multicentre international studies, especially those of the European paediatric Soft tissue sarcoma Study Group, have so far focused on treatment. In the future, it would be of interest to methodically collect patients' PET data, analyse the PET parameters, and study their relationship with survival. The prognostic advantage of measuring MTV should thereby be confirmed in a further prospective multicentre study involving a larger patient population. Thus, future protocols could include PET data to classify patients into the different treatment groups and to specify the management of children with RMS.
v3-fos-license
2018-04-03T00:10:46.566Z
2009-01-01T00:00:00.000
10246936
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.tandfonline.com/doi/pdf/10.3109/17453670903035559?needAccess=true", "pdf_hash": "79cf1a6194267b1073f0e97ef611cd4b8c3e56fe", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42646", "s2fieldsofstudy": [ "Medicine" ], "sha1": "79cf1a6194267b1073f0e97ef611cd4b8c3e56fe", "year": 2009 }
pes2o/s2orc
Similar clinical outcome after unicompartmental knee arthroplasty using a conventional or accelerated care program Background and purpose Over the last 5 years, there has been increasing interest in reducing length of hospitalization (LOS) through accelerated programs. We examined the clinical outcome of patients undergoing a unicompartmental knee replacement (UKR) in an accelerated care program (A group) compared to a conventional care program (C group). Methods 40 patients randomized into 2 groups were included (A group: 17 patients; C group: 23 patients). Nausea, micturition problems, lower limb dysfunction, pain (VAS), opiate consumption, Knee Society score (KSS), day of discharge, rehospitalization within 3 months, contact with a general physician or nurse, and level of satisfaction were registered. Patients in the A group attended an information meeting. An intraarticular infiltration with Marcaine and adrenaline was used peroperatively. Patients in the C group had an epidural pump for 2 or 3 days. Patients in the A program were treated with NSAID and paracetamol postoperatively. Opiates were used in both groups in the case of breakthrough pain. The patients were considered ready for discharge when they were able to climb stairs to the second floor within 5 min. Results The median length of stay was 1 (1–3) day in the A group and 6 (4–7) days in the C group. The median pain score (VAS) at day 0 was 1 (0–3) in the A group and 5 (0–8) in the C group (p < 0.001). 11/23 of the patients in the C group had weakness of the lower limbs on day 1 due to the epidural; all patients in the A group were exercising on the day of the operation. Micturition problems necessitating intermediate catherization were more frequent in patients in the C program (19/23) than in patients in the A programme (3/17) (p = 0.001). There were no statistically significant differences between the two groups concerning nausea, average pain on days 1 and 2, use of opioids (during the first week postoperatively), KSS, contact with primary sector, level of satisfaction, or level of confidence. Interpretation We achieved a reduction in LOS of 5 days without affecting the clinical outcome. Background and purpose Over the last 5 years, there has been increasing interest in reducing length of hospitalization (LOS) through accelerated programs. We examined the clinical outcome of patients undergoing a unicompartmental knee replacement (UKR) in an accelerated care program (A group) compared to a conventional care program (C group). Methods 40 patients randomized into 2 groups were included (A group: 17 patients; C group: 23 patients). Nausea, micturition problems, lower limb dysfunction, pain (VAS), opiate consumption, Knee Society score (KSS), day of discharge, rehospitalization within 3 months, contact with a general physician or nurse, and level of satisfaction were registered. Patients in the A group attended an information meeting. An intraarticular infiltration with Marcaine and adrenaline was used peroperatively. Patients in the C group had an epidural pump for 2 or 3 days. Patients in the A program were treated with NSAID and paracetamol postoperatively. Opiates were used in both groups in the case of breakthrough pain. The patients were considered ready for discharge when they were able to climb stairs to the second floor within 5 min. Results The median length of stay was 1 (1-3) day in the A group and 6 (4-7) days in the C group. 
The median pain score (VAS) at day 0 was 1 (0-3) in the A group and 5 (0-8) in the C group (p < 0.001). 11/23 of the patients in the C group had weakness of the lower limbs on day 1 due to the epidural; all patients in the A group were exercising on the day of the operation. Micturition problems necessitating intermediate catherization were more frequent in patients in the C program (19/23) than in patients in the A programme (3/17) (p = 0.001). There were no statistically significant differences between the two groups concerning nausea, average pain on days 1 and 2, use of opioids (during the first week postoperatively), KSS, contact with primary sector, level of satisfaction, or level of confidence. Interpretation We achieved a reduction in LOS of 5 days without affecting the clinical outcome.  The length of hospitalization after surgery has decreased considerably over the last 2 decades, including major orthopedic surgery. In 1985 the typical length of stay (LOS) for patients undergoing a unicompartmental knee replacement (UKR) was 15 days (Robertsson et al. 2000) and today it is approximately 5 days (Fisher et al. 1997). In recent years, there has been an increasing interest in reducing LOS using a fast-track multimodal regime. This regime has mainly been implemented in cardiac surgery (Djaiani et al. 1999), gastrointestinal surgery (Kehlet 2005), and in orthopedic surgery , Larsen et al. 2008. In this randomized, controlled study we compared the clinical outcome of patients undergoing a UKR at the Department of Orthopaedic Surgery, Frederiksberg University Hospital in an accelerated program (A) or in a conventional program (C). Patients and methods Over a 1-year period (February 2004 through January 2005) 402 patients had a knee arthroplasty in our department. 96 of these had a UKR. All patients undergoing a UKR were asked successively to participate in the study, until there were 50 patients for inclusion. The inclusion criteria were: resident in the County of Copenhagen, ASA I or II, no medical history of gastrointestinal bleeding, and someone to look after the patient after discharge. The exclusion criteria were: major psychiatric disease, patients incapable of managing his/her own affairs, inflammatory joint disease, neurological or other disease(s) affecting the lower limbs, and previous major surgery of the knee. The patients were randomized (using sealed envelopes) into 2 groups, the conventional program group (C) and the accelerated program group (A). The Ethics Committee of the County of Copenhagen (KF 01-231/03) approved the study, and all patients gave informed consent. The protocol for anesthesia was violated in 3 patients in the A group due to unforeseen events after the randomization, but before the operation. Also, 3 patients scheduled for UKR decided peroperatively to have TKR instead (1 in group A and 2 in group C). 1 patient in group A and 2 patients in group C withdrew their consent regarding participation after randomization, at different stages of the project. Finally, surgery was cancelled for 1 patient in group A because of hypertension. Thus, the study was based on 23 patients in group C and 17 patients in group A. The groups were similar regarding anthropomorphometrical data (Table). The perioperative care program Patients in the A group attended a preoperative information meeting at which an orthopedic nurse, an anesthesiologist, and a physiotherapist thoroughly informed the patients about the planned procedure. 
Patients in the A group had spinal anesthesia with 3 mL bupivacaine (5 mg/mL) with 5 µg sufentanil added. At the end of surgery, the tissues around the knee joint were infiltrated with 50 mL bupivacaine (2.5 mg/mL) with adrenaline (5 µg/mL). Patients in the C group were anesthetized using a combined spinal/epidural technique with an indwelling epidural catheter, which was used for continuous infusion with 5 mL/h of bupivacaine (1.25 mg/mL) and morphine (50 µg/mL) for 2 days postoperatively. All patients had surgery performed by a consultant surgeon. Minimally invasive surgery was used in both programs, and all patients were operated on without a catheter. In the A group, postoperative pain was treated with NSAIDs. Opioids were used in both groups in the case of breakthrough pain. All patients were encouraged to walk and were assisted by a trained physiotherapist every day starting on the day of surgery, and this physiotherapist recorded VAS scores. The patients were considered ready for discharge when they were able to climb stairs to the second floor within 5 min, and this information was given to the patients before the operation. In the A group, a 24-hour contact line was established, and the patients were informed of the opportunity to call for hospital help/assistance after discharge. Furthermore, the personal nurse checked the well-being of the patients by calling them by phone on the day after discharge. Outcome measures The patients were followed according to a strict scheme at 2, 6, 26, and 52 weeks postoperatively. The Knee Society score (Insall et al. 1989) was registered. This is subdivided into a knee score that rates only the knee joint itself and a functional score that rates the patient's ability to walk and climb stairs. It was determined preoperatively and 6 months postoperatively. The following variables were also registered: nausea, micturition problems (at a level where intermittent catheterization was necessary), lower limb weakness caused by the epidural infusion (patient unable to walk), pain (VAS) at mobilization, consumption of opioids (conversion to morphine using a narcotic conversion table; www.medicin.dk), day of discharge, level of confidence ("how confident did you feel at discharge?"), rehospitalization within 3 months of discharge, phone contact with a general practitioner, contact with a nurse, and level of satisfaction ("how satisfied or dissatisfied were you with the operation and the perioperative period?") using a Likert scale with 5 categories. Statistics Sample-size calculations were performed with LOS as the effect parameter. The expected difference in LOS was 3 days, and the standard deviations (SDa and SDc) were 3 days. With power set to 90% and α set to 0.05, 44 patients would be required. With an estimated dropout rate of 10%, 50 patients would be needed. 10 patients were excluded after randomization. Since they were not different from the rest of their group (by t-test), we decided to use a per-protocol analysis of the 40 patients fulfilling the whole program. All comparisons were performed with parametric or non-parametric tests according to the distribution of the data. The level of significance was chosen to be p < 0.05. Results The length of stay was 1 (1-3) day in the A group and 6 (4-7) days in the C group. The median pain score (VAS) for day 0 was 1 (0-2.5) in the A group and 5 (0-7.8) in the C group (p < 0.001); for days 1 and 2 it was similar (median 2).
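The sample-size figures quoted above follow from the standard two-sample normal-approximation formula, and the arithmetic is easy to reproduce. The sketch below is not the authors' calculation (their software and rounding are not stated), which is why the dropout-adjusted total lands one patient short of the 50 they quote.

```python
import math
from scipy.stats import norm

# Sketch: two-sample normal-approximation sample-size calculation matching the
# figures quoted above (LOS difference 3 days, SD 3 days, alpha 0.05 two-sided, power 90%).
alpha, power = 0.05, 0.90
delta, sd = 3.0, 3.0

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_beta = norm.ppf(power)            # ~1.28

n_per_group = 2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2
total = 2 * math.ceil(n_per_group)
print(f"{n_per_group:.1f} per group -> {total} patients in total")   # 44
print(f"Allowing ~10% dropout: {math.ceil(total / 0.9)}")            # the paper rounded up to 50
```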
11/23 of the patients in the C group had weakness of the lower limbs on day 1, due to the epidural infusion preventing them from exercising sufficiently. All patients in the A group had exercise on the day of operation. Micturition problems necessitating catherization were more frequent in patients in the C group (19/23) than in patients in the A group (3/17) (p = 0.001). There was no statistically significant difference between the groups in the use of morphine during the first postoperative week. Approximately one-half of the patients in both groups experienced nausea during hospitalization. At discharge, 18 of the 23 of the patients in the C group and 13 of the 17 patients in the A group were "very confident" or "confident". After discharge, one-quarter of the patients in both groups had contact with a home nurse. 4/17 patients in the A group and 2/23 in the C group had telephone contact with their general practitioner. No patients were re-admitted to hospital after discharge. At 6 months postoperatively, the knee score and function score were 93 (35-100) and 90 (35-100) points, respectively, in the C group and 95 (64-100) and 100 (70-100) points in the A group; the differences were not statistically significant. Level of satisfaction was the same in the groups: 11/17 patients in the A group and 14/23 patients in the C group were very satisfied with the operation and with the perioperative period. Discussion and conclusion We accomplished a reduction in LOS of 5 days, from a median of 6 days in the conventional program to 1 day in the accelerated program. This reduction was achieved without affecting the clinical outcome. Furthermore, compared to the conventionally treated group, patients in the accelerated program had fewer problems during rehabilitation and had reduced use of opiates postoperatively. There have been a limited number of studies on accelerated programs concerning hip or knee arthroplasties (Fisher et al. 1997, Swanson et al. 1998, Beard et al. 2002, Isaac et al. 2005, Reilly et al. 2005, Vanhaecht et al. 2005. Only 3 of these have been randomized controlled trials (Swanson et al. 1998, Reilly et al. 2005, Larsen et al. 2008. Reilly et al. (2005) measured LOS in 41 patients undergoing UKR and found similar results to ours, thus reducing LOS from 4 days in the conventional program to 1 day in their accelerated program. They also found that this reduction in LOS was achieved without impairment of the clinical out-come. It is of interest that despite the wider inclusion criteria we used, we achieved similar results. Reilly et al. set an upper age limit of 75 years and only patients with NSAID tolerance were included, whereas in our study there was no upper age limit and only patients with a medical history of gastrointestinal bleeding were excluded. Furthermore, in contrast to our study Reilly et al. excluded patients with diabetes, previous heart surgery, or deep vein thrombosis. In the recent study by Larsen et al. (2008), the setup was very similar to the one in our study except that THR, TKR, and UKR were all included. 87 patients were randomized into an accelerated group and a conventional group (THR, n = 56; TKR, n = 27; and UKR, n = 4). LOS was reduced from 8 days in the control group to 5 days in the intervention group. It is difficult to compare the results of that study with ours due to the inclusion of three different patient groups. UKR is a smaller surgical trauma than THR and TKR. 
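Categorical outcomes such as the catheterisation counts above were compared with chi-square or Fisher tests. As an illustration, the snippet below runs a two-sided Fisher exact test on the reported 2x2 table; it is a sketch of the type of test used, and the exact p value it returns may differ slightly from the published 0.001.

```python
from scipy.stats import fisher_exact

# Sketch: two-sided Fisher exact test on the reported catheterisation counts
# (19 of 23 in the conventional group vs 3 of 17 in the accelerated group).
#        needed catheter, no catheter
table = [[19, 23 - 19],   # C group
         [3, 17 - 3]]     # A group

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
# The paper reports p = 0.001; small differences can arise from the test variant
# or software used.
```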
It is also well known that patients undergoing UKR are younger, and the expected LOS is therefore shorter. In the study by Larsen et al., one of the discharge criteria was 90 degrees of knee flexion. In our study, this degree of flexion was achieved after discharge. Our findings suggest that the degree of knee flexion before discharge has no relevance; we found no difference in knee function scores between the intervention group and the conventional group. Our accelerated program eliminated lower limb weakness and reduced the number of patients with micturition problems. These 2 improvements are probably due to pain control without epidural infusion. Furthermore, intraarticular infiltration of bupivacaine/adrenaline seems to be a reasonable explanation for the low VAS scores at day 0 in the accelerated program. This is in accordance with the findings of Ranawat and Ranawat (2007) and of Parvataneni et al. (2007). The reduction in LOS in our study was achieved without affecting the level of satisfaction or level of confidence, as was also found in 2 other studies. Our findings and those of others support the concept of accelerated postoperative care. LB was the investigator and AB was the principal investigator. All authors participated in planning of the study and preparation of the manuscript.
Combined Transcriptomic and Proteomic Profiling of E. coli under Microaerobic versus Aerobic Conditions: The Multifaceted Roles of Noncoding Small RNAs and Oxygen-Dependent Sensing in Global Gene Expression Control

Adaptive mechanisms that facilitate intestinal colonization by the human microbiota, including Escherichia coli, may be better understood by analyzing the physiology and gene expression of bacteria in low-oxygen environments. We used high-throughput transcriptomics and proteomics to compare the expression profiles of E. coli grown under aerobic versus microaerobic conditions. Clustering of high-abundance transcripts under microaerobiosis highlighted genes controlling acid-stress adaptation (gadAXW, gadAB, hdeAB-yhiD and hdeD operons), cell adhesion/biofilm formation (pgaABCD and csgDEFG operons), electron transport (cydAB), oligopeptide transport (oppABCDF), and anaerobic respiration/fermentation (hyaABCDEF and hycABCDEFGHI operons). In contrast, downregulated genes were involved in iron transport (fhuABCD, feoABC and fepA-entD operons), iron-sulfur cluster assembly (iscRSUA and sufABCDSE operons), aerobic respiration (sdhDAB and sucABCDSE operons), and de novo nucleotide synthesis (nrdHIEF). Additionally, quantitative proteomics showed that the products (proteins) of these high- or low-abundance transcripts were expressed consistently. Our findings highlight interrelationships among energy production, carbon metabolism, and iron homeostasis. Moreover, we have identified and validated a subset of differentially expressed noncoding small RNAs (i.e., CsrC, RyhB, RprA and GcvB), and we discuss their regulatory functions during microaerobic growth. Collectively, we reveal key changes in gene expression at the transcriptional and post-transcriptional levels that sustain E. coli growth when oxygen levels are low.

Introduction
Escherichia coli is a Gram-negative commensal bacterium that commonly inhabits the intestines of humans and other animals under microaerobic or anaerobic conditions. Previous studies have shown that E. coli growth at different concentrations of oxygen involves substantial reprogramming of gene expression controlled by several transcription factors.

For the present analyses, the E. coli MG1655 strain was grown on minimal medium containing glucose as a carbon source under continuous aerobic or microaerobic conditions. These experiments were carried out in a benchtop fermentor (Winpact Parallel Fermentation System FS-05-220) to obtain multiple biological replicates (namely, biological replicates of 5 aerobic and 10 microaerobic cultures) for further analysis. Cell doubling times of aerobic and microaerobic cultures were 77.9 ± 8.6 and 245.5 ± 24.7 min, respectively. To prepare total RNA [17] or protein samples, cells were grown to mid-logarithmic phase, corresponding to an OD460 of 0.5-0.6. Samples of purified RNA were used for the RNA deep-sequencing analyses, and the same batches of RNA were used for Northern blot validation. Details of the bioreactor culture conditions, RNA deep-sequencing procedures, and reagents used are provided in the Materials and Methods. More than 9 million raw sequencing reads were generated for each RNA sample. After trimming the raw sequencing reads, the high-quality unique sequences were mapped to the E. coli K12 substrain MG1655 reference genome (NC_000913) [18]. On average, mapping coverages of 94.6% and 88.8% were obtained for total unique sequence reads under aerobic and microaerobic growth conditions, respectively (Table S1).
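The doubling times quoted above (77.9 ± 8.6 min aerobic versus 245.5 ± 24.7 min microaerobic) can be obtained by fitting log-transformed OD460 readings from the exponential phase. A minimal sketch follows, using hypothetical readings rather than the study's actual data; the exact sampling scheme is not specified here.

```python
import numpy as np

def doubling_time(times_min, od_values):
    """Estimate doubling time (min) from exponential-phase OD readings by
    linear regression of ln(OD) against time: OD(t) = OD0 * exp(k * t)."""
    k, _ = np.polyfit(times_min, np.log(od_values), 1)  # slope = growth rate k
    return np.log(2) / k

# Hypothetical example readings (not actual data from the study)
t = np.array([0, 30, 60, 90, 120])
od_aerobic = 0.05 * np.exp(np.log(2) / 78 * t)  # simulated culture doubling every ~78 min
print(round(doubling_time(t, od_aerobic), 1))   # ~78.0 min
```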
With a threshold of ≥1 transcript per million mapped reads (TPM), we detected expression of 4388 and 4385 genes under aerobic and microaerobic growth conditions, respectively (NCBI GEO accession # GSE189154). To assess the robustness of our datasets, we calculated correlations across biological replicates. In Figure S1A, we show values for the correlation coefficients for all pairwise scatterplots obtained for aerobic or microaerobic samples. We identified strong correlations between biological replicates representing the same growth condition. More specifically, correlation coefficients for aerobic and microaerobic transcriptomes were greater than 0.97 and 0.91, respectively, implying that our results are highly reproducible. In addition, we performed principal component analysis (PCA) on a combined dataset that successively maximized variance across all datasets (i.e., datasets O-1~O-5 and N-1~N-10). Our PCA revealed two distinct groups (i.e., O-1~O-5 and N-1~N-10) corresponding to aerobic and microaerobic cultures, respectively (Figure S1B). Together, our correlation and PCA analyses justify the use of all our RNA-seq datasets for further gene expression analyses.

Defining Major Gene Clusters Involved in Adaptation to Microaerobiosis
To identify differentially expressed genes (DEGs), we ranked the processed data on detected gene transcripts under aerobic (4388 genes) and microaerobic (4385 genes) conditions according to false discovery rate (FDR)-adjusted p-values and log2 fold-change (FC).

Identification of DEGs under Microaerobic versus Aerobic Conditions
To compare the gene expression patterns of microaerobic and aerobic cultures, we calculated the normalized transcript expression values (expressed as TPM) for each gene with a p-value ≤ 0.05 and selected 280 DEG transcripts displaying log2 fold-change ≥ 2, representing 176 upregulated and 104 downregulated genes, respectively (Figure 1A). Next, we filtered these DEGs using an FDR threshold of −log10 (p-value) ≥ 1 (Figure 1B), which resulted in 176 DEGs (105 upregulated and 71 downregulated genes) that showed significant differential expression.
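The two-step filter described above (|log2 FC| ≥ 2 at p ≤ 0.05, then −log10 of the FDR-adjusted p-value ≥ 1) can be expressed compactly as shown below. This is a minimal sketch assuming a per-gene table of fold-changes and adjusted p-values; the column names and toy values are hypothetical, not taken from the study's dataset.

```python
import numpy as np
import pandas as pd

def filter_degs(df, lfc_col="log2_fc", p_col="p_value", fdr_col="fdr"):
    """Apply the two thresholds used above: |log2 FC| >= 2 with p <= 0.05,
    then -log10(FDR-adjusted p) >= 1 (equivalent to adjusted p <= 0.1)."""
    step1 = df[(df[p_col] <= 0.05) & (df[lfc_col].abs() >= 2)]
    step2 = step1[-np.log10(step1[fdr_col]) >= 1]
    up = step2[step2[lfc_col] > 0]
    down = step2[step2[lfc_col] < 0]
    return up, down

# Hypothetical toy table
genes = pd.DataFrame({
    "gene": ["cydA", "ftnA", "sdhC", "nrdH"],
    "log2_fc": [3.1, 4.2, -2.6, -3.4],
    "p_value": [0.001, 0.004, 0.02, 0.0005],
    "fdr": [0.01, 0.03, 0.08, 0.005],
})
up, down = filter_degs(genes)
print(len(up), len(down))  # 2 upregulated and 2 downregulated in this toy example
```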
Figure 1. (A) DEGs meeting the fold-change criterion are shown as red (176 upregulated genes) or green (104 downregulated genes) dots. (B) Volcano plot of the log2 FC plotted against the −log10 FDR p-value; the horizontal black line indicates the significance threshold (−log10 p-value ≥ 1), and the vertical black lines indicate the FC threshold (absolute value of log2 FC ≥ 2). DEGs meeting both criteria are shown as 105 upregulated (red dots) and 71 downregulated (green dots) genes.

Functional Clusters of DEGs
We present a comprehensive overview of the identified DEGs in Table 1, revealing key gene clusters, genes/operons, biological functions, small regulatory RNAs, and related transcription factors involved in adaptation to changing oxygen conditions. Upregulated (Figure 2A) and downregulated DEGs (Figure 2B) were functionally classified into three general gene ontology (GO) categories, i.e., biological processes, cellular components, and molecular functions. Based on significant fold enrichment (≥10 compared with reference genes in the same subcategory), the upregulated DEGs we identified were mainly found in the following GO subcategories: peptidoglycan-associated peptide transport, oxidative phosphorylation, Ni-Fe hydrogenase complex, peptidoglycan peptide transmembrane transporter activity, peptidoglycan transmembrane transporter activity, hydrogenase (acceptor) activity, and oxidoreductase activity acting on hydrogen as donor (Figure 2A). Through the same process, we identified 76 GO subcategories for downregulated DEGs, including TCA cycle, several transport processes (such as ferric hydroxamate import into cell, iron import into cell, copper ion export), ion homeostasis, chemical homeostasis, enterobactin biosynthetic process, nonribosomal peptide biosynthetic process, secondary metabolite biosynthetic process, lactone metabolic process, antibiotic metabolic process, stress response to metal ion, detoxification of inorganic compound, energy transducer activity, signaling receptor activity, and Fe2S2 cluster binding, among others (Figure 2B).
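The fold-enrichment criterion used above (≥10 relative to reference genes in the same subcategory) corresponds to the proportion of DEGs annotated to a GO term divided by the proportion expected from the genome background; significance is commonly assessed with a hypergeometric test. The sketch below illustrates that calculation under those assumptions, with invented counts; it is not necessarily the exact procedure of the GO tool used here.

```python
from scipy.stats import hypergeom

def go_enrichment(deg_in_term, deg_total, term_genes, genome_genes):
    """Fold enrichment and one-sided hypergeometric p-value for one GO term."""
    fold = (deg_in_term / deg_total) / (term_genes / genome_genes)
    # P(X >= deg_in_term) when drawing deg_total genes from the genome
    p = hypergeom.sf(deg_in_term - 1, genome_genes, term_genes, deg_total)
    return fold, p

# Hypothetical numbers: 8 of 105 upregulated DEGs fall in a 30-gene GO term
fold, p = go_enrichment(deg_in_term=8, deg_total=105, term_genes=30, genome_genes=4388)
print(round(fold, 1), f"{p:.2e}")  # ~11.2-fold enrichment
```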
To better define the molecular interactions, reactions, and relationship network of the biogenesis pathways that are differentially affected under microaerobic versus aerobic conditions, we conducted a Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis (p-value ≤ 0.05) on the DEGs. This analysis revealed four and seven KEGG pathways that were upregulated or downregulated, respectively (Figure 2C,D, left panels, respectively). The upregulated KEGG pathways were β-alanine metabolism, nitrotoluene degradation, quorum sensing, and β-lactam resistance, whereas the downregulated KEGG pathways were propanoate metabolism, carbon metabolism, biosynthesis of secondary metabolites, ABC transporters, biosynthesis of antibiotics, TCA cycle, and biosynthesis of siderophore group nonribosomal peptides. We also employed the UniProt Knowledgebase (UniProtKB) keyword database (EMBL-EBI, Cambridge, UK; SIB Swiss Institute of Bioinformatics, Geneva, Switzerland; PIR, Washington, DC, USA) to further characterize the retrieved pathways pertaining to DEGs. By assigning DEGs to functional, structural, or other UniProtKB keyword categories, we essentially generated a highly similar classification to those determined from our GO and KEGG pathway analyses. More specifically, UniProtKB keywords associated with upregulated DEGs were iron, electron transport, peptide transport, and membrane, whereas copper, phosphopantetheine, copper transport, transmembrane beta strand, ligase, Fe2S2, receptor, bacteriocin transport, TonB box, TCA cycle, transport, enterobactin biosynthesis, iron, ion transport, and iron transport were all associated with downregulated DEGs (Figure 2C,D, right panels, respectively).

Prophage- and Phage-Related Genes
Prophage-related genes constitute up to 13.5% of the E. coli genome [34,35], contributing to bacterial survival in hosts by increasing cell fitness and virulence. Recent studies have revealed that the expression of such prophage-related genes can (i) increase E. coli resistance to adverse conditions such as exposure to antibiotic, acid, oxidative, or osmotic stress; and (ii) influence metabolic remodeling, biofilm formation, cell movement, and growth [36][37][38][39][40]. Although our recent works revealed that one such prophage gene, dicF [41], plays a critical role in regulating cell division under anaerobic conditions [42], how other prophage-related genes are expressed and function under oxygen-limited conditions remains unclear. To identify other prophage-related genes that potentially contribute to cell fitness and survival in microaerobic environments, we compared the expression of prophage- and phage-related genes under microaerobic versus aerobic conditions. Using our RNA-seq transcriptomic dataset and based on 245 known prophage-related genes within 134 operons [2,33], we detected the expression of 200 prophage-related genes within 118 operons, reflecting 132 upregulated and 68 downregulated genes. As summarized in Table 2, highly upregulated (log2 FC ≥ 1.5) prophage- and phage-related genes could be assigned to 15 operons, whereas the downregulated ones were solely localized in the fhuACDB operon that codes for proteins involved in ferrichrome transport. Notably, FhuA protein can also serve as a phage receptor [43].
Table 2 note: transcript changes are given as log2 fold-change values and protein changes as abundance ratios; undetected transcripts or proteins are indicated by "X".

Interestingly, some of these transcripts encode TFs that control metal ion homeostasis; for instance, CusR for copper, and IscR, Dps, and FecI for iron. Theoretically, when the level of a TF increases, its target genes should be controlled accordingly, depending on whether the TF is a positive or negative regulator. We selected the IscR regulon for further analysis. Levels of the iscR mRNA decreased ~4.4-fold (log2 = 2.13; Table 3) under microaerobic conditions relative to those under aerobic growth, and the level of the respective protein, IscR, consequently also diminished (~1.3-fold; Table S2). This reduction in IscR abundance, along with an anticipated reduction in [Fe2S2] iron-sulfur cluster availability (Table 1) required for IscR activity, could be accountable, at least in part, for the nearly 3-fold upregulation of torT (log2 = 1.67; Table 4). The changes in IscR abundance had the most pronounced effect on the nrdHIEF operon, the genes of which exhibited a 10.8-59.3 fold-change (log2 = 3.43 to log2 = 5.89 in Table 4) in downregulated DEG expression.

Tables 3 and 4 note: log2 fold-change values for up- and downregulated mRNAs and abundance ratios of increased and decreased proteins are indicated in red and green, respectively; undetected molecules are marked by "X"; *: no detected value under the aerobic condition, leading to a very large protein abundance ratio that is reported as "100".

The RyhB sRNA acts as a global regulator of iron homeostasis [10]. We observed extremely low abundance of this sRNA under microaerobic conditions (~8-fold less compared to levels under aerobic conditions; log2 = 3, Table 5) and it was not detectable by Northern blotting (see Result Section 2.3), an outcome consistent with Fur-dependent repression of ryhB transcription.

Identification of Differentially Expressed sRNAs and Northern-Blot-Based Validation
sRNAs are common in bacteria, where they play critical roles in regulating a wide range of cellular functions [45]. Our RNA-seq dataset also revealed differential expression of a number of sRNAs under microaerobic and aerobic conditions. Of the 64 known sRNAs in E. coli [2,33], 18 exhibited >1.5-fold difference in abundance under microaerobic versus aerobic conditions (Figure 3A). We employed Northern blot analysis to validate these results. Consistently, we observed that the abundance of RyhB was dramatically reduced under oxygen-limited conditions, rendering it almost undetectable under microaerobiosis (Figure 4A, second panel from right). In contrast, the levels of several other sRNAs (e.g., CsrC, GcvB and RprA) were considerably higher under microaerobiosis (Figure 4A). We detected two or more species of some sRNAs (e.g., GadY and RprA (Figure 4A), GlmY, and RyeA (Figure 4B)).
To test whether the increase/decrease in abundance of certain sRNAs could be attributable to their higher metabolic stability, we also determined the half-lives of three sRNAs (namely CsrB, CsrC, and RyhB) by inhibiting their transcription by means of rifampicin treatment, and then determined their time-dependent decrease in abundance using Northern blot analysis. As shown in Figure 4A-C,E-G, the half-lives of both CsrC and CsrB increased moderately under microaerobiosis (from 4.3 to 6.9 min and from 3.8 to 5.9 min, respectively), though only the steady-state level of CsrC increased dramatically (Figure 5H). This outcome indicates that it is more efficient transcription, rather than increased RNA stability, that is responsible for the higher CsrC abundance under microaerobic conditions. Given that RyhB was almost undetectable in cells cultured under oxygen-limited conditions, we could not directly compare the half-life of this sRNA under different conditions (Figure 5I-L). In addition to validating sRNA levels by Northern blotting, we also assessed levels of Hfq [46], an RNA chaperone that plays an important role in facilitating sRNA/mRNA interactions, by means of Western blotting, which revealed only minor differences in abundance under aerobic and microaerobic conditions (Figure 5M,N).
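Half-lives such as those reported for CsrB and CsrC are typically obtained by fitting the post-rifampicin Northern blot signal to first-order decay. The sketch below assumes band intensities already normalized to the time-zero value and uses invented numbers; the exact fitting procedure used in this study is not described here.

```python
import numpy as np

def rna_half_life(times_min, fraction_remaining):
    """Fit ln(signal) versus time after rifampicin addition to first-order decay
    S(t) = S0 * exp(-k * t) and return the half-life ln(2)/k in minutes."""
    slope, _ = np.polyfit(times_min, np.log(fraction_remaining), 1)
    return np.log(2) / -slope

# Hypothetical time course (fractions of the time-zero signal, not real data)
t = np.array([0, 2, 4, 8, 16])
signal = np.array([1.00, 0.72, 0.52, 0.27, 0.07])
print(round(rna_half_life(t, signal), 1))  # ~4.2 min, in the range reported for CsrC
```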
Figure 3 caption fragment: read coverage for the indicated sRNAs (left and middle panels, respectively) and the csrA mRNA (right panel) within the E. coli genome; the y axis represents the number of RNA-seq reads, with maximum scales of 400,000 and 4000 for the sRNAs and csrA, respectively; the coding region of each gene is shown at the top of each panel, and expression under aerobic and microaerobic growth conditions is shown in blue and red, respectively.

Figure 5 caption fragment: mean values for CsrB (C), CsrC (G), and RyhB (K) half-lives under aerobic and microaerobic conditions (three biological repeats; bars represent standard error; the dotted gray line indicates 50% of total RNA remaining; black circles and blue squares represent signal intensities from aerobic and microaerobic cultures, respectively). CsrB, CsrC, and RyhB half-lives under aerobic conditions were calculated as 3.8 ± 0.2, 4.3 ± 0.4, and 7.4 ± 0.1 min (C,G,K), respectively, whereas under microaerobic conditions they were 5.5 ± 0.3 min, 6.1 ± 0.4 min, and no detectable signal (panels B,F,J), respectively. Relative steady-state levels (time 0) were normalized to aerobic conditions, arbitrarily set as 1: 1.03 ± 0.05-fold for CsrB (p-value = 0.49) (D) and 5.32 ± 0.51-fold for CsrC (p-value < 0.0001) (H); RyhB expression was not detectable (nd) under microaerobic conditions (L). (M) Hfq protein abundance analyzed via Western blotting, with GAPDH detected as a loading control; experiments were performed with three biological replicates and representative images are shown. (N) Quantification of the Hfq level, normalized to GAPDH and to the level under aerobiosis (set as 1); the difference between conditions was not statistically significant (p-value = 0.26).

Proteome Analysis Corroborates Differential Protein Abundance under Changing Oxygen Availability
To further validate our DEG analyses, we adopted a quantitative proteomic approach to analyze differential protein abundance under aerobic and microaerobic growth conditions. We conducted this analysis on aliquots of the same batches of cultured cells used for our above-described RNA-seq assays, encompassing two biological repeats for both growth conditions (samples O-1 and O-2 for aerobiosis; samples N-2 and N-3 for microaerobiosis).

Identification of Differentially Abundant Proteins under Microaerobic versus Aerobic Conditions
We deployed commercially available isobaric iTRAQ mass tags [47,48] to simultaneously analyze multiple biological samples.
The identical masses and chemical properties of these isobaric tags enabled co-elution of various isotopologues. The isobaric tags of peptides were cleaved by collision-induced dissociation (CID) during tandem mass spectrometry (MS/MS), before assessment of peptide fragment ions to define their sequence and quantitation of the isobaric tags. Peptide identification and relative quantitation were determined concurrently. In total, we identified 1498 and 1488 proteins from cells grown under aerobic or microaerobic conditions, respectively (Table S2). Pairwise scatterplots revealed strong correlations between the two biological repeats for the same growth condition (r = 0.98 and 0.97 for aerobic and microaerobic growth, respectively) (Figure S2). We used protein abundance ratios to identify proteins that were differentially abundant under microaerobic growth conditions relative to aerobic growth. Using 95% confidence intervals with two standard deviations (SD), we assumed that ratios >1.39 or <0.698 (i.e., ratios with more than a 1.39-fold increase or more than a 1.43-fold decrease in protein abundance) indicated significant changes. Accordingly, we set the protein abundance ratio threshold to 1.5 and found 113 and 92 proteins that had increased and decreased in abundance, respectively, in the E. coli MG1655 cells grown under microaerobic versus aerobic conditions (Table S3). We explore the consistency among our proteomic and RNA-seq datasets in the Discussion.

Table header/footnote fragment: columns list biological function, operon, and translational regulator (activator, inhibitor, or attenuator, according to the RegulonDB and EcoCyc databases) together with gene names (e.g., the rpsMKD-rpoA-rplQ operon); operon gene names in red indicate increased and in green decreased expression, with bold marking protein ratios ≥1.5 or ≤−1.5 and black indicating not detected.

Figure caption fragment: protein-protein interaction networks for (A) increased and (B) decreased differentially abundant proteins, generated using the STRING platform (https://string-db.org/). Abundance-increased proteins were involved in processes such as glycolysis, ATP metabolism, and coenzyme/small-molecule metabolism (shown in red, blue, and green, respectively); abundance-decreased proteins were involved in ribosome biogenesis, post-transcriptional regulation of gene expression, and peptide metabolism (shown in red, blue, and green, respectively).
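The significance bounds quoted above (ratios >1.39 or <0.698) correspond to the mean ± 2 SD of log-transformed abundance ratios across all quantified proteins. The sketch below reproduces that calculation under those assumptions, using a simulated ratio distribution rather than the study's iTRAQ values.

```python
import numpy as np

def ratio_cutoffs(ratios, n_sd=2):
    """Return lower/upper abundance-ratio cutoffs at mean +/- n_sd SD,
    computed on log2-transformed ratios."""
    log_r = np.log2(np.asarray(ratios))
    mean, sd = log_r.mean(), log_r.std(ddof=1)
    return 2 ** (mean - n_sd * sd), 2 ** (mean + n_sd * sd)

# Hypothetical ratio distribution centred near 1 (no real iTRAQ values used)
rng = np.random.default_rng(0)
ratios = 2 ** rng.normal(loc=0.0, scale=0.24, size=1500)
low, high = ratio_cutoffs(ratios)
print(round(low, 3), round(high, 3))  # roughly 0.7 and 1.4, close to the reported bounds
```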
Discussion
The ability of enterobacteria to colonize the digestive systems of mammals is dependent on their capacity to adapt and thrive in low-oxygen environments. Previous studies [1-16] of E. coli revealed that the transition from aerobic to microaerobic/anaerobic conditions requires a substantial reprogramming of gene expression, which greatly affects the bacterial lifestyle and major cellular functions such as metabolism, transport, and energy production. Nevertheless, many details of the underlying regulatory networks, including post-transcriptional mechanisms that coordinate E. coli adaptation and survival under oxygen-limited conditions, remain poorly defined and merit further analysis. In this study, we employed a combination of transcriptomic and proteomic approaches to analyze differences in RNA and protein abundance for E. coli grown in minimal medium under microaerobic versus aerobic conditions. Significantly, the E. coli cells were grown in a Bench-Top Fermentor, allowing us to control key parameters (i.e., temperature, composition of the medium, oxygen level, and pH) and thereby ensuring that growth conditions were equivalent for each experimental culture. Our transcriptomic analysis of aerobic and microaerobic cultures uncovered numerous upregulated and downregulated genes. Annotation and functional clustering of DEGs using the web-based tools available on the STRING, GO, KEGG, UniProtKB, RegulonDB, and EcoCyc internet platforms revealed that oxygen level directly influences carbon metabolism, energy production, metal ion homeostasis, and cell envelope functions. Previous studies have shown that the generation of proton motive force by cytochrome bo oxidase facilitates ATP production by E. coli ATP synthase under aerobic conditions [49]. However, this process can become less efficient at low oxygen concentrations, potentially requiring the action of a second cytochrome oxidase (i.e., cytochrome bd-1 oxidase) with a much higher affinity for molecular oxygen. Indeed, our transcriptomic and proteomic data clearly show that the cydABX operon, which encodes this secondary oxidase, was strongly upregulated in E. coli grown under microaerobic conditions (see Table 1). Moreover, the same operon hosts another upregulated gene, namely ndh, which encodes NADH ubiquinone oxidoreductase (a co-factor of cytochrome bd-1 oxidase), responsible for ubiquinol regeneration.
ATP production through oxidative phosphorylation is most efficient during aerobic growth, with the TCA cycle greatly contributing to this process by producing the NADH and FADH that feed into the respiratory cycle. The latter reaction occurs when succinate:quinone oxidoreductase (encoded by sdhCDA) converts succinate to fumarate and, concurrently, reduces ubiquinone to ubiquinol. Given the diminished role of oxidative phosphorylation under microaerobic conditions, the expression of genes coding for TCA cycle enzymes is likely reduced due to their repression by the FNR and ArcA TFs. Consistently, we observed downregulation of several operons coding for enzymes involved in steps 4 (sucAB; 2-oxoglutarate decarboxylase), 5 (sucCD; succinyl-CoA synthetase), 6 (sdhCDAB; succinate dehydrogenase), and 7 (fumAC; fumarase) of the TCA cycle. Furthermore, reduced production of these enzymes, which host numerous Fe-S clusters, implies a reduced cellular need for iron. Indeed, we found that many genes involved in iron homeostasis (fhuACDB, fepA-entD, fes-ybdZ-entF-fepE, tonB, feoABC) and the production of protein complexes responsible for iron incorporation into Fe-S clusters (sufABCDSE and iscRSUA) were repressed during microaerobic growth. Their repression is likely mediated by the TF Fur, which is activated under microaerobic conditions in the presence of free Fe2+ ions [10]. Simultaneously, Fur-dependent upregulation of ftnA (ca. 19-fold) and bfr (ca. 2.5-fold) elevates levels of the iron-storage proteins ferritin (2.4-fold) and bacterioferritin (1.6-fold), which efficiently sequester free iron atoms. Higher abundances of both these proteins are likely attributable to the lack of RyhB-mediated translational repression, since concentrations of this sRNA are extremely low during microaerobic growth (see below for further details). Our transcriptomic analysis also highlighted an enhanced role for mixed-acid fermentation (MAF) during microaerobic growth, arising from ArcA- and FNR-mediated upregulation. E. coli cells employ MAF to convert glucose into various end-products such as formate, succinate, acetate, lactate, and ethanol. We detected upregulation of pflB under microaerobiosis, suggesting an increased production of formic acid and its subsequent conversion to hydrogen (H2) and carbon dioxide (CO2) by the formate hydrogenlyase complex encoded by the hycABCDEFGHI operon, with the latter also being upregulated under oxygen-limited conditions. Moreover, H2 production appears to be coupled to reduction of menaquinone and the periplasmic protons responsible for the proton motive force that drives ATP production. This reaction is carried out by hydrogenase 1, which is encoded by another operon (i.e., hyaABCDEF) that is strongly upregulated in oxygen-poor environments. Similarly, we observed increased expression of several genes involved in the production of other known products of the MAF pathway, namely lactate (ldhA), ethanol and acetate (adhE), and succinate (i.e., fumB and frdB) (Table 1). Although we anticipated observing some changes in central carbon metabolism and energy production under microaerobiosis, the differential expression of some of the other major gene clusters is somewhat puzzling. For instance, the reasons for upregulation of multiple genes involved in acid (low pH) responses (i.e., gadAXW, gadBC, hdeAB-yhiD, hdeD) and oligopeptide transport (i.e., oppABCDF) are unclear.
Since our cell cultures were continuously grown in a fermentor, the pH of the medium and its content was consistent throughout both aerobic and microaerobic growth, indicating that enhanced expression of these operons was not attributable to any other environmental factor except oxygen limitation. Thus, the low oxygen concentration in the environment may serve as a signal for E. coli to adapt to acidic environments and oligopeptide availability. Both these scenarios are encountered by enterobacteria upon entering the mammalian digestive system, which is characterized (at least in some regions) by low pH and the presence of oligopeptides produced from food digestion (i.e., polypeptide digestion by proteases). This observation raises an intriguing hypothesis that low oxygen concentrations might serve as a universal signal to alert bacterial cells that they have entered a host digestive system. Apart from microaerobiosis promoting expression of respiratory and acid stress response genes, we also detected clear upregulation of operons involved in biofilm formation (i.e., csgDEFG and pgaABCD) under this condition. Biofilm production could be considered an adaptive strategy allowing E. coli to survive in low-oxygen environments. The expression patterns revealed by our transcriptomic analysis were largely confirmed by our proteomic data (Table 1). An unexpected exception was the regulation of the cusCFBA operon. Despite a decrease in the abundance of this polycistronic mRNA, the levels of each of the proteins encoded by this operon were increased (Table 1). The CusCFBA copper/silver efflux system contributes to maintaining copper homeostasis in low-oxygen environments [50]. The individual components of the tripartite CusCBA complex exist in a disassembled form to maintain the plasticity of the periplasm and its dynamic functions [51]. Although an increase in CusCFBA protein levels under microaerobic conditions is consistent with the documented role of this complex in copper tolerance at low oxygen concentrations [50], the exact transcriptional and post-transcriptional mechanisms responsible for the observed changes in cusCFBA expression at the RNA and protein levels are currently unknown. In addition, our proteomic data revealed a considerable reduction in the abundance of many ribosomal (r-) proteins under microaerobic conditions (Table 6 and Table S3). E. coli r-proteins are encoded by polycistronic operons, and they are normally autoregulated at the translational level [52]. This regulatory mechanism involves the respective free r-proteins binding to their own polycistronic mRNAs and inhibiting their translation (e.g., autoregulation of the rpsJ-rplCDWB-rpsS-rplV-rpsC-rplP-rpmC-rpsQ operon by L4). As the structures of the L4 binding sites in the polycistronic mRNA and ribosomal RNA closely mimic each other, L4 can act as an efficient inhibitor of its own mRNA only when it is present in excess relative to ribosomes and, therefore, is available for interaction with its cognate mRNA. In other words, a decrease in the concentration of ribosomes under microaerobic conditions should release r-proteins to inhibit translation of their cognate mRNAs, thereby reducing their abundance in vivo. Moreover, an additional extraribosomal function of L4 is to change the abundance of numerous mRNAs, mainly by inhibiting RNase E-dependent mRNA decay during bacterial adaptation to adverse environments [53]. 
Another extraribosomal function of L4 is to post-transcriptionally regulate Tna expression in the stationary phase of growth through its direct binding to the tna intergenic region [54]. Imbalances in ribosomal synthesis can release ribosomal proteins to perform other extraribosomal functions [55]. Thus, under microaerobic growth conditions, the abundance of many ribosomal proteins is reduced, possibly leading some free ribosomal proteins to perform other extraribosomal functions to maintain cell fitness. Interestingly, although the FNR TF potentially activates the transcription of multiple genes in oxygen-limited environments, our data suggest that many FNR-dependent genes remain silent, apparently due to repression by other factors acting at the transcriptional and post-transcriptional levels. For example, FNR-mediated gene activation of nitrate reductase does not occur in the absence of nitrate and may additionally be inhibited by sRNAs such as RprA [56]. Indeed, we found that levels of RprA were considerably higher during microaerobic growth, supporting its role in controlling nitrate respiration. In fact, apart from TF-mediated gene expression (such as through Fur, FNR, and ArcA, among others), sRNAs are also widely employed by E. coli to exert post-transcriptional control. sRNAs in E. coli range from~50 nucleotides (nt) (e.g., DicF; 53 nt) to >300 nt (e.g., CsrB; 369 nt) in length. However, detection of very short sRNAs by RNA-seq can be achieved only by including additional steps (i.e., specific size selection) in standard protocols, which are performed after fragmentation of the purified RNA and prior to cDNA library construction. Indeed, the shortest sRNA we detected was RdlD (66 nt), and we did not detect DicF (53 nt). Our assessment of sRNA abundance uncovered several that were differentially expressed under microaerobic conditions, and their expression patterns were confirmed by Northern blotting. Particularly notable was the substantial decrease in RyhB concentration under those conditions. It is conceivable that the reduced abundance of this sRNA is inversely correlated with levels of its targets. Indeed, we observed higher abundances of the sodB, ftnA, and bfr mRNAs and their translational products (i.e., superoxide dismutase B and the two iron storage proteins FtnA and Bfr, respectively). However, low RyhB abundance did not similarly increase expression of other known RyhB targets located in the iscRSUA, sucCDAB, and sdhCDAB operons (Table 1), which are known for their roles in assimilating iron and homeostasis of that ion in many essential metabolic enzymes. In fact, we detected diminished abundance of their respective transcripts (see downregulated clusters in Table 1), which may be attributable to transcriptional repression by other global regulators such as the TFs FNR [57] and ArcA [58]. The latter regulators are known to downregulate the sucCDAB and sdhCDAB operons under microaerobic conditions. Moreover, the decreased expression of sdhCDAB we report was likely due to its repression by Fur, another TF that greatly impacts gene expression during anaerobic growth [10]. Unlike RyhB, we identified a number of sRNAs as being more abundant under microaerobic conditions (i.e., CsrB, CsrC, GcvB, and RprA) (Figure 3). 
CsrB and CsrC exert their regulatory functions by binding to the translational inhibitor CsrA, thereby preventing interaction of the latter with the translation initiation regions of numerous transcripts, including pgaABCD (see upregulated clusters in Table 1), under microaerobic conditions. Translational activation of this operon via competitive binding of CsrB and CsrC to CsrA enhances polysaccharide biosynthesis, thereby promoting biofilm formation. Similarly, CsrB- and CsrC-competitive binding mechanisms are likely involved in the translational activation of other genes (e.g., iraD [59] and glgS [60]) that are likewise upregulated under microaerobic conditions. The iraD gene encodes an anti-adapter protein that inhibits RssB-mediated degradation of the sigma stress factor RpoS, whereas GlgS is known as an inhibitor of cell motility [61]. CsrA often acts as a translational repressor, but it can also activate gene expression [60,61]. Although previous integrated transcriptomic data [62,63] have indicated that CsrA globally controls the levels of a large number of transcripts, the specific role of this translational repressor under microaerobic growth, i.e., when the sRNAs CsrB/C are much more abundant than the csrA mRNA (>60-fold; Figure 4C), remains to be determined. Interestingly, in comparison to the relatively short half-lives of CsrB (1.4 min) and CsrC (2.2 min) in Luria-Bertani (LB) medium [64], our substitution of rich medium (LB) with minimal medium and depletion of oxygen synergistically increased the stability of these small RNAs. Since both CsrB and CsrC are regulatory RNAs and substrates of RNase E, their increased stability could be attributable, at least in part, to lower RNase E levels under microaerobic conditions [42]. Notably, levels of another sRNA, GcvB, also increased (~1.8-fold) under microaerobic conditions. Despite the well-documented role of GcvB in downregulating the oppABCDF operon under aerobic conditions [65], our transcriptomic and proteomic data indicate that this sRNA does not inhibit oppABCDF expression under microaerobic conditions. Our finding that levels of aspartate 1-decarboxylase (PanD), a GcvB target previously reported to be involved in pantothenate biosynthesis [66], were decreased under microaerobiosis supports the idea that GcvB may downregulate this biosynthetic pathway in oxygen-limited environments. That same study [66] reported other elements of the GcvB targetome, including the csgDEFG operon. It is conceivable that inhibited translation by GcvB, as well as by another sRNA, RprA, which is also upregulated under microaerobic conditions, likely results in reduced csgDEFG transcript levels. The notion that RprA exerts an active repressive role is supported by its contribution to inhibition of dgcM translation [2,33], which reduces levels of DgcM, a member of the signaling cascade that controls Curli biosynthesis. It is well documented that SsrA RNA (also known as tmRNA) is involved in ribosome-associated quality control. It releases stalled ribosomes from truncated mRNAs lacking stop codons through a tmRNA-mediated mechanism, termed trans-translation, which includes peptide-tagging of incompletely synthesized polypeptides for degradation [67]. SsrA processing that leads to functional SsrA-tmRNA translation activity requires cleavage of the SsrA precursor by RNase E [68]. We found that SsrA was the most abundant sRNA under both aerobic and microaerobic growth conditions (Figure 3B,C).
Moreover, its abundance was 1.8 times higher under microaerobiosis (Figure 3A), suggesting that protein quality control plays an important role in conferring cellular fitness under low-oxygen conditions.

Bacterial Strain and Growth Conditions
To prepare subcultures from fresh overnight cultures, E. coli K-12 strain MG1655 was grown overnight at 37 °C for 16 h in M9 minimal medium. The 16 h fresh overnight culture was diluted into 750 mL fresh M9 medium (to OD460 = 0.04 to 0.05) in a 1 L fermentation vessel chamber (Winpact Parallel Fermentation System FS-05-220, Saratoga, CA, USA). For aerobic culture conditions, air was continuously pumped into the chamber at 0.4 LPM (liters per min). For microaerobic culture conditions, oxygen levels in the chamber containing fresh M9 medium were initially decreased by supplying N2 at 0.4 LPM until dissolved oxygen (DO) reached 0. N2 was then pumped for a further 30 min before diluting the overnight culture in the chamber to OD460 = 0.04 to 0.05, and finally the N2 supply was turned off. The chamber was completely sealed and the culture was allowed to grow under microaerobic conditions without any additional gas supply. Aerobic and microaerobic cultures were both grown at 200 rpm and 37 °C, and maintained at pH 7.0 by automatic titration with sterile 1 M KOH. Cultures were harvested at OD460 = 0.5 to 0.6 for transcriptomic or proteomic analyses. For RNA half-life analysis, rifampicin at a final concentration of 50 mg/mL was used to inhibit new RNA synthesis, and aliquots of the culture were collected for half-life determinations. In brief, 42 mL of culture from multiple biological replicates for each time point was collected into 50 mL tubes containing 7 mL (1/6 volume) of ice-cold stop solution (5% phenol and 95% ethanol (v/v)) for RNA isolation (see details below). Bacterial pellets were harvested following centrifugation at 4000× g, 4 °C for 15 min, and stored at −80 °C before use. We prepared 5 and 10 biological repeats for aerobic and microaerobic conditions, respectively.

RNA Isolation
Total RNA was extracted as described previously [17]. In brief, bacterial pellets were resuspended in 4 mL KJ medium (50 mM glucose, 25 mM Tris-HCl pH 8.0, 10 mM EDTA pH 8.0, 100 mM NaCl), lysed by placing them into 4 mL of boiling lysis buffer (0.2 M NaCl, 20 mM Tris-HCl pH 7.5, 40 mM EDTA, 0.5% SDS), and incubated in a boiling water bath for 45 s before adding 4 mL of acidic phenol (pH 4.5) and mixing gently by slowly inverting the tube ~20 times. Total RNA was recovered in the aqueous phase by centrifugation at 4000× g, 4 °C for 1 h. The RNA was precipitated in 1 volume of isopropanol and 1/10 volume of 3 M sodium acetate (pH 7.8) at −20 °C. All RNA samples were maintained in isopropanol at −20 °C before use. When RNA isolation was performed on aliquots, the same volume of culture from the same batch of biological replicates was used for protein isolation, Western blot analysis, and proteomic analysis.

RNA-seq Analysis
Total RNA in isopropanol was pelleted by centrifugation at 15,000 rpm for 15 min, washed with 70% ethanol, resuspended in DEPC-treated H2O, and treated with DNase I (Thermo Fisher). For Western blot analysis, the anti-Hfq antibody (produced in our laboratory; rabbit antibody raised against purified His-tagged Hfq protein) was diluted at 1:10,000, whereas the α-GAPDH monoclonal antibody (SignalChem, Richmond, VA, USA) was diluted at 1:1000. Membranes were washed once with 1× TBST buffer for 5 min before they were incubated with secondary antibody.
Secondary mouse- or rabbit-HRP antibody (GE Healthcare, Chicago, IL, USA) was diluted at 1:10,000 and incubated for 1 h at room temperature. Finally, membranes were washed three times with 1× TBST buffer for 5 min at room temperature. Signals were detected by means of an ECL Western Blotting Detection kit (GE Healthcare) and captured by a BioSpectrum 815 system (UVP).

Protein Extraction, iTRAQ Labeling, and LC-MS/MS
Sample proteins (O-1 and O-2 for aerobiosis; N-2 and N-3 for microaerobiosis) were purified using the commercial B-PER® Bacterial Protein Extraction Reagent (Thermo Scientific) according to the manufacturer's instructions. Extracted protein digestion, iTRAQ labeling of peptides and subsequent LC-MS/MS analysis, iTRAQ signal normalization, and protein quantitation were performed by the core services of the Mass Spectrometry Facility (Academia Sinica, Taipei, Taiwan), as previously described [69]. Briefly, the extracted proteins were subjected to trypsin digestion at 37 °C overnight, lyophilized, and then reconstituted in iTRAQ reaction buffer. Equal amounts of peptides from each sample were individually labeled by adding iTRAQ Reagent 113, iTRAQ Reagent 114, iTRAQ Reagent 117, or iTRAQ Reagent 118, and vortexing the resulting mixtures at room temperature for 1 h. The iTRAQ-labeled peptides were then desalted using a ZipTip concentrator (Merck, Kenilworth, NJ, USA) and mixed. The multiplexed samples were further analyzed by LC-ESI-Q-TOF mass spectrometry. The resulting MS/MS spectra were exported using Mascot Distiller with default parameters. Mascot search results that satisfied the standard criteria [69] revealed the qualified peptides. Their normalized iTRAQ signals were used to quantify the relative abundances of each peptide as well as their fold-changes.
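The Mascot/iTRAQ pipeline described above reports relative abundances per labeled peptide; a common way to obtain protein-level ratios, though not necessarily the core facility's exact procedure, is to normalize reporter-ion channels and take the median peptide ratio per protein. The sketch below illustrates that approach; the column names, the assignment of channel 113 to an aerobic sample and 117 to a microaerobic sample, and all values are hypothetical.

```python
import pandas as pd

def protein_ratios(peptides, num="iTRAQ_117", den="iTRAQ_113"):
    """Median peptide-level reporter-ion ratio per protein, after normalizing
    each channel to its column sum (hypothetical column names)."""
    norm = peptides.copy()
    for col in (num, den):
        norm[col] = norm[col] / norm[col].sum()
    norm["ratio"] = norm[num] / norm[den]
    return norm.groupby("protein")["ratio"].median()

# Toy peptide table (illustrative values only)
peps = pd.DataFrame({
    "protein": ["CydA", "CydA", "SdhA", "SdhA"],
    "iTRAQ_113": [1000, 800, 1200, 900],   # channel assumed to hold an aerobic sample
    "iTRAQ_117": [2100, 1500, 600, 400],   # channel assumed to hold a microaerobic sample
})
print(protein_ratios(peps))  # CydA ratio > 1, SdhA ratio < 1 in this toy example
```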
Predictors of Gestational Weight Gain in Mexican American Women in Los Angeles Background and Significance: Extremes of gestational weight gain (GWG) are associated with newborn and pregnancy complications, postpartum obesity and chronic illnesses. In the United States, Mexican American women are the largest subgroup of Hispanics but have been studied least often. The purpose of the study was to determine the prevalence, characteristics, and predictors of GWG in Mexican American women. Methods: A retrospective, correlational design used data from charts (n=684) in a federally qualified health center in Los Angeles. Prevalence of GWG was inadequate, 22%; adequate, 33%; and excessive, 45%. Risk factors for excessive GWG were hypertension (p = .04), overweight (p = .00), or obese pre-pregnancy BMI (p = .01). Conversely, women who had gestational diabetes (p = .02), ate more snacks (p = .01), were multiparous (p = .03), and less acculturated (p = .03) experienced less excessive gain. Conclusions: Efforts to prevent excessive GWG in Mexican Americans should be targeted to women having their first baby and those with high pre-pregnancy BMI. One strategy may be recommending diet/exercise similar to that used in women with gestational diabetes. For women who are less acculturated and/or who are multiparous, strategies that will minimize inadequate GWG may improve newborn outcomes. © 2015 Californian Journal of Health Promotion. All rights reserved. In the longer term, inadequate and excessive GWG appear to alter the fetal intrauterine environment, resulting in obesity in childhood (Salsberry & Reagan, 2005), adolescence (Salsberry & Reagan, 2007), and in Type 2 diabetes and atherogenic profiles in adulthood (Lau, Rogers, Desai, & Ross, 2011;Rasmussen & Yaktine, 2009).Therefore, optimizing GWG improves not only maternal health but that of the next generation. Sociocultural Characteristics Acculturation.In Hispanics, acculturation into U.S. culture has been associated with negative prenatal behaviors such as alcohol use and smoking (Zambrana, Scrimshaw, Collins, & Dunkel-Schetter, 1997) while preservation of the Mexican culture is evidenced by positive behaviors -healthy diets and avoidance of substances (Dixon, Sundquist, & Winkleby, 2000). Subsequently, fewer perinatal complications and low birth weight infants are observed in less U.S.-acculturated women (Callister & Birkhead, 2002).However, their wider age-range of childbearing results in more offspring, and greater exposure to the weight changes of pregnancy (Bowie et al., 2007). Paternal education.As a proxy indicator of socioeconomic status, parental education may be associated with GWG (Williams and Collins, 2005).Lower education and poverty are associated with high body mass index (BMI), a key factor in excessive GWG (Drewnowski & Specter, 2004;Herring et al., 2012;Molarius, Seidell, Sans, Tuomilehto, & Kuulasmaa, 2000).In one study, parents with 12+ years of education were more likely to have first-born infants who survived the first year of life.Stillbirth was 1.7 times more likely in families with a paternal education of less than 10 years, and 1.2 times more likely than when maternal education was less than 10 years (Arntzen, Magnus, & Bakketeig, 1993).Hence, lower education, particularly paternal education, is associated with poorer living conditions, including reduced access to food and health care resources. 
Biologic Characteristics Maternal age.Adolescents and younger women (Deputy, Sharma, Kim, & Hinkle, 2015;Howie, Parker, & Schoendorf, 2003;Koleilat & Whaley, 2013;Schieve et al., 1998) are more likely to gain excessively.Though related to concurrent maturation, it is of concern because of the risk of postpartum weight retention and the potential for young women to move to a high BMI category by the next pregnancy (Gould Rothberg et al., 2011).There is less consistency in older women.Deputy et al. (2015) reported that inadequate GWG was more likely in multiethnic women 35 and older while Puerto Rican women over 30 years of age were at 2.5 times greater risk for excessive GWG than younger women (Chasan-Taber et al., 2008). Parity. Large multiethnic studies of women of all ages have reported primiparity versus subsequent births as a covariate for excessive GWG (Abrams & Parker, 1990;Brawarsky et al., 2005;Wells et al., 2006).Adolescent primiparas gained 5.28 pounds more than multiparas (Scholl et al., 1988); had twice the likelihood of excessive GWG than multiparas (Howie, et al., 2003) and Harris et al. (1997) reported primiparity as an independent predictor of excessive GWG followed by infant birth weight, and maternal BMI. Excessive GWG and high pre-pregnancy BMI are modifiable factors that have independent as well as synergistic influences on hypertension in pregnancy.Hispanic women with excessive GWG had a three-fold risk of hypertension and four-fold risk of preeclampsia compared to women who gained appropriately.Those with obese pre-pregnancy BMI were 2.7 times greater risk of having pregnancy related hypertension than women in lower BMI categories (Fortner, Pekow, Solomon, Markenson, & Chasan-Taber, 2009).In NHB women with obese BMI the risk of hypertension was 1.5 fold greater but when compounded by excessive GWG, increased to 2.6 fold (Chandrasekaran, Levine, Durnwald, Elovitz, & Srinivas, 2014). Gestational diabetes mellitus. Increased risk for gestational diabetes mellitus (GDM) is associated with excessive GWG in early pregnancy (Carreno et al., 2012;Hedderson, Gunderson, & Ferrara, 2010).However, once diagnosed with GDM, Hispanic women may be more likely to gain inadequately (Walker et al., 2009) or adequately (Chasan-Taber et al., 2008) overall due to dietary and exercise modifications. Behavioral Characteristics Exercise.Physical activity varies by age, socioeconomic status, and acculturation.Hispanics are the most physically inactive ethnic group in the United States (Evenson & Wen, 2010) and in pregnancy, expend 30% less energy than NHW women (Schmidt, Pekow, Freedson, Markenson, & Chasan-Taber, 2006).First generation Hispanic women who immigrated to the U.S. before 25 years of age and with greater English proficiency reported more physical activity than older women with less English (Evenson, Sarmiento, & Ayala, 2004).Puerto Rican women most likely to engage in modest to more intense exercise were U.S. born, college-educated, had higher income, and fewer children compared to those more likely to engage in domestic activity who preferred Spanish and had more children (Chasan-Taber et al., 2007). 
Maternal food intake.Although believed that more U.S.-acculturated Hispanic women take on the energy dense, high-fat diet common in young women in the U.S., this assumption is not well documented.In Mexican American adolescents, increasing GWG was positively associated with acculturation, but during pregnancy, they were more likely to eat traditional "more healthy" foods prepared by their mothers (Gutierrez, 1999).A review of acculturation and Hispanic food practices found no positive correlation between acculturation and dietary fat intake despite evidence that fatrelated behaviors differed based on acculturation. Less acculturated women consumed more whole milk and used fat in food preparation; whereas the more acculturated consumed more fast food, snacks, and added fats.More acculturated individuals consumed more fruit, rice, and beans and less acculturated individuals consumed less sugar and sugarsweetened beverages than more acculturated individuals (Ayala, Baquero, & Klinger, 2008).Therefore, it may be that the overall dietary pattern and lifestyle of women with less acculturation (e.g., consuming foods with natural fat rather than added fats and a diet higher in fiber, Bacardi-Gascon, Dueñas-Mena, & Jimenez-Cruz, 2007), in addition to physical work involved in food preparation, promotes adequate GWG. Factors associated with GWG have been reported in other Hispanic sub-groups (Chasan-Taber et al., 2008;Tovar et al., 2010;Tovar, Chasan-Taber, Bermudez, Hyatt, & Must, 2012) but few studies have disaggregated Mexican women for study.It was hypothesized that a model of non-modifiable factors (age, parity, acculturation, paternal education), modifiable factors (pre-pregnancy BMI, exercise, maternal food intake), and covariates (hypertension, gestational diabetes) for GWG would identify Mexican women at risk for extremes of GWG who would benefit from strategies to promote healthy pregnancy gain. Design and Sample The study was a retrospective, correlational design.Data were obtained from patient records from a federally qualified health center in Los Angeles.The center has a busy women's health clinic where prenatal care is provided by nursemidwives and physicians.The majority of patients are low-income and Hispanic (84%) (HRSA, 2014). A purposive sample of Mexican/Mexican American women (n=684) was achieved by screening 1,257 charts of women with Hispanic surnames who delivered infants between 2007 and 2009.Inclusion criteria were: term gestation (last prenatal visit 36+ weeks), singleton, maternal age 12+ years old, ethnicity documented as Hispanic with place of birth in Mexico or the United States.If the chart indicated birthplace in U.S., the woman was contacted to determine self-identification as Mexican American.The chart had to include a pre-pregnancy weight or weight documented at < 14 weeks gestation.Charts were excluded if a pre-existing condition known to impair metabolism (e.g.thyroid disease and diabetes) was noted. Charts analyzed must have indicated enrollment in the Comprehensive Perinatal Services Program (CPSP), a Medicaid case management program for pregnant women in California.Women enrolled in CPSP have provider prenatal visits and separate risk assessments inclusive of nutrition, health education, and psychosocial components. 
Procedures and Measures
Data were manually extracted from medical records by the researcher and two registered nurses and entered onto a code sheet developed for the study. Data were extracted from: (a) the American College of Obstetrics and Gynecology (ACOG) Antepartum Record (ACOG, 2008), (b) the CPSP Prenatal Combined Assessment/Reassessment Tool (CPSP, 1998), and (c) the PFFQ (CPSP, 2004). The study protocol was approved by the Institutional Review Board of the investigators' affiliation as well as by the health center.
Variables
The outcome variable, GWG, was measured by subtracting the self-reported pre-pregnancy weight from the last recorded maternal weight during the pregnancy. If pre-pregnancy weight was unknown, the first recorded weight during the first trimester was used. The latter was considered a reasonable approach because first-trimester weight gain is minimal (Hytten, 1991; Picciano, 2007). GWG was categorized as inadequate, adequate, or excessive based on the 2009 IOM guidelines, which use pre-pregnancy BMI categories to determine adequate gain: underweight (BMI < 18.5 kg/m²), 28 to 40 pounds; normal weight (BMI 18.5-24.99 kg/m²), 25 to 35 pounds; overweight (BMI 25-29.9 kg/m²), 15 to 25 pounds; and obese (BMI ≥ 30 kg/m²), 11 to 20 pounds. Inadequate GWG was less than the recommendation in each BMI category, adequate GWG was within the recommendation, and excessive GWG was above the recommendation. Predictor variables (acculturation, paternal education, maternal age, pre-pregnancy BMI, parity, hypertension, gestational diabetes mellitus, exercise, and maternal food intake) were selected based on the review of the literature and their availability on the chart forms (Table 1). Each food intake variable was measured as servings per week, except for snacks, which were measured as snacks per day. Content validity of the questions selected for measurement of acculturation was based on the Short Acculturation Scale for Hispanics (SASH), a 12-item scale widely used with Hispanics (Marin, Sabogal, Marin, Otero-Sabogal, & Perez-Stable, 1987). SASH constructs are preferred language and language spoken with others (at home and with friends).
Data Analysis
Descriptive statistics were calculated on all variables of interest. A series of one-way ANOVAs and chi-square tests of independence were conducted to examine whether there were systematic sociocultural, biologic, or behavioral differences among Mexican American women who experienced inadequate, adequate, or excessive GWG (Tables 1-4). To determine whether any of the characteristics identified in Tables 2-4 predicted Mexican American women's GWG, a multinomial logistic regression model was examined. Given the theoretical plausibility of each characteristic as a predictor of GWG, it was decided a priori to include all statistically significant predictors from Tables 2-4 in the regression model. A p value of less than or equal to .05 was considered statistically significant. SPSS version 20 was used for the analysis.
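To make the outcome classification concrete, the following is a minimal illustrative sketch (not the authors' code; the function names and example values are ours) of how GWG can be classified against the 2009 IOM thresholds quoted above, given pre-pregnancy BMI and total gain in pounds.

```python
# Illustrative sketch: classifying gestational weight gain (GWG) against the
# 2009 IOM recommendations as quoted in the text. Ranges are in pounds;
# BMI is pre-pregnancy BMI in kg/m^2.

IOM_2009_RANGES_LB = {
    "underweight": (28, 40),   # BMI < 18.5
    "normal":      (25, 35),   # BMI 18.5-24.99
    "overweight":  (15, 25),   # BMI 25-29.9
    "obese":       (11, 20),   # BMI >= 30
}

def bmi_category(bmi: float) -> str:
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"

def gwg_category(pre_pregnancy_bmi: float, gwg_lb: float) -> str:
    """Return 'inadequate', 'adequate' or 'excessive' GWG."""
    low, high = IOM_2009_RANGES_LB[bmi_category(pre_pregnancy_bmi)]
    if gwg_lb < low:
        return "inadequate"
    if gwg_lb > high:
        return "excessive"
    return "adequate"

# Example: an overweight woman (BMI 27) who gained 30 lb is classified as excessive.
print(gwg_category(27.0, 30.0))  # -> "excessive"
```

Applied to each chart, a rule of this kind yields the inadequate/adequate/excessive categories that serve as the outcome in the ANOVA, chi-square and multinomial logistic regression analyses described above.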
Results
Biologic Characteristics
The three GWG categories were significantly different for the biologic characteristics of pre-pregnancy BMI, parity, hypertension, and GDM (Table 3). Adequate GWG occurred most often (41.6%) in women who were underweight or of normal weight prior to pregnancy. Excessive gain occurred most frequently in women who were overweight (63.5%) or obese (42.2%) at pregnancy onset. Very few women (n = 10) were underweight at the onset of pregnancy. Women with normal pre-pregnancy BMI more commonly had adequate GWG (41.3%) than those with overweight (25.3%) or obese BMI (30.1%). Of the study's 199 primiparous women, 109 (54%) gained too much and, by parity, they comprised the majority (58.3%) of the excessive gain group. Hypertension was also most common in women with excessive GWG. Inadequate gain was more common in women with GDM, in particular those managed with diet or oral medication.
Behavioral Characteristics
The GWG categories were significantly different (p < .001) for one behavioral characteristic, snacks per day (Table 4). Women with inadequate GWG ate more snacks per day (2.13 ± 0.99) than women with adequate GWG (2.03 ± 0.92) or with excessive GWG (1.75 ± 1.04). Among the three categories, there were no significant differences in exercise, maternal food intake, or intake of specific food groups. Table 5 shows that inadequate GWG was associated with multiparity (OR = 2.09) and was less likely in women with less acculturation (OR = 0.50). Determinants of excessive weight gain were hypertension (OR = 2.16) and high pre-pregnancy BMI, whether overweight (OR = 4.65) or obese (OR = 2.17), as compared with normal pre-pregnancy BMI. Protective factors for excessive gain were GDM (OR = 0.41), multiparity (OR = 0.47), less acculturation (OR = 0.57) and eating more snacks per week (OR = 0.96).
Discussion
This is the first known study to document the prevalence of GWG in a homogeneous group of Mexican/Mexican American women. The finding that one third of the women in this study gained adequately is consistent with previous literature regarding women of all ethnicities (Abrams, Altman, & Pickett, 2000; Caulfield et al., 1996; Cogswell et al., 1999; Olson & Strawderman, 2003). It is also consistent with older studies of other Hispanic groups in the United States: 33% in Puerto Ricans (Chasan-Taber et al., 2008); 32% in Hispanics in the San Francisco area (Brawarsky et al., 2005); and 33.9% in Hispanic women in New Mexico (Walker et al., 2009). During the study period, the IOM released new GWG guidelines that use World Health Organization BMI categories instead of the previous ones based on the Metropolitan Life insurance tables (IOM, 2009). These classify smaller proportions of women as underweight before pregnancy, while a larger proportion are classified as overweight. The limitation of comparing this sample to older studies is that the 2009 IOM guidelines result in fewer women with inadequate GWG and more women with excessive GWG than if the 1990 IOM guidelines were applied. Our study had a larger proportion of Mexican American women who gained adequately and a smaller proportion who gained excessively compared with recent large multi-ethnic studies (Hunt, Alanis, Johnson, Mayorga, & Korte, 2013; Chihara et al., 2014). Compared with a study of Hispanics in Texas, our sample had fewer women with inadequate gain, more women with adequate gain, and a similar prevalence of excessive gain (Sangi-Haghpeykar, Lam, & Raine, 2014). The positive differences may be due to the large proportion of less acculturated women in our sample, who were less likely to gain excessively and more likely to have adequate weight gain than more acculturated women. System factors (easily accessed prenatal care, case management, and the provision of multilayered care by providers, health educators, and support staff at the center) could also be influential in these differences.
Models of Gestational Weight Gain
Models of GWG using sociocultural, biologic, and behavioral variables have been proposed. In NHW women, Olson and Strawderman (2003) described a "biopsychosocial model" that explained 27% of the variance in overall GWG. Factors associated with greater GWG were decreased physical activity, consuming more food, overweight pre-pregnancy BMI, and low income, while factors associated with inadequate GWG were low BMI, consuming less food, and smoking. Models of GWG in Hispanic women have been reported in Puerto Ricans in Massachusetts (Chasan-Taber et al., 2008), Hispanics in New Mexico (Walker et al., 2009) and Hispanics in Houston, Texas (Sangi-Haghpeykar, Lam, & Raine, 2014). In the Latina GDM Study, Chasan-Taber et al. (2008) found that older maternal age, more prenatal visits, overweight, parity and acculturation were associated with both inadequate and excessive GWG among Puerto Rican women. In our study, the predictor variables accounted for 20% of the variance in the model of GWG in Mexican American women and are described in the next section.
Acculturation. Less acculturation to the U.S. was protective against both extremes of GWG. This finding is partially consistent with findings among Puerto Ricans, in whom U.S.-born women were more likely to gain excessively (Chasan-Taber et al., 2008) and women with less than 10 years of U.S. residence were less likely to gain excessively (Tovar et al., 2012). Similarly, more acculturated (U.S.-born) Hispanic women in Houston, Texas were more than 3 times as likely to gain excessively as non-U.S.-born women (Sangi-Haghpeykar et al., 2014), and Spanish-speaking Hispanic women in Southern California were about half as likely to have excessive GWG. Our study found that Mexican women with less acculturation were also less likely to have inadequate GWG, a finding not previously reported.
Hypertension. Recent national attention to the identification and treatment of hypertensive disorders during pregnancy compels health workers to be aware of its associated factors. In our study, Mexican American women with hypertension were twice as likely to have excessive GWG. This is an important finding in that excessive GWG is modifiable and can be influenced by correct advice and information (Cogswell et al., 1999; Stotland et al., 2005). Therefore, preconception interventions for women with high BMI, as well as strategies to promote adequate gain in pregnancy, may be a double-pronged approach to decreasing the incidence of hypertension in pregnancy.
Parity. In our study, women in their first pregnancy were more likely to gain excessively. This is consistent with studies suggesting that the first/index pregnancy has higher GWG than subsequent pregnancies in multiethnic women (Brawarsky et al., 2005; Wells et al., 2006) and in Puerto Ricans (Chasan-Taber et al., 2008).
Recent studies of NHW primiparas reported excessive gains of 74.3% and 74% (Chung et al., 2013;Restall et al., 2014) -higher than reports from older studies which may in part reflect the application of the 2009 GWG guidelines.In comparison, in our study, 54.7% of primiparous women gained excessively, which is still of great concern because women of minority groups are more likely to retain excess postpartum weight than NHW women (Gould Rothberg, et al., 2011;Walker, Freeland-Graves, Milani, George, et al., 2004).Snack consumption.Snack consumption was the only behavioral variable related to GWG.Mexican women consuming more snacks per week were less likely to have excessive gain during pregnancy.In contrast, studies of NHW women and adolescents have not reported this association (Olson & Strawderman, 2003;Stevens-Simon & McAnarney, 1992).Our finding is consistent with a recent study in which pregnant women with high pre-pregnancy BMI were randomized to a treatment group of which part was to consume three meals and three snacks each day (Petrella et al., 2014).Women in the treatment group had lower GWG than those in the control group.The hypothetical basis for our finding is suggested in the next section. Gestational diabetes mellitus.In our study, women with GDM were 60% less likely to have excessive GWG.Our findings were unlike Walker et al. (2009), who found that Hispanic women with GDM were more likely to have inadequate gain.However, Chasan-Taber et al. (2008) reported that more Puerto Rican women achieved adequate GWG when they had GDM than those without GDM. Women with GDM have more difficulty metabolizing carbohydrates than pregnant women in general.This results in high maternal levels of glucose which may result in larger infants with more body fat.To promote euglycemia, women with GDM engage in medical nutrition therapy and daily exercise (American Diabetes Association, 2012).In practice, three meals and two to three snacks daily are recommended to distribute glucose intake and to reduce postprandial glucose fluctuations.Daily exercise is recommended to improve glycemic control (ACOG, 2013a).The fact that women with GDM experience such dietary/activity treatment, and are more closely monitored than women without GDM, suggests the benefit of lifestyle modifications and increased vigilance with respect to pregnancy weight gain. Pre-pregnancy body mass index.The key predictor of excessive gain in Mexican American women was high BMI -being overweight and obese at the beginning of pregnancy (4.65 and 2.17 times) compared to women with normal BMI.This is similar to, but more extreme than in Hispanic women in New Mexico (Walker et al., 2009).As women with normal pre-pregnancy BMI have reduced risk of preterm birth compared to those with lower or higher BMIs, this further reinforces the benefit of achieving optimal pre-pregnancy BMI (Kosa et al., 2011).Therefore, preconception health promotion should be directed to Mexican American women with overweight prepregnancy BMI, given their propensity to gain excessively, compounded with increased risk for postpartum retention (Walker, Freeland-Graves, Milani, Hanss-Nuss et al., 2004), which increases the likelihood of obese pre-pregnancy BMI (Gould Rothberg, et al., 2011) in subsequent pregnancies. 
Study Strengths and Limitations Study data was abstracted from the ACOG Antepartum Record and forms used for the California Comprehensive Perinatal Services Program which were designed to assess individual pregnant women's health, obstetrical, psychosocial and nutritional risk.No other studies to date were found in which these tools were used to study a minority population such as Mexican American women. Study limitations were that the retrospective design limited the analysis to available data in clinic records.Although tools such as that used for food intake in this study have the advantages of suggesting dietary patterns over an extended period (a week in this study), disadvantages include uncertain validity, inadequate food lists (particularly culturally specific foods), and inconsistent estimate of portion sizes (Fowles, Sterling, & Walker, 2007).Further, as the study was conducted in a health clinic with a majority of low-income Hispanics, it does not represent the Mexican American population across socioeconomic spectra. Interaction effects were not examined due to the exploratory nature of this study.For instance with advancing age and subsequent pregnancies, pre-pregnancy BMI also increases (Bowie et al., 2007).As discussed, sociocultural factors such as acculturation and paternal education influence biologic and behavioral variables.Future research may build upon the initial predictors identified in this study, with thought given to inclusion of other known interaction effects. Conclusion Of all women, Mexican American women experience pregnancy most often and therefore are most vulnerable to the effects of GWG.The study reported a model of GWG to identify Mexican women at risk for extremes of GWG who could benefit from strategies to promote healthy pregnancy gain.Key factors associated with gaining excessively; high pre-pregnancy BMI and primiparity alert the clinician to provide early prenatal information on GWG and to seek additional support (dietician, group classes, pregnancy exercise/Zumba) for the woman with high BMI and/or is a first time mother.The relationship between hypertension and excess GWG encourages vigilance for hypertension when excess gain is developing as well as supports efforts to prevent excess gain which may reduce hypertension. Postpartum care should target women who have gained excessively and/or started in a high prepregnancy category by concerted support to exclusively breastfeed for at least six months; resume/initiate some form of cardiovascular exercise; and ensure that she has been given the option of a long acting reversible contraceptive.Women with GDM were less likely to gain excessively -therefore a similar diet and exercise regimen may have the same result in women who are at greater risk for excessive gain who do not have GDM.Multiparous women were more likely to gain inadequately than primiparous so they should be provided with education that will encourage behaviors to achieve optimal gain.Low acculturation conferred protection against both extremes of GWG.This may be another aspect of the Hispanic paradox in which the adherence to traditional behaviors contributes to better health outcomes.Continued research is needed to identify strategies that resonate with Mexican/Mexican women, especially those at particular risk for extremes of weight gain; to promote health of the woman, her baby, and subsequent generations. Table 1 . 
Sum of the number of servings in the food categories protein, milk, grains, fruits and vegetables, fats and sweets, and snacks.
Notes: (a) If pre-pregnancy weight was unknown, the first recorded prenatal weight during the first trimester was used. (b) In pregnancy, hypertension can be a pre-existing condition or a condition (preeclampsia or gestational hypertension) that develops during pregnancy (ACOG, 2013b). (c) Excluded pre-existing diabetes. (d) Categories were determined with the recommendation of an expert registered dietician with a specialty in pregnancy nutrition and with comparison to MyPyramid for Moms, the USDA macronutrient recommendations for pregnancy (U.S. Department of Agriculture, 2008).
Table 3. Biologic Characteristics by Gestational Weight Gain
Table 4. Behavioral Characteristics by Gestational Weight Gain
Notes: (a) Reference category: less than 12 years of education. (b) Reference category: normal. (c) Reference category: least acculturated. (d) Reference category: primiparity.
v3-fos-license
2022-06-29T15:06:41.674Z
2022-06-27T00:00:00.000
250106164
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fped.2022.932170/pdf", "pdf_hash": "1d9fcf807265ce34ba055207f189e58263559501", "pdf_src": "Frontier", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42649", "s2fieldsofstudy": [ "Medicine" ], "sha1": "e4b945318d10bf6f1ec8b9fce84bc09e334d90e7", "year": 2022 }
pes2o/s2orc
SARS-CoV-2 B.1.1.529 (Omicron) Variant Causes an Unprecedented Surge in Children Hospitalizations and Distinct Clinical Presentation Compared to the SARS-CoV-2 B.1.617.2 (Delta) Variant Background In the midst of successive waves of SARS-CoV-2 variants, the B.1.1.529 (omicron) variant has recently caused a surge in pediatric infections and hospitalizations. This study aimed to describe and compare the symptoms, explorations, treatment and evolution of COVID-19 in hospitalized children during the successive B.1.617.2 (delta) and B.1.1.529 (omicron) waves. Methods This observational study was performed in the Pediatric Pulmonology Department of a University Hospital in Paris, France. All hospitalized children aged between 0 and 18 years who tested positive for SARS-CoV-2 using reverse transcription polymerase chain reaction (RT-PCR) in nasopharyngeal swabs from July 15th to December 15th 2021 (delta wave), and from December 15th 2021 to February 28th 2022 (omicron wave) were included. Results In total, 53 children were included, 14 (26.4%) during the delta wave and 39 (73.6%) during the omicron wave (almost three times as many hospitalizations in half the time during the latter wave). During the omicron wave, hospitalized patients were mostly aged < 5 years (90 vs. 71% of all the children during omicron and delta waves, respectively), and tended to have fewer underlying conditions (56 vs. 79% during omicron and delta waves, respectively, p = 0.20). The omicron variant was also responsible for a different clinical presentation when compared to the delta variant, with significantly higher and often poorly tolerated temperatures (p = 0.03) and increased digestive symptoms (p = 0.01). None of the three patients who were older than 12 years were fully vaccinated. Conclusion The dramatic increase in the hospitalization of children with COVID-19 and the modification of the clinical presentation between the latest delta and omicron waves require pediatricians to remain vigilant. It should also encourage caregivers to ensure vaccination in children older than 5 years, for whom the BNT162b2 COVID-19 vaccine has been deemed safe, immunogenic, and effective. INTRODUCTION In France and throughout the world, the surge in coronavirus diseases caused by the variant of concern B.1.1.529 (omicron), reached a peak that was five to six times higher than that caused by any of the previous severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants (1)(2)(3). After an important decrease in the SARS-CoV-2 circulation in June 2021, the French public health agency observed a progression in infections due to the B.1.617.2 (delta) variant that began in mid-July (4). The later switch between the delta and omicron variants began in mid-December 2021 (5). Whereas the incidence of COVID-19 had been far lower in children than in adults, it multiplied in children more than eight times during the period of this change in the SARS-CoV-2 virus variant. Specifically, the incidence of COVID-19 increased from 634/100,000 in the age groups 0-9 and 10-19 years at the delta-wave peak (week 49, 2021) to 4,877 and 6,828/100,000, respectively, in the same age groups at the omicron-wave peak (week 3, 2022) (4). However, concerns about the high infectivity of the omicron variant have been balanced by its apparent lower severity in adults, with less severe symptoms and decreased hospitalization rates (2,6,7). This reduction in disease severity has partly been attributed to the widespread use of COVID-19 vaccines in adults (8,9). 
In contrast, pediatricians observed a surge in pediatric hospitalizations due to COVID-19 during the omicron wave (5,10,11). In the United States and South Africa, the peak of child hospitalizations resulted in a patient load that was four times higher than during the delta wave, with the largest increase occurring in children under 4 years of age (12,13). Further, more children needed hospitalization in the intensive care unit (ICU) and/or invasive ventilation (12,13). Interestingly, the monthly hospitalization rate in children aged 12-17 years was six times higher in nonvaccinated patients than in fully vaccinated patients (13). Compared to the delta variant, the omicron virus appears to have a predilection for the upper respiratory airways and digestive tract (5,12). Reports have also described atypical cases of convulsions and cerebral venous thrombosis in children, making this a variant of concern, especially for pediatricians (14,15). To date, few studies have compared children with SARS-CoV-2 infection during the delta and omicron waves. Moreover, for the development of vaccines for children, it is important to precisely describe how children are affected by successive waves (16). Therefore, this study aimed to describe and compare the symptoms, explorations, treatment, and evolution of COVID-19 in children during the delta and omicron waves. PATIENTS AND METHODS This observational study was performed in the Pediatric Pulmonology Department of the University Hospital Trousseau, Assistance Publique Hôpitaux de Paris (APHP) Paris, France. According to the information on SARS-CoV-2 circulation in France, the arrival of the B.1.617.2 (delta) variant began in mid-July 2021 and that of the B.1.1.529 (omicron) variant began in mid-December 2021 (5). As such, patients between 0 and 18 years of age hospitalized in this department for COVID-19 between July 15th 2021 (arrival of the delta wave) and February 28th 2022 (end of the omicron wave) were identified using the hospital's "Programme de Médicalisation des Systèmes d'Information" (PMSI) database. This allowed for an exhaustive search of all children testing positive for SARS-CoV-2 by realtime reverse transcription polymerase chain reaction (RT-PCR) using nasopharyngeal swabs, who were admitted to this hospital. The study was approved by the local ethics committee of our institution, which waived the need for patients' consent (Study PED_COVID N • 20200717191204). Patient information was retrieved from medical records, including COVID-19 transmission history, clinical, biological (blood tests and viral RT-PCR findings) and radiological information, and the medical evolution. Considering the SARS-CoV-2 variant circulation in France, children hospitalized between July 15th 2021 and December 15th 2021 were included in the "delta-group, " and children hospitalized between December 16th 2021 and February 28th 2022 were in the "omicron-group." Continuous data were expressed as median [interquartile range (IQR)], while categorical data were expressed as numbers and proportions (%). Descriptive statistics are presented for all study variables. We used Fisher's exact test or Pearson's chisquared test (with Yates' continuity correction when necessary) to compare categorical and qualitative data and implemented the Wilcoxon rank sum test to evaluate continuous variables. A p-value of <5% was interpreted as evidence of a statistically significant difference. The analyses were performed using SAS software (version 9.4; Cary, NC, USA). 
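The analyses above were run in SAS; purely for illustration, the sketch below shows the same kinds of between-wave comparisons in Python with SciPy. The diarrhea counts approximate the percentages reported in the Results (0/14 in the delta group vs. 13/39 in the omicron group) and the temperature values are invented around the reported medians; nothing here reproduces the authors' actual code or data.

```python
# Hypothetical illustration of the group comparisons described above
# (the study itself used SAS 9.4); values are for demonstration only.
from scipy import stats

# Categorical variable (e.g. diarrhea yes/no) in delta vs. omicron groups:
# rows = group, columns = symptom present / absent.
table = [[0, 14],   # delta group: 0 of 14 with diarrhea
         [13, 26]]  # omicron group: 13 of 39 with diarrhea
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact test p = {p_fisher:.3f}")

# Continuous variable (e.g. body temperature in degrees C), compared with the
# Wilcoxon rank-sum (Mann-Whitney) test.
temp_delta = [38.5, 38.6, 38.4, 38.7, 38.5]
temp_omicron = [39.2, 39.0, 39.4, 38.9, 39.3]
stat, p_wilcoxon = stats.mannwhitneyu(temp_delta, temp_omicron,
                                      alternative="two-sided")
print(f"Wilcoxon rank-sum p = {p_wilcoxon:.3f}")
```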
Distribution of the Hospitalizations According to SARS-CoV-2 Waves and to Age
The total number of children hospitalized monthly for COVID-19 between 1 January 2021 and 28 February 2022 is presented in Figure 1. During the study period (July 15th 2021 to February 28th 2022), 53 children aged 0-18 years were hospitalized for COVID-19 in our Pediatric Pulmonology Department. Among them, 14 (26.4%) were included in the delta group and 39 (73.6%) in the omicron group, even though the delta study period was twice as long as the omicron period. Indeed, the first group extended over 5 months (i.e., from July 15th to December 15th, 2021), whereas the second extended over only 2.5 months (i.e., from December 15th 2021 to February 28th 2022). Although all 53 included children had a positive PCR for SARS-CoV-2 in a nasopharyngeal swab, the SARS-CoV-2 variant was identified (by the Novaplex™ SARS-CoV-2 Variants I and IV Assays; Seegene, South Korea) in only a subset of cases. Among the 14 children of the delta group, 7 (50%) had a confirmed SARS-CoV-2 delta variant, and among the 39 children of the omicron group, 22 (56%) had a confirmed SARS-CoV-2 omicron variant. The distribution of hospitalizations according to age group (<5, 5-11, and >11 years) is reported in Figure 2. Of the three patients in the omicron group who were older than 12 years, none were fully vaccinated. Two of them had not been vaccinated, and one had received an incomplete vaccination with only one injection 2 weeks prior to the onset of symptoms.
Baseline Clinical Characteristics
The patients' baseline clinical characteristics according to the wave group are detailed in Table 1. During the omicron wave, hospitalized patients were mostly aged <5 years (90 vs. 71% during the omicron and delta waves, respectively). In the delta group, the large majority of patients (11/14, 79%) had a pre-existing condition such as asthma, interstitial lung disease, congenital myopathy, obesity, Crohn's disease, or sickle cell disease, and 5 were infants under 3 months of age. In the omicron group, 22/39 (56%) children had an underlying condition: 7 had a respiratory disease (asthma, tuberculosis, cystic fibrosis, bronchodysplasia, interstitial lung disease, Langerhans histiocytosis, and Schwachman-Diamond syndrome), 2 a hematologic disease (sickle cell disease, Hodgkin's lymphoma), 2 a genetic disorder (Prader-Willi, CHARGE syndrome), one a cardiologic defect (pulmonary valvular stenosis), and 8 were infants under 3 months of age.
Clinical Presentation and Explorations at COVID-19 Onset
The clinical presentation at COVID-19 onset is described in Table 2. The omicron variant caused significantly more digestive symptoms, such as diarrhea (33 vs. 0% during the omicron and delta waves, respectively; p = 0.01) and refusal to eat (46 vs. 7%, respectively; p = 0.01). No patient in the omicron group presented with hemoptysis, compared with three patients (21%) in the delta group (p = 0.01). Body temperature was significantly higher during infections with the omicron variant than in those with the delta variant [39.2 °C (38.9; 39.4) vs. 38.5 °C (38.5; 38.7), respectively; p = 0.02]. Although not statistically significant, the proportion of children with poor symptom tolerance and deterioration of general health status was higher in the omicron group (59 vs. 36% during the omicron and delta waves, respectively; p = 0.21). Detailed information on the main explorations performed at admission is provided in Table 3.
Six children (50%) were coinfected with other respiratory viruses during the delta wave and 22 (63%) during the omicron wave (details on the different viruses are provided in Table 3). Chest X-ray and thoracic CTscan, when abnormal, were similar in both groups, with features of lung consolidation without specific localization. One patient in the delta group presented with pleural effusion and one in the omicron group with bilateral pneumothorax. No pulmonary embolisms were observed during these waves in our department. Management and Evolution The management and clinical evolution are detailed in Table 4. Two patients (14%) required nutritional support during the delta wave and 11 (28%) during the omicron wave. During the latter period, the median [IQR] duration of nutritional support was 2 [1.25; 3] days. There were no differences in patient management or disease evolution between the two groups. Two patients (one during the delta wave and one during the omicron wave) were already on home oxygen therapy and non-invasive ventilation due to chronic respiratory insufficiency prior to hospitalization for COVID-19. For both, oxygen or ventilation needs increased respectively during 2 and 11 days before returning to previous support levels. DISCUSSION This study compared the incidence and clinical symptoms of children hospitalized for COVID-19 during the delta and omicron waves. During the omicron wave, there was a major increase in the number of hospitalizations, with almost three times as many hospitalizations in half the time when compared to the delta variant, with the vast majority of children younger than 5 years. There were also distinct clinical characteristics, with higher temperature and poorly tolerated fever and a predilection for upper respiratory airways and digestive symptoms during the omicron wave. According to the World Health Organization (WHO) data, the omicron variant has been responsible for five to six times more confirmed SARS-CoV-2 infections in Europe and America (1). Although it has been suggested that the omicron variant is associated with lower hospitalization rates due to a suspected reduction in disease severity, this wave has caused an important increase in the number of hospitalizations in children (1,7,(11)(12)(13). This is in line with the surge in the number of hospitalizations for COVID-19 observed in our pediatric pulmonology department, which almost tripled between the two waves in half of the time. The higher infectivity of the omicron variant has been attributed to an exceptional number of mutations in the spike glycoprotein-binding human ACE2, resulting in increased infectivity of nasal epithelial cells and ACE2-positive cells (3,17). These alterations in virus conformation influence antibody neutralization and facilitate viral immune escape, making it a variant of concern (3,18). In the light of these findings, questions have arisen regarding the vaccine efficacy. Although studies have suggested a decrease in vaccine-induced immunity after the second dose, others have shown that boosters can restore neutralizing immunity (19)(20)(21). Lauring et al. showed in adults that three doses of mRNA vaccine were necessary to obtain the same protection for the omicron variant as that provided for other variants after two doses (22). Similar results were observed in immunocompetent adolescents (12-17 years old), where vaccine efficacy toward the omicron variant was restored after three doses (23). 
In our study, none of the three patients older than 12 years hospitalized during the omicron wave were fully vaccinated. This result, along with the observation by others that children were more susceptible to infections/reinfections during the omicron wave despite vaccination or previous infection, requires that children be vaccinated when possible, and this includes the need for the booster dose (13,24). This is supported by the 6-fold increase in the monthly hospitalization rate in non-vaccinated adolescents compared to that in vaccinated children during the omicron wave (13). Along with the increased number of child hospitalizations, the symptoms observed at COVID-19 onset were somewhat different when subsequently infected by the omicron variant or by the delta and previous variants (12,(25)(26)(27)(28). In Italian children, an analysis of online search trends suggested increased upper respiratory airway symptoms and possibly poorly tolerated fever, whereas dyspnea and anosmia/ageusia seemed less frequent (25). The latter finding could also be an indicator of the younger age of infected children and their inability to report such symptoms. Indeed, we found that hospitalized children were mostly aged under 5 years (90% during the omicron wave and 71% during the delta wave). We observed similar symptoms at the onset of infection to those reported by Cloete et al. in South African children (12). As such, we found that omicron caused significantly higher temperatures, diarrhea, and refusal to eat. The rate of underlying conditions was also in agreement with that reported by Cloete et al., with only 56% of the children hospitalized during the omicron wave vs. 79% during the delta wave. Similar to other studies, this study observed slightly more frequent upper respiratory airway symptoms during omicron waves (29,30). The higher susceptibility to target upper airways could be a real concern in young children, a population prone to severe upper airway infections due to a smaller respiratory tract (29). A recent retrospective cohort study showed results similar to ours, with adults infected by the omicron variant being younger, with less frequent comorbidities and dyspnea, and more frequent upper respiratory airway symptoms (31). Finally, although these symptoms were not observed in this study, others have highlighted the risk of convulsions and venous cerebral thrombosis in children infected with the omicron variant (12,15). Nevertheless, neurological signs have already been described in previous waves and should remain a cause of concern in children (27,28). The main limitation of this study is its retrospective and monocentric nature, which led to small number of inclusions. However, the scarce literature on the infections caused by the omicron variant in children makes it important to report the clinical features in the pediatric population and the specificities compared to previously described waves. In conclusion, during the omicron wave, there was a major increase in the number of hospitalizations of children for COVID-19. These children were mostly under 5 years of age, younger than during previous waves (27). Unfortunately, children under 5 years of age cannot benefit from the vaccination as available SARS-CoV-2 vaccines are recommended for older children. Although the BNT162b2 COVID-19 vaccine has been deemed safe, immunogenic, and effective in preventing COVID-19 infection (16), concerns have arisen regarding the risk of myocarditis, especially in adolescents (32)(33)(34)(35). 
This could explain why vaccination rates remain low in children aged 5-11 years (36). For example, in France, although the BNT162b2 COVID-19 vaccine has been available for children aged 5-11 years since December 22, 2021, only 5% had received at least one dose as of April 29, 2022. Thus, describing the continuous evolution of COVID-19 symptoms and severity in children is essential for improving vaccination adherence. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Local Ethics Committee of our institution, which waived the need for patients' consent (Study PED_COVID N° 20200717191204). Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS JT, BP, and HC were involved in the methodology, formal analysis, investigation, data curation, writing the original draft, reviewing and editing the manuscript, and designing of tables and graphs, and they verified the underlying data. AS, GA, LBe, LBi, AD-A, GT, and NN were involved in data provision and in reviewing and editing the manuscript. All authors had full access to all data in the study and accept responsibility for the decision to submit for publication.
v3-fos-license
2017-08-02T23:44:23.667Z
2016-11-28T00:00:00.000
17508838
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11239-016-1446-0.pdf", "pdf_hash": "a59f8b3319b009173c2a2d5bc8bed45fe3e55d4e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42650", "s2fieldsofstudy": [ "Medicine" ], "sha1": "a59f8b3319b009173c2a2d5bc8bed45fe3e55d4e", "year": 2016 }
pes2o/s2orc
Vitamin K antagonists: relative strengths and weaknesses vs. direct oral anticoagulants for stroke prevention in patients with atrial fibrillation Vitamin K antagonists (VKAs) have been the mainstay of anticoagulation therapy for more than 50 years. VKAs are mainly used for the prevention of stroke in patients with atrial fibrillation (AF) and the treatment and secondary prevention of venous thromboembolism. In the past 5 years, four new agents—the direct factor Xa inhibitors apixaban, edoxaban and rivaroxaban and the direct thrombin inhibitor dabigatran [collectively known as direct oral anticoagulants (DOACs) or non-VKA oral anticoagulants]—have been approved for these and other indications. Despite these new treatment options, the VKA warfarin currently remains the most frequently prescribed oral anticoagulant. The availability of DOACs provides an alternative management option for patients with AF, especially when the treating physician is hesitant to prescribe a VKA owing to associated limitations, such as food and drug interactions, and concerns about bleeding complications. Currently available real-world evidence shows that DOACs have similar or improved effectiveness and safety outcomes compared with warfarin. Treatment decisions on which DOAC is best suited for which patient to maximize safety and effectiveness should take into account not only clinically relevant patient characteristics but also patient preference. This article reviews and highlights real and perceived implications of VKAs for the prevention of stroke in patients with non-valvular AF, with specific reference to their strengths and weaknesses compared with DOACs. Electronic supplementary material The online version of this article (doi:10.1007/s11239-016-1446-0) contains supplementary material, which is available to authorized users. Introduction Vitamin K antagonists (VKAs) provide effective anticoagulation and have been the mainstay of anticoagulation therapy for more than 50 years. VKAs are mostly used as long-term anticoagulant therapy, including for the prevention of stroke in patients with atrial fibrillation (AF) and the treatment of venous thromboembolism (VTE). Warfarin currently remains the most frequently prescribed oral anticoagulant (OAC) for these indications, even with the approval within the past 5 years of four new agents, namely the direct factor Xa inhibitors apixaban, edoxaban and rivaroxaban and the direct thrombin inhibitor dabigatran [collectively known as novel/non-VKA/direct OACs (DOACs)]. On average, patients with AF have a five-fold higher risk of stroke than patients without AF, irrespective of whether they have paroxysmal or chronic AF [123]. AF-related strokes are associated with an approximately 50% increased risk of disability and a 60% increased risk of death at 3 months compared with strokes of other aetiologies [83]. The number of strokes caused by AF-related thromboembolisms may be even higher than currently thought because data from recent studies have shown that cryptogenic strokes (i.e. those without a well-defined aetiology) account for approximately 30% of ischaemic strokes [112]. Evidence suggests that up to 30% of patients with cryptogenic stroke may have AF [54,110].
Therefore, effective anticoagulation is vital for the long-term management of patients with AF at an intermediate or high risk of stroke [27,125]. VKAs reduce the rate of stroke by approximately 60% [65], whereas antiplatelet therapy is much less efficacious (reducing the event rate by approximately 20%) and has almost the same bleeding risk as oral anticoagulation therapy [2,85]. Despite guideline recommendations and clear evidence that oral anticoagulation therapy is indicated in patients with AF who have a CHA2DS2-VASc score of ≥1, several studies report that, on average, only 60% of eligible patients receive anticoagulation therapy. The pharmacological characteristics of VKAs, particularly their narrow therapeutic window combined with many drug-drug and drug-food interactions, necessitate regular coagulation monitoring and dose adjustments [3,5]. An important measure of anticoagulation control with VKAs is the percentage of time a patient spends within the target therapeutic range [i.e. international normalized ratio (INR) of 2.0-3.0]. A number of studies have shown that INR control of VKA therapy is suboptimal in routine clinical practice and, in general, patients spend approximately 40% of the time outside the recommended INR range. Poor INR control is associated with increased risk of stroke (INR <2.0) and bleeding (INR >3.0) [88,122]. Real-life evidence demonstrates that time in therapeutic range (TTR) also varies depending on the care setting, such as whether patients are managed by a dedicated anticoagulation clinic or by their general practitioner, cardiologist or neurologist [91]. DOACs have been developed to overcome the limitations of VKA therapy. These agents are now approved in many countries worldwide for the prevention of stroke and systemic embolism in patients with non-valvular AF, as well as for other thromboembolic disorders (Table 1), based on data from phase III, randomized clinical trials [11,20,22,38].
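For readers less familiar with the risk score referred to above, the following is a minimal sketch of the standard CHA2DS2-VASc calculation; the weights are the commonly published ones and the function is ours, not part of this article.

```python
# Minimal sketch (not from the article) of the CHA2DS2-VASc stroke risk score.
# Standard weights: C congestive heart failure/LV dysfunction (1),
# H hypertension (1), A2 age >= 75 (2), D diabetes (1),
# S2 prior stroke/TIA/thromboembolism (2), V vascular disease (1),
# A age 65-74 (1), Sc sex category female (1).

def cha2ds2_vasc(age: int, female: bool, chf: bool, hypertension: bool,
                 diabetes: bool, prior_stroke_tia: bool,
                 vascular_disease: bool) -> int:
    score = 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if female else 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_tia else 0
    score += 1 if vascular_disease else 0
    return score

# Example: a 72-year-old woman with hypertension scores 3
# (age 65-74 = 1, female = 1, hypertension = 1), i.e. above the >= 1 threshold
# for which oral anticoagulation is discussed above.
print(cha2ds2_vasc(72, True, False, True, False, False, False))  # -> 3
```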
DOACs were at least as good as, if not superior to, warfarin in terms of efficacy for the prevention of stroke in patients with non-valvular AF and also offered a better safety profile (all four major trials consistently indicated reduced rates of intracranial and fatal or life-threatening bleeding compared with the respective warfarin arm) [32,52,58,102,109]. Real-world evidence of DOACs is accumulating, and available data support the findings of the phase III clinical studies (for example [13,15,16,18,26,117,121]). In general, current guidelines recommend DOACs in preference to VKAs [27,114] or as an alternative to VKAs [74,94] for prevention of stroke or systemic embolism in patients with non-valvular AF. However, VKAs are still regarded as the 'gold standard' by some physicians and continue to be prescribed to many patients, including those who have had difficulties maintaining their INR within the target therapeutic range [48,88]. This article highlights real and perceived implications of VKAs for the prevention of stroke in patients with non-valvular AF, with specific reference to their strengths and weaknesses compared with DOACs. Furthermore, it provides practical guidance on which patients should be switched from VKA to DOAC therapy, which patients should stay on VKA therapy and which DOAC should be given to which patient. Finally, this paper discusses the most suitable overall approach to reducing the burden of AF-related stroke. Characteristics of vitamin K antagonist therapy: why it works and areas of inadequacy The pharmacological characteristics of different VKAs, such as warfarin, phenprocoumon and acenocoumarol ( Table 2) are associated with several advantages and practical limitations [3,44]. VKAs have several inherent advantageous characteristics. They are not eliminated by the kidneys and, therefore, can be used in patients with severe renal impairment. Moreover, the need for regular INR monitoring encourages regular physician-patient contact despite being inconvenient and imposing additional costs. However, although regular physician visits may be beneficial from a medical point of view, poor medication adherence is usually attributable to multiple, interlinked factors and there is no evidence that regular physician visits alone can increase patient adherence [23]. In the case of a missed VKA dose, patients are at less immediate risk of a thrombotic event than patients missing a dose of DOAC, and non-adherent patients may benefit from the slow offset of action. However, (similar to initiation of therapy) reinitiating therapy after missing several doses of a VKA may actually result in a profound prothrombotic state [3,8]. Many physicians are highly familiar with the management and the responsible use of VKAs. Moreover, drug costs of VKAs are significantly lower than those of DOACs. Therefore, physicians may be hesitant to prescribe any of the DOACs. On the downside, VKAs have an indirect anticoagulant mechanism of action, impairing the synthesis of several vitamin K-dependent coagulation factors (Fig. 1), which results in a slow onset and offset of the anticoagulant effect. On initiation, VKAs are inherently prothrombotic (a fact often overlooked by physicians) because they inhibit the natural anticoagulant proteins C and S faster than inhibiting the coagulation factors X, IX, VII and II: this creates a temporary imbalance in favour of procoagulation factors [3,8]. 
In a large case-control study in more than 70,000 patients with AF, warfarin was associated with a 71% increased risk of stroke in the first 30 days of treatment compared with longer periods of treatment [8]. Hence, bridging therapy with a fast-acting, parenteral anticoagulant (e.g. enoxaparin, unfractionated heparin) is necessary on initiation of VKA therapy. For surgery or other interventional procedures, the slow offset of action may delay the procedure. Furthermore, bridging therapy with a fast-acting anticoagulant may also be necessary after the procedure to ensure efficient anticoagulation. VKAs have a narrow therapeutic range (Fig. 2) and, therefore, require regular coagulation monitoring and dose adjustments in some patients to keep the anticoagulation intensity within the therapeutic range [3]. Data show that patients on VKAs are effectively anticoagulated only approximately 60% of the time, or even less in some countries [6,88]. Keeping patients within the target therapeutic range is further complicated by VKAs having multiple food and drug interactions [3]. These factors can have a significant impact on patients' daily lives, such as considerable time spent in the clinic for coagulation monitoring and dietary restrictions, all of which may reduce patients' quality of life [3]. Furthermore, routine anticoagulation monitoring confers additional costs: the cost of the test itself, travel, nurse visits, missed work and the increased workload for physicians and other healthcare staff [1].
Characteristics of the direct oral anticoagulants: what renders them so attractive and what are their limitations?
The pharmacological characteristics of DOACs provide many practical advantages over VKA therapy (Table 2). Direct targeting of factor Xa or thrombin allows for a much faster effective anticoagulation effect, within 0.5-4 h [51] (Fig. 2).
Table 1. Indications and dosing regimen of DOACs in the EU [11,20,22,38]. Abbreviations: ACS, acute coronary syndrome; AF, atrial fibrillation; bid, twice daily; DOAC, direct oral anticoagulant; DVT, deep-vein thrombosis; od, once daily; PE, pulmonary embolism; VTE, venous thromboembolism. Notes: (a) not recommended in patients with CrCl <15 mL/min; (b) contraindicated in patients with CrCl <30 mL/min; (c) started with a half dose 1-4 h after completion of surgery, followed by full doses from the next day onwards; reduced dose of 150 mg od (taken as two tablets of 75 mg) in patients with one or more of the following: CrCl 30-50 mL/min, concomitant verapamil, amiodarone or quinidine, or age ≥75 years; (d) reduced dose of 30 mg od in patients with non-valvular AF or VTE plus one or more of the following clinical factors: CrCl 15-50 mL/min, low body weight ≤60 kg, or concomitant use of the P-glycoprotein inhibitors cyclosporine, dronedarone, erythromycin or ketoconazole; (e) after the initial dosing period of 15 mg bid for 3 weeks, a reduced dose of 15 mg od should be considered if the patient's assessed risk for bleeding outweighs the risk for recurrent VTE; (f) reduced dose of 110 mg bid in patients with non-valvular AF or VTE aged ≥80 years or receiving concomitant verapamil; this reduced dose should also be considered, based on individual assessment of thromboembolic and bleeding risk, in patients aged 75-80 years, patients with CrCl 30-49 mL/min, patients with gastritis, oesophagitis or gastroesophageal reflux, and other patients at increased risk of bleeding; (g) reduced dose of 2.5 mg bid in patients with non-valvular AF and serum creatinine ≥1.5 mg/dL.
Moreover, the kinetics of DOACs (e.g. rivaroxaban [78]) closely mimic those of the low-molecular-weight heparin enoxaparin. Therefore, in contrast to VKAs, bridging with a parenteral anticoagulant is not necessary with the DOACs [66]. DOACs also have a much shorter half-life compared with VKAs, making bridging to interventions or surgery obsolete [67]. DOACs have predictable pharmacokinetics and pharmacodynamics and a lower potential for food and drug interactions [11,20,22,38]. These agents can, therefore, be given at fixed dosing schedules without the need for dietary restrictions or routine coagulation monitoring. However, the lack of the requirement for routine monitoring does not negate the need for regular physician-patient contact, and patients should schedule regular visits. Although the frequency of these visits should be determined by bleeding risk (HAS-BLED score), age and renal function, patients are recommended to return every 3 months for a review of their treatment [67]. Measurement of the anticoagulation effect and/or drug levels may be helpful in certain clinical circumstances, such as in the event of suspected overdose, during bleeding events, prior to urgent surgery, in patients with deteriorating renal function or when determining the use of thrombolysis. Unlike VKAs, DOACs are eliminated renally, albeit at different rates; renal impairment affects exposure and the associated risk of bleeding. Renal clearance of the absorbed active drug is approximately 27% for apixaban [22], 35% for rivaroxaban [92], 50% for edoxaban [38] and >80% for dabigatran [46]. Based on these characteristics, apixaban, edoxaban and rivaroxaban are not recommended in patients with AF who have creatinine clearance (CrCl) <15 mL/min [11,22,38], and dabigatran is contraindicated in patients with CrCl <30 mL/min [20]. Furthermore, edoxaban should be used with caution in patients with high CrCl because of reduced efficacy [38]; in the US, edoxaban should not be used in patients with CrCl >95 mL/min [39].
Table 2. Overview of pharmacological characteristics of direct oral anticoagulants and vitamin K antagonists [3,59,66,71,106,119]. Abbreviations: CYP, cytochrome P450; NR, not reported; P-gp, P-glycoprotein; R-, (R)-enantiomer; S-, (S)-enantiomer; tmax, time to reach maximal plasma concentration. Notes: (a) rivaroxaban 20 mg: 66% under fasting conditions (mean area under the plasma concentration-time curve increased by 39% when given with food); (b) the 15 and 20 mg doses of rivaroxaban should be taken with food to enhance their absorption.
Vitamin K antagonists: performance in real-world practice
Time in therapeutic range and real-world effectiveness and safety
VKAs can only provide clinical benefit if the anticoagulation effect is kept within the therapeutic range (INR 2.0-3.0); poor INR control can lead to an increased risk of thromboembolism (INR <2.0) or bleeding (INR >3.0) [5]. TTR during VKA therapy is higher in controlled clinical studies than in daily practice, owing to their strict study protocols and regular follow-up with patients. Moreover, TTR control in daily clinical practice is also dependent on whether the patient is managed by a dedicated anticoagulation clinic or elsewhere, such as by a general practitioner, cardiologist or neurologist [91]. The rate of fatal and major bleeding events was low (0.25 and 1.1% per year, respectively) in patients whose anticoagulation with warfarin was managed by an anticoagulation clinic achieving a median TTR of 68% [91,101].
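TTR itself is usually derived from successive INR measurements by linear interpolation (the Rosendaal method). The article does not describe this calculation; the sketch below is an illustrative approximation, with invented dates and INR values, of how such a figure can be obtained.

```python
# Sketch of a time-in-therapeutic-range (TTR) calculation using Rosendaal-style
# linear interpolation between INR measurements. Dates and INR values are
# invented for illustration only.
from datetime import date

def ttr(measurements, low=2.0, high=3.0):
    """measurements: list of (date, INR) pairs sorted by date. Returns TTR as a fraction."""
    in_range_days = 0.0
    total_days = 0.0
    for (d0, inr0), (d1, inr1) in zip(measurements, measurements[1:]):
        days = (d1 - d0).days
        if days <= 0:
            continue
        total_days += days
        for step in range(days):
            # Linearly interpolate the INR for each day of the interval.
            inr = inr0 + (inr1 - inr0) * step / days
            if low <= inr <= high:
                in_range_days += 1
    return in_range_days / total_days if total_days else 0.0

inrs = [(date(2016, 1, 1), 1.8), (date(2016, 1, 15), 2.6), (date(2016, 2, 12), 3.4)]
print(f"TTR = {ttr(inrs):.0%}")  # prints: TTR = 60%
```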
The inability to maintain TTR is well reported: in the Registry of the Canadian Stroke Network, 74% of patients with known AF who were taking warfarin at the time of ischaemic stroke had sub-therapeutic anticoagulation [53]. Additional evidence emphasizes that stroke prevention with a VKA is effective in patients who have a good individual mean TTR (>75%) [91]. Data from the GARFIELD-AF registry indicate that only 29% of VKA-treated patients had good anticoagulation control, defined as a TTR ≥70%, and that heavy alcohol use was associated with poor anticoagulation control (TTR ≤60%) [118]. Patients with poor control had a significantly higher risk of death [hazard ratio 2.87; 95% confidence interval (CI) 1.97-4.19] and stroke/systemic embolism (hazard ratio 1.98, 95% CI 1.13-3.47) than those with a TTR >60% [118]. Data collected outside of anticoagulation clinics (and, therefore, most likely in patients with suboptimal anticoagulation control) suggest that real-world effectiveness fails to reproduce the efficacy of VKAs seen in clinical studies. However, warfarin has been shown to prevent stroke and systemic embolism more effectively than placebo or acetylsalicylic acid. A large meta-analysis of clinical study data demonstrated a 62% reduction in the risk of stroke and systemic embolism with warfarin therapy compared with placebo/acetylsalicylic acid therapy [64]. A real-world Canadian study in patients with AF showed that warfarin-treated outpatients had a significantly lower risk of stroke compared with patients who did not receive any antithrombotic treatment (risk ratio 0.31) [29]. The risk of bleeding is much higher in clinical practice than the rates reported in clinical studies. A large cohort study in 125,195 patients with AF demonstrated a high risk of haemorrhage during the first 30 days of warfarin therapy (11.8% per year): considerably higher than the rates of 1-3% reported in randomized controlled trials [55]. In the GARFIELD-AF registry, treatment at an anticoagulation clinic or thrombosis centre was associated with a better TTR compared with other settings (proportion of patients with TTR >60%: 57.1 vs. 46.2%) [118]. There are efforts to simplify the management of VKAs by way of patient self-testing and self-management. Initial clinical trials had encouraging outcomes [19]; however, patients in these trials had a high level of education, which is not necessarily representative of all patients encountered in daily clinical practice. Finally, studies trying to optimize the benefit-risk ratio of VKAs by lowering the INR range to 1.5-2.5 failed, showing inferior efficacy but similar bleeding compared with standard-dose VKA therapy [103].

Patient preference and compliance with vitamin K antagonist therapy

Limitations and inconveniences that both physicians and patients associate with VKA therapy contribute to its under-prescription in patients at high risk of stroke and systemic embolism. In the GARFIELD-AF registry, 38.0% of patients with a CHADS2 score ≥2 did not receive anticoagulant therapy; 7.2% of patients with AF and CHADS2 ≥2 had refused treatment for various reasons, including the inconvenience of regular blood tests, dietary restrictions, bleeding risk and an under-appreciation or lack of knowledge regarding the risk of stroke. As well as being unwilling to start VKA therapy, many patients with AF who are initiated on VKA therapy discontinue or are non-adherent [47,56,77,98].
For example, of 125,195 patients newly diagnosed with AF in Canada from 1997 to 2008, 9% did not collect their second prescription of warfarin within the first half year, and 32% discontinued therapy within 1 year, rising to 43% at 2 years and 61% at 5 years [56]. Similarly, in a US study, more than one in four new warfarin starters discontinued therapy within a year [47]. In another study, 40% of patients were non-adherent to VKA therapy (>20% of days with missed doses or >10% of days on which extra doses were taken in addition to the prescribed dose), and this non-adherence was significantly associated with poor anticoagulation control [77].

Underuse and inappropriate use of anticoagulation therapies

Large registries published between 2005 and 2009 by the European Heart Survey, the German Competence NETwork on AF (AFNET) and the Canadian Stroke Network suggest that 30-60% of patients with AF who are eligible according to guidelines are not prescribed anticoagulation therapy [53,93,97]. In the Registry of the Canadian Stroke Network, only 10% of patients with acute stroke and known AF were therapeutically anticoagulated at the time of hospital admission [53]. Underuse of anticoagulation in these patients had unfavourable implications: approximately 80% of the resulting strokes were disabling or fatal [53]. The global GARFIELD-AF registry (2009-2011) found that 34% of patients with a CHADS2 score ≥2 received antiplatelet therapy [76]. Of patients with a CHADS2 score ≥2 who received anticoagulation therapy, 62% received a VKA. In addition, 43% of patients with a CHADS2 score of 0 received anticoagulation therapy. Taken together, there appears to be underuse of anticoagulation therapy in patients at moderate to high risk of stroke and systemic embolism and overuse in patients at low risk, demonstrating that, in real-life practice, prescribed therapy is often not based on evidence-based risk schemes and guidelines [76]. In almost half of the cases (48.3%) in which VKA therapy was not prescribed, this was the physician's choice and not based on guidelines or contraindications to therapy; the physician's reasons included concerns over bleeding risk (7.4%), concerns over the risk of falling (6.5%), concerns over patient compliance (5.3%) and perceived low risk of stroke (4.1%). Many of the concerns given as reasons for not prescribing a VKA are not supported by actual data. For example, a prospective study showed that patients on OACs at high risk of falls did not have a significantly increased risk of major bleeding events [40]. Poor TTR in everyday clinical practice, coupled with low adherence rates or high discontinuation rates and a general underuse of VKA therapy, supports the need for alternative oral anticoagulation options that are easier to manage and more convenient than VKA therapy. In the next sections, we review clinical studies and real-world data, weighing the practical advantages of DOAC therapy against the aforementioned limitations of VKAs.

Vitamin K antagonists vs. direct oral anticoagulants: outcomes of phase III studies and real-life evidence

Efficacy and safety

Results from phase III trials of DOACs, which enrolled a wide range of patients with AF worldwide, showed that all DOACs are at least as effective as warfarin, with similar or lower rates of major bleeding [32,34,52,58,102]. Importantly, a direct, head-to-head comparison of these studies is not feasible because the study designs and study populations were different.
In a meta-analysis of all four DOACs in phase III trials for stroke/systemic embolism prevention in patients with AF vs. warfarin, these agents reduced the risk of haemorrhagic stroke by 51% and the risk of intracranial haemorrhage by 52% [109]. This favourable benefit-risk profile extends to many subgroups and ethnicities, including the Asian population, in which the rate of intracranial bleeding is reduced by up to 80% (as reviewed elsewhere [31,72,73]). Conversely, the risk of gastrointestinal bleeding was 25% higher with DOACs than with warfarin, owing to bleeding events with dabigatran (150 mg twice daily), edoxaban (60 mg once daily) and rivaroxaban (20 mg) (Tables 3, 4) [32-34, 43, 52, 58, 102, 109]. There are several studies comparing the real-life effectiveness and safety of VKAs with DOACs (mainly dabigatran or rivaroxaban vs. warfarin; data for apixaban are emerging; data for edoxaban are currently lacking; Table S1 in the electronic supplementary material). Published studies to date demonstrate similar or improved effectiveness with DOACs compared with VKAs (Table S1 in the electronic supplementary material). Recent publications showed discrepancies in real-world effectiveness and safety outcomes with DOACs compared with previously published database analyses or compared with phase III clinical trial results (Table 3). Real-life evidence from the international, non-interventional, observational phase IV XANTUS study demonstrates that rates of stroke and major bleeding were low in patients receiving rivaroxaban [26]. Data from the Dresden NOAC Registry suggest that rates of major bleeding may be lower with rivaroxaban, apixaban and dabigatran therapy compared with VKA therapy [13,16,18,90]. Moreover, these data show that real-life rates of major bleeding with rivaroxaban were similar (Dresden NOAC Registry [16,102]) or lower (XANTUS [26]) compared with findings from ROCKET AF [16,102]. Other observational studies mainly demonstrate that rivaroxaban and dabigatran have similar or reduced rates of major bleeding compared with VKAs, and reflect the decreased incidence of intracranial haemorrhage and increased incidence of gastrointestinal bleeding [30].

Adherence, persistence and discontinuation

Adherence is defined as the extent to which the patient acts in accordance with the prescribed interval and dose of the dosing regimen and can also be defined as the percentage of doses taken as prescribed [36]. Persistence measures the duration of drug therapy during which the patient takes the medication without exceeding the permissible gap (usually 60 days). Two retrospective US database analyses showed that patients with AF were significantly more persistent with rivaroxaban than with warfarin, with patient persistence with warfarin dropping to <70% at 6 months of therapy [80,95]. A retrospective US database analysis demonstrated that persistence was higher with dabigatran than with warfarin at 6 months (72 vs. 53%) and 1 year (63 vs. 39%) [126]. This study also showed that patients with a low-to-moderate stroke risk (CHADS2 <2) or with a higher bleeding risk (HEMORR2HAGES >3) were more likely to discontinue treatment than patients with a high stroke risk or lower bleeding risk [126].
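As a rough illustration of how the adherence and persistence measures defined above can be derived from dispensing records, the following hypothetical sketch computes the proportion of days covered (PDC) and a persistence flag with a 60-day permissible gap; the data layout, function names and thresholds are assumptions, not taken from the cited analyses.

```python
# Hypothetical sketch of adherence (proportion of days covered, PDC) and persistence
# (no untreated gap longer than a permissible gap); not taken from the cited analyses.
def proportion_of_days_covered(fill_days, days_supply, follow_up_days):
    """fill_days: day offset of each dispensing; days_supply: days of drug supplied per fill."""
    covered = set()
    for start, supply in zip(fill_days, days_supply):
        covered.update(range(start, min(start + supply, follow_up_days)))
    return len(covered) / follow_up_days

def is_persistent(fill_days, days_supply, follow_up_days, permissible_gap=60):
    """Persistent if no gap between running out of drug and the next fill exceeds the gap."""
    run_out = 0
    for start, supply in zip(fill_days, days_supply):
        if start - run_out > permissible_gap:
            return False
        run_out = max(run_out, start + supply)
    return follow_up_days - run_out <= permissible_gap

fills, supply = [0, 30, 95, 130], [30, 30, 30, 30]
print(proportion_of_days_covered(fills, supply, 180))  # 0.67; 'adherent' is often PDC >= 0.8
print(is_persistent(fills, supply, 180))               # True with the 60-day permissible gap
```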
When comparing persistence or adherence among DOACs, two retrospective analyses of different US databases demonstrated that use of the once-daily medication rivaroxaban was associated with significantly higher rates of persistence at 1-year follow-up or significantly higher adherence (percentage of patients who had a proportion of days covered ≥80% during their follow-up) than use of the twice-daily medication dabigatran [37,96].

Table 3 Main efficacy and safety results from the phase III clinical trials of the direct oral anticoagulants approved for prevention of stroke in patients with non-valvular atrial fibrillation. Values in bold indicate a statistically significant difference between the direct oral anticoagulant and warfarin. GI gastrointestinal, ICH intracranial haemorrhage, NR not reported, NMCR non-major clinically relevant, SE systemic embolism. (a) Intention-to-treat analysis. (b) Data with additional events as per [34], [33] or [43]. (c) Primary efficacy endpoint in ENGAGE-AF was time to adjudicated stroke or systemic embolic event. (d) Primary safety outcome in RE-LY and ARISTOTLE. (e) Major or minor bleeding (minor bleeding was any bleeding not considered to be a major bleeding event). (f) Primary safety outcome in ROCKET AF.

A Danish nationwide cohort study in approximately 3000 patients with non-valvular AF reported that over 75% of patients treated with dabigatran adhered to therapy more than 80% of the time (as measured by proportion of days covered) [57]. Published data on real-life adherence with edoxaban and apixaban are not yet available. In the phase III studies (across various follow-up periods), discontinuation rates were: significantly lower with apixaban compared with warfarin in ARISTOTLE; similar between rivaroxaban and warfarin in ROCKET AF and between edoxaban and warfarin in ENGAGE-AF; but significantly higher with dabigatran compared with warfarin in RE-LY, mainly owing to dyspepsia [32,52,58,102]. VKA discontinuation rates in real-life practice range from 25 to 38% at 1-year follow-up and are higher than those reported in controlled phase III studies (10-35% over a median follow-up period of 1.8-2.8 years) [13,32,52,58,95,102,109]. In an analysis of data collected from patients with AF in the Dresden NOAC Registry, discontinuation rates with dabigatran (25.8% per year) were similar to those observed with VKAs in daily practice, whereas discontinuation rates with rivaroxaban therapy (13.6% per year) were much lower than those with VKA therapy [12,15]. Persistence probabilities at 1 year were 53.1, 47.3 and 25.5% with rivaroxaban, dabigatran and VKA, respectively, and adherence with a high medication possession ratio (≥80%) was 61.4% for rivaroxaban and 49.5% for dabigatran [14]. Together, real-life data suggest that, in the long term, patients receiving DOACs have better protection against stroke or systemic embolism than patients receiving a VKA. In the US, 33-69% of all medication-related hospital admissions are estimated to be attributable to poor medication adherence, with the resulting costs of non-adherence being approximately $100 billion/year [100,115]. An analysis of adverse events based on hospital data identified warfarin as the medication most commonly implicated in the hospitalization of adults aged ≥65 years (33.3%) owing to adverse drug effects [24].
The true cost of vitamin K antagonist therapy

In addition to treatment effectiveness and safety, cost-effectiveness is another consideration in decision making by healthcare professionals who have several therapy options. VKAs are often perceived to have lower costs; however, although the costs of the drug itself are lower than those of DOACs, the true cost of VKA treatment needs to take into account the expenses related to the general management of therapy. These include routine coagulation monitoring, adverse clinical outcomes during therapy (such as bleeding and thromboembolic events) and the consequences of non-adherence. In clinical practice, the estimated mean numbers of hospitalization days, outpatient visits and AF-related hospitalizations associated with rivaroxaban are reported to be lower than those associated with warfarin [79,81,82]. Similar published real-world evidence is not yet available for apixaban, edoxaban or dabigatran. A cost-modelling analysis suggests that, based on the expected number of thrombotic or bleeding events avoided with use of DOACs vs. warfarin, medical costs are reduced when DOACs are used instead of warfarin/placebo for the prevention of stroke in patients with non-valvular AF or for the treatment of VTE [4]. However, a model simulation based on the Slovenian healthcare payer perspective using 2014 costs demonstrated that the cost-effectiveness of the DOACs vs. warfarin is highly sensitive to warfarin anticoagulation control [75]. At a TTR of 60%, warfarin was unlikely to be the cost-effective option (probability 1%). This probability rises with increasing TTR: at a TTR of 70%, warfarin was more cost-effective than DOACs in half of the simulations [75].

Reversal of anticoagulant effect and management of bleeding

There is currently limited clinical experience with specific reversal agents for the DOACs. However, although vitamin K is a direct, effective reversal agent for VKAs, a normal INR is generally only achieved over approximately 24 h, which would not help in the case of clinically important bleeding events such as intracranial haemorrhage [3,84]. Therefore, coagulation factor concentrates need to be administered in parallel with vitamin K to restore haemostasis quickly [3,84,111]. In most clinical situations, the short half-lives of the DOACs obviate the need for reversal, and standard procedures for bleeding management are normally sufficient to control bleeding events [28]. In fact, specific reversal agents for DOACs would be needed only rarely in daily clinical care. In exceptional clinical situations (such as life-threatening bleeding or emergency surgery associated with a high bleeding risk), coagulation factor concentrates such as prothrombin complex concentrate, activated prothrombin complex concentrate or recombinant factor VIIa may be considered [11,20,22,38]. However, there is limited clinical experience with these agents in patients with bleeding events. Haemostatic agents such as prothrombin complex concentrate or recombinant factor VIIa may increase the risk of thromboembolism if they are administered when the plasma concentration of the anticoagulant is low [124]. Therefore, the risk associated with the use of these agents has to be balanced against their potential for bleeding control. Recent months have seen the clinical approval of idarucizumab, a specific reversal agent for dabigatran, based on the results of a phase III study [104,105] (Table 5).
Moreover, results with a specific reversal agent for factor Xa inhibitors, andexanet alfa (http://www.clinicaltrials.gov, NCT02220725 and NCT02329327), have shown that it has the potential to quickly and effectively reverse the anticoagulation effect of rivaroxaban and apixaban [113]. Andexanet alfa is expected to be approved in 2017 [107]. PER977 (Perosphere) is being assessed as a reversal agent for edoxaban in clinical trials, with promising preliminary results [7,35] (http://www.clinicaltrials.gov, NCT02207257).

Table 5 (excerpt) PER977: a small, synthetic, water-soluble, cationic molecule designed to bind specifically to UFH and LMWH through non-covalent hydrogen bonding and charge-charge interactions, and which similarly binds edoxaban, rivaroxaban, apixaban and dabigatran; targets: all DOACs, UFH and LMWH; status: in development [7,35].

When to switch and when not to switch from vitamin K antagonists to direct oral anticoagulants

Patients who have been initiated on VKA therapy can be switched to a DOAC (see the individual Summary of Product Characteristics for further details [11,20,22,38]). This switch should be based on a clinical benefit-risk assessment. Specific reasons for switching may include, but are not limited to, poor INR control, stroke/systemic embolism or serious bleeding during VKA therapy, poor compliance (e.g. relating to the inconveniences of VKA therapy), patient preference to switch to DOAC therapy, reduced long-term costs and fear of bleeding (particularly within the fragile patient population). Switching strategies are reviewed in the updated practical guide of the European Heart Rhythm Association (EHRA) [66]. The effectiveness and safety of switching patients with AF from a VKA to DOAC therapy have been demonstrated in the Dresden NOAC Registry [17,90]. Data from this registry regarding patients who switched from a VKA to rivaroxaban or dabigatran for stroke prevention or VTE treatment suggest that the potential for bleeding should be monitored carefully in the first few days after the transition, during which residual VKA activity may remain [90]. One study reported that only 75% of VKA patients had an INR measurement documented before they were started on a DOAC; on average, the DOAC was started within 2-5 days after the last intake of the VKA. At the 30-day follow-up, the rates of major cardiovascular events (0.8%; 95% CI 0.3-1.8) and major bleeding complications (0.3%; 95% CI 0.0-1.0) were low, with a rate of any bleeding of 12.2% (95% CI 9.8-14.8) in patients with and without INR testing of the residual VKA effect [17]. A Danish analysis demonstrated the importance of adherence to the switching protocols outlined in the Summary of Product Characteristics for dabigatran [116]. This study evaluated real-world outcomes in patients with AF: there was an increased risk of thromboembolism and bleeding with dabigatran in previous VKA users. The authors of this study cautiously interpreted these unexpected results as reflecting patient selection and drug-switching practices. Dabigatran use in VKA-naïve patients was reported to be safe [116]. The EHRA practical guide provides a schematic overview of switching protocols from a VKA to a DOAC and vice versa and also emphasizes the importance of adherence to the established switching strategies [66]. Some patients, especially those with good INR control and TTR, may prefer to continue with VKA therapy instead of switching to a DOAC. Patients may also benefit from continued VKA therapy, including those with contraindications to DOAC therapy.
For example, patients with end-stage kidney disease (CrCl <15 mL/min) have significantly increased risks of stroke and bleeding compared with patients with normal kidney function [99]. End-stage kidney disease is also associated with reduced activity of cytochrome P450 2C9, leading to lower warfarin dosing requirements [41]. Patients with valvular AF, as opposed to non-valvular AF (particularly those with mechanical valves), should be treated with a VKA because DOACs are not approved in these patients [11,20,22,38]. Moreover, the outcomes of the RE-ALIGN trial, which assessed dabigatran vs. warfarin in patients with mechanical valves, reinforced the recommendations of the current guidelines against the use of DOACs in these patients. This trial had to be terminated prematurely owing to an excess of thromboembolic and bleeding events among patients receiving dabigatran (150, 220 or 300 mg twice daily) [42].

Which direct oral anticoagulant for which patient?

In the absence of a head-to-head trial of the DOACs (no such trial is planned or ongoing), no direct answer can be given to this question. The overall aim should be that all patients with AF who are indicated for anticoagulation receive appropriate therapy. As discussed at the beginning of this article, a substantial proportion of patients with AF who should be receiving anticoagulation are not receiving OAC therapy of any form. DOACs, which overcome several of the limitations of VKAs, offer alternative and potentially preferred therapy options [27], both in treatment-naïve patients with newly diagnosed AF in need of anticoagulation therapy and in patients with AF at risk of stroke and systemic embolism who are not receiving appropriate therapy or who have poorly controlled VKA therapy. The choice of which DOAC is the right agent for which patient, starting with the choice between a direct thrombin inhibitor and a factor Xa inhibitor, should be based on the pharmacokinetics/pharmacodynamics and on integration of the clinical data with the patient's characteristics. The following recommendations, based on the EHRA practical guide, can be used for decision making [66]. In patients with renal impairment, factor Xa inhibitors (rivaroxaban, apixaban or edoxaban) should be preferred over dabigatran. Similarly, factor Xa inhibitors have demonstrated no change in the benefit-risk profile in elderly patients and in patients with pronounced cardiovascular co-morbidity compared with other patient groups (with especially favourable data for rivaroxaban [60,63,86]). Patients with a history or high risk of gastrointestinal bleeding may have a lower risk of bleeding complications with apixaban and low-dose edoxaban than with dabigatran, rivaroxaban or high-dose edoxaban; however, dabigatran, rivaroxaban and warfarin may have similar rates of gastrointestinal bleeding in real-life clinical practice [30]. Furthermore, there is some evidence that patients at high risk of ischaemic stroke may benefit from a direct thrombin inhibitor (i.e. dabigatran) [108]. More data from real-life studies will shed light on which agent provides the best benefit-risk ratio for which patient.

Conclusions

The availability of DOACs provides an alternative management option for patients with AF, especially when the treating physician is hesitant to prescribe a VKA owing to its associated limitations, such as routine coagulation monitoring and dose adjustments, food and drug interactions and concerns about bleeding complications.
Overall, currently available real-world evidence shows that DOACs have similar or improved effectiveness and safety outcomes compared with warfarin. With regard to which DOAC is best suited to which patient to maximize safety and effectiveness, more prospective real-world data are required because database studies show divergent outcomes. The recommendations in the EHRA practical guide suggest taking into account not only clinically relevant patient characteristics but also patient preferences. Adherence to therapy is an important factor in achieving the best outcomes, and there is some evidence that patients adhere better to once-daily medications than to those taken twice daily.
Age and Gender Impact on Heart Rate Variability towards Noninvasive Glucose Measurement

Heart rate variability (HRV) parameters can reveal the performance of the autonomic nervous system and possibly estimate the type of its malfunction, for example, to detect the blood glucose level. Therefore, we aim to find the impact of other factors on the proper calculation of HRV. In this paper, we research the relation between HRV and the age and gender of the patient in order to adjust the thresholds of the noninvasive glucose estimator that we are developing and improve its performance. While most of the literature so far addresses healthy patients and only short- or long-term HRV, we apply a more holistic approach by including both healthy patients and patients with arrhythmia and different lengths of HRV measurements (short, middle, and long). The methods used to determine the correlations are (i) point biserial correlation, (ii) Pearson correlation, and (iii) Spearman rank correlation. We developed a mathematical model of a linear or monotonic dependence function and machine learning and deep learning models, building a classification detector and level estimator. We used electrocardiogram (ECG) data from 4 different datasets consisting of 284 subjects. Age and gender influence HRV with a moderate correlation value of 0.58. This work elucidates the intricate interplay between individual input and output parameters compared with previous efforts, where correlations were found between HRV and blood glucose levels using deep learning techniques. It can successfully detect the influence of each input.

Introduction

Heart rate variability (HRV) is defined by the heart rate variations caused by the periodic change of heart rhythm over time in the absence of physiological activity, postural changes, and emotional stimuli. This establishes HRV as a noninvasive marker of autonomic nervous system (ANS) function [1]. Several papers have presented work on the relation between ANS and HRV [2,3]. Since the ANS influences cardiac control, changes in HRV are expected due to an inflammatory response (protection against infection) or ANS blockade. Additionally, increased physical activity [4] or excitement is reflected in an increased heart rate while more oxygen is delivered to the body. Dependencies of the ANS on gender have also been reported [5]. It has so far been concluded [6] that, with aging, the ANS responds more slowly to heartbeat-increasing stimuli (e.g., physical activity), which is one possible reason for the negative correlation between HRV and age. We have previously shown that HRV parameters can predict glucose levels by noninvasive methods [7]. This has brought the concept one step closer to reality and has once more established point-of-care (POC) HRV measurements as one of the most promising candidates for noninvasive glucose testing. Nevertheless, the actual applicability of the technique depends on the degree to which other influences on HRV, such as age and gender, can be systematically eliminated. HRV is associated with a high risk of heart disease and death in different age groups [8], and understanding its variability is important to increase its applicability as a diagnostics/prevention tool. HRV parameters vary with multiple factors, including gender and age. This paper shows the correlations between short-, middle- and long-term HRV measurements and those parameters in healthy patients and patients with arrhythmia and diabetes.
To be more accurate in estimating HRV, we need to apply correction factors in calculations.This research's primary goal is to identify the dependencies of HRV on age and gender to build better glucose classifications in future research. The time-domain HRV parameters are SDNN, ASDNN, SDANN, NN50, pNN50, and rMSSD, and the nonlinear HRV parameters consist of SD1, SD2, and SD1/SD2.In addition to the standard calculation methods, we implement HRV methods that eliminate the influence of various heart arrhythmias, which prevent proper indication of the ANS performance [9].The methods are based on the elimination of extreme changes in the NN intervals that differ by more than 15%, calculated by concatenating the clean segments or averaging the corresponding HRV on the complete interval or smaller time intervals (1 h, 30, 20, 10, 5, 2, and 1 min). Analyzing the individual correlations of gender, age, and heart condition data on the HRV parameters, we use the trained models on a fourth, proprietary dataset.Next to the patient as data mentioned above, this dataset also contains patients' blood glucose data, more specifically, the averaged 3-month HbA1c levels.We aim to find the dependence function of each HRV parameter using trend analysis and deep learning regression models. Our focus is on investigating these influences while, for the first time compared with other authors, taking into account a combination of the following: • Inclusion of healthy subjects and subjects with arrhythmia.A comprehensive overview of related work shows that most research was conducted on healthy subjects.This paper includes research on those who have experienced heart arrhythmia (irregular patterns in the heartbeat or heart rhythm) or are being cared for.• Inclusion of short-term, medium-term, and long-term HRV.Most researchers investigate only long-term HRV, and we aim to include various sizes of short-term HRV in this research.• Inclusion of specific HRV calculation methods.In our study, we include particular HRV measurement methods [9] to extract the essential performance of ANS, eliminating the influence of heart damage that might influence the HRV results. Using the abovementioned approach, we hope to elucidate the origins of discrepancies in the conclusions of some previously published works where one of these factors might not have been considered. This research aims to present the data distribution and find correlations between HRV parameters and age, gender, and other related health conditions, such as glucose level regulation ability. Research hypotheses: • RH1: There is a significant negative correlation between HRV parameters (SDNN, RMSSD) and age, irrelevant of heart arrhythmia.• RH2: HRV (SDNN, RMSSD) is lower for women than for men, irrelevant of heart arrhythmia.• RH3: The age-and gender-related decline in HRV (SDNN, SADNN, RMSSD) is more pronounced in patients with heart arrhythmia than in healthy individuals. Research questions: • RQ1: What is the difference in the correlation between HRV parameters and age/gender over time for healthy subjects and patients with arrhythmia?• RQ2: Which HRV parameter is the most representative to estimate age and gender dependence for healthy subjects and patients with arrhythmia?• RQ3: How does the relationship between HRV parameters and age/gender change in patients with arrhythmia? The results from this research may bring us one step closer to noninvasive blood sugar measurement with point-of-care (POC) devices and commercial ECG measurement accessories. 
Related Work This section includes a structured and in-depth discussion of the related works, categorized into subsections: HRV, glucose measurement, and machine learning (ML) methods, elucidating the support they provide for our research hypotheses and the rationale behind our method selection. Heart Rate Variability Several research papers address the HRV relation to a combination of age and gender.Age has been consistently shown to be negatively correlated with HRV, albeit minor differences in the amount and persistence of this correlation with age can still be found. Umetani et al. [10] present a significant negative correlation between aging and HRV parameters (pNN50, SDNN, and rMSSD particularly) with 95% confidence.The authors explain an HRV decline with aging, showing pNN50 as the primary contributor with a rapid decline, while rMSSD and SDNN decline gradually.This study also indicates that HRV decreases slowly with aging and at a different rate in male and female subjects. A similar pattern of HRV decreases with aging was found to be steeper for men (1.07/year) than for women (0.68/year) (p < 0.05) [11] without significant gender difference in the association of heart rate to BMI.Supporting these two works is the research by Jensen et al. [12], who also concludes that HRV parameters are negatively correlated with age. A cross-sectional survey of 4580 healthy Chinese men and women aged 20-85 years was performed to detect correlations of age, gender, and BMI with HbA1c, which can be derived from and is correlated with HRV [13].The study shows that glycohemoglobin levels (HbA1c) increased with age among all groups divided into quartiles. HRV predictability was the focus of research realized by Voss et al. [14].They show that HRV increases in the elderly subject group (age 50-74 years) compared to the younger subject group (25-49 years) and discuss that significant modifications of the HRV indices in terms of age disappeared within the last two age decades (age range 55-74 years).General dependence on gender for many HRV indices, particularly from FD, STSD, SPPA, IA, ACOR, and AMI (highly significant), was proven in young subjects.It is shown that those dependencies disappear with increasing age.According to HRV analysis methods, the influences of age and gender on HRV indices differed partly, whereas in general, the gender influences were considerably weaker than the age influences. Correlation Analysis Several research studies analyze the correlation of HRV between men and women.This was usually conducted based on a subset of the parameters with the highest influence on HRV (SDNN, SDANN, ASDNN, RMSSD, NN50, pNN50, SDNNi, HF, LF).Ramaekers et al. [15] explain that cardiac autonomic modulation, as determined by HRV, is significantly lower in healthy women than in healthy men.HRV difference by gender was also concluded by Antelmi et al. [11], finding that HF, rMSSD, and pNN50 measures were more significant among women compared with men (p < 0.05) in all age groups. Jensen et al. [12] found that women had lower HRV than men, addressing that the SDNN time-domain parameter was lower in women than in men.Although lower HRV was concluded among women compared with men, analyzing all time-domain parameters, only SDNNi decreased significantly (p < 0.05) in females.They also took into account frequency-domain parameters, showing that only LF was especially (p < 0.05) decreased in females [16].Interestingly, these gender differences have been shown to diminish after the age of 50 [10]. 
HRV was also shown to be susceptible to other other factors, both innate and acquired.Obesity and weight loss in correlation with HRV parameters were analyzed by Karason et al. [17].The study showed that obese subjects had significantly lower overall HRV (SDNN), which was due to a reduction in both long-term HRV (SDANN) and, in particular, short-term HRV (SDNN index).The study covered a weight loss group, showing a significant decrease in heart rate (8% prolongation of mean RR) and an increase in overall HRV (SDNN). HRV was also analyzed concerning race, and initial research concludes racial differences that show Afro-Caribbean subjects having a lower sympathetic drive than agematched Caucasians [18,19]. Several studies show an inverse correlation of HRV with heart rate itself [11].A similar influence of heart rate on HRV has already been demonstrated [20,21]. All studies focus their research on healthy patients.Most of the studies focus solely on long-term measurements, and to our knowledge, only two studies [14,17] analyze the impact of short-term HRV, measuring over periods of less than or equal to 30 min. Machine Learning Methods for Glucose Measurement ML techniques have gained significant traction in healthcare, offering powerful tools for analyzing complex medical data.In the HRV analysis context, ML algorithms have been employed to extract meaningful insights from HRV data and improve predictive models.Several studies have demonstrated the effectiveness of ML methods in HRV-based risk assessment and disease diagnosis [22,23]. Several recent studies have investigated using HRV for noninvasive glucose monitoring.Gusev and Poposka [24] used ML and neural network methods to correlate HRV with glucose levels, achieving a mean absolute error of 10.5 mg/dL.This means that the average difference between the predicted and actual glucose levels was 10.5 mg/dL.A mean absolute error (MAE) of 10.5 mg/dL is considered acceptable for noninvasive glucose monitoring. Avci et al. [25] also used ML techniques to develop a noninvasive glucose monitoring system based on HRV, achieving a mean absolute error of 12.3 mg/dL.This is slightly higher than the error achieved by Gusev and Poposka, but it is still within an acceptable range.Wang et al. [26] used a combination of HRV and ML to develop a system with a mean absolute error of 11.4 mg/dL.This is closer to the error achieved by Gusev and Poposka, and it suggests that combining HRV with ML can improve the performance of noninvasive glucose monitoring.Zhang et al. [27] used deep learning to develop a system with a mean absolute error (MAE) of 10.8 mg/dL.This is the lowest error reported in any of the studies, and it suggests that deep learning is a promising approach for noninvasive glucose monitoring. The golden standard for glucose monitoring is a blood test, which has an MAE of about 5 mg/dL.However, blood tests are invasive and inconvenient, so there is a need for more accurate and convenient methods of glucose monitoring.Noninvasive glucose monitoring systems with an MAE of 10.5 mg/dL could be a valuable tool for people with diabetes or other conditions that require frequent glucose monitoring.It is important to note that the MAE of a noninvasive glucose monitoring system can vary depending on the individual and the conditions under which the system is used.For example, the MAE may be higher if the person exercises or has certain medical conditions.Additionally, the MAE may improve over time as the system is further developed and refined. 
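For reference, the mean absolute error quoted in the studies above is simply the average absolute difference between predicted and reference glucose values; a minimal sketch with made-up numbers is given below.

```python
import numpy as np

# Made-up values, only to show how the mean absolute error (MAE) quoted above is obtained.
reference_mg_dl = np.array([110.0, 145.0, 98.0, 170.0])   # e.g. blood-test glucose
predicted_mg_dl = np.array([118.0, 137.0, 105.0, 160.0])  # e.g. HRV-based estimate

mae = np.mean(np.abs(predicted_mg_dl - reference_mg_dl))
print(f"MAE = {mae:.1f} mg/dL")  # the studies above report MAE values of roughly 10-12 mg/dL
```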
Given the different dependencies of HRV, its use for predictive purposes requires a more profound understanding to determine its baseline to varying ages for both genders.This implies that all parameters must be considered to understand the parameter landscape and the different influences fully.Moreover, focusing on the short-and medium-term measurements is crucial to integrate this technique into POC measurement devices.The practical applicability of this technique for glucose prediction will strongly depend on its relevance not only in healthy individuals but also in those with arrhythmia and diabetes as the most prevalent chronic diseases in concerned patients.It is, therefore, necessary to include data on such patients in this research. Calculation of HRV Parameters HRV parameters are primarily analyzed in time [28] and frequency domains [1].Timedomain HRV parameters are calculated for changes in heart rate between successive normal (NN) beats.Frequent-domain parameters mainly concern the amount of energy in the ECG signal within different frequency bands (ranges). HRV Calculation Methods There are multiple heartbeat types, which are classified in the following five categories: normal (N), atrial (A), ventricular (V), supraventricular (S), fusion (F), and unclassifiable or paced beat (Q), according to AAMI EC57 [29] or IEC 60601-2-33 standards [30].The calculation of time-domain HRV parameters is based on the analysis of the beat-to-beat intervals.We also know that HRV variables heavily depend on the heartbeat type [31]. Only sequences of NN intervals without A or V heartbeats are analyzed for HRV calculation [32].It is proved that all ventricular and supraventricular beats and various types of atrial and ventricular arrhythmia significantly impact the validity of HRV results, so their elimination is essential. Patients with diagnosed atrial fibrillation (AFIB) [33] are excluded from the dataset.Additionally, measurements with a detected sinus pause or arrest will be excluded from HRV parameter processing. We have developed a proprietary algorithm to extract only NN intervals from the dataset to obtain meaningful HRV parameters [9,34].This algorithm avoids all beats categorized in V, S, F, and Q beat types and all those N beats where the NN length exceeds a predefined allowable change specified by a threshold (in this research, we have used a value of 15%).Since the muscle movements and loose contacts of ECG electrodes generate a lot of noise that corrupts the ECG signal, the beat detection may have a minor performance by introduced artifacts or by more extended periods of uninterpreted segments.To reduce this side effect, our proprietary algorithm avoids those ECG segments where an artifact has been detected.The algorithm is used to avoid those ECG segments where an artifact has been detected by applying the NN threshold rule defined by (1), where NN(i) represents the length of the segment between consecutive N beats calculated for a particular detected beat with identification i, and T is the threshold. 
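Equation (1) itself is not reproduced here, so the sketch below is only an illustrative interpretation of the described rule rather than the proprietary implementation: intervals between consecutive normal beats are kept when their relative change stays within the threshold T, short sequences are dropped, and the "average" calculation method described in the next paragraph is then applied to the clean sequences.

```python
import numpy as np

# Illustrative interpretation of the NN-threshold rule (the proprietary algorithm and
# Equation (1) are not reproduced here): an interval between two normal (N) beats is kept
# only if its relative change from the previous kept interval does not exceed T (15%).
def clean_nn_sequences(beat_times_s, beat_types, threshold=0.15, min_len=6):
    sequences, current, prev_nn = [], [], None
    for i in range(1, len(beat_times_s)):
        nn = beat_times_s[i] - beat_times_s[i - 1]
        ok_types = beat_types[i - 1] == "N" and beat_types[i] == "N"
        ok_change = prev_nn is None or abs(nn - prev_nn) / prev_nn <= threshold
        if ok_types and ok_change:
            current.append(nn)
            prev_nn = nn
        else:
            if len(current) >= min_len:     # drop short sequences (see next paragraph)
                sequences.append(np.array(current))
            current, prev_nn = [], None
    if len(current) >= min_len:
        sequences.append(np.array(current))
    return sequences

def average_method_hrv(sequences):
    """'Average' method sketch: compute SDNN/RMSSD per clean sequence, then average them."""
    sdnn = [np.std(seq * 1000.0, ddof=1) for seq in sequences]                    # ms
    rmssd = [np.sqrt(np.mean(np.diff(seq * 1000.0) ** 2)) for seq in sequences]   # ms
    return {"A_SDNN": float(np.mean(sdnn)), "A_rMSSD": float(np.mean(rmssd))}
```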
Allowable NN intervals do not include small-length sequences specifying that the number of NN intervals should be over a predefined threshold (in this research, we have used a value of 6 breaks).The sequences of allowed NN intervals for the calculation of HRV are then used for the calculation of the overall HRV for the analyzed time frame by two methods [9]: • Average calculating the average of HRV calculated on these sequences; • Combined calculated over a concatenated sequence of these allowed NN sequences. The corresponding HRV parameters will be labeled by a letter A (for average) or C (for combined) preceding the HRV parameter. Analyzed HRV Since frequency-domain calculation requires the execution of high-complexity algorithms (for fast Fourier transform or discrete Fourier transform), in this research, we address only the following time-domain HRV: • SDNN the standard deviation of all NN intervals is usually analyzed as a median of the variability.It consists of parts from the sympathetic and parasympathetic nervous systems.The SDNN can be described as the regulation system's overall variability or total power. • SDANN the standard deviation of the average NN interval for all 5 min periods of the entire recording (higher values indicate increased parasympathetic activity); • ASDNN the average of the standard deviation of all R-R intervals for all 5 min segments in the recordings; • RMSSD the square root of the root mean square of the sum of all differences between successive NN intervals (higher values indicate increased parasympathetic activity); • NN50 the number of pairs of successive NN intervals that differ by more than 50 ms in the entire recording (higher values indicate increased parasympathetic activity); • pNN50 the percentage of successive intervals that differ by more than 50 ms (higher values indicate increased parasympathetic activity). Analyzed Time Intervals HRVs are differentiated by the length of the ECG measurement time interval.The standard size for short-term measurements is 5 min, while the standard for long-term measurements is 24 h [35]. In this research, we analyze more details on the length of the time interval for HRV calculation and categorize the following HRV [9]: • Short-term for all measurements up to 30 min; • Medium-term for measurements from 30 min up to 8 h; • Long-term for measurements from 8 h up to 24 h. Evaluation Methods In this section, we describe the methods and metrics employed to address our research questions and hypothesis regarding the impact of various factors on HRV.We utilize a combination of statistical techniques and metrics to assess these relationships effectively. Statistical Analysis To analyze the data and answer the research questions, we used the following statistical metrics: • Pearson correlation evaluates the linear relationship between two continuous variables. A relationship is linear when a change in one variable is associated with a proportional change in the other variable [36].The Pearson method calculates the r coefficient as a positive value corresponding to the cases when the second variable tends to increase with an increase in the first and negative value when the second variable tends to decrease.The higher absolute r-value close to 1 means a stronger correlation.The p-value is the probability that you would have found the current result if the correlation coefficient were zero (null hypothesis).The correlation is statistically significant if the likelihood is lower than the conventional 5% (p < 0.05). 
• Spearman correlation evaluates the monotonic relationship between two continuous or ordinal variables.In a monotonic relationship, the variables tend to change together, but not necessarily at a constant rate.The Spearman correlation coefficient is based on the ranked values for each variable rather than the raw data [37]. • If a significant correlation is found between specific parameter values using the abovementioned methods, a regression function can be used to model the trend of the parameter values over time [38].To test the research question about the effect of age on HRV, we will use a linear regression model. • The t-test is a statistical test to compare the means of two groups [39], and in this paper, we compare the means of the HRV scores of the different data groups (male vs. female, healthy patients vs. patients with arrhythmia conditions).Testing the age and gender effect on HRV (RQ3) with a t-test needs normally distributed groups of data in healthy patients. Visual Representation of Results To analyze and better understand the results, we use several graphical tools for a better explanation of statistical measures. • Scatter charts [42] will be used for showing the data distribution along with the accompanying dependency function. • Box and whisker charts [43] will be used for identifying the degree of dispersion (spread) and skewness in the data and pointing out potential outliers. Datasets The datasets used in this research contain 30,000 electrocardiogram (ECG) recordings from both healthy subjects and patients with arrhythmia measured on 283 different patients.The HRV data are then extracted out of the raw ECG data, as described in Section 3. The Gluco dataset contains the ECG recordings along with the instantaneous blood glucose level and HbA1c as an indication of a 2-month average glucose level.Patients were hospitalized at the Clinic of Cardiology, and long-term 24 h ECG was measured for each patient along with instantaneous blood glucose level and HbA1c as an indication of a 2-month average glucose level.The average age of the subjects in the dataset was 60, and the standard deviation of age was 10.The patients were 92 men aged 41 to 81 and 51 women aged 40 to 86.The recordings were made at the same sampling rate and the resolution as EDB, and cardiologists also annotated them.Diabetic patients are treated with a specific method (diet, medicaments, insulin), and their ability to regulate their glucose level is classified between GD, BD, and ND classes.The criteria for the screening and diagnosis of each glucose regulation class can also be determined on the values of the fasting glucose measurement based on -Gluco ND, a group of patients with no diabetes; -Gluco GD group includes patients with diabetes and good glucose regulation using diet, medicaments, or insulin; -Gluco BD group consists of patients with diabetes and bad glucose regulation, although using diet, medicaments, or insulin. MitDB consists of only 30 min ECGs, so only the first-time duration method is used.EDB consists of 2 h ECGs, so the 30 min ECGs are calculated with the sliding window approach and one 2 h HRV calculation.The NSRDB and Gluco databases consist of longterm 24 h measurements, so the 30 min, 2 h, and 8 h HRV parameters are calculated with the sliding window approach.The analysis of the number of HRV calculation samples (for all datasets) is shown in Table 1. 
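As an illustration of the statistical tests listed in the Statistical Analysis subsection above (and the point biserial correlation mentioned in the abstract), the sketch below applies them with SciPy to made-up age, gender, and SDNN values; all data and variable names are placeholders.

```python
import numpy as np
from scipy import stats

# Made-up age, gender and SDNN values, only to show how the tests above are applied.
rng = np.random.default_rng(0)
age = rng.integers(20, 85, size=100)                    # years
sdnn = 120 - 0.5 * age + rng.normal(0, 15, size=100)    # fabricated HRV values (ms)
gender = rng.integers(0, 2, size=100)                   # 0 = male, 1 = female

r_pearson, p_pearson = stats.pearsonr(age, sdnn)        # linear relationship
rho, p_spearman = stats.spearmanr(age, sdnn)            # monotonic relationship
r_pb, p_pb = stats.pointbiserialr(gender, sdnn)         # binary vs. continuous variable
t_stat, p_t = stats.ttest_ind(sdnn[gender == 0], sdnn[gender == 1], equal_var=False)

print(f"Pearson r={r_pearson:.2f} (p={p_pearson:.3f}), Spearman rho={rho:.2f}")
print(f"point-biserial r={r_pb:.2f}, t-test p={p_t:.3f}")  # significant if p < 0.05
```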
Modeling Methodology We used a linear regression model with a prepossessing layer for data normalization and the dense layer as the regular deeply connected neural network layer.The model with the best performance used the Adam optimizer [47] and the mean absolute error loss function [48] running for 100 training epochs.Age and gender were included as independent variables in our predictive models to assess their potential impact on glucose levels.Age, gender, and HRV are multifaceted variables that can exhibit intricate and nonlinear interactions.These relationships may not be readily discernible through simple descriptive statistics alone.Models allow us to capture and quantify these complex interactions effectively and move towards defining normal HRV values for different age groups. Performance Evaluation Methodology The interquartile range (IQR) measures variability by dividing a dataset into quartiles.Quartiles divide a rank-ordered dataset into four equal parts.The values dividing each piece are the first, second, and third quartiles, denoted by Q1, Q2, and Q3, respectively.Q1 is the "middle" value in the first half of the rank-ordered dataset.Q2 is the median value in the set.Q3 is the "middle" value in the second half of the rank-ordered dataset. Our prediction model identifies Q1, Q2 (avg), and Q3 quartiles for the HRV prediction.The score is then measured with the number of quartiles that coincide with the measured quartiles for the specific age group.Afterward, the model performance is measured by TPR, FPR, PPV, and F1 scores. To obtain the metrics mentioned above for the model's performance, we first need to calculate TP, FP, FN, and TN.We use a method based on ranks, where TP is calculated as the correctly predicted rank, and FP is the expected rank but not the measured level.FN is calculated as the rank that was not anticipated but was measured.TN is calculated as the rank that was not predicted and not counted.Figure 1 illustrates an example of the calculations for the evaluation methodology.The data distribution is analyzed for sets of time measurements with different durations, including all analyzed short-, medium-and long-term measurements, including the following time intervals to calculate HRV: We use the sliding window method to compare the results obtained from various datasets reasonably.ECG strips are extracted within a window that slides across the overall ECG data stream according to a specified interval.Two variables are considered during this approach: window length (w) and sliding offset (s).We have used a sliding offset of s = 5 min for short-and medium-term measurements and s = 10 min for long-term measurements, thereby obtaining more samples to be analyzed in a given period. Two hundred eighty-eight tests were conducted for six datasets, four duration intervals, six HRV parameters, and two methods.The results are structured in subsections highlighting the influence of gender and age in isolation and, ultimately, the combined influence of both parameters on HRV. Variations of HRV parameter values are observed for each age group, database, and measurement time interval and shown in box plots to reveal a clear picture of the HRV parameter differences for each of the influence factors. 
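A minimal sketch of the regression model described in the Modeling Methodology above (a normalization preprocessing layer followed by a single dense unit, Adam optimizer, mean-absolute-error loss, 100 epochs) is shown below; the arrays are placeholders, and the modern tf.keras API is used rather than the TensorFlow 1.13.1/Keras 2.2.3 versions employed in the study.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: inputs are [age_group, gender], the target is one HRV summary value.
X = np.array([[45, 0], [45, 1], [65, 0], [65, 1]], dtype="float32")
y = np.array([48.0, 45.0, 39.0, 36.0], dtype="float32")

normalizer = tf.keras.layers.Normalization()
normalizer.adapt(X)                               # preprocessing layer for data normalization

model = tf.keras.Sequential([
    normalizer,
    tf.keras.layers.Dense(units=1),               # a single dense unit, i.e. linear regression
])
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.MeanAbsoluteError())
model.fit(X, y, epochs=100, verbose=0)

print(model.predict(np.array([[55, 1]], dtype="float32")))  # predicted value for a new group
```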
Scatter plots show a trend line and the dependency function for the specific plot. Pearson and Spearman correlation coefficients were used to generate the correlation matrices and determine the most representative HRV parameters influenced by age and gender for healthy patients and patients with some of the conditions mentioned above. The F1 score, ROC, and AUC are used as performance metrics for the developed classification ML models. Additionally, MSE, RMSE, and MAE are three metrics used for evaluating the performance of the developed regression models that find the normal HRV value ranges for different age groups and medical conditions.

Results

This section presents the results obtained from the conducted tests of the specified experiments, analyzing them in subsections that cover overall age dependence, gender dependence, and dependence on health status concerning the autonomic nervous system. Many images and correlation tables were generated for each conducted test, analyzing the age and gender aspects and health status. This is why, in this section, we show only the most important research results.

Age Dependence

Figure 2 shows the data distribution for the A_SDNN HRV parameter for all datasets, divided by age groups, correspondingly for healthy patients and patients with arrhythmia. We observe that A_SDNN decreases with aging for healthy patients (EDB, NSRDB, Gluco GD, and Gluco ND). On the other hand, the databases that contain patients with arrhythmia (Gluco BD and MitDB) do not show the same distribution and age dependence.

Dependence on the Average or Combined Calculation Method

Figures 3 and 4 present the differences between the average and combined methods for the calculation of HRV parameters. The first method calculates an average of the HRV computed within the clean segments of the analyzed ECG measurement. In contrast, the second, combined method concatenates clean segments and calculates one HRV value for the analyzed measurement. The concatenation of clean segments produces an increase in the typical HRV values. Thus, we consider the average method more suitable for calculating age and gender dependency. Our research has shown that the average method gives more relevant, representative results than the combined method. Since the combined method concatenates the HRV calculation into one large HRV measurement, it presents larger values than the actual average. We focus on values calculated with the average method in our further analysis.

Gender Dependence

We observe that females have lower A_SDNN HRV parameter values in the MitDB and Gluco (BD, GD, and ND) datasets, which is not the case in the EDB and NSRDB datasets. For the A_rMSSD parameter, females have higher HRV in all datasets except Gluco GD and EDB (although the difference in the EDB dataset is less significant). We observe HRV differences for age and gender groups within the analyzed datasets (Figures 5 and 6) and conclude that there is a trend of lower HRV values for older people. The difference is more noticeable for the A_SDNN parameter. Figure 8 shows the data distribution for the A_SDNN and A_rMSSD parameters and the accompanying dependency function. The arithmetic expressions of the dependency functions are presented below for healthy patients and patients with arrhythmia. The functions are calculated using the EDB database for healthy patients and the MitDB database for patients with arrhythmia.
Age and Gender Dependence Functions

The dependency function of the A_SDNN parameter for healthy patients (EDB) is calculated by (2). In the following equations, y represents the calculated HRV parameter, and x represents age, showing the average value for the corresponding HRV parameter.

y = 60.62 − 0.49x (2)

The dependency function for the A_SDNN parameter calculated on patients with arrhythmia is shown in Equation (4). Analyzing the patients with arrhythmia, the dependency function for the A_rMSSD parameter is calculated by Equation (5).

y = 37.71 − 0.046x (5)

Prediction Based on Regression Model

This section shows the practical usage of current state-of-the-art deep learning methods for predicting typical HRV values for different age groups and genders. We have built a regression model using Tensorflow 1.13.1 [49] and Keras 2.2.3 [50]. A preprocessing layer for data normalization and the dense layer as the regular deeply connected neural network layer were included in the modeling process. The inputs for the deep learning model were age group and gender. The model outputs mean, minimum, and maximum value predictions for a given parameter. Currently, the sequential model from Keras supports one value prediction per trained model. Thus, we trained three separate models for the outputs per HRV parameter. The results of the predictions are presented in Table 2 and visualized in Figure 9.

Discussion

The results presented in this work show the different dependencies and correlations of HRV parameters with age and gender for healthy patients, diabetic patients, and patients with arrhythmia. This section evaluates the results and performs a correlation analysis.

Statistical Analysis

Several research papers address the connection of HRV parameters with age and gender. A negative correlation of HRV parameters with increasing age is a common conclusion in the literature [10,12,15,20,21]. In this study, we found that males and females have different HRV profiles, as evidenced by the significant differences in the distributions of the HRV parameters A_SDNN and A_rMSSD between the three groups from our database Gluco (Table 3). This finding suggests that gender is an important factor to consider when using HRV to monitor health and well-being. In Table 3, F stands for Female, M for Male, G_A for Gender_A, G_B for Gender_B, S_A for Sample_A, S_B for Sample_B, U_S for U_Statistic, U_P for U_P_Value, T_S for T_Statistic, T_P for T_P_Value, KS_S for KS_Statistic, and KS_P for KS_P_Value. Our research confirms the negative correlation (stated in Hypothesis 1) between HRV and age. Our research's best r correlation coefficient is the Age/A_ASDNN correlation (r = −0.42, p = 0.014), using the EDB database (only healthy patients). The result is similar to those presented by Ramaekers et al. [15] (r = −0.47, p < 0.001) and Umetani et al. [10] (r = −0.41, p < 0.05), which used only healthy subjects. The p-value of 0.014 makes this comparison statistically significant.
Discussion

The results presented in this work show the different dependencies and correlations of HRV parameters with age and gender for healthy patients, diabetic patients, and patients with arrhythmia. This section evaluates the results and performs a correlation analysis.

Statistical Analysis

Several research papers address the connection of HRV parameters with age and gender. A negative correlation of HRV parameters with increasing age is a common conclusion in the literature [10,12,15,20,21].

In this study, we found that males and females have different HRV profiles, as evidenced by the significant differences in the distributions of the HRV parameters A_SDNN and A_rMSSD between the three groups in our Gluco database (Table 3). This finding suggests that gender is an important factor to consider when using HRV to monitor health and well-being. (Legend for Table 3: F stands for Female, M for Male, G_A for Gender_A, G_B for Gender_B, S_A for Sample_A, S_B for Sample_B, U_S for U_Statistic, U_P for U_P_Value, T_S for T_Statistic, T_P for T_P_Value, KS_S for KS_Statistic, and KS_P for KS_P_Value.)

Our research confirms the negative correlation (stated in Hypothesis 1) between HRV and age. Our best correlation coefficient is the Age/A_ASDNN correlation (r = −0.42, p = 0.014), obtained on the EDB database (only healthy patients). The result is similar to those presented by Ramaekers et al. [15] (r = −0.47, p < 0.001) and Umetani et al. [10] (r = −0.41, p < 0.05), who used only healthy subjects. The p-value of 0.014 makes this comparison statistically significant.

It is important to note that the HRV dependence on age and gender is also present in patients with arrhythmia (research Hypothesis 3). MitDB (consisting of patients with arrhythmia) shows an inverse correlation of HRV with both age and gender compared with the other databases that contain only healthy patients. For the gender parameter, this is presented in Table 4, where MitDB has a negative correlation coefficient of r = −0.58 with a p-value of 0.0018 for A_SDNN. At the same time, the databases for healthy and mixed patients show lower correlations, some of which even reach positive values of up to r = 0.19, with a corresponding p-value of approximately p = 0.2385. While this correlation was positive, it did not reach statistical significance at the conventional threshold of p < 0.05 for healthy patients. We see the same trend in the HRV correlations for the age parameter, shown in Table 5. The A_SDANN parameter shows the most significant difference for the patients with arrhythmia, having r = 0.42 with a highly significant p-value of approximately p = 0.0004, indicating a strong and statistically significant positive correlation in this subgroup. In Table 4, we also see that the patients from the healthy and mixed datasets achieve a negative correlation (r = −0.42, p = 0.001) for the A_ASDNN parameter and age, while the MitDB arrhythmia database shows a positive correlation of r = 0.27, p = 0.009. We then performed the HRV correlation analysis on each class in our proprietary dataset. The results confirm the abovementioned assumption of an inverse HRV dependency for patients with diabetes compared with healthy patients. For the HRV/gender correlation, the most significant difference is with the A_SDANN parameter, having r = −0.07 for healthy patients (ND) and r = −0.15 for patients with diabetes and good glucose regulation (GD). The results are shown in Tables 6 and 7 as Pearson and Spearman correlations, respectively.

HRV is lower among women than men [11,16]. Our studies confirmed this with a negative correlation (r = −0.58, p-value = 0.0018) between gender and A_SDNN using the MitDB database (patients with arrhythmia).

Deep Learning Models

We used machine and deep learning methods to develop a classification model that distinguishes the three classes ND, GD, and BD based on the HRV values calculated in this research. We used 10-fold cross-validation to reveal the best-performing model. The features used for training a model are only the HRV measures A_SDNN, A_rMSSD, C_SDNN, and C_rMSSD. The label values are GD, BD, and ND.

Our experiments included tests with 15 different algorithms (Table 8). We used the F1 score for comparison instead of accuracy because of the large class imbalance in our datasets. The F1 score is a more reliable metric for evaluating imbalanced datasets because it takes both precision and recall into account.

Comparison with Related Work Results

Umetani et al. [10] presented a dependency function for SDNN and age defined by y = 186.9 − 1.5x and for rMSSD/age described by y = 32 − 0.69x. For the rMSSD parameter, we obtained similar yet less gradual results for healthy patients. We can conclude that HRV declines at significantly slower rates for patients with arrhythmia.
For the A_SDNN parameter, we also see differences between healthy patients and patients with arrhythmia or diabetes. The dependency function of age and SDNN for healthy patients declines rapidly, which is not the case for the other patients in the dataset. Compared with Umetani et al., our dependency function for A_SDNN in MitDB also shows a decline, but a more gradual one. This suggests that the relationship between age and SDNN is different for healthy patients and patients with arrhythmia or diabetes. This inverse coefficient is also present in the bad glucose regulation group of the Gluco dataset presented above.

Our work extends the work of Benichou et al. [51] by investigating the impact of age and gender on HRV in patients with diabetes. The abovementioned findings suggest that age and gender may play a role in the relationship between HRV and blood glucose levels in patients with diabetes. This is important to consider when developing noninvasive glucose measurement devices based on HRV.

We show a significant improvement of 7% in the random forest supervised classification model for predicting glucose levels [7] on the Gluco dataset, measured by the mean absolute error (MAE) metric, by including age and gender in the parameter list, compared with a baseline model that did not include these variables.

Evaluation of the Research Hypotheses and Questions

In the following, we evaluate the hypotheses and research questions. In patients with severe heart conditions, the correlation between the SDNN and RMSSD parameters was significantly weaker than in patients with no or mild heart conditions, as seen in Tables 6 and 7. This suggests that patients with severe heart conditions have less variability in their heart rate, which can be a sign of autonomic dysfunction. A potential implication of these findings is that HRV parameters could be used as a biomarker for severe heart conditions.

As mentioned above, the results identify the SDNN parameter as the most representative for the distinction between healthy patients and patients with arrhythmia or diabetes. Furthermore, the SDNN parameter is also the most dependent on age and gender compared with other HRV parameters.

Conclusions

Given the different dependencies of HRV, its use for predictive purposes requires understanding its normal range, including age and gender differences. This research characterizes those differences, moving towards accurately defined average HRV parameter values.

In this research, we analyzed six HRV parameters over four different datasets; three of the parameters (SDNN, rMSSD, NN50) have been thoroughly studied in the literature. We also included a proprietary database (Gluco) that contains average blood sugar levels (HbA1c).
Paper Contribution

In all three databases that have been analyzed by other research groups (EDB, MitDB, NSRDB), by including the gender and heart-condition data for the patients in the dataset, we found previously undiscovered correlations between some HRV parameters and gender, age, and cardiac medical conditions. The correlations were also notably present in the Gluco database, with the highest Pearson correlation coefficient between HRV and age being r = 0.42, p-value = 0.0004. Furthermore, a comparative analysis of the above correlations in each dataset revealed significant differences in the HRV parameter correlation in the MitDB database (r = −0.48, p-value = 0.0018), as it solely comprises patients suffering from severe heart conditions. We have successfully identified these differences, enabling us to correct these input parameters in the analyses of the proprietary Gluco database and build better glucose estimation models. In patients with severe heart conditions, the correlation between the SDNN and RMSSD parameters was significantly weaker than in patients with no or mild heart conditions. This suggests that patients with severe heart conditions have less variability in their heart rate, which can be a sign of autonomic dysfunction. A potential implication of these findings is that HRV parameters could be used as a biomarker for severe heart conditions. By tracking changes in HRV parameters over time, doctors could identify patients at risk of developing severe heart conditions. Additionally, HRV parameters could be used to monitor the effectiveness of treatment for severe heart conditions.

We have also successfully identified strong correlations between other HRV parameters and blood sugar levels. Using this, we could predict blood sugar levels and categorize patients into three groups accordingly: (i) no diabetes, (ii) diabetes with good glucose regulation under medication, and (iii) diabetes with bad glucose regulation despite medication.
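The age- and gender-aware glucose estimation mentioned above can be illustrated with a small feature-ablation experiment: train the same random forest with and without the demographic inputs and compare the cross-validated MAE. This is only a sketch under assumed column names and data, not the published pipeline, so the exact 7% figure reported earlier should not be expected to reproduce.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def mae_with_features(X, y, n_splits=10):
    """10-fold cross-validated MAE of a random forest glucose estimator."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y,
                             scoring="neg_mean_absolute_error", cv=n_splits)
    return -scores.mean()

# hrv_features: columns [A_SDNN, A_rMSSD, C_SDNN, C_rMSSD]
# demographics: columns [age, gender]; y: HbA1c (illustrative names)
# baseline_mae = mae_with_features(hrv_features, y)
# extended_mae = mae_with_features(np.hstack([hrv_features, demographics]), y)
# print(f"MAE improvement: {100*(baseline_mae-extended_mae)/baseline_mae:.1f}%")
```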
The primary contribution of our research lies in the comprehensive analysis of HRV parameters across different datasets, encompassing healthy individuals, diabetic patients, and patients with arrhythmia. Our key takeaway is the identification of distinct HRV patterns associated with different health conditions and demographics. In the results section, we have demonstrated that HRV parameters exhibit statistically significant correlations with age and gender, in both healthy subjects and those with medical conditions. Furthermore, we have delineated the typical HRV values for specific demographic groups, shedding light on the expected variations. Our research offers a novel perspective on HRV parameter values, underlining the importance of considering demographic factors in their interpretation. This information has implications for clinical practice, as it can aid in detecting anomalies in HRV patterns early and contribute to more precise diagnostic and prognostic assessments. Furthermore, this research opens a promising route to bringing noninvasive blood sugar level measurements one step closer to end users, as it can be used with ECG data acquired with any modern tracking monitor, including smartwatches, wristbands, and ECG alert devices with a >125 Hz measurement rate, which opens the capability of obtaining a new blood sugar measurement every 30 s. This would enable early detection of blood glucose level imbalance and proactively alert potential patients about a condition that could be diabetes related. Patients with arrhythmia show inverse correlation coefficients for the A_SDNN and A_rMSSD HRV parameters, which is very important when HRV is used to assess blood glucose regulation. We also concluded that the presence of heart arrhythmia is a very important factor to consider when using HRV to estimate blood glucose regulation ability.

Challenges for Future Research

This section discusses the potential practical applications of our findings in healthcare or individual contexts, along with potential limitations and challenges for future research or applications. The following list presents practical applications of HRV for noninvasive glucose measurement in healthcare or individual contexts:
• Wearable devices: HRV can be used to develop wearable devices that track glucose levels in real time. This could be a valuable tool for people with diabetes who want more control over their condition.
• Monitoring treatment efficacy: HRV can be used to monitor the efficacy of treatment for certain diseases, such as heart failure and arrhythmias. This can help doctors ensure that patients are receiving the correct treatment and that the treatment is working effectively.
• Personalized medicine: HRV can be used to personalize medicine, which is the practice of tailoring medical treatment to the individual patient. This can be done by using HRV to identify patients at risk for certain diseases or by using HRV to monitor treatment response.
• Early warning system: HRV can be used to develop early warning systems for hypoglycemia and hyperglycemia. This could help people with diabetes avoid serious complications, such as diabetic ketoacidosis and coma.

On the other hand, the following list shows some potential limitations and challenges for future research:
• Other factors can also affect both HRV and glucose regulation ability, such as smoking, the presence of dyslipidemia, the level of physical activity, the duration of diabetes, the therapy received for existing diseases, and the presence of other diseases.
• HRV data analysis and conversion can be challenging: As described in the methodology section, calculating HRV from raw ECG data and analyzing it can be challenging and requires substantial computing power, specialized equipment, and expertise. This can limit the availability of HRV data and make it difficult to use HRV findings in clinical practice.
• HRV is not a perfect biomarker for glucose: HRV can be affected by several factors, such as stress, anxiety, and physical activity. It is important to interpret HRV findings in the context of other clinical information, such as blood glucose levels.

Despite these limitations, HRV is a promising biomarker with the potential to improve noninvasive glucose monitoring. Continued research in this area is likely to lead to new and innovative applications of HRV for this purpose.

Figure 1. Performance evaluation of predicted range.

3.5. Experimental Methodology: HRV parameters are calculated for all benchmark and testing datasets. The following age groups are analyzed in this research:

Figure 2. Age distribution of the A_SDNN parameter for healthy patients (left) and patients with arrhythmia or diabetes (right).
Figure 3. Distribution of the SDNN parameter calculated by the average (left) and combined (right) methods for the overall dataset. The diamond shapes mark the outliers detected in the datasets.
Figure 4. Distribution of the rMSSD parameter calculated by the average (left) and combined (right) methods for different datasets. The diamond shapes mark the outliers detected in the datasets.
Figure 5. Distribution of A_rMSSD (left) and A_SDNN (right) for different age and gender groups in the Gluco ND dataset.

Figure 7 presents the dependency function for the A_SDNN parameter using the Gluco ND dataset for male and female patients.
Figure 7. Distribution and dependency function of A_SDNN in the Gluco ND dataset: female (left) and male (right) patients.
Figure 8. Distribution and dependency function of A_rMSSD (right) and A_SDNN (left), calculated on the EDB database. Equation (3) presents the dependency function for the A_rMSSD parameter calculated on healthy patients: y = 42.19 − 0.32x (3).
Figure 9. Regression model to predict normal ranges of A_rMSSD (left) and A_SDNN (right) for different ages and genders.

Figure 10 presents the achieved ROC and AUC values for the extra trees classifier, and Table 9 the confusion matrix. According to the F1 scores, the best-performing model is the extra trees classifier, achieving an F1 score of 76.77%. Additionally, the specified model shows a mean squared error (MSE) of 7.15 and a mean absolute error (MAE) of 3.21. The achieved values show that HRV can determine the diabetes class.
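The cross-validated model comparison described in the Deep Learning Models subsection, whose best performer is the extra trees classifier reported above, can be sketched as follows. The candidate list, the feature matrix X (A_SDNN, A_rMSSD, C_SDNN, C_rMSSD) and the label vector y (ND/GD/BD) are illustrative assumptions rather than the authors' exact setup.

```python
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

def rank_models(X, y):
    """Compare candidate classifiers with 10-fold CV, scored by macro F1
    because of the class imbalance between the ND, GD and BD classes."""
    candidates = {
        "extra_trees": ExtraTreesClassifier(n_estimators=300, random_state=0),
        "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
        "logistic_regression": LogisticRegression(max_iter=1000),
    }
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = {name: cross_val_score(clf, X, y, scoring="f1_macro", cv=cv).mean()
              for name, clf in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# ranking = rank_models(X, y)   # best model first, e.g. extra trees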
The KS test is used to evaluate whether the data groups come from different populations. The calculation of the correlation coefficients between HRV and age and gender for each group compares the magnitude and direction of the corresponding correlations between the groups.
• Statistical metrics used to evaluate regression model performance include the mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE). In this research, we develop regression models to find the normal HRV value ranges for different age groups and medical conditions. MSE is calculated as the average of the squared differences between the predicted values and the actual values, and RMSE is the square root of MSE. The average of the absolute differences between the predicted values and the actual values gives the MAE.
• The evaluation of the ML models is based on the determination of true and false positives (TP and FP) for correct and wrong detections of the positive class, and also true and false negatives (TN and FN) addressing the negative class. The true positive rate (TPR) or sensitivity (recall) is the proportion of positive cases that are correctly identified by the model, and the false positive rate (FPR) is the proportion of negative cases that are incorrectly detected. The positive predictive value (PPV) or precision is the proportion of correctly detected positive cases versus all positive predictions.
We additionally use the Mann-Whitney U test [41] for the patients whose HRV scores are not normally distributed, as it is a nonparametric test that makes no assumptions about the data distribution. The number of positive class samples is much smaller than the number of negative class samples, revealing a large class imbalance factor. Therefore, we use the F1 score, calculated as the harmonic mean of TPR and PPV, to evaluate the classification models that predict blood glucose regulation ability based on HRV data. Additionally, we use the receiver operating characteristic (ROC), a graph that presents the dependence of TPR on FPR for different thresholds of the model's output. The area under the ROC curve (AUC) provides an aggregate measure of performance across different classification thresholds.
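For reference, the standard textbook forms of the metrics described above are as follows (this formulation is generic and not quoted from the paper):

```latex
\begin{aligned}
&\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i-\hat{y}_i\bigr)^{2},\qquad
\mathrm{RMSE} = \sqrt{\mathrm{MSE}},\qquad
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\bigl|y_i-\hat{y}_i\bigr|,\\[4pt]
&\mathrm{TPR} = \frac{TP}{TP+FN},\qquad
\mathrm{FPR} = \frac{FP}{FP+TN},\qquad
\mathrm{PPV} = \frac{TP}{TP+FP},\qquad
F_1 = \frac{2\,\mathrm{PPV}\cdot\mathrm{TPR}}{\mathrm{PPV}+\mathrm{TPR}}.
\end{aligned}
```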
The recordings were collected in Massachusetts, and the subjects were all referred to the Arrhythmia Laboratory for evaluation of suspected arrhythmias. Cardiologists carefully annotated the recordings to ensure that they were free of significant arrhythmias.
• MIT-BIH Arrhythmia Database (MitDB) [45] contains 44 half-hour excerpts of two-channel ambulatory ECG recordings obtained from 44 subjects studied by the BIH Arrhythmia Laboratory. The recordings were digitized at 360 samples per second per channel with an 11-bit resolution over a 10 mV range. Two or more cardiologists independently annotated each record. The subjects were 23 men aged 32 to 89 and 21 women aged 23 to 89. These patients are included in this research as patients with known arrhythmia. HRV parameters are calculated for all short-term HRVs (up to 30 min). The MIT-BIH Arrhythmia Research Center also maintains this database, which contains recordings from subjects with various arrhythmias. The recordings were made at the same sampling rate and resolution as NSRDB, and cardiologists also annotated them.
• European ST-T Database (EDB) [46] consists of 90 annotated excerpts of ambulatory ECG recordings from 79 subjects. The subjects were 70 men aged 30 to 84 and 8 women aged 55 to 71. Each record is 2 hours long and contains two signals sampled at 250 samples per second with a 12-bit resolution over a nominal 20-millivolt input range. The European Society of Cardiology created this database containing recordings from subjects with ST-segment and T-wave abnormalities. The recordings were made at a higher sampling rate than NSRDB and MitDB, and cardiologists annotated them.
• The Gluco proprietary dataset consists of 143 unique patients aged between 40 and 86 [34].

Table 1. Number of samples and patients in the datasets.
Table 2. Results from the regression model that predicts HRV versus age and gender.
Table 3. Statistical tests for different groups in the Gluco database.
Table 5. Spearman correlation between HRV and age on 30 min files.
Table 6. Pearson correlation between HRV and age on the Gluco dataset, with the classes BD, GD, and ND.
Table 7. Spearman correlation between HRV and age on the Gluco dataset, with the classes BD, GD, and ND.
Table 8. Performance of the developed models.
Table 9. Confusion matrix for the extra trees classifier to classify HbA1c classes based on HRV.
Semimembranosus: A Rare Muscle Herniation and Review of the Literature

Extremity muscle hernias are rare pathologies, most of which are managed conservatively. In symptomatic cases, surgical intervention may be required. This study presents a case of a rarer muscle hernia, that of the semimembranosus, in a 43-year-old patient, describes the surgical technique of grafting with a synthetic nonabsorbable polypropylene surgical mesh, and reviews the literature on extremity muscle hernias.

Introduction

Extremity muscle hernias are rare pathologies in which the muscle protrudes outward through a defect in the muscle fascia caused by a congenital pathology or past trauma. Most of these cases are managed conservatively; however, surgical treatment is preferred if they become symptomatic. Modern surgical techniques include primary repair, fasciotomy, and autologous fascia lata reconstruction, as well as alternatives such as grafting with synthetic meshes [1][2][3][4]. The tibialis anterior is regarded as the most frequently herniated muscle in the literature, as it is subjected to trauma more frequently [3][4][5]. In diagnosing muscle hernias, which are provoked especially by extremity motion and then become visible, ultrasonography (USG) is the preferred method because it is a dynamic imaging technique [6][7][8][9][10]. This case study focuses on the surgical technique used for the repair of a semimembranosus muscle hernia with synthetic mesh, as well as a review of the literature on muscle hernias. Informed consent was obtained from the patient for the study.

Case Presentation

A 43-year-old male patient was admitted to our department with complaints of swelling on the right side of the distal posterior thigh, of which he had been aware for the past 20 years and which had recently become painful. His profession (waiter) required him to work standing for prolonged periods of time. Upon physical examination, a 1.5 cm, mobile, soft-consistency mass was detected on the right distal posteromedial thigh. The mass became apparent while the knee was flexed. Dynamic USG revealed a defect in the semimembranosus muscle fascia as well as the muscle protruding through the defective area during motion (Figure 1). Whilst no pathology was detected on the MRI performed in a resting position (Figure 2a), the MRI performed while the muscle was contracted revealed that the semimembranosus muscle protruded toward the skin (Figure 2b). Surgical repair with synthetic mesh was planned for the patient, who was diagnosed with a semimembranosus muscle hernia.

The operation was performed through a 10 cm longitudinal incision immediately above the fascial defect on the distal posteromedial femur, which had previously been marked with USG, with the patient in the left lateral decubitus position. Once the fascial defect was explored, the defect was covered with a synthetic nonabsorbable polypropylene surgical mesh (8 x 3 cm Prolene, ETHICON©), which was sutured under appropriate tension to the surrounding undamaged fascial tissue with synthetic absorbable sutures (No. 1 round Vicryl, ETHICON©; Raritan, New Jersey) (Figures 3, 4). No weight-bearing was allowed on the right extremity during the first three postoperative weeks. Following range-of-motion exercises, the patient returned to his daily routine six weeks postoperatively. No infection or foreign body reaction was observed after the surgery.
At the postoperative third-month visit, the patient had no complaints, nor any pain or swelling with movement. The VAS value, which was 80 prior to the operation, decreased to 30 after six weeks and to 0 after three months postoperatively. Five years after the operation, it was confirmed in a telephone conversation that the patient had resumed his active work life without any complaints.

Discussion

Focal muscle protrusion through a muscle fascia that has weakened or lost its integrity is considered a muscle hernia [11]. Although it can be seen in the upper extremities, it is more common in the lower extremities [12]. Although it is mostly seen among athletes as a result of exposure to trauma and excessive activity, it has also been reported in occupational groups with excessive activity, such as the patient in our case. Although muscle hernias were considered rare pathologies in the past, because dynamic diagnostic methods were not widely used and the condition is often asymptomatic, an increase in the number of these pathologies is being reported in the literature in parallel with the advancement of dynamic diagnostic techniques [5,6,10,11,13]. Muscle hernias were first categorized into two groups, structural and traumatic, by Ihde in 1929 [10]. A structural muscle hernia occurs when the muscle protrudes through weakened muscle fascia, especially after chronic stress. A trauma-related muscle hernia can occur as a result of a direct penetrating injury or a closed fracture fragment damaging the fascia, as well as indirectly, through an impact suffered by a contracted muscle [2]. Furthermore, muscle hernias that are secondary to chronic compartment syndrome or that develop postoperatively may be seen [14]. Chronic effort-related compartment syndrome generally affects military personnel, mountaineers, skiers, and athletes. Although herniations have been reported in the literature most frequently in the tibialis anterior muscle, as a consequence of it being prone to trauma, they have also been reported for the other lower extremity muscles, in particular the extensor digitorum longus, peroneus longus, peroneus brevis, gastrocnemius, and hamstring muscles [3][4][5][15]. In addition, muscle hernias should also be considered among potential causes of inexplicable pain or inflammation in the extremities, even without a traumatic history.

A muscle hernia presents as a palpable soft mass lesion or nodule on clinical assessment. It can be singular or multiple and is usually unobservable while the muscle is in a resting position. Patients are usually admitted to the clinic with complaints such as pain, weakness, cramping, and numbness, seen especially post-activity. Infection, varicose veins, vascular malformations, and especially neoplasms should be excluded in the differential diagnosis of muscle hernias [16]. The most significant feature of muscle hernias that distinguishes them from these pathologies is that the pain frequently disappears with rest and resurfaces with activity. Keeping muscle hernia in mind during the differential diagnosis of mass lesions can prevent the need for additional invasive procedures such as biopsy and spare the patient the psychological burden of a condition that remains undiagnosed.
Since the protrusion of the muscle through the fascial defect is more apparent during movement, dynamic imaging methods should be preferred in the radiological examination [2]. Although most of the literature supports the view that this movement-induced protrusion generally happens during isometric contraction, Naffaa et al. reported that herniation does not always become visible with muscle contraction: herniations located along the longitudinal axis over the muscle belly become visible with contraction, whereas herniations in the vicinity of the muscle origin or insertion are visible in the resting position [9]. Despite the lack of a consensus in the literature regarding an imaging method, USG, with its ease of access, low cost, and especially its dynamic examination capabilities, has been suggested as the gold standard method by many authors [6,8]. Çarlı et al. noted user-dependent variability as the only disadvantage of USG [8]. When examining a herniated muscle, it is important to apply the probe gently so that the hernia is not reduced [17]. Standard MRI, albeit useful in excluding differential diagnoses, is insufficient for diagnosing muscle hernias [12]. Kramer et al. pointed out in their study that of 18 patients who were examined with MRI, only three were diagnosed with a muscle hernia [1]. However, in situations where dynamic USG fails to identify the defect, it is recommended that dynamic MRI be used to identify the size and structure of the defect or to better observe the muscle fascia demarcation [12]. Moreover, MRI is considered to be helpful for the surgical planning of muscle hernias.

Follow-up alone is sufficient for managing asymptomatic cases, as long as they do not constitute an aesthetic concern. On the other hand, conservative methods (rest, elastic bandages, anti-inflammatory drugs, etc.) should be used for symptomatic cases [13]. Surgical treatment is recommended for cases that do not respond to conservative treatment [18]. Surgical treatment techniques include primary repair, fasciotomy, partial muscle excision, reconstruction with autologous fascia lata, and grafting with synthetic mesh [4,15,18]. Although primary repair is known to provide successful restoration of small defects, the risk of compartment syndrome has made longitudinal fasciotomy the most recommended and safest technique [3]. However, Kramer et al. reported ongoing complaints after fasciotomy at a rate of 53% in their study, which was conducted with 26 athletes [1]. For this reason, as well as the potential for negative cosmetic results, we did not favor longitudinal fasciotomy in this patient. On the other hand, primary repair is suitable for small defects and is not recommended for defects such as the one in our case, whose 4 x 3 cm fascial defect is relatively large, especially if the ends of the fascia are under tension, since it carries an important complication risk such as compartment syndrome [13]. Miniaci et al. reported a case that resulted in a drop foot as a complication of evolving and repetitive debridement following primary repair of a tibialis anterior muscle hernia, which led to a search for alternative treatment options [3]. Furthermore, as they cause morbidity in the donor area and extend the surgical duration, fascia lata or similar autografts are preferred less often.
Favorable results have been reported in the recent literature regarding repair with synthetic mesh and acellular collagen matrix for massive defects and mobile patients [15,18]. Tarrant et al. showed in their study that repair performed using an acellular dermal matrix demonstrated biological compatibility and effective results, without any complications, in four consecutive post-traumatic massive fascial defect herniations [19]. Due to the conditions of our clinic, grafting with a synthetic nonabsorbable mesh was preferred in our case. The relief of pain by the sixth postoperative week and the resumption of complete activity after the second month demonstrate that stability was achieved, in keeping with the literature. The lack of recurrence in the long term and the disappearance of pain alongside the regained functionality are consistent with similar studies [20].
Anti-Parkinsonian Drug Estimation by RP-HPLC

Aim: The main aim of the current study is to provide the best and simplest method for the estimation of the antiparkinsonian drugs carbidopa, levodopa and entacapone.

Study Design: Simultaneous estimation of carbidopa, levodopa and entacapone was performed using a quaternary-pump (SHIMADZU Prominence-i, LC-2030C) RP-HPLC system equipped with a PDA detector.

Place and Duration of Study: Chalapathi Drug Testing Laboratory, Chalapathi Institute of Pharmaceutical Sciences, Lam, Guntur-522034, Andhra Pradesh, India, during the period of August 2019 to February 2020.

Methodology: Qualification and quantification were performed on a SHIMADZU Prominence-i LC-2030C system equipped with a Phenomenex ODS (150 x 4.6 mm, 5 μm) column, and the mobile phase was optimized as a combination of acetonitrile and 0.1% orthophosphoric acid in the ratio of 50:50 v/v at a flow rate of 1.0 ml/min. The wavelength was set at 270 nm at ambient temperature, 20 μl of solution was injected, and the run time was fixed at 5 min.

Results: The calibration plots showed good regression over the concentration range of 5-160 μg/ml for the carbidopa, levodopa and entacapone standard solutions. The LOD and LOQ were found to be 0.85 and 2.54 μg/ml for entacapone, 0.24 and 0.71 μg/ml for levodopa, and 0.14 and 0.43 μg/ml for carbidopa, respectively. The accuracy of the proposed method was determined by performing recovery studies and was found to be between 98-102%. The repeatability testing for both sample and standard solutions gave %RSD < 2.0%, which is within the acceptable limits, showing that the method is also precise. The proposed method was successfully applied to the marketed formulations of carbidopa, levodopa and entacapone tablets. In addition, the main features of the proposed method are that it is economical and eco-friendly, with a short retention time of around 5.0 min.

Conclusion: Considering all the optimized method parameters and the statistical results given, it can be concluded that a new, simple, sensitive, precise, accurate and economical analytical method was developed and validated by RP-HPLC for the detection and quantification of carbidopa, levodopa and entacapone, which can be applied to the marketed formulation, for which no official compendial methods have been reported for this particular combination. The high sensitivity (LOD), the mobile phase utilized, and the run time (~5 min) are important features of this proposal.

INTRODUCTION

Parkinson's disease is a progressive disorder of the nervous system that affects movement. Young adults rarely experience Parkinson's disease. It ordinarily begins in middle or late life, and the risk increases with age. People usually develop the disease around their sixties or older. Men are one-and-a-half times more likely to get Parkinson's disease than women. Parkinson's disease is caused by the gradual breakdown or death of certain nerve cells in the brain. This leads to a reduction in the amount of a chemical called dopamine in the brain. Carbidopa, levodopa and entacapone is a drug combination approved by the U.S. FDA in June 2003 to treat adults with Parkinson's disease. Levodopa is an immediate precursor of dopamine. Entacapone is a reversible catechol-O-methyltransferase inhibitor, which prevents the degradation of levodopa. Carbidopa is an aromatic peripheral L-amino acid decarboxylase inhibitor.
The combination of levodopa, carbidopa and entacapone (catechols) is intended as an improved therapy for Parkinson disease [1-8].

Chemical Resources

Entacapone, levodopa and carbidopa working standards were procured as a gift sample from Aurobindo Pharma Pvt., India. Orthophosphoric acid (OPA) and triethylamine were purchased from LOBA Chemical Laboratories Pvt. Ltd.; HPLC grade water and acetonitrile were purchased from Thermo Fisher Scientific Pvt. Ltd., India.

METHOD VALIDATION

The analytical method validation was done according to the ICH Q2 (R1) guidelines on validation of analytical methods; the parameters of specificity, system suitability, linearity, limit of detection (LOD), limit of quantification (LOQ), accuracy, precision and robustness were assessed [9][10][11][12].

Specificity

Specificity is the ability of the analytical method to produce a response for the analyte in the presence of other components present in the solution; technically these can be impurities, degradants or matrix. In this method, specificity was tested for the standard solution and the blank, and no interference was found in the blank injection. Tailing factor and theoretical plates were taken into consideration.

System Suitability

System suitability was performed for the standard solution, and the method suitability was confirmed by taking the tailing factor, theoretical plates, % RSD and retention time parameters into consideration [13][14][15][16].

Linearity

Linearity is the validation parameter which confirms the ability of a method (within a given range) to obtain test results that are directly proportional to the concentration of analyte in the sample. By injecting different concentrations of sample solutions, it was confirmed that the method is linear in the 5-160 µg/ml range with a 0.999 regression value.

Limit of Detection (LOD)

The detection limit of an individual analytical procedure is the lowest amount of analyte in a sample which can be detected but not necessarily quantitated as an exact value. The values were determined from the slope and the regression line by the following equation [17][18]:

LOD = 3.3 * σ / S

Limit of Quantification (LOQ)

LOQ is the parameter that describes the lowest amount of analyte that can be detected and quantified. In this method, the LOQ values were determined from the following formula:

LOQ = 10 * σ / S
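A minimal sketch of how LOD and LOQ can be derived from a calibration curve is shown below; it is not the authors' code, and the concentration/peak-area values are illustrative rather than measured data.

```python
import numpy as np
from scipy import stats

def lod_loq(concentrations, peak_areas):
    """LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where S is the calibration slope
    and sigma is the standard deviation of the regression residuals."""
    fit = stats.linregress(concentrations, peak_areas)
    residuals = peak_areas - (fit.intercept + fit.slope * concentrations)
    sigma = residuals.std(ddof=2)            # residual SD of the regression
    return 3.3 * sigma / fit.slope, 10 * sigma / fit.slope

# Illustrative calibration points (µg/ml vs. peak area), not measured data.
conc = np.array([5, 10, 20, 40, 80, 160], dtype=float)
area = np.array([51, 103, 198, 405, 797, 1605], dtype=float)
lod, loq = lod_loq(conc, area)
print(f"LOD = {lod:.2f} µg/ml, LOQ = {loq:.2f} µg/ml")
```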
Precision

The precision of an analytical procedure expresses the closeness of agreement between a series of measurements obtained from multiple sampling of the same homogeneous sample under prescribed conditions. In the current study, the % RSD for the sample solution was found to be below 2.0 [19].

Accuracy

Accuracy can be defined as the closeness of agreement between the accepted reference value and the value found. In this study, recovery was calculated by the standard weighing method at the 50%, 100% and 150% levels.

Robustness

A robustness study was performed to confirm whether the method is capable of reproducibility when deliberate changes are made to the proposed method.

Identification of Wavelength

Approximately 100 mg of entacapone, levodopa and carbidopa were weighed and transferred into individual 100 ml volumetric flasks; 70 ml of diluent was added, and the flasks were sonicated to dissolve the compounds, mixed well and made up to the mark with diluent. From those solutions, 2 ml was transferred into three individual 100 ml volumetric flasks, mixed well and made up to the mark with diluent. The prepared solutions were scanned between 200-400 nm to detect the λmax. All the spectra were overlaid, and the isosbestic point was identified as 270 nm.

Specificity

By injecting the blank solution, it was confirmed that there is no interference in the standard chromatogram, taking the tailing factor and theoretical plates into consideration.

System Suitability

Six replicate injections of the sample were given for the system suitability test, and the % RSD was found to be within limits (<2.0). Results are given in Table 1.

Precision

The relative standard deviation of the individual areas of entacapone, levodopa and carbidopa was found to be within limits.

Intra-day precision: Intraday precision is determined by analyzing the same concentration of entacapone, levodopa and carbidopa six times on the same day.

Inter-day precision: Interday precision is determined by analyzing the same concentration of entacapone, levodopa and carbidopa on different days.

Linearity

For linearity, six concentrations of entacapone, levodopa and carbidopa (5-160 μg/ml) were injected in triplicate. A plot of average peak area versus concentration in μg/ml or mg/ml was made, and from this the correlation coefficient, y-intercept (constant of regression) and slope (coefficient of regression) of the regression line were calculated.

Accuracy

Accuracy can be defined as the closeness of agreement between the accepted reference value and the value found. In this study, recovery was calculated by the standard weighing method at the 50%, 100% and 150% levels.

Robustness

Robustness of the method was assessed by altering the chromatographic conditions, such as the pH of the buffer, the wavelength and the mobile phase composition, and observing whether the variation of the results remains within the acceptance criteria.

CONCLUSION

No official compendial method has been reported for the estimation of entacapone, levodopa and carbidopa. Therefore, the proposed method, which is a new, simple, sensitive, precise, accurate and economical analytical method, can be used for routine analysis and can also be applied to the commercial formulation. Based on all the validated parameters, it can be confirmed that this method is the best one that can be applied for this estimation in both active pharmaceutical industries and commercial pharmaceutical laboratories. The high sensitivity (LOD), the eco-friendly mobile phase utilized and the run time (~5 min) are important features of this proposal.

DISCLAIMER

The products used for this research are commonly and predominantly used products in our area of research and country.
There is absolutely no conflict of interest between the authors and the producers of the products, because we do not intend to use these products as an avenue for any litigation but for the advancement of knowledge. Also, the research was not funded by the producing company; rather, it was funded by the personal efforts of the authors.

CONSENT

It is not applicable.

ETHICAL APPROVAL

It is not applicable.
Skin properties and afferent density in the deterioration of tactile spatial acuity with age Abstract Tactile sensitivity is affected by age, as shown by the deterioration of spatial acuity assessed with the two‐point discrimination task. This is assumed to be partly a result of age‐related changes of the peripheral somatosensory system. In particular, in the elderly, the density of mechanoreceptive afferents decreases with age and the skin tends to become drier, less elastic and less stiff. To assess to what degree mechanoreceptor density, skin hydration, elasticity and stiffness can account for the deterioration of tactile spatial sensitivity observed in the elderly, several approaches were combined, including psychophysics, measurements of finger properties, modelling and simulation of the response of first‐order tactile neurons. Psychophysics confirmed that the Elderly group has lower tactile acuity than the Young group. Correlation and commonality analysis showed that age was the most important factor in explaining decreases in behavioural performance. Biological elasticity, hydration and finger pad area were also involved. These results were consistent with the outcome of simulations showing that lower afferent density and lower Young's modulus (i.e. lower stiffness) negatively affected the tactile encoding of stimulus information. Simulations revealed that these changes resulted in a lower build‐up of task‐relevant stimulus information. Importantly, the reduction in discrimination performance with age in the simulation was less than that observed in the psychophysical testing, indicating that there are additional peripheral as well as central factors responsible for age‐related changes in tactile discrimination. Key points Ageing effects on tactile perception involve the deterioration of spatial sensitivity, although the contribution of central and peripheral factors is not clear. We combined psychophysics, measurements of finger properties, modelling and simulation of the response of first‐order tactile neurons to investigate to what extent skin elasticity, stiffness, hydration, finger pad area and afferent density can account for the lower spatial sensitivity observed in the elderly. Correlation and commonality analysis revealed that age was the most important factor to predict behavioural performance. Skin biological elasticity, hydration and finger pad area contributed to a lesser extent. The simulation of first‐order tactile neuron responses indicated that reduction in afferent density plays a major role in the deterioration of tactile spatial acuity. Simulations also showed that lower skin stiffness and lower afferent density affect the build‐up of stimulus information and the response of SA1 (i.e. type 1 slowly adapting fibres) and RA1 (i.e. type 1 rapidly adapting fibres) afferent fibres. In the abstract, the say that the 'density of mechanoreceptors decreases', whereas I think it would be better to say 'mechanoreceptive afferents', as it is very likely that there is axon degradation as well, not just at the end of the receptor. Also, on p.4 of the introduction, the authors rightly cover the loss of mechanoreceptors in the skin, but the whole afferent can degrade. I think the authors need to be more specific right at the beginning of the introduction about what skin they are talking about. 
Skin is highly heterogenous and it is likely that aging affects it differently, for example, see the classic Stevens and Choo work (1996, Somatosen & Motor Res), which not only covers tactile tests on the glabrous hand skin, but all over the body, and how this changes over the lifespan. I think it would be good to add this study to the manuscript, but also start by saying that you are focusing on the glabrous hand skin. At the beginning of the methods, please include details about whether written informed consent was gained and whether the study received approval from an ethical committee. At the start of the 'Psychophysical task and stimulation setup' of the methods, please add more details about the exact location tested on the right index finger. Was this exactly the same between participants? The outer fingertip tends to be drier with age and there are subtle differences in sensitivity over the fingertip, due to mechanoreceptor density differences (e.g. Johansson & Vallbo, 1979, J Physiol), thus it is particularly important here to give this information. It could be more pertinent in aging, where the addition of RA2 signals during touch could aid the interpretation of the other tactile signals. To follow up on this in the discussion, I think it is relevant to discuss the potential future inclusion of RA2 and SA2 afferents. The model of Saal et al. does not include SA2s (as it is based on monkey work, where they are lacking, but these are readily found in humans), thus this information is not available. I can also understand the authors' reasoning (e.g. methods where they say PC/RA2 afferents are 'not tuned to fine spatial details'); however, these type 2 afferents would certainly help underpin touch perception and they should not be discounted. I agree that the focus here on type 1 afferents is good, but it would be worthwhile discussing the addition of type 2 afferents and their possible contribution for future work. Why was finger pad area significantly larger in the Elderly group? Can you comment more on this in the discussion? Does the finger change shape with age? Please spell out RA1 and SA1 when first used in the text. Referee #2: The current paper is trying to elucidate the reasons for deteriorating spatial acuity that comes with age. To do this they have performed psychophysical experiments and modeling of skin dynamics. Conceptually this is important research that would contribute to the missing understanding of how skin dynamics effect the tactile sense. The psychophysical method and description are good, and the use of the robotic stimulator is a nice way to ensure adequate reproducibility. However, the paper is severely lacking in clarity both with respect to the neurophysiological reasoning and use of statistical methods and how these are presented. Another limitation is that the model does not account for how the response properties of single mechanoafferents change with age in humans. Major issues The simulated responses to the tactile stimuli were made using the model by Saal et al. The model is only referred to and is stated to be "extended". If this extension is only the manipulation of Youngs modulus or there are more to it is never made clear. Furthermore, there is no description of the model apart from the reference which assumes that the reader is fully aware of the particulars of the model used. However, if one is familiar with the Saal-model then there are several aspects that is discussed within the model paper which is of relevance to this paper. 
For example, in the Saal paper they explicitly state that they only include "the stress perpendicular to the skin surface" while in the current paper the longitudinal stress is discussed. The chosen model should be discussed in the current paper since a large portion of the paper is dependent on the model. The already mentioned potential issue with the perpendicular stress for example, is this a problem? Furthermore, in the human experiments stimuli presentation is CONTACT ON-SLIDE-CONTACT OFF while the simulations have CONTACT ON-MOVE-CONTACT OFF. This is, probably, due to the fact that the Saal method cannot handle sliding stimulation; an aspect that is not even mentioned in the current paper. This is of course an important difference since the sliding event is a fundamental haptic dimension and could easily be argued to make a big difference (see "Is there a 'plenhaptic' function?" by Hayward for an in-depth discussion, https://doi.org/10.1098/rstb.2011.0150). How does this affect the outcome of the paper? The analysis of the simulated data is presented in a very shallow manner. The spike metric Victor-Purpura distance is presented and the q cost is set to zero, because the temporal structure of a spike response "does not provide relevant additional information". Which is a strange argument to present since the stimulation during the human experiments is dynamic and the stimulation during the simulations are also dynamic (to a low degree) since it is RAMP-HOLD-RAMP. Furthermore, there are numerous counter arguments to the notion of rate coding and these arguments shows that tactile primary afferental processing (that for the upper trunc in humans ends up in the cuneate nucleus) is highly dependent on the temporal structure (see https://doi.org/10.1016/j.neuron.2014.07.038 and https://doi.org/10.3389/fncel.2018.00210). Please discuss this more in detail and elucidate in which way the current paper avoids being dependent on the function of the Cuneate nucleus. Furthermore, there is no discussion in relation to this why VP-distance was chosen of all the available spike metrics (for a good overview see https://dx.doi.org/10.3389%2Ffncom.2019.00082). Finally, it is not known how the sensor responses change with respect to age and is definitively not included in the Saal-model. What is the argument with respect to this that makes the use of the model relevant? After the introduction of the VP-metric follows a very rough description of the classification process where all the responses of the sensors are thrown into RSS. Then converted into Euclidean coordinates, which begs the question what that is? The common way to describe it is to state that the space is Euclidean, not the coordinates. Furthermore, it is stated that MDS is used to "convert the data" into the Euclidean coordinates (space), which begs the question what it was before? Since classical MDS assumes that the input is Euclidean. Then the concept of "principal components" is written without explanation, followed by the statement that the PCs that accounted for 95% of the variance was selected. How this selection process was constructed is never written. Based on these PCs the classifier is then built. If this is true, then the authors have built in auto-correlation into their analysis since they have extracted the PCs from ALL data, and then classify the data based in that. They state that they do the testtrain split AFTER the PCs has been created. This means that the PCs will include the training data. 
It is also not written if there is any difference between the following 50 repetitions of the LDA classification (I assume that the test-train split is randomized each time, but this is never stated and is also moot if the PCs have been defined prior to the split). Finally, LDA is never explained. All in all, the classification algorithm presented is unclear and potentially auto-correlated which in that case would mean that all the results are nonsense. Is the above written understanding correct? If that is the case, please explain how the analysis does not exhibit auto-correlation. When the authors analyze how the stimulus information accumulates they do so for multiple durations for the Young group, but only one for the Elderly group without a rationale behind this. They also find that based on their simulations their classification performance is above chance already at 0.1 mm level which is not in line with their experimental data. This might be a strong argument why there is no apparent translational power between their experimental data and their simulated data. Furthermore, they use the word "spatiotemporal" in the same section, which is misleading since they use VP-distance with a q cost of zero meaning that the temporal aspect is close to zero. This also indicate that the pure number of spikes is what they measure. So, they for transparence should show the evolution of the number of spikes for the different stimulus paradigms. Why only one duration for the Elderly group? How is the analyzed data spatiotemporal? In conclusion, the paper would need a major rewrite with respect to the complexities inherent in skin biomechanics and the related sensor responses. Furthermore, all the descriptions of the analysis methods used, and the consequent results would need a major rewrite and in some, potential cases, a complete change. Finally, there is no apparent agreement between the simulation and the psychophysical results which begs the question what the simulation brings to the table? Minor No discussion of ethics approval is present. Even if there was concluded that no Ethics approval was needed, the rationale behind this should be included. There is no spread metric in the introduction where mean ages are presented. If a mean is written, then a spread needs to be written as well. Throughout the paper Confidence Intervals are missing, making the understanding of the p-values very incomplete. The authors say that the perform seven two-tailed t-tests. They accurately correct the significance level but they 1) fail to report the actual p-values, 2) fail to report the Confidence intervals, and 3) do not use the corrected p-values in figure 9. ----------------ADDITIONAL FORMATTING REQUIREMENTS: -You must start the Methods section with a paragraph headed Ethical Approval. If experiments were conducted on humans confirmation that informed consent was obtained, preferably in writing, that the studies conformed to the standards set by the latest revision of the Declaration of Helsinki, and that the procedures were approved by a properly constituted ethics committee, which should be named, must be included in the article file. If the research study was registered (clause 35 of the Declaration of Helsinki) the registration database should be indicated, otherwise the lack of registration should be noted as an exception (e.g. The study conformed to the standards set by the Declaration of Helsinki, except for registration in a database.). 
For further information see: https://physoc.onlinelibrary.wiley.com/hub/human-experiments -Please upload separate high-quality figure files via the submission form. -A Statistical Summary Document, summarising the statistics presented in the manuscript, is required upon revision. It must be on the Journal's template, which can be downloaded from the link in the Statistical Summary Document section here: https://jp.msubmit.net/cgi-bin/main.plex?form_type=display_requirements#statistics -Papers must comply with the Statistics Policy https://jp.msubmit.net/cgi-bin/main.plex? form_type=display_requirements#statistics In summary: -If n {less than or equal to} 30, all data points must be plotted in the figure in a way that reveals their range and distribution. A bar graph with data points overlaid, a box and whisker plot or a violin plot (preferably with data points included) are acceptable formats. 21-Feb-2022 -If n > 30, then the entire raw dataset must be made available either as supporting information, or hosted on a not-for-profit repository e.g. FigShare, with access details provided in the manuscript. -'n' clearly defined (e.g. x cells from y slices in z animals) in the Methods. Authors should be mindful of pseudoreplication. -All relevant 'n' values must be clearly stated in the main text, figures and tables, and the Statistical Summary Document (required upon revision) -The most appropriate summary statistic (e.g. mean or median and standard deviation) must be used. Standard Error of the Mean (SEM) alone is not permitted. -Exact p values must be stated. Authors must not use 'greater than' or 'less than'. Exact p values must be stated to three significant figures even when 'no statistical significance' is claimed. -Statistics Summary Document completed appropriately upon revision -A Data Availability Statement is required for all papers reporting original data. This must be in the Additional Information section of the manuscript itself. It must have the paragraph heading "Data Availability Statement". All data supporting the results in the paper must be either: in the paper itself; uploaded as Supporting Information for Online Publication; or archived in an appropriate public repository. The statement needs to describe the availability or the absence of shared data. Authors must include in their Statement: a link to the repository they have used, or a statement that it is available as Supporting Information; reference the data in the appropriate sections(s) of their manuscript; and cite the data they have shared in the References section. Whenever possible the scripts and other artefacts used to generate the analyses presented in the paper should also be publicly archived. If sharing data compromises ethical standards or legal requirements then authors are not expected to share it, but must note this in their Statement. For more information, see our Statistics Policy. -Please include an Abstract Figure Follow this link https://app.biorender.com/portal/jphysiol and enter your details and ensure you enter the manuscript number JP-RP-2022-282927 to be directed to enter our premium site. Select a figure type when creating the figure so The Journal of Physiology logo appears. When you have completed your figure(s) download and then upload as the figure file(s) for your revised submission. If you choose not to take up this offer we require figures to be of similar quality and resolution. 
If you are opting out of this service to authors, state this in the Comments section on the Detailed Information page of the submission form. 27-Oct-2022 1st Authors' Response to Referees I have now received reports from two independent reviewers, both experts in the field of human tactile neurophysiology and psychophysics. As you will see, Reviewer 1 has some recommendations for improvement but Reviewer 2 has major concerns with the modelling aspects of the study and hence the validity of the conclusions. In particular, this reviewer is concerned that there is low agreement between the simulations and the human psychophysical experiments, as well as the suitability of the model used in the simulations the paper. As such, I'm afraid the manuscript is not suitable as is, and will require significant rewriting in order to bring it up to the standard required of a modelling paper, particularly given that it does not include an assessment of the actual firing properties of human tactile afferents in young and aged participants. It would also be wise to include the data from Srinivasan and LaMotte in the monkey and Condon et al in the human on the effects of compliance on force encoding by tactile afferents. -We would like to thank the editor and the reviewers for their work. We have performed the changes suggested by the reviewers, including accounting for the concerns of reviewer 2 which affected the validity of the conclusion. The major changes are: (1) use of commonality analysis to understand the contribution to the discrimination judgments of the various factors (2) simplification of the analysis of simulated neurophysiological data (3) extension of the simulation to include evidence accumulation in elderly group. The authors present a highly-controlled and well-designed study investigating skin and tactile perception changes on the glabrous hand skin with age. The manuscript is interesting and very well-written, as well as displaying the data well (e.g. showing individual participant data in the graphs). I especially very much like their approach to 2-point discrimination testing, which at times can lead to problems of interpretation via the stimulation and the paradigm. Here, this is an excellent setup for many reasons, which minimizes errors and gives clear interpretations of the data. I have a few comments below that may help improve the paper more. In the abstract, the say that the 'density of mechanoreceptors decreases', whereas I think it would be better to say 'mechanoreceptive afferents', as it is very likely that there is axon degradation as well, not just at the end of the receptor. Also, on p.4 of the introduction, the authors rightly cover the loss of mechanoreceptors in the skin, but the whole afferent can degrade. -We have now changed the term to mechanoreceptive afferents and accounted for the degradation of the axon with the appropriate literature in the introduction. I think the authors need to be more specific right at the beginning of the introduction about what skin they are talking about. Skin is highly heterogenous and it is likely that aging affects it differently, for example, see the classic Stevens and Choo work (1996, Somatosen & Motor Res), which not only covers tactile tests on the glabrous hand skin, but all over the body, and how this changes over the lifespan. I think it would be good to add this study to the manuscript, but also start by saying that you are focusing on the glabrous hand skin. 
-We have made clear that we are talking about the glabrous skin of the finger pad and added the suggested paper (Stevens and Choo, 1996). At the beginning of the methods, please include details about whether written informed consent was gained and whether the study received approval from an ethical committee. -We have added this information. At the start of the 'Psychophysical task and stimulation setup' of the methods, please add more details about the exact location tested on the right index finger. Was this exactly the same between participants? The outer fingertip tends to be drier with age and there are subtle differences in sensitivity over the fingertip, due to mechanoreceptor density differences (e.g. Johansson & Vallbo, 1979, J Physiol), thus it is particularly important here to give this information. It could be more pertinent in aging, where the addition of RA2 signals during touch could aid the interpretation of the other tactile signals. -We have specified the location in the Methods section 'Psychophysical task and stimulation setup'. To follow up on this in the discussion, I think it is relevant to discuss the potential future inclusion of RA2 and SA2 afferents. The model of Saal et al. does not include SA2s (as it is based on monkey work, where they are lacking, but these are readily found in humans), thus this information is not available. I can also understand the authors' reasoning (e.g. methods where they say PC/RA2 afferents are 'not tuned to fine spatial details'); however, these type 2 afferents would certainly help underpin touch perception and they should not be discounted. I agree that the focus here on type 1 afferents is good, but it would be worthwhile discussing the addition of type 2 afferents and their possible contribution for future work. -We have now discussed the role of PC afferents in the Discussion section. Why was finger pad area significantly larger in the Elderly group? Can you comment more on this in the discussion? Does the finger change shape with age? -We have commented on this in the Discussion. Please spell out RA1 and SA1 when first used in the text. -We have introduced the abbreviations. Referee #2: The current paper is trying to elucidate the reasons for deteriorating spatial acuity that comes with age. To do this they have performed psychophysical experiments and modeling of skin dynamics. Conceptually this is important research that would contribute to the missing understanding of how skin dynamics effect the tactile sense. The psychophysical method and description are good, and the use of the robotic stimulator is a nice way to ensure adequate reproducibility. However, the paper is severely lacking in clarity both with respect to the neurophysiological reasoning and use of statistical methods and how these are presented. Another limitation is that the model does not account for how the response properties of single mechanoafferents change with age in humans. -We think there has been a misinterpretation of our experiment design, as there isn't a shear component in the stimulation used in this study. We have attempted to clarify this in the manuscript. -We also provided a better explanation of the neurophysiological reasoning behind the simulations and more details regarding analysing the simulated neurophysiological data. Major issues The simulated responses to the tactile stimuli were made using the model by Saal et al. The model is only referred to and is stated to be "extended". 
If this extension is only the manipulation of Youngs modulus or there are more to it is never made clear. Furthermore, there is no description of the model apart from the reference which assumes that the reader is fully aware of the particulars of the model used. However, if one is familiar with the Saal-model then there are several aspects that is discussed within the model paper which is of relevance to this paper. -We now introduce Saal's model in a new section titled 'Overview of TouchSim' model, which also includes the details related to the work on this paper. We have clarified how the model has been modified. For example, in the Saal paper they explicitly state that they only include "the stress perpendicular to the skin surface" while in the current paper the longitudinal stress is discussed. The chosen model should be discussed in the current paper since a large portion of the paper is dependent on the model. The already mentioned potential issue with the perpendicular stress for example, is this a problem? -We have now clarified that the Saal's model and the conditions in our experiment involve only stimuli applied perpendicular to the skin. We took out the term 'longitudinal' that was used inappropriately to avoid confusion. We also argued that this model is suitable for this application as no major longitudinal stress is produced by our static stimuli. Furthermore, in the human experiments stimuli presentation is CONTACT ON-SLIDE-CONTACT OFF while the simulations have CONTACT ON-MOVE-CONTACT OFF. This is, probably, due to the fact that the Saal method cannot handle sliding stimulation; an aspect that is not even mentioned in the current paper. This is of course an important difference since the sliding event is a fundamental haptic dimension and could easily be argued to make a big difference (see "Is there a 'plenhaptic' function?" by Hayward for an in-depth discussion, https://doi.org/10.1098/rstb.2011.0150). How does this affect the outcome of the paper? -We share the concerns of the reviewer and we have added this limitation to the paragraph about Saal model. However, this was not initially mentioned as it does not apply to our experiment. The analysis of the simulated data is presented in a very shallow manner. -Thank you for pointing this out. We agree there's need for more detail and we take this opportunity to make the analysis of simulation data clearer and reproducible. The spike metric Victor-Purpura distance is presented and the q cost is set to zero, because the temporal structure of a spike response "does not provide relevant additional information". Which is a strange argument to present since the stimulation during the human experiments is dynamic and the stimulation during the simulations are also dynamic (to a low degree) since it is RAMP-HOLD-RAMP. -We have provided a better explanation to why the focus was on the rate coding and why the temporal structure of neural response was not assessed. Furthermore, there are numerous counter arguments to the notion of rate coding and these arguments shows that tactile primary afferental processing (that for the upper trunc in humans ends up in the cuneate nucleus) is highly dependent on the temporal structure (seehttps://doi.org/10.1016/j.neuron.2014.07.038 and https://doi.org/10.3389/fncel.2018.00210) Please discuss this more in detail and elucidate in which way the current paper avoids being dependent on the function of the Cuneate nucleus. 
-We have included in the discussion limitations related to looking at first-order neurons alone without taking into account second-order neurons where signal processing is likely to begin. We have also discussed why a rate code was chosen for investigation rather than a temporal code (i.e., spike timing) Furthermore, there is no discussion in relation to this why VP-distance was chosen of all the available spike metrics (for a good overview see https://dx.doi.org/10.3389%2Ffncom.2019.00082). -We have now simplified the analysis that no longer needs any spike metrics. This simplifies the description and reproducibility of our approach. Finally, it is not known how the sensor responses change with respect to age and is definitively not included in the Saal-model. What is the argument with respect to this that makes the use of the model relevant? -We have provided a rationale behind the modelling part of the paper in the introduction and discussed its implications in the discussion section. After the introduction of the VP-metric follows a very rough description of the classification process where all the responses of the sensors are thrown into RSS. Then converted into Euclidean coordinates, which begs the question what that is? The common way to describe it is to state that the space is Euclidean, not the coordinates. Furthermore, it is stated that MDS is used to "convert the data" into the Euclidean coordinates (space), which begs the question what it was before? Since classical MDS assumes that the input is Euclidean. Then the concept of "principal components" is written without explanation, followed by the statement that the PCs that accounted for 95% of the variance was selected. How this selection process was constructed is never written. Based on these PCs the classifier is then built. If this is true, then the authors have built in autocorrelation into their analysis since they have extracted the PCs from ALL data, and then classify the data based in that. They state that they do the test-train split AFTER the PCs has been created. This means that the PCs will include the training data. It is also not written if there is any difference between the following 50 repetitions of the LDA classification (I assume that the test-train split is randomized each time, but this is never stated and is also moot if the PCs have been defined prior to the split). Finally, LDA is never explained. All in all, the classification algorithm presented is unclear and potentially auto-correlated which in that case would mean that all the results are nonsense. Is the above written understanding correct? If that is the case, please explain how the analysis does not exhibit auto-correlation. When the authors analyze how the stimulus information accumulates they do so for multiple durations for the Young group, but only one for the Elderly group without a rationale behind this. -We believe the misunderstanding stems from the use of the term Principal Component to refer to the coordinates obtained from the MDS. We are sorry we used a term that is unclear in this context. In brief, we did not perform PCA for dimensionality reduction which, the reviewer is correct, would have been an issue if pca was performed on all data before the train/test split. We have taken this opportunity to change the approach for the analysis of the neurophysiological data by using PCA instead of the long pipeline used in the first place. This part was rewritten ensuring more clarity in regard to the details of the analysis. 
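To make the revised procedure concrete, the following is a minimal sketch (Python with scikit-learn, on made-up data; it is illustrative only and not the authors' actual code) of a leakage-free version of the pipeline described above: the train/test split is made first, dimensionality reduction is fitted on the training portion only, and the LDA classification is repeated over 50 randomized splits.

```python
# Illustrative, leakage-free PCA -> LDA pipeline (hypothetical data and labels).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.poisson(5.0, size=(200, 60)).astype(float)  # placeholder afferent spike counts
y = rng.integers(0, 2, size=200)                     # placeholder labels (e.g. 1 pin vs 2 pins)

accuracies = []
for rep in range(50):                                # 50 randomized repetitions
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=rep)
    # PCA retaining 95% of the variance is fitted on the training split only,
    # so no information from the held-out trials leaks into the components.
    clf = make_pipeline(PCA(n_components=0.95, svd_solver="full"),
                        LinearDiscriminantAnalysis())
    clf.fit(X_tr, y_tr)
    accuracies.append(clf.score(X_te, y_te))

print(f"mean accuracy over 50 splits: {np.mean(accuracies):.3f}")
```

In this arrangement the components are re-estimated inside every repetition, which is what removes the auto-correlation concern raised by the reviewer.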
They also find that based on their simulations their classification performance is above chance already at 0.1 mm level which is not in line with their experimental data. This might be a strong argument why there is no apparent translational power between their experimental data and their simulated data. -Thanks for pointing this out. We have tried to clarify why we believe there is translational power between experimental and simulated data. Our focus was to estimate the separation level at which the correct response rate is 75%. Participants performance was estimated as the stimulus level (i.e. separation distance) at which they could respond correctly 75% of the time. This was done by fitting a Logistic curve to the response correct rate at each separation level. Similarly, we estimated the separation level at which the classifier performed correctly 75% of the time. Results show a good match between the behavioural and simulated JNDs . We have now reported all the analysis in terms of JNDs. Importantly, the classification performance slighlty above chance at 0.1 mm is not in contrast with the behavioural data where some of our participants could discriminate the 2 pins from the single pin even at 0.1 mm. Furthermore, they use the word "spatiotemporal" in the same section, which is misleading since they use VP-distance with a q cost of zero meaning that the temporal aspect is close to zero. This also indicate that the pure number of spikes is what they measure. So, they for transparence should show the evolution of the number of spikes for the different stimulus paradigms. Why only one duration for the Elderly group? How is the analyzed data spatiotemporal? -We changed it to spatial and clarified that is the number of spikes we are looking at. We modified the analysis to allow a better comparison between young and elderly. In conclusion, the paper would need a major rewrite with respect to the complexities inherent in skin biomechanics and the related sensor responses. Furthermore, all the descriptions of the analysis methods used, and the consequent results would need a major rewrite and in some, potential cases, a complete change. Finally, there is no apparent agreement between the simulation and the psychophysical results which begs the question what the simulation brings to the table? Minor No discussion of ethics approval is present. Even if there was concluded that no Ethics approval was needed, the rationale behind this should be included. -We have added this information There is no spread metric in the introduction where mean ages are presented. If a mean is written, then a spread needs to be written as well. -We have added spread metric for the study by Skedung et al. (2018). However, spread metric was not reported in the remaining papers cited in the introduction. Throughout the paper Confidence Intervals are missing, making the understanding of the p-values very incomplete. The authors say that the perform seven two-tailed t-tests. They accurately correct the significance level but they 1) fail to report the actual p-values, 2) fail to report the Confidence intervals, and 3) do not use the corrected p-values in figure 9. 15-Nov-2022 1st Revision -Editorial Decision Dear Dr. Wing, Re: JP-RP-2022-283174X "Skin properties and afferent density in the deterioration of tactile spatial acuity with age" by Davide Deflorio, Massimilano Di Luca, and Alan Wing Thank you for submitting your manuscript to The Journal of Physiology. 
It has been assessed by a Reviewing Editor and by 2 expert referees and we are pleased to tell you that it is acceptable for publication following minor revision. Please advise your co-authors of this decision as soon as possible. The referee reports are copied at the end of this email. Please address all the points raised and incorporate all requested revisions or explain in your Response to Referees why a change has not been made. We hope you will find the comments helpful and that you will be able to return your revised manuscript within 4 weeks. If you require longer than this, please contact journal staff: jp@physoc.org. Your revised manuscript should be submitted online using the link in your Author Tasks: Link Not Available. This link is accessible via your account as Corresponding Author; it is not available to your co-authors. If this presents a problem, please contact journal staff (jp@physoc.org). Image files from the previous version are retained on the system. Please ensure you replace or remove any files that are being revised. If you do not wish to submit a revised version of your manuscript, you must inform our journal staff (jp@physoc.org) or reply to this email to request withdrawal. Please note that a manuscript must be formally withdrawn from the peer review process at one journal before it may be submitted to another journal. TRANSPARENT PEER REVIEW POLICY: To improve the transparency of its peer review process The Journal of Physiology publishes online, as supporting information, the peer review history of all articles accepted for publication. Readers will have access to decision letters, including Editors' comments and referee reports, for each version of the manuscript, as well as any author responses to peer review comments. Referees can decide whether or not they wish to be named on the peer review history document. ABSTRACT FIGURES: Authors are expected to use The Journal's premium BioRender account to create/redraw their Abstract Figures. Information on how to access this account is here: https://physoc.onlinelibrary.wiley.com/journal/14697793/biorender-access. This will enable Authors to create and download high-resolution figures. If authors have used the free BioRender service, they can use the instructions provided in the link above to download a high-resolution version suitable for publication. The link provided should only be used for the purposes of this submission. Authors will be charged for figures created on this account if they are not related to this manuscript submission. LANGUAGE EDITING AND SUPPORT FOR PUBLICATION: If you would like help with English language editing, or other article preparation support, Wiley Editing Services offers expert help, including English Language Editing, as well as translation, manuscript formatting, and figure formatting at www.wileyauthors.com/eeo/preparation. You can also find resources for Preparing Your Article for general guidance about writing and preparing your manuscript at www.wileyauthors.com/eeo/prepresources. Upload a full Response to Referees file. To create your 'Response to Referees' copy all the reports, including any comments from the Senior and Reviewing Editors, into a Microsoft Word, or similar, file and respond to each point, using font or background colour to distinguish comments and responses and upload as the required file type. Please upload two versions of your manuscript text: one with all relevant changes highlighted and one clean version with no changes tracked. 
The manuscript file should include all tables and figure legends, but each figure/graph should be uploaded as separate, high-resolution files. We look forward to receiving your revised submission. If you have any queries, please reply to this email and we will be pleased to advise. -You must start the Methods section with a paragraph headed Ethical Approval. If experiments were conducted on humans confirmation that informed consent was obtained, preferably in writing, that the studies conformed to the standards set by the latest revision of the Declaration of Helsinki, and that the procedures were approved by a properly constituted ethics committee, which should be named, must be included in the article file. If the research study was registered (clause 35 of the Declaration of Helsinki) the registration database should be indicated, otherwise the lack of registration should be noted as an exception (e.g. The study conformed to the standards set by the Declaration of Helsinki, except for registration in a database.). For further information see: https://physoc.onlinelibrary.wiley.com/hub/human-experiments. In summary: -If n {less than or equal to} 30, all data points must be plotted in the figure in a way that reveals their range and distribution. A bar graph with data points overlaid, a box and whisker plot or a violin plot (preferably with data points included) are acceptable formats. -If n > 30, then the entire raw dataset must be made available either as supporting information, or hosted on a not-for-profit repository e.g. FigShare, with access details provided in the manuscript. -'n' clearly defined (e.g. x cells from y slices in z animals) in the Methods. Authors should be mindful of pseudoreplication. -All relevant 'n' values must be clearly stated in the main text, figures and tables, and the Statistical Summary Document (required upon revision). -The most appropriate summary statistic (e.g. mean or median and standard deviation) must be used. Standard Error of the Mean (SEM) alone is not permitted. -Exact p values must be stated. Authors must not use 'greater than' or 'less than'. Exact p values must be stated to three significant figures even when 'no statistical significance' is claimed. -Statistics Summary Document completed appropriately upon revision. Thank you for submitting your manuscript to The Journal of Physiology. I have now received reports from two independent reviewers, both experts in human somatosensory neurophysiology. As you will see, both reviewers thought highly of your manuscript, noting that deterioration of tactile function with age is understudied and that your manuscript contributes with an in-depth mechanistic analysis. However, both reviewers raise some issues that you will need to address. I look forward to receiving your revised manuscript in due course. ----------------- REFEREE COMMENTS Referee #1: I thank the authors for providing clear responses in the review process and making the necessary changes. I have a few further comments. In the abstract, 'density of mechanoreceptive afferent decreases' should be 'density of mechanoreceptive afferents decreases' (with the 's' at the end of afferent). Was the present study pre-registered? If not, in the ethics part of the methods, please add, 'apart from registration in a database', as per the Declaration of Helsinki (#35 https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethicalprinciples-for-medical-research-involving-human-subjects/). 
In the paragraph on Meissner corpuscles in the discussion, the authors could add the paper by Skedung et al (2018, Sci Rep), as they show that some elderly participants had conserved numbers of Meissner's (and were better at tactile tests), but the majority had decreased numbers. It is likely that these factors are also variable with age.
***
Referee #2: Several of the concerns raised in the first review have been mitigated. A major change to the analysis has been performed and is overall an improvement. It is easier to follow the method and fewer concerns are raised. However, a few concerns are present in the updated manuscript. Below are major and minor issues.
Major issues:
Page 27-28: Commonality analysis: I have several concerns that are, at the least, unclear to me, although they might be due to misunderstandings rather than errors. The numbers in table 2 and the numbers in the paragraph starting with "Although unique effects suggest that elasticity," are not the same. Every numerator differs from the table. Elasticity is 17.89 in the paragraph, 17.79 in the table; finger pad area is 37 in the paragraph, 36.65 in the table; hydration is 31.58 in the paragraph and 31.25 in the table; age is 88.9 in the paragraph and 51.55 in the table. After some reading and calculating I assume that "unique and total of common effects combined" means that the authors added the Coefficient column to the Total of Common Effects column, which produces the numbers used in the text. If this is the case, then there should be a correspondence of terms used. Thus, the relationship between table and paragraph is unclear. Furthermore, it is never explained why this metric is relevant. From the numbers in the table, it looks like the main variable associated with the JND variance is Age. Looking at the plots in fig.5 this makes sense, but on the other hand the magnitude of the impact of the remaining variables seems unreasonably low. Are they so collinear with age that they do not matter, or is it perhaps an issue with improper normalization prior to the commonality analysis? Also, it is unclear what the relationship is between the initial claim that age accounted for 37% of the variance, and the last claim of the following paragraph where it is said to account for 98.9% of the variance. Which is it? This is repeated in the first paragraph of the Discussion; any alteration in the preceding text must be reflected there as well. Overall, I think that the explanatory paragraph on page 28 is unclear, and it is very hard to find the correspondence between the numbers in the text and the numbers in the referred table.
Figure 6 right: The magnitude of the SD is heavily influenced by the magnitude of the mean value, i.e., the potential for larger variations increases with increasing frequency. Converting the SDs in fig. 6, right plot, into a Z-score (SD/Mean) seems to convert the variability into the 15-20% range for all the groups (calculations made from visual inspection of the plots). The last statement of the first paragraph on page 30 can be questioned. Since the classification is made based on the rate-code (i.e., number of spikes), the variability will mostly introduce noise, since it is quite obvious from the left panel in fig.6 that there is a roughly 2 SD difference between the means of each of the Elderly groups, and even more up to the Young group. Please clarify how the variability improves information content with respect to rate-coding and why SD is preferred over z-score? Fig.
8 discusses the stimulus information build up over time. As it is rate-code it means how the number of spikes accumulates. We know from Fig.6 that the mean number of spikes from the different stimulus contexts produce quite different number of spikes. It would be interesting to see the typical responses from each of the groups. Like a raster plot or similar. This would make it more accessible and relevant for peers that work with recorded data from primary afferents. Minor issues: Page 25: Multiple comparisons without preceding test or p-value correction. This in contrast to the later figure 7 and related texts where suddenly both bonferroni correction and pre-test are used. Varying use of exact and approximate p-values. No definition of what "significant" means. Page 26 paragraph starting with "A summary of...": Missing CI for "No significant correlation was found between biological elasticity and finger pad area (Pearson's r = -0.06, p = .74)." Mixing of styles: 360% increase and two-fold in increase, page 35. END OF COMMENTS 1st Confidential Review EDITOR COMMENTS Reviewing Editor: Thank you for submitting your manuscript to The Journal of Physiology. I have now received reports from two independent reviewers, both experts in human somatosensory neurophysiology. As you will see, both reviewers thought highly of your manuscript, noting that deterioration of tactile function with age is understudied and that your manuscript contributes with an in-depth mechanistic analysis. However, both reviewers raise some issues that you will need to address. I look forward to receiving your revised manuscript in due course. In the abstract, 'density of mechanoreceptive afferent decreases' should be 'density of mechanoreceptive afferents decreases' (with the 's' at the end of afferent). -We corrected this. Was the present study pre-registered? If not, in the ethics part of the methods, please add, 'apart from registration in a database', as per the Declaration of Helsinki (#35 https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethicalprinciples-for-medical-research-involving-human-subjects/). -We added this information. In the paragraph on Meissner corpuscles in the discussion, the authors could add the paper by Skedung et al (2018, Sci Rep), as they show that some elderly participants had conserved numbers of Meissner's (and were better at tactile tests), but the majority had decreased numbers. It is likely that these factors are also variable with age. -We have added the suggested paper in the Discussion section. *** Referee #2: Several of the concerns raised in the first review has been mitigated. A major change to the analysis has been performed and is overall an improvement. It is easier to follow the method and less concerns are raised. However, a few concerns are present in the updated manuscript. Below are major and minor issues. Major issues: Page 27-28: Commonality analysis: I have several concerns that is, at least unclear to me although they might be due to misunderstandings rather than errors. The numbers in table 2 and the numbers in the paragraph starting with "Although unique effects suggest that elasticity," are not the same. Every numerator differs from the Furthermore, it is never explained why this metric is relevant. From the numbers in the table, it looks like the main variable associated with the JND variance is Age. Looking at the plots in fig.5 this makes sense, but on the other hand the magnitude of the impact of the remaining variables seems unreasonably low. 
Are they so colinear with age that they do not matter or is it perhaps an issue with improper normalization prior to the commonality analysis? Also, it is unclear what the relationship is between the initial claim that age accounted for 37% of the variance, and the last claim of the following paragraph where it is said to account for 98.9% of the variance. Which is it? This is repeated in the first paragraph of the Discussion, any alteration in the preceding text must be reflected there as well. Overall, I think that the explanatory paragraph on page 28 is unclear, and it is very hard to find the correspondence between the numbers in the text and the numbers in the referred table. -We thank the reviewer for pointing this out and we tried to clarify the text in the manuscript to avoid confusion between R 2 , which is the regression effect (percentage of variance in the dependent variable that can be explained by our model), the percentage of explained variance in the dependent variable by each unique predictor, and the percentage of variance shared by each predictor with the regression effect (i.e., the ratio between the percentage of explained variance in the dependent variable by each unique predictor plus all combinations of each predictor with the others and the overall variance explained in the behavioural performance). -The low impact of the remaining variables is due to the collinearity issue which is the reason why we used commonality analysis to provide a more accurate picture of the relationship between predictors and dependent variable. Although the remaining variable provided little unique contribution to the regression effect, they shared a significant amount of variance with the regression effect as shown by the ratio between total effects for each predictor plus all combinations of each predictors with the others and the overall variance explained in the behavioural performance. Figure 6 right: The magnitude of the SD is heavily influenced by the magnitude of the mean value. I.e., the potential for larger variations increases with increasing frequency. Converting the SDs in fig. 6. Right plot into Z-score (SD/Mean) seems to convert the variability into the 15-20% range for all the groups (calculations made from visual inspection of the plots). The last statement of the first paragraph on page 30 can be questioned. Since the classification is made based on the rate-code (i.e., number of spikes) then the variability will mostly introduce noise since it is quite obvious from the left panel in fig.6 that there is a roughly 2 SD difference between the Means of each of the Elderly groups, and even more up to the Young group. Please clarify how the variability improves information content with respect to ratecoding and why SD is preferred over z-score? -We thank the reviewer for pointing this out. We have now removed the figure and text related to the SD as we realised it was superfluous. We have highlighted that the population spike count is more variable across stimuli for the simulated Young group than the other three groups without the need for any additional measure (e.g. SD). Also, this is not exactly what the classifier is using. The information comes from the variability of the firing rate across individual afferents and not the overall (population) spike count. Fig. 8 discusses the stimulus information build up over time. As it is rate-code it means how the number of spikes accumulates. 
We know from Fig.6 that the mean number of spikes from the different stimulus contexts produce quite different number of spikes. It would be interesting to see the typical responses from each of the groups. Like a raster plot or similar. This would make it more accessible and relevant for peers that work with recorded data from primary afferents. -We have added raster plots in Figure 8 showing an example of the simulated neural response in the two-age groups. Minor issues: Page 25: Multiple comparisons without preceding test or p-value correction. This in contrast to the later figure 7 and related texts where suddenly both bonferroni correction and pre-test are used. -We have included a two-way mixed ANOVA before the multiple comparison. Varying use of exact and approximate p-values. -We have now reported exact p-values in all tests. No definition of what "significant" means. -We have now specified: "Alpha of 0.05 was set as criterion for statistical significance" at the beginning of the Results section. Page 26 paragraph starting with "A summary of...": Missing CI for "No significant correlation was found between biological elasticity and finger pad area (Pearson's r = -0.06, p = .74)." -We have added this information. Mixing of styles: 360% increase and two-fold in increase, page 35. -We have now used the same style. 13-Dec-2022 2nd Revision -Editorial Decision Dear Dr Wing, Re: JP-RP-2022-283174XR1 "Skin properties and afferent density in the deterioration of tactile spatial acuity with age" by Davide Deflorio, Massimilano Di Luca, and Alan Miles Wing We are pleased to tell you that your paper has been accepted for publication in The Journal of Physiology. TRANSPARENT PEER REVIEW POLICY: To improve the transparency of its peer review process The Journal of Physiology publishes online, as supporting information, the peer review history of all articles accepted for publication. Readers will have access to decision letters, including Editors' comments and referee reports, for each version of the manuscript, as well as any author responses to peer review comments. Referees can decide whether or not they wish to be named on the peer review history document. The last Word (or similar) version of the manuscript provided will be used by the Production Editor to prepare your proof. When this is ready you will receive an email containing a link to Wiley's Online Proofing System. The proof should be thoroughly checked and corrected as promptly as possible.
The immunotoxic effects of short term chronic exposure to Titanium Dioxide Nanoparticles on spleen of adult albino rats and the role of after-toxic-effect follow up
Objective: The use of Titanium Dioxide NanoParticles (TiO2NPs) across a wide range of applications has been linked to a variety of health problems. The aim of the current study was to evaluate the immunotoxic effects of short term administration of TiO2NPs on the spleen. Material and Methods: Forty adult male rats were equally divided into four groups as follows: Group I: negative control; Group II: positive control; Group III: received TiO2NPs (1200 mg/kg) orally daily for 12 weeks; Group IV: follow-up group, which received TiO2NPs at the same dose, by the same route and for the same duration as the TiO2NPs group and was then left untreated for another 8 weeks. Total leukocyte and differential leukocytic counts and interleukin IL-2 and IL-10 levels were measured, and spleen sections were examined immunohistochemically for the detection of CD4+ and iNOS expressing cells. Histopathological alterations in the spleen were also evaluated. Moreover, DNA damage was evaluated by comet assay. Results: TiO2NPs exposure for 12 weeks resulted in significant decreases in total and differential leukocytic counts and in serum interleukins IL-2 as well as IL-10. It caused a marked decrease in CD4+ T-lymphocytes and an increase in iNOS expressing cells, indicating oxidative stress in spleen tissues. It also caused histopathological disruption of spleen architecture and produced DNA damage in splenocytes. Discontinuation of TiO2NPs administration for 8 weeks resulted in significant improvement of leukocytes and interleukins, an increase in CD4+ T-lymphocytes and a decrease in iNOS expressing cells in spleen tissues. Moreover, there was moderate improvement in histopathological alterations and DNA damage. Conclusion: TiO2NPs consumption has immunotoxic effects, which may result from genetic damage and oxidative stress in the spleen of adult male albino rats, and which could be improved by discontinuation for a period of time; it is recommended to increase the period of discontinuation, as complete improvement may then occur.
Introduction
Nanomaterials are new engineered structures with one dimension of 100 nanometers or less (Nel et al., 2006). Titanium Dioxide NanoParticles (TiO2NPs) have specific properties, such as higher stability and anti-corrosive and photocatalytic effects, that give them a large range of applications in several areas (Riu et al., 2006). TiO2NPs are used in a variety of consumer products, such as surface coatings, paints, toothpastes, sunscreens, cosmetics and food products (Gurr et al., 2005), and in the environmental decontamination of air, soil, and water (Choi et al., 2006). In medical treatment, TiO2NPs have also been used in photodynamic therapy (Szacilowski et al., 2005) and antibacterial drugs (Montazer et al., 2011).
TiO2NPs are small-sized particles with a large surface area and reactive surface chemistry, which facilitates their endocytosis into different cells (Robertson et al., 2010), providing the opportunity for accumulation in several organs such as the liver, kidneys, spleen, lungs, and heart of animals (Liu et al., 2009). Previous research has demonstrated the toxic effects of TiO2NPs on various organs through different mechanisms such as DNA damage, apoptosis with formation of apoptotic bodies and mitochondrial abnormalities (Wang et al., 2007a), oxidative stress (Park et al., 2008), enzymatic activity changes followed by cell apoptosis or necrosis (Zhao et al., 2009), and penetration of the blood-brain and blood-placenta barriers (Sang et al., 2012). Recently, studies have reported that exposure to TiO2NPs affects the immune system (Li et al., 2010) by changing cytokine production and decreasing immune function (Liu et al., 2011). Also, the systemic immune response associated with inhalable TiO2NPs has provided a new strategy for risk assessment of TiO2NP exposure (Yanyun et al., 2014).
The immune system may be affected by oxidative stress that contributes to several organs damage and cell death as disturbed balance between oxidants and antioxidants leading to depression of immune organ function (Parkash and Nagarkatti, 2014).Inducible nitric oxide synthase (iNOS) described as the immunological NOS and expressed by macrophages and characterized by increase production of nitric oxide (NO) in pathological conditions.NO generated enzymatically by synthase [nitric oxide synthase (NOS)] which oxidize L-arginine to L-citrulline (Gnarro,2002), having the chance to react with superoxide and formation of peroxynitrite inducing cell toxicity (Mungrue et al.,2002).NO from iNOS-expressing cells suppresses mouse T cell proliferation.Also, in case of deficient iNOS, NO did not produced and this may attributed to increase the infiltration and expansion of CD4 cells (Aheng et al.,2011).The spleen was approved to be important organ for iNOS/NO responses and highly expressed iNOS in the splenocytes may regulate Th response negatively (Qing Shen et al.,2015). The comet assay or single cell gel electrophoresis (SCGE) provides a simple and effective method for evaluating DNA damage and breakage in individual cells (Demir et al.,2011) so, it can be used to test if TiO2NPs produce DNA damage. On that basis, The aim of the current work was to study immunetoxic effects of short term chronic exposure to TiO2NPs and to investigate the possible underlying molecular mechanisms of such toxicity and role of follow up in adult albino rats. Chemicals and preparation Titanium dioxide nanoparticles (TiO2NPs) was white odorless fine powder (21 nm particle size, surface area of 35-65 m 2 /g, purity ≥99.5% trace metals basis and Its CAS No is 13463-67-7), manufactured by Sigma -Aldrich Chemical Company, Germany and purchased from Sigma -Egypt.It was dissolved in 5% gum acacia solution which prepared by dissolving 10 gm of powder in 100 ml boiled distilled water.It was obtained from El-Nasr Pharmaceutical Chemicals Company, Egypt. 
Experimental Animals and Design
Forty adult albino rats weighing 200-210 gm were purchased from the animal breeding house of the Faculty of Medicine, Zagazig University. The animals were housed in stainless steel cages and provided with commercial laboratory animal food and water ad libitum. All ethically approved conditions for animal housing and handling were observed, and standards for animal care and administration met those required by applicable international laws and regulations (ILAR, 1996). The animals were equally divided into 4 groups:
 Group I (negative control group): 10 rats received only a regular diet and water for 12 weeks, to determine the baseline values of the performed tests.
 Group II (positive control group, gum acacia): 10 rats each received 1 ml of 5% gum acacia solution (the solvent of titanium dioxide) by oral gavage once daily for 12 weeks.
 Group III (titanium dioxide treated group): 10 rats were gavaged orally with 1200 mg/kg body weight of titanium dioxide nanoparticles (1/10 of the LD50) in 1 ml of 5% gum acacia solution as a solvent, once daily for 12 weeks. The LD50 of TiO2 for rats is more than 12,000 mg/kg body weight after oral administration (Wang et al., 2007b).
 Group IV (follow up group): 10 rats received titanium dioxide nanoparticles by the same route, at the same dose and for the same duration as Group III, then they were left without treatment for another 8 weeks.
Twenty-four hours after the final dosing/non-dosing day, the rats (which had been fasted overnight) were weighed and venous blood samples were collected from the retro-orbital plexus of each rat while the animal was anesthetized with ether. The animals were then euthanized by cervical dislocation and the spleens were excised and weighed accurately.
Body weight and Coefficient of spleen
After weighing the bodies and spleens, the coefficient of spleen to body weight was calculated as the ratio of spleen weight (wet weight, mg) to body weight (g).
4. Leucocytes Counts
Whole blood samples (2.5 ml) from each group were collected in tubes containing EDTA as an anticoagulant. Total white blood cell (WBC) and differential leukocyte counts (lymphocytes, monocytes and granulocytes) were measured using a hematology cell counter.
5. Cytokines Assay
Serum (2.5 ml) from each group was harvested by centrifuging blood at 2500 rpm for 10 min at 4 °C and immediately frozen at −80 °C for later measurement of interleukin IL-2 and IL-10 levels. Serum IL-2 and IL-10 levels were assayed over 4.5 hours using Rat Quantikine® ELISA kits (R&D Systems Inc., Minneapolis, MN), which detect natural and recombinant rat IL-2 and IL-10. The manufacturer's instructions were followed. The absorbance was measured on a microplate reader at 450 nm, and the IL-2 and IL-10 concentrations of the samples were calculated from a standard curve.
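As a side note on that last step, concentrations are typically back-calculated from the standard curve with a four-parameter logistic (4PL) fit; the sketch below (Python, with invented standard values, not the kit's actual data) illustrates the idea.

```python
# Sketch: back-calculating cytokine concentrations from an ELISA standard curve
# with a four-parameter logistic (4PL) fit (all numbers are hypothetical).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    # Absorbance rises from 'bottom' to 'top' as concentration increases.
    return bottom + (top - bottom) * conc**hill / (ec50**hill + conc**hill)

std_conc = np.array([31.2, 62.5, 125, 250, 500, 1000, 2000], dtype=float)  # pg/ml
std_abs  = np.array([0.08, 0.15, 0.27, 0.52, 0.95, 1.60, 2.30])            # OD at 450 nm

params, _ = curve_fit(four_pl, std_conc, std_abs, p0=[0.05, 3.0, 400.0, 1.0],
                      bounds=([0, 0, 1, 0.1], [1, 5, 5000, 5]))
bottom, top, ec50, hill = params

def absorbance_to_conc(a):
    """Invert the fitted 4PL curve to recover concentration from absorbance."""
    r = (a - bottom) / (top - bottom)
    return ec50 * (r / (1.0 - r)) ** (1.0 / hill)

sample_abs = np.array([0.40, 1.10])        # hypothetical sample readings
print(absorbance_to_conc(sample_abs))      # estimated concentrations in pg/ml
```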
Immunohistochemical Staining of spleen
Immunohistochemistry was performed using primary antibodies against CD4 (ready to use; Dako, Carpinteria, California, USA) and anti-iNOS (inducible Nitric Oxide synthase isoform) (dilution 1:50; Santa Cruz Biotechnology), with the streptavidin-biotin immunoperoxidase technique (DakoCytomation, California, USA). Formalin-fixed, paraffin-embedded (FFPE) tissues were cut into 3-4-μm-thick sections and transferred to 3-aminopropyltriethoxysilane (APTS)-coated glass slides. Sections were then subjected to dewaxing, rehydration, blocking with hydrogen peroxide, and antigen retrieval, which was performed by heating specimens at 100°C for 20 min in citrate buffer (pH 6.0) in a microwave. One to two drops of the primary ready-to-use monoclonal antibodies, anti-CD4 and anti-iNOS, were then placed on the sections on separate slides. Slides were incubated at room temperature for 60 min. Incubation with secondary antibody and product visualization (Dako) was performed with DAB chromogen (3,3'-diaminobenzidine tetrahydrochloride). Sections were counter-stained with hematoxylin, dehydrated with ethanol and xylene, and mounted permanently with Di-n-butyl Phthalate in Xylene (DPX).
Assessment of Immunohistochemical Staining Results
Microscopic evaluation of CD4+: membranous immunostaining was scored by counting the number of lymphocytes showing CD4 marker expression in five randomly selected high-power fields at 40X magnification, and the sections were graded as follows: + (1-25 cells), ++ (26-50 cells), +++ (≥51 cells) (Guo et al., 2008).
Histopathological Examination of spleen
All tissue samples were fixed with 10% formalin. Consecutive 5-μm-thick sections from formalin-fixed, paraffin-embedded tissue blocks were prepared and stained with hematoxylin and eosin (H&E) for histopathological classification (Frank et al., 2005).
Comet Assay of spleen
The principle of the method (Singh et al., 1988) is that, under highly alkaline conditions, there is denaturation and unwinding of the duplex DNA and expression of alkali-labile sites as single-strand breaks. The parameters measured to analyze the electrophoretic patterns were the tail length, measured from the middle of the nucleus to the end of the tail, and the relative DNA content in the tail. Slides were examined with a fluorescence microscope under a 40x objective (excitation filter 420-490 nm; 510 nm), and Komet 5 image analysis software, developed by Kinetic Imaging, Ltd. (Liverpool, UK) and linked to a charge-coupled device (CCD) camera, was used to assess the quantitative and qualitative extent of DNA damage in the cells by measuring the length of DNA migration and the percentage of migrated DNA. Finally, the program calculates the tail moment. Generally, 50 to 100 randomly selected cells are analyzed per sample.
Statistical analysis
Results were expressed as mean ± standard deviation (SD). Multigroup comparisons of the means were carried out by one-way analysis of variance (ANOVA). The least significant difference (LSD) test was used to compare the differences between the experimental groups and the control group. Descriptive data were compared by chi-square test. Statistical significance for all tests was set at P < 0.05. The correlation coefficient (r) was used for testing the association between two continuous variables. Analyses were run using SPSS software (v.16; SPSS).
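To illustrate the comparison workflow just described (a sketch with synthetic numbers; the study itself used SPSS v.16), a one-way ANOVA followed by Fisher's LSD pairwise comparisons could be run as follows.

```python
# Sketch: one-way ANOVA followed by Fisher's LSD pairwise comparisons
# for four groups of rats (synthetic values, not the study's data).
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
groups = {
    "negative control": rng.normal(8.0, 1.0, 10),   # hypothetical WBC counts
    "positive control": rng.normal(8.1, 1.0, 10),
    "TiO2NPs":          rng.normal(5.0, 1.0, 10),
    "follow-up":        rng.normal(7.0, 1.0, 10),
}

f_stat, p_val = stats.f_oneway(*groups.values())     # omnibus test
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

# Fisher's LSD: pairwise t-tests using the pooled within-group variance (MSE).
k = len(groups)
n_total = sum(len(v) for v in groups.values())
mse = sum((len(v) - 1) * np.var(v, ddof=1) for v in groups.values()) / (n_total - k)
df_error = n_total - k

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
    t = (np.mean(a) - np.mean(b)) / se
    p = 2 * stats.t.sf(abs(t), df_error)
    print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.4g}")
```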
Body weight and Coefficient of spleen
The results of this study revealed that TiO2NPs treatment for 12 weeks induced a highly significant increase in the coefficient of the spleen (P<0.0001) when compared to the control group. Stopping TiO2NPs administration for 8 weeks in the follow up group significantly reduced these values compared to the TiO2NPs group (P<0.0001) (Table 1).
Leucocytes Counts
Rats of the TiO2NPs treated group showed a highly significant decrease in total white blood cell (WBC) and differential counts (P<0.0001). Stopping TiO2NPs administration significantly increased total WBC and differential counts (P<0.0001) when compared to the TiO2NPs treated group (Table 2).
3. Cytokines Assay
The Titanium Dioxide NanoParticles (TiO2NPs) treated group showed a significant decrease in both IL-2 and IL-10 compared to control (P<0.0001), while cessation of treatment produced a significant increase in both interleukins compared with the TiO2NPs group (P<0.0001) (Table 3, Bar charts 1 and 2).
4. Immunohistochemical Staining of spleen
CD4 and iNOS expression was examined in spleen tissues of the studied groups and the results are shown in (Table 4) and (Table 5). In terms of CD4 marker expression, there was a highly significant reduction in the TiO2NPs treated group compared to the control groups, and sections were graded (+), P<0.0001 (Table 4). In the TiO2NPs treated group, all splenic sections showed low level expression of CD4+ T-lymphocytes (Plates 1 C and F) compared to control tissues, which showed numerous brown positive membranous immunoreactions around the white pulp (Plates 1 A, B and E). The splenic sections of rats of the follow up group showed a highly significant increase in CD4 marker expression compared to the TiO2NPs treated group and were graded (++), P<0.0001 (Table 4) and (Plates 1 D). Regarding inducible nitric oxide synthase (iNOS) expression, there was a highly significant difference between the TiO2NPs treated group and the control groups, where all splenic sections were positive and 60% of them scored (3), i.e. strong expression, P<0.0001 (Table 5); the TiO2NPs treated group also showed brown cytoplasmic immune reactions staining the red pulp of the spleen (Plates 2 C and E) compared to control sections (negative iNOS immunostaining) (Plates 2 A, B). When rats of the follow up group were left without treatment for 8 weeks, their splenic sections showed a highly significant difference compared to those of the TiO2NPs group, where most splenic sections were negative and 60% of them scored (0), P<0.0001 (Table 5), and showed weaker immunoreactivity within the red pulp and in sinus lining cells (Plates 2 D and F). From the immunohistochemical staining results, a significant negative correlation between CD4 and iNOS expression was found in both the TiO2NPs treated group (r = -0.97) and the follow up group (r = -0.56) (Table 6) (Figure 1 A and B).
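For completeness, the kind of correlation reported above can be computed per group from paired CD4 and iNOS scores; the following is a minimal sketch with hypothetical per-rat values (not the study's data).

```python
# Sketch: Pearson correlation between CD4 and iNOS scores within one group
# (hypothetical per-rat values; the study reports r = -0.97 and r = -0.56).
import numpy as np
from scipy.stats import pearsonr

cd4_positive_cells = np.array([12, 15, 10, 18, 22, 9, 14, 20, 11, 16], dtype=float)
inos_score         = np.array([3, 3, 3, 2, 1, 3, 2, 1, 3, 2], dtype=float)

r, p = pearsonr(cd4_positive_cells, inos_score)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # negative r = inverse association
```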
5. Histopathology Examination of spleen
Examination of H&E-stained spleen sections of the negative and positive control groups showed normal spleen consisting of white and red pulp. The white pulp was composed of well-circumscribed lymphoid follicles, a periarterial lymphoid sheath and a marginal zone clearly demarcating the non-lymphoid red pulp from the lymphoid white pulp. The lymphoid follicles consisted of a large number of lymphocytes, most of which appeared to have condensed, darkly stained nuclei. The red pulp was composed of branching and anastomosing splenic cords with blood sinusoids in between (Plates 3 A and B). Treatment with TiO2NPs for 12 weeks caused disruption of the splenic architecture, with apparent white pulp atrophy in the form of reduction of both the size and the cellular component of the lymphatic follicles, indistinct differentiation between red and white pulp, and congested sinusoids (the sinusoids were packed with RBCs) (Plates 3 C and E). In the follow up group, which was left without treatment for 8 weeks, splenic tissues showed an improvement of the white pulp architecture, with an increase in the size and cellular components of the lymphatic follicles and a marginal zone differentiated from the red pulp, which still showed congested sinusoids (Plates 3 D and F).
6. Comet Assay of spleen
Splenocytes exposed to TiO2NPs treatment for 12 weeks exhibited DNA damage in the form of increased tail moment and tail length compared to the control group, which showed undamaged nuclei. The tail moment is the product of the tail length and the fraction of total DNA in the tail, where the tail length represents the smallest detectable size of migrating DNA and the fraction of total DNA (or DNA intensity) represents the number of relaxed/broken pieces in the tail. On the contrary, on stopping administration of TiO2NPs, moderate improvement was noticed in the form of a reduction of damaged cells in the follow up group compared to the TiO2NPs group (Table 7) and (Figure 2).
Discussion
Toxicologists and regulatory scientists have become interested in nanotoxicology and nano-risk, particularly regarding fine-sized nanoparticles whose properties may render them potentially toxic (Li and Nel, 2011). TiO2NPs are a versatile material that has been broadly used in modern cosmetics. The spleen plays an important role in the immune response (Mebius and Kraal, 2005); thus, any alteration of the T lymphocyte population or its cytokines, and any affection of DNA, after consumption of TiO2NPs may reflect its immunotoxicity. In the current study, oral gavage of TiO2NPs for 12 weeks resulted in a reduction of the body weight of TiO2NPs treated rats and an increase in the coefficients of the spleens. These results are consistent with those of Li et al. (2010), who observed a significant increase in the coefficients of the spleens after intra-peritoneal injection of TiO2NPs at doses of 50 and 150 mg/kg BW for 45 days and attributed this to significant accumulation of TiO2NPs in the mouse spleen.
Moreover, Xu et al., 2013 found an increase in the organ coefficients of the spleens after intravenous injection of 140, 300, 645, and 1387 mg/kg BW of TiO2NPs, and both studies attributed the increased spleen coefficient to TiO2NPs accumulation in the organs. On the other hand, after stopping oral treatment with TiO2NPs for 8 weeks, there was a reduction in the splenic coefficient compared with that of the TiO2NPs treated group, although it did not reach the spleen coefficients of the controls. This may be attributed to weight regain after its reduction during TiO2NPs administration, and also to decreased congestion of the splenic tissues.

The immunological function of each of the WBC types has been reviewed extensively elsewhere (Thrall, 2004), where neutrophils/heterophils and lymphocytes make up the majority of WBCs in mammals (Jain, 1993). T-lymphocytes have an important role in maintaining the immune response (Ayuob, 2013), and CD4+ T cells can produce IL-2 and, to a lesser extent, IL-10, which is primarily produced by monocytes (Sojka et al., 2004). Therefore, examination of WBCs and differential leukocytes related to CD4+ T cells, together with examination of CD4+ T cells in splenocytes, could determine the immunological state.

The results of the present study showed a deficiency of total and differential leukocyte counts (lymphocytes, monocytes and granulocytes) in rats treated with oral TiO2NPs for 12 weeks. These results are supported by those of Duan et al., 2010, who reported a significant decrease in WBC, RBC, HGB, MCHC, and PCT blood levels after intragastric administration of TiO2NPs at doses of 125 and 250 mg/kg BW for 30 consecutive days and attributed the deficient metabolism and immune response of the mice to a marked decrease in O2 content in the blood caused by the higher dose of TiO2NPs.

Hamrahi-michak et al., 2012 explained the decreased level of leucocytes after their previous increase by noting that high concentrations of nanoparticles can enter the lymphatic system, producing inflammation and enlargement of lymph nodes and thereby increasing the number of WBCs; after a period of time, however, the activity of these glands decreases and lymph node atrophy has been noted. In the present study, stopping TiO2NPs administration increased total leukocytes and differential leukocytic counts compared with the TiO2NPs treated group. No previous studies on the potential for improvement after stopping TiO2NPs administration were found.

Cytokines are cell-expressed proteins that are considered important mediators regulating the immune response (Szelényi, 2001); any change in the levels of these proteins can also be considered an indirect index for assessing immune function status (Liu et al., 2014). Therefore, cytokine levels were examined to evaluate the immune response to TiO2NPs administration.
In the present study, TiO2NPs treatment for 12 weeks produced a significant decrease in both IL-2 and IL-10 compared with the control group, while cessation of treatment for 8 weeks produced a significant increase in both interleukins compared with the TiO2NPs group. These results are consistent with those of Duan et al., 2010, who reported a significant decrease in IL-2 activity in mouse serum after intragastric administration of TiO2NPs at doses of 125 and 250 mg/kg BW for 30 days and attributed this to the decreased proliferation of CD4+ T cells. Also, Sojka et al., 2004 observed a decrease in IL-10 activity in TiO2NPs treated rats, which may be caused by a decreased lymphocyte population. The potential improvement after cessation of TiO2NPs consumption for a period of time could be attributed to moderate restoration of lymphocyte activity after a period of complete discontinuation.

The immunohistochemical staining of splenic tissue sections of the TiO2NPs group treated for 12 weeks showed decreased expression of the CD4+ marker, indicating reduced T lymphocyte numbers in the white pulp compared with the control. Stopping TiO2NPs administration for 8 weeks allowed moderate restoration of T lymphocytes, as indicated by a significant increase in CD4+ marker expression compared with the TiO2NPs treated group.

The results of the present study are also consistent with those of Duan et al., 2010, who reported decreased proliferation of T lymphocytes (including CD3, CD4, and CD8), B lymphocytes, and natural killer lymphocytes, and a decreased ratio of CD4 to CD8, in mice after intragastric administration of TiO2NPs at doses of 125 and 250 mg/kg BW for 30 days, and who attributed this to inhibition of IL-2 activity leading to reduced proliferation of T lymphocytes and inhibition of the activation of other immunologically competent cells. No previous studies have discussed the potential for regaining T lymphocyte activity after a period of cessation of TiO2NPs consumption.

The structural modifications in T-lymphocytes may be affected by oxidative stress, which can render them hypo-responsive (Cemerski et al., 2003). Nitric oxide produced by iNOS-expressing cells suppresses mouse T cell proliferation, leading to oxidative cell toxicity (Aheng et al., 2011). Therefore, examination of iNOS-expressing cells in splenocytes could determine their role in the immunological state.

The immunohistochemical staining of splenic tissue sections of the TiO2NPs group treated for 12 weeks showed strong iNOS immunoreactivity in the red pulp, pointing to increased production of NO, which induces oxidative stress and suppresses CD4+ T cells. These results are consistent with those of Duan et al., 2010, who reported higher levels of NO in blood serum after intragastric administration of TiO2NPs at doses of 125 and 250 mg/kg BW for 30 days, and also with Ma et al., 2010, who reported excessive release of nitric oxide in brain tissues of mice after injection of 50, 100, and 150 mg/kg BW of TiO2NPs into the abdominal cavity for 14 days. Filep et al., 1996 stated that NO is closely related to cellular immune function by mediating the killing of YAC-1 lymphoma cells by natural killer cells. In contrast, stopping TiO2NPs administration for 8 weeks resulted in weak iNOS immunoreactivity in the red pulp of splenic tissues, suggesting that the period of discontinuation allowed CD4+ T cells to regain their activity, leading to a decrease in the level of NO, as indicated by fewer iNOS-expressing cells.
The histopathological examination of splenic sections of rats treated with TiO2NPs for 12 weeks also showed disruption of the spleen architecture, severe congestion of the tissues, a decrease in both the size and the cellular component of the lymphatic follicles, and an absence of demarcation between red pulp and white pulp, compared with the spleen architecture of the control. These findings are supported by those of Li et al., 2010, who reported congestion of the spleen tissue and proliferation of lymphatic follicles after intraperitoneal injection of 50 mg/kg BW TiO2NPs for 45 days, suggesting that TiO2NPs induced an oxidative reaction causing splenic lesions. Chen et al., 2009 found a mass of neutrophilic cells in spleen tissues as a result of the accumulation of a large number of TiO2 particles, which induced severe splenic lesions after intraperitoneal injection of higher doses of TiO2NPs (324-2592 mg/kg) for 7 days.

On the other hand, after stopping TiO2NPs administration for 8 weeks, splenic tissues showed an increase in the size and cellular components of the lymphatic follicles, and the marginal zone was differentiated from the red pulp, which still showed congested sinusoids.

The results of the present Comet Assay after treatment with TiO2NPs for 12 weeks showed genotoxic effects of TiO2NPs on spleen tissues. These results are in agreement with the review of Landsiedel et al., 2009, which reported genotoxic effects of nanomaterials including TiO2NPs and described micronucleus development as indicative of the chromosomal damage and DNA damage caused by TiO2NPs. These results are also consistent with those of Trouiller et al., 2009, who reported a significant increase in tail moment after oral treatment with 500 mg/kg BW of TiO2NPs for five consecutive days; they measured DNA strand breaks by the alkaline comet assay in mouse peripheral blood and attributed the DNA damage to oxidative stress.

Mohamed and Hussien, 2016 also reported genotoxic effects of TiO2NPs on brain tissue after treatment of mice with TiO2NPs at 500 mg/kg BW for five consecutive days, using the comet assay, and referred these effects to an oxidative burst caused by TiO2NPs leading to the release of superoxide anions (O2−•), which are converted to multiple reactive oxygen species (ROS). Several previous in-vivo studies have demonstrated the genotoxic effects of TiO2NPs: Reeves et al., 2008 reported TiO2NPs oxidative-stress-related effects including inflammation, cytotoxicity, and genomic instability, either alone or in the presence of UVA irradiation, and Gurr et al., 2005 showed that TiO2NPs induced mainly hydrogen peroxide and nitric oxide generation, leading to lipid peroxidation and oxidative DNA damage in lung epithelial cells.

On the other hand, the present Comet Assay after cessation of TiO2NPs for 8 weeks showed fewer DNA-damaged cells compared with the TiO2NPs group.

Conclusion: TiO2NPs consumption has immunotoxic effects, which may result from genetic damage and oxidative stress in the spleen cells of adult male albino rats, and which can be moderately improved by discontinuation for a period of time; it is recommended to increase the period of discontinuation, as complete improvement may occur.

Figure 1: Pearson correlation analysis showing a negative correlation between iNOS-expressing cells and the CD4+ T cell count in (A) TiO2NPs treated group and (B) Follow up group. r = -0.99, P = 0.0001.
Figure 2: Epi-fluorescence images of DNA damage in the Comet Assay. (1) Control -ve and (2) Control +ve, showing nuclei of undamaged cells; (3) TiO2NPs group, showing various degrees of induced DNA damage in splenocytes; (4) Follow up group after stopping TiO2NPs, showing a decrease in damaged cells.
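As an illustration of the tail moment metric described in the Comet Assay results, which is the product of the tail length and the fraction of total DNA in the tail, a minimal Python sketch of the calculation follows. The measurements used are hypothetical and do not come from this study, and the sketch is not the comet-scoring software used by the authors.

# Minimal sketch: deriving tail moment from per-cell comet measurements.
def tail_moment(tail_length_um, dna_in_tail_percent):
    """Tail moment = tail length x fraction of total DNA in the tail."""
    return tail_length_um * (dna_in_tail_percent / 100.0)

# Hypothetical comet measurements: (tail length in um, % DNA in tail)
cells = [(35.0, 42.0), (12.0, 8.0), (50.0, 60.0)]
moments = [tail_moment(length, pct) for length, pct in cells]
print([round(m, 1) for m in moments])  # e.g. [14.7, 1.0, 30.0]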