2020-11-04T14:08:21.689Z
2020-10-29T00:00:00.000
226244187
{ "extfieldsofstudy": [ "Medicine", "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2227-9032/8/4/438/pdf", "pdf_hash": "9a12bf612315faef250362118796973f1352a063", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2734", "s2fieldsofstudy": [ "Medicine" ], "sha1": "1af3d7dad131cff837361cde0908aea5c9234691", "year": 2020 }
pes2o/s2orc
The Reliability and Compatibility of the Paper and Electronic Versions of the POLLEK Cohort Study Questionnaire

Background: Chronic fatigue, depression, burnout syndrome, and alcohol addiction have been identified as significant mental health problems in young medical doctors. Given the lack of prospective studies in this area in Poland, the POLski LEKarz (POLLEK) cohort study was created. The goal of the POLLEK study is to assess the quality of life and health status (including mental health) of medical students and young physicians. The aim of the present paper was to assess the reliability and compatibility of the paper and electronic versions of the POLLEK questionnaire. Methods: Between 1 October 2019 and 28 February 2020, all first-year medical students (N = 638) of the Medical University of Silesia were invited to participate in a cross-sectional study. Three hundred and fifty-three students (55.3%) who completed both versions were included in the current analysis. Results: Values of Cronbach's alpha >0.7 showed that both modes of delivery had good internal consistency, except for the individual Alcohol Use Disorder Identification Test (AUDIT) domains and the Environmental domain of the WHOQOL-BREF (paper version). Similarly, intraclass correlation coefficients equal to or greater than 0.9 denoted excellent reproducibility. Conclusions: We documented very good agreement and reproducibility of the POLLEK questionnaire (both paper and electronic versions). These findings legitimize using the two versions interchangeably.

Introduction
The current epidemiological situation of COVID-19 in all European countries (including Poland) has vividly highlighted how demanding and responsible the work of a medical doctor is. A review paper previously published by our team revealed that psychosocial determinants have a significant impact on the mental health and quality of life of physicians [1]. Chronic fatigue, burnout syndrome, alcohol addiction, risky alcohol consumption, depression, and potential suicidal ideation are among the most important mental health problems of young medical doctors and even medical students [2][3][4][5]. Public health experts suggest that future research on these problems should be based on prospective observation. The lack of this type of research in Poland justifies taking up the topic in the group of medical students, as future doctors. Reliable scientific knowledge requires appropriate, standardized tools, including validated research questionnaires. Accordingly, we created the integrated original questionnaire used in the first step of the POLLEK cohort study, which aims to identify and evaluate the quality of life and health status (including mental health) of medical students and young physicians, with simultaneous assessment of determinants related to studying and working conditions, during a long-term observation. Additionally, in the model of an epidemiological cohort study, control of socio-demographic factors, as well as those identifying lifestyle and chronic diseases, is planned. The aim of the present paper is an evaluation of the reliability and compatibility of both the paper and electronic versions of the POLLEK questionnaire.

Study Design and Sampling
A cross-sectional study was performed between 1 October 2019 and 28 February 2020.
All first-year medical students (N = 638) of the Medical University of Silesia (MUoS, Poland) were invited to participate in the study project. Written consent to the examination was obtained from n = 559 students (N1 = 354; 91.2% of all medical students in Katowice and N2 = 205; 82.0% of all medical students in Zabrze); both are medical faculties of MUoS. Detailed descriptive statistics are presented in Table 1. The first step of the study was related to the necessity of questionnaire validation. The integrated tool includes the Polish version of the WHOQOL-BREF questionnaire [6], the Alcohol Use Disorder Identification Test (AUDIT) [7], and an original questionnaire identifying individual nutritional, demographic, socioeconomic, and anthropometric determinants. It is worth noting that the WHOQOL-BREF questionnaire covers four domains of quality of life (26 items in total): somatic (physical health), psychological, social (social relationships), and environmental. The AUDIT questionnaire, in turn, is a 10-item screening tool assessing alcohol consumption, drinking behaviors, and alcohol-related problems. Both questionnaires had been successfully used in previous studies [8,9].

Statistical Analysis
Initially, data were analyzed using descriptive statistics (median and interquartile range, IQR). Reproducibility (test-retest reliability of the POLLEK questionnaire) was assessed by asking all of the students (N = 638 in both medical faculties of MUoS) to complete the paper and online versions of the instrument. A total of 560 students (response rate of 87.8%) completed the paper version (341 females, 218 males, and 1 missing data). As many as 353 students (55.3%) also completed the electronic version of the questionnaire; nearly 62% of them were women. The median age of respondents was 19 years. About three-quarters of the students were living away from their families. Detailed statistics are presented in Table 1. The intraclass correlation coefficient (ICC) was analyzed in a test-retest reliability study using the ICC function available in the psych (v1.9.12) package in R software. Moreover, Bland-Altman plots were obtained to describe differences between the scores and assess heteroscedasticity [10]. Additionally, repeatability was evaluated by Cohen's kappa statistics [11]. The reliability of the scales and their domains was evaluated using Cronbach's alpha coefficients of internal consistency. Moreover, we conducted confirmatory factor analysis (CFA) using the lavaan (v0.6-5) package in R software to evaluate the structure of each major part of the questionnaire and its domains. To measure goodness of fit, the Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), and root mean square error of approximation (RMSEA) were used. RMSEA results were scored as a good fit for values ≤0.05, an adequate fit for 0.05-0.08, and a mediocre fit for 0.08-0.10, while values >0.10 denoted an unacceptable fit. Furthermore, values of CFI and TLI greater than 0.95 were interpreted as an acceptable fit [12,13]. All analyses were performed in R 3.6.2 software [14], and results are presented with the respective 95% confidence intervals or p values (significant at the <0.05 level).

Ethical Approval
The ethics approval for the study was received from the Bioethical Committee of the Medical University of Silesia in Katowice (approval number KNW/0022/KB/217/19; date: 8 November 2019). Written informed consent was obtained from all participants.
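The statistical workflow described above maps onto a handful of standard R calls. The following is a minimal sketch only, using the psych and lavaan packages named in the text; the data frames `paper` and `online` (one row per student, item columns q1..q26) are hypothetical placeholders rather than the study's actual data objects, and the standard WHOQOL-BREF item-to-domain assignment is assumed.

```r
# Minimal sketch of the reliability analyses described above (assumptions noted in the lead-in).
library(psych)   # alpha(), ICC(), cohen.kappa()
library(lavaan)  # cfa(), fitMeasures()

item_cols <- paste0("q", 1:26)

# Internal consistency (Cronbach's alpha), computed separately for each delivery mode
# (repeat per domain subset as needed)
alpha_paper  <- psych::alpha(paper[,  item_cols])
alpha_online <- psych::alpha(online[, item_cols])

# Test-retest agreement between modes: ICC on the paired total scores
scores <- data.frame(paper_total  = rowSums(paper[,  item_cols]),
                     online_total = rowSums(online[, item_cols]))
icc_res <- psych::ICC(scores)   # reports single- and average-score ICC variants

# Item-level repeatability with unweighted Cohen's kappa (one item shown)
kappa_q1 <- psych::cohen.kappa(cbind(paper$q1, online$q1))

# CFA of the four WHOQOL-BREF domains (standard item-to-domain assignment assumed)
whoqol_model <- '
  physical      =~ q3 + q4 + q10 + q15 + q16 + q17 + q18
  psychological =~ q5 + q6 + q7 + q11 + q19 + q26
  social        =~ q20 + q21 + q22
  environmental =~ q8 + q9 + q12 + q13 + q14 + q23 + q24 + q25
'
fit <- lavaan::cfa(whoqol_model, data = online)
fitMeasures(fit, c("cfi", "tli", "rmsea"))  # compare against the cut-offs quoted above
```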
Results
The Bland-Altman analysis demonstrated high agreement between the scores of the paper and internet versions of the WHOQOL-BREF and AUDIT scales (see Figure 1 and Table 2 for more details). In the ICC analysis, the agreement between the paper and electronic versions of the WHOQOL-BREF questionnaire was excellent for the overall scores (ICC = 0.92) and for the specific domains (ICCs ranged from 0.90 to 0.94). Additionally, an assessment of the repeatability of answers to particular questions (with Cohen's kappa, Spearman's rho, and Kendall's tau) is available in supplementary Table S1. Both versions of the WHOQOL-BREF questionnaire had very good internal consistency (α near or equal to 0.9), while the reliability of the electronic version was higher than that of the paper version; the largest difference was observed for the Environmental domain. The agreement between both versions of the AUDIT questionnaire was also excellent (ICC value of 0.96), including the specific domains, except for the "Dependence Symptoms" domain with a value of 0.83. We also demonstrated good internal consistency (α value of 0.77), except for the individual AUDIT domains. Detailed results are presented in Table 2. Legend: Me-median; IQR-interquartile range; M-mean; SD-standard deviation; CI-95% confidence interval; α-Cronbach's alpha; MoD-mean of differences (also called "bias") calculated with Bland-Altman statistics; ICC-intraclass correlation coefficient; κ-Cohen's kappa (unweighted).

Discussion
Reliability and reproducibility are important aspects of questionnaire validation; a questionnaire should be able to reproduce its results to be valid [15]. Regarding the statistical measures commonly used in validation studies, the reviewed bibliography indicated that questionnaire validity is assessed mainly with Cronbach's alpha coefficient and the intraclass correlation coefficient (ICC) [16,17]. This observation confirms that these measures were used in our study in a reasonable manner. Moreover, the obtained results documented good or very good reproducibility (ICC > 0.8 and Cohen's kappa > 0.8 in each assessed domain). Additionally, the Cronbach's alpha statistics confirmed moderate or high consistency (α > 0.5 for each scale). The use of the paper version of the WHOQOL-BREF in many populations is well established [8,18]. Few studies have used the WHOQOL-BREF questionnaire to assess the impact of medical education on the quality of life of students [19][20][21][22]. Although the role of the electronic version of the WHOQOL-BREF questionnaire was confirmed in 2008 [23,24], we have not been able to find a study assessing the electronic form in the medical student population. To the best of the authors' knowledge, the present study is the first to support the use of the online version of this tool among medical students. The AUDIT is a screening questionnaire developed by the World Health Organization (WHO) to assess alcohol-related problems, and available published data indicate that it is a reliable and valid tool used in different cultural backgrounds [25][26][27][28][29][30]. However, the AUDIT questionnaire has not yet been validated among medical students in Europe, in either the paper or the electronic form. Nevertheless, the electronic version of this tool was used in a validation study among medical students from China [31] and was validated among university students [32,33].
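For clarity, the Bland-Altman quantities referred to in the legend follow the standard definitions; the 95% limits of agreement are the usual companion statistic and are stated here as an assumption about what the plots display:

\[
d_i = x_i^{\mathrm{paper}} - x_i^{\mathrm{online}}, \qquad
\mathrm{MoD} = \bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i, \qquad
\mathrm{LoA}_{95\%} = \bar{d} \pm 1.96\, s_d,
\]

where \(s_d\) is the standard deviation of the paired differences.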
It is worth mentioning that the AUDIT questionnaire was applied in a cross-sectional study on the prevalence of alcohol use disorders among American surgeons [34]. We believe that the findings described in the present paper can help fill the observed gap. The obtained results confirmed that both versions (paper and electronic) of the AUDIT and WHOQOL-BREF questionnaires can be used interchangeably in the Polish cohort study of medical students and young medical doctors. The very high agreement in both the kappa and ICC statistics (higher than 0.8 for each assessed domain) indicates that the electronic questionnaire is a reliable tool in planned cohort studies aimed at assessing mental health. This is an important observation for future research that will be carried out during the COVID-19 pandemic, when face-to-face interpersonal contact is significantly hampered. Choosing the electronic version of the tool will facilitate contact with medical students in the coming years, including after they complete their education, and, at the same time, will significantly reduce recruitment costs. In general, it can be assumed that the results we obtained are consistent with previous observations [8,20,21,[24][25][26]28,29,31,32].

Limitations of the Study
Although a large proportion of the invited students agreed to participate in the study, the fact that participation was not complete may, to some extent, limit the conclusions of the study. Similarly, as Chen et al. reported [23], we only examined the validity of the Internet version of the WHOQOL-BREF questionnaire with the standard set of items, whereas the WHOQOL group recommended adding some questions relevant to the studied population [35]. Although our integrated questionnaire also contained additional demographic and nutritional assessment questions, they were not an extension of the WHOQOL-BREF questionnaire.

Conclusions
We demonstrated very good agreement and reproducibility of both versions of the POLLEK questionnaire. These findings legitimize using the two versions interchangeably.
v3-fos-license
2023-12-02T06:17:24.149Z
2023-12-01T00:00:00.000
265512909
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10935-023-00758-8.pdf", "pdf_hash": "f51d19e6ab35a53acc2cb3949d43e91d92787505", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2736", "s2fieldsofstudy": [ "Psychology", "Education" ], "sha1": "6fea65378e33a3be7adf1c8d9ea67003720a61cb", "year": 2023 }
pes2o/s2orc
Development and Implementation of a Preventive Intervention for Youth with Concerns About Their Sexual Thoughts and Behaviors: A Practitioner Narrative

This practitioner narrative describes the development of an innovative, primary and secondary prevention resource that provides confidential resources to youth with questions about potentially problematic sexual interests and behaviors. WhatsOK is a website and free confidential helpline for youth who are potentially at risk to sexually harm or who have harmed someone in the past. By encouraging self-efficacy, helpline counselors respond to these inquiries in order to prevent harmful events or lessen their impact. This practitioner narrative begins with an explanation of the planning process, then describes the implementation, piloting, and refinement of the resource, and finally explains how evaluation was incorporated. The development of the WhatsOK helpline services was conducted with the goal of creating an evidence-informed resource for youth with concerns about sexual thoughts and behaviors.

In 2021, Stop it Now! (Now!), a United States-based, non-profit child sexual abuse (CSA) primary perpetration prevention organization, launched an online resource and confidential helpline, WhatsOK, for youth and young adults with concerns about potentially problematic sexual interests and behaviors. Now!'s original helpline was developed to provide resources and divert adults from committing sexual offenses towards children. However, in 2019, 11% of inquiries were youth asking for help with their sexual interests and behaviors towards younger children. Consistent with recent literature, youth contribute significantly as perpetrators of CSA. Some estimate that 70% or more of sex offenses are committed by other youth (Gewirtz-Meydan & Finkelhor, 2020), with the average age of first-time sexual perpetration being approximately 14-16 years (Ybarra & Mitchell, 2013). Moreover, nearly 1 in 10 youth report some type of self-involved sexual perpetration (Finkelhor et al., 2009; Ybarra & Mitchell, 2013). Despite the significant need for support and information, resources for youth and young adults with potentially problematic sexual interests and behaviors are limited.

WhatsOK aims to (1) increase knowledge of CSA and CSA prevention, (2) change perceptions of the availability and utility of support resources, (3) encourage self-efficacy in preventing CSA, and (4) increase the use of protective and interventional behaviors when risk factors are identified. Among the reasons why youth may sexually abuse another child or engage in sexual misconduct harmful to others are the following: feeling sexual attraction to children, trauma-reactive behaviors, lack of healthy sexuality knowledge, developmental delays, emotional intelligence differences including spectrum diagnoses, drug/alcohol use, or other psychiatric disabilities (Finkelhor et al., 2009). Increased access to online pornography is also now often part of a youth's introduction to sexuality (Horvath et al., 2013). Ongoing viewing of pornography can shape a young person's image of sexual behaviors, influencing their perspective on what is safe, healthy, and consensual (Huntington et al., 2022). While the youth-targeted resource was being considered, inquiries to the Now! helpline primarily focused on: (1) worries about online addiction to child pornography, (2) fears that reaching out for help will label them for life, and (3) depression related to a sense of hopelessness regarding a "normal" life.
An example email from a youth at risk to abuse: "I have noticed that I have an attraction to prepubescent girls (and) ...".

This narrative describes the development of an innovative, primary and secondary prevention resource that provides confidential resources to youth with questions about potentially problematic sexual interests and behaviors. We begin with an explanation of the development process, then describe the implementation, piloting, and refinement of the resource, and finally how evaluation was incorporated from the beginning.

Development of a Resource for Youth
The Now! team created a small project team including people with expertise in the following areas: child sexual abuse prevention, research, evaluation, social media, and marketing. Funding to support development and pilot testing was obtained through a competitive application to the World Childhood Foundation.

The logic model behind WhatsOK is based on the Theory of Planned Behavior (Ajzen, 1991), which posits that behavior is determined by a person's knowledge of the behavior and how to complete it, their attitudes toward that behavior, their perceived resources and self-efficacy to complete the behavior, and their perceived social norms around completing the behavior. In the context of this project, the target behaviors are seeking formal therapeutic supports and not engaging or re-engaging in sexual harm (Fig. 1). The proposed theory of how WhatsOK influences these behaviors is that youth with potentially problematic sexual behaviors who receive information in a supportive, non-judgmental setting will have improved knowledge of healthy and unhealthy sexual behavior, will be able to identify resources, will understand social norms around sexual behavior, will have greater self-efficacy in finding resources and changing behavior, and will want to seek further help.

Website and Helpline
The WhatsOK.org site includes 8 pages: a homepage, a contact page, a page explaining how the helpline works, a 5-part FAQs page, a blog, a reference page of additional resources, an about page, and a privacy policy page. Each webpage was custom-developed for the youth audience, including age-appropriate guidance for pre-adolescents, adolescents, and young adults, while incorporating Now!'s signature tone of hopefulness, accountability, and support. Website content was reviewed by our internal team of experts as well as several external experts.

Individuals can contact the WhatsOK helpline via email, phone, text, online chat, or postal mail and speak confidentially to a helpline counselor. The Now! team trained three internal employees as helpline counselors to respond to youth-specific inquiries. All counselors had backgrounds in healthy sexuality, trauma, and/or counseling. Training topics included motivational interviewing with youth, strategizing resources and safety with youth, and deterrence planning for CSAM viewing. Content-relevant research and resources were reviewed and discussed, such as pornography viewing behaviors, sexting, and varied types of anime. Ongoing supervision and consultation were provided.

Experts and Youth Advisors
The Now! team also sought input from experts in the fields of child sexual abuse and sexual harm. Now!
received input from the National Center on the Sexual Behavior of Youth (NCSBY), a clinician specializing in the treatment of sexually harmful behaviors, professionals with a national youth hotline, an international organization that helps web-based companies prevent child sexual abuse, a noted journalist with connections to youth who identify as minor-attracted persons (MAP), academic researchers with expertise in preventing harmful youth sexual behaviors, a marketing company with experience developing campaigns around healthy relationships, and a university-based, student-led organization that supports students who have experienced sexual harm.

Including the youth voice was critical to the development of WhatsOK. The Now! team created a Youth Advisory Council consisting of ten youth aged 14-21 years. Some council members had a history of sexual behavior problems, including a young man incarcerated for viewing child sexual abuse material (CSAM); youth who had previously been part of a treatment group for sexual problem behaviors were also recruited. Other council members were part of a youth advocacy group at a local high school, and others were recruited through professional networks. The Youth Advisory Council was asked for input on, among other topics, website content and resources, media campaign messages, and engaging with youth through text and chat.

Youth-Focused Social Media Advertisements
The Now! team created 4 youth-focused advertisements for social media outreach (Bright et al., 2023). Advertisements were designed by a marketing expert based on successful strategies used by other organizations focused on similar topics; that is, advertisements were brightly colored, included short questions, and primarily used graphics instead of images of people.

Evaluation
The Now! team has included a researcher since the initiation of resource development to ensure the creation of an evidence-informed service. The researcher conducted formative analyses of the process of developing WhatsOK, including tracking results from alpha, beta, feasibility, and usability testing. The project team met monthly to discuss patterns in advertisement performance, website traffic, the number and nature of contacts, and updates on dissemination. Issues with any of the components were discussed and resolved during these meetings. The data management system was also adapted, and the project team continues to evaluate and refine data management processes.

Launch
The helpline and social media campaign launched in October 2021. The Now! team distributed announcements of the youth-focused helpline services to thousands of individuals and agencies globally, using their contact list of individuals and national listservs, coalitions, and state and federal agencies. As of April 2023, approximately 1.5 years since the launch, the Now! team had reached 1,959,021 youth/young adults through social media and achieved 4,656,736 impressions (i.e., views of an ad), 47,481 engagements, and 2,179 shares. WhatsOK.org was accessed by 62,316 users who collectively viewed pages 113,174 times across 70,271 sessions. WhatsOK helpline counselors responded to 558 inquiries, of which approximately 54% were from individuals who had caused or were at risk of causing sexual harm.
Challenges and Lessons Learned
The development of the WhatsOK helpline services presented some challenges. One challenge was determining what resources are available to youth that do not require parental consent. For example, how do helpline counselors recommend therapy if the youth does not want their parent/guardian to know but the insurance is through the parent/guardian? In addition, most mental health resources require parental consent for youth under 18. Interpreting and understanding laws about fictional sexual content, including hentai, lolicon, and shotacon, was also a challenge. It is important for Now! staff to stay current in their knowledge of sexualized fictional content, both the laws around this content and its impact on youth and young adults. The laws around this content are constantly changing and evolving and can vary by state or internationally. A third significant challenge was developing content for the wide age range of the target audience (14-21 years). For example, certain sexual behavior may be developmentally appropriate, healthy, and legal for a 14-year-old but developmentally inappropriate or illegal for a 20-year-old.

We anticipate that new challenges will arise as this new service is disseminated broadly and usage increases. Most challenging will be the need to appropriately match the number of helpline staff to the rate of inquiry growth. We will need to closely monitor the number of missed contacts to determine the best time to increase helpline hours and, if necessary, hire more helpline counselors. Relatedly, finding funding for a free service will always be a challenge. We will need to explore private and government funding sources with the goal of obtaining significant and consistent funding.

The launch of WhatsOK provided many opportunities for further growth. First and foremost, youth who contact the helpline are articulate, thoughtful, inquisitive, and responsible. Youth who contacted the helpline communicated their needs and concerns in thoughtful ways. Today, youth are turning to the internet to find answers specific to their sexual thoughts and behaviors, whether through social media or other online discussion platforms, such as Reddit. Youth who contacted the WhatsOK helpline were considering the risks of their behaviors and seeking information to help them in their decision making. It was apparent that these youth were also aware of their potentially problematic behaviors, interests, and feelings. It is important to recognize that users of WhatsOK may not represent all youth with potentially problematic sexual thoughts, feelings, or behaviors. Instead, they represent youth who are (a) aware of their potentially problematic sexual thoughts, feelings, or behaviors, (b) aware of the service, and (c) motivated to seek help through the website helpline. Youth with potentially problematic sexual behaviors who are not aware of the problematic nature of their behaviors, are not aware of support services, or are not interested in receiving services may have categorically different needs from youth who used the WhatsOK service. Identifying and addressing these differences are ongoing goals for Now!

Youth who contacted the helpline also inquired about approaching someone they felt they may have harmed. WhatsOK offers youth the opportunity to take accountability and navigate the necessary steps to remedy a situation for which they feel responsible.

The Now!
team has developed more understanding of both the experiences of and the challenges facing youth specific to their digital media and internet use. This includes recognizing that the information available for youth regarding their viewing behaviors, especially when it includes things like fictional sexual content, is limited and sometimes difficult to understand. For example, while youth want to know whether their viewing of this content is legal, this remains a gray area in the U.S. Youth's exposure to sexualized online content that is not classified as CSAM also has an impact on their sexual interests. This includes fictional sexual content, such as hentai, lolicon, shotacon, and other forms of anime. Many youths who contacted the helpline expressed that they had seen this content and felt the urge to look at it further, even when it might not align with their sexual attractions. They also expressed that looking at this content was affecting their relationships, or that their thoughts around it felt unmanageable.

It is also important to note that many youths felt concerned about what is legal, especially regarding the age of consent. Questions such as "Is it okay to date someone who is X years younger than me?" were common. Youth also wanted to understand the impact of their experiences on their own behaviors. They understood that their own experiences of abuse may impact how they experience relationships, their bodies, and their sexuality. However, they expressed uncertainty when considering strategies to address short-term and long-term impacts.

Conclusion
WhatsOK is the first and only US-based helpline to offer practical deterrent services to at-risk youth, providing new services for this vital population, confidentially gaining new insights about and from them, and sharing new findings and resources with the abuse prevention field. The WhatsOK helpline aims to reduce the impact of problematic sexual behaviors by allowing youth to ask questions and directing them to appropriate resources. Having the opportunity to talk to helpline counselors who encourage protective behaviors may help prevent potentially harmful sexual behaviors. From a primary and secondary prevention standpoint, the WhatsOK helpline assists youth who are at risk to harm, as well as those who may have already harmed someone.

Fig. 1 Logic model for WhatsOK helpline and website services for youth with potentially problematic sexual behaviors
v3-fos-license
2021-03-16T13:07:10.390Z
2021-01-01T00:00:00.000
232233933
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.ochsnerjournal.org/content/ochjnl/21/1/10.full.pdf", "pdf_hash": "ae0a162b74f3fa5089fd3c31c4fb1e90a08f2347", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2738", "s2fieldsofstudy": [ "Medicine" ], "sha1": "d46a9c689cca67d4aa86e0a65ef737e2eb58cef6", "year": 2021 }
pes2o/s2orc
Smoking Cessation and Hospitalized Patients: A Missed Opportunity to Avoid Premature Deaths

INTRODUCTION
Professor Sir Richard Doll, in collaboration with Professor Sir Austin Bradford Hill, was the first to quantitate the relationship of cigarette smoking with lung cancer. 1 Later, Doll aptly stated, "Death in old age is inevitable, but death before old age is not." 1 The Richard Doll Building at the University of Oxford in Oxford, England, is inscribed with one of Doll's many impactful statements about cigarette smoking:

In previous centuries, 70 years used to be regarded as humanity's allotted span of life, and only about one in five lived to such an age. Nowadays, however, for nonsmokers in Western countries, the situation is reversed: only about one in five will die before 70, and the nonsmoker death rates are still decreasing, offering the promise, at least in developed countries, of a world where death before 70 is uncommon. 1

Despite this evidence-based optimism, cigarette smoking and the emerging clinical and public health challenges of overweight and obesity and physical inactivity are the leading avoidable causes of premature death in most industrialized nations. These risk factors are also emerging in the rest of the world, so that smoking, overweight and obesity, and physical inactivity are major contributors to cardiovascular disease (CVD) rising from the fifth to the leading cause of mortality worldwide. 2 In the United States during the past several decades (since 1965), the prevalence of cigarette smoking has decreased markedly. 3 Nonetheless, in 2018, approximately 34.2 million Americans aged 18 years and older, or 13.7% of the population, were current cigarette smokers. 3 Thus, the impact of cigarette smoking on avoidable and premature mortality remains alarmingly high. In the United States alone, smoking causes more than 480,000 deaths each year. 3 These sobering statistics reflect, in major ways, the approximate 2-fold increased risk of CVD among current smokers and the approximate 20-fold increased risk of lung cancer among long-term smokers. Specifically, these relative increases translate to absolute increases of more than 150,000 deaths per year from CVD and 130,000 deaths per year from lung cancer among persons ≥35 years of age. 4 Quitting smoking before age 40 years reduces the risk of dying from cigarette-related disease by approximately 90%. 5 Specifically, smoking cessation significantly reduces the risk of CVD, beginning within a matter of months, and the risk of CVD among those who quit smoking equals that of lifelong nonsmokers within a few years, even among older adults. 6 In contrast, reductions in mortality risk from lung cancer only begin to appear several years after quitting, and even by 10 years, the risk has been reduced to only approximately midway between that of continuing smokers and that of lifelong nonsmokers. 7 In the United States in 2016, approximately 30 million hospital admissions occurred among persons 18+ years of age, with an average length of stay of 4.6 days.
8 A 2009 study in San Francisco reported the prevalence of cigarette smoking among hospitalized patients to be 40%, 9 in contrast to the prevalence of 13.7% in the general population. 3,10 This prevalence implies that up to 12 million inpatients in US hospitals are smokers. In this paper, we discuss effective and safe counseling and drug therapies that have the potential to prevent numerous premature deaths in the United States caused by cigarettes in hospitalized smokers.

SMOKING CESSATION COUNSELING
A systematic review of smoking cessation counseling studies examined 340 peer-reviewed publications of intensive smoking cessation programs for hospitalized patients and found that 326 (95.9%) did not meet quality standards. 11 Among the remaining 14 studies, 8 showed no difference between the intervention and comparison groups. [12][13][14][15][16][17][18][19] Three studies included self-reported and biochemically tested abstinence and reported positive results based on self-reports but not on biochemical testing. [20][21][22] Two other studies reported positive results based on self-reports but did not include biochemical testing. 23,24 However, a trial that randomized hospitalized adult smokers, identified by their desire to quit, to intensive vs standard counseling during hospitalization and after discharge resulted in higher rates of smoking cessation at 6 months in the intensive counseling group. 25 For the intensive counseling group, postdischarge interventions included automated telephone calls and free medication. Prescription drug therapy was individualized for the intervention group, and patients received 5 automated outbound interactive voice response telephone calls at 2, 14, 30, 60, and 90 days after discharge. These telephone calls provided advice and support messages prompting smokers to stay quit, encouraging proper use of and adherence to cessation medication, offering medication refills, and triaging patients who needed live support from counselors. An automated telephone script reinforced these messages and encouraged participants to request a callback from a counselor if they had low confidence in their ability to stay quit, had resumed smoking but still wanted to quit, needed a medication refill, or were noncompliant. The drug or drugs prescribed were documented in the medical record to alert the attending physician, and a fax was sent to the patient's primary care clinician as well. In contrast, standard care included postdischarge recommendations, advice to call a free telephone quit line, and a note in the medical record advising hospital physicians to prescribe smoking cessation medication at discharge. 25 Short- and long-term results were based on biochemically confirmed abstinence at discharge and at 6-month follow-up. At 6 months, 26% of patients in the intensive counseling group had stopped smoking vs 15% of patients in the standard counseling group. 25 This difference was highly significant, but more important was its clinical and public health significance. Specifically, this relative reduction could translate into an absolute reduction of 1.32 million cigarette smokers among hospitalized patients in the United States. 3,8-10

DRUG THERAPIES
The US Food and Drug Administration (FDA) has approved 7 prescription and over-the-counter drug therapies for smoking cessation. 26 Perhaps the most effective is the prescription medication varenicline, which achieved permanent quit rates of approximately 25% at 12 weeks and was approved by the FDA in 2006.
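As a rough arithmetic check of the two estimates quoted above (an illustration using only the figures cited in this text, not a recalculation from the underlying studies):

\[
\underbrace{30{,}000{,}000}_{\text{annual adult admissions}} \times \underbrace{0.40}_{\text{inpatient smoking prevalence}} \approx 12{,}000{,}000 \ \text{hospitalized smokers},
\]
\[
12{,}000{,}000 \times (0.26 - 0.15) = 1{,}320{,}000 \approx 1.32 \ \text{million additional quitters}.
\]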
In 2009, however, varenicline received a black box warning based on reports to the FDA Adverse Event Reporting System (AERS) of neuropsychiatric symptoms including aggression, depression, and suicidal ideation. Data from the AERS are useful for formulating but not for testing hypotheses, and analysis showed that nearly half of the subjects had psychiatric histories, 42% were taking psychotropic drugs, and 42% had depression. 27 The adverse public health impact of the black box warning was substantial, leading to a 76% decline in the number of prescriptions dispensed, from a peak of approximately 2 million in the last quarter of 2007 to approximately 531,000 in the first quarter of 2014. 27 Gaballa et al estimated that, because of the decrease in prescription rates, 17,000 annual US deaths from CVD attributable to smoking were avoidable between 2009 and 2016. 27 In 2016, the FDA removed the black box warning, based, in large part, on the results of the Evaluating Adverse Events in a Global Smoking Cessation Study (EAGLES), a randomized trial of adequate size and 12 weeks' duration. 28 With respect to efficacy, varenicline was statistically significantly superior to bupropion and the nicotine patch, and bupropion and the patch were superior to placebo. Regarding side effects, varenicline produced no significant increases in serious neuropsychiatric symptoms in either the general population or patients with mental illness. The finding among patients with mental illness was especially important because the lifespan of patients with schizophrenia is reduced by approximately 20% compared to the general population. 29 Although high premature death rates in patients with schizophrenia had been attributed to their 10-fold increased risk of suicide, patients with schizophrenia have a 40% or greater death rate from CVD that is attributable, in large part, to the approximately 74% rate of cigarette smoking among patients with schizophrenia, combined with overweight and obesity and physical inactivity. 29

BENEFIT OF MULTIFACTORIAL INTERVENTIONS
Providing multifactorial intensive counseling interventions during and after hospitalization and initiating and maintaining adherence to drug therapy are independently associated with permanent cessation rates. 30 Among 9,193 smokers hospitalized for myocardial infarction and identified by the largest US registry, 97% received smoking cessation counseling during hospitalization, but only 7% filled their smoking cessation prescription within 90 days of discharge, and by 1 year, the percentage had only increased to 9.4%. 31 In a Veterans Affairs hospital network, only 33.7% of patients with chronic obstructive pulmonary disease were prescribed a smoking cessation medication, and among these patients, 53.4% received nicotine patches alone. 32 In another study involving 36,675 patients with coronary heart disease, only 22.7% (8,316) received any smoking cessation pharmaceutical during the hospitalization. 33 Implementation of effective and safe antismoking campaigns has been suboptimal in communities as well as in hospitals. For example, among an estimated 53,107,842 active smokers with chronic obstructive pulmonary disease, the average prescription rate for smoking cessation efforts was 3.64%. 34 Among smokers with peripheral artery disease, approximately 64% received no counseling or pharmacotherapy for smoking cessation. 35 In theory, achieving 90% coverage with smoking cessation programs would save approximately 1,300,000 quality-adjusted life-years.
36 This estimate is greater than the combined effects of screening for breast, colon, and cervical cancers; chlamydia; cholesterol; problem drinking; and vision. 36 The health care system in the United States is characterized by numerous high-cost, low-value services such as baseline laboratory tests for low-risk patients having low-risk surgery ($227.8 million per year in unnecessary costs); stress cardiac or other cardiac imaging in low-risk asymptomatic patients ($93.2 million); annual electrocardiograms or other cardiac screening for low-risk asymptomatic patients ($41.0 million); and routine head computed tomography scans for emergency department visits for patients with dizziness ($24.8 million). 37 In contrast, smoking cessation programs are cost-effective, in part, because the benefits to the economy far exceed the monetary value of delivering the effective multifactorial interventions. 36,38

CONCLUSION
The totality of evidence suggests that initiation of long-term counseling and adjunctive drug therapy during hospitalization and maintaining high adherence postdischarge can markedly improve permanent quit rates with minimal to no side effects. Programs should include long-term counseling and at least a 90-day prescription of a smoking cessation medication, preferably varenicline. Such efforts have the potential to reduce the number of avoidable premature deaths from cigarette smoking, which remains alarmingly and unnecessarily high in the United States and worldwide.
v3-fos-license
2024-02-01T16:28:53.966Z
2024-01-29T00:00:00.000
267346712
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2024.1358055/pdf?isPublishedV2=False", "pdf_hash": "5d59ff781af487a573b22d2c4f418279c0f611e4", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2739", "s2fieldsofstudy": [ "Education", "Psychology", "Sociology" ], "sha1": "a9ebe65d373fab9bf0c1d778f86f97a865991ec2", "year": 2024 }
pes2o/s2orc
Integrating intrapreneurial self-capital, cultural intelligence, and gender in Chinese international education: pathways to flourishing

This study investigates the complex interplay between Intrapreneurial Self Capital, Cultural Intelligence, and gender, and their collective influence on the flourishing of Chinese international students in foreign academic settings. As global interconnectivity intensifies, the increasing number of Chinese students seeking education abroad presents a unique opportunity to examine the psychological and sociocultural dynamics of this demographic. Central to our investigation are the roles of Cultural Intelligence, a crucial competency for navigating diverse environments, and Intrapreneurial Self Capital, a composite of psychological resources instrumental in educational and career success. The study also explores the mediating role of Cultural Intelligence in the relationship between Intrapreneurial Self Capital and student flourishing, and examines how gender moderates this dynamic. The research engaged 508 Chinese international students, utilizing a variety of social networks for participant recruitment. The survey, conducted via Qualtrics, focused on a diverse range of students across different educational levels and disciplines. A moderated mediation model was tested to examine the mediating effect of Cultural Intelligence on the relationship between Intrapreneurial Self Capital and flourishing, with gender serving as a moderating variable. Our findings reveal significant insights into how Intrapreneurial Self Capital and Cultural Intelligence contribute to the personal and professional development of Chinese international students. Overall, the results suggest that the impact of Intrapreneurial Self Capital on the four dimensions of Cultural Intelligence (Metacognitive, Cognitive, Motivational, and Behavioral) is moderated by gender, highlighting the importance of considering gender differences in this context. Regarding the prediction of flourishing, the direct effect of Intrapreneurial Self Capital is notably strong; however, the mediating roles of the Metacognitive, Cognitive, and Behavioral aspects of Cultural Intelligence show different levels of influence. The study underscores the need for educational institutions to adopt holistic approaches to fostering student well-being and success, accounting for the nuanced effects of cultural and gender dynamics. These results have significant implications for the development of targeted educational programs and training aimed at enhancing the international educational experience for students and professionals.
Introduction
In an era marked by unprecedented global interconnectivity and cultural exchange, the landscape of international education, particularly within the context of Chinese students pursuing academic endeavors abroad, has become a fertile ground for exploring the interplay of various psychological and sociocultural factors (Wilczewski and Alon, 2023). The surge in Chinese international education is a phenomenon of immense importance. It reflects not only the global aspirations of Chinese students but also underscores a complex tapestry of opportunities and challenges that these students navigate (Kang and Hwang, 2022). This educational migration is more than a pursuit of academic excellence; it embodies a journey of personal and professional transformation, fraught with both unique opportunities and formidable challenges. As such, understanding these dynamics from an individual perspective is crucial (Zhu and O'Sullivan, 2022).

Central to this exploration is the concept of Cultural Intelligence, a multifaceted competency allowing individuals to thrive in culturally diverse environments (Van Dyne et al., 2016). Cultural Intelligence's significance in the context of international education cannot be overstated, especially for students immersed in foreign academic and cultural settings (Van Dyne et al., 2015). This intelligence, which encompasses various dimensions, is a key factor in enhancing students' adaptability and well-being, enabling them to navigate and flourish within these complex environments (Bai and Wang, 2022).

Another critical element is Intrapreneurial Self Capital, an emerging construct within educational psychology and career development. Encompassing a holistic blend of psychological resources, Intrapreneurial Self Capital has been identified as a vital contributor to educational success and career preparedness (Alessio et al., 2019). Its multifaceted nature, which includes aspects like resilience, creativity, and proactivity, positions Intrapreneurial Self Capital as an essential tool for students grappling with the demands of modern education and career landscapes (Guo et al., 2022).

Beyond academic performance, the concept of personal flourishing has gained traction, advocating for a more holistic understanding of student success. This approach recognizes that true success encompasses psychological, emotional, and social well-being, in addition to academic achievement. Thus, educational institutions are increasingly called upon to adopt a more comprehensive approach to student development, one that fosters personal flourishing in all its dimensions.

Our study also considers the mediating role of Cultural Intelligence in the relationship between Intrapreneurial Self Capital and international students' flourishing. We posit that Cultural Intelligence could serve as a crucial link between the internal resources provided by Intrapreneurial Self Capital and the external challenges encountered in international educational settings. Furthermore, we explore the moderating role of gender in this dynamic, acknowledging the sparse research in this area and the need for a more nuanced understanding of how gender influences the development and application of Cultural Intelligence, particularly among Chinese international students.
In sum, our paper seeks to unravel the intricate relationship between Intrapreneurial Self Capital, Cultural Intelligence, and gender, and their collective impact on the flourishing of Chinese international students. By examining these interconnections, we aim to provide valuable insights that could inform the development of tailored educational and training programs, ultimately enhancing the international educational experience for students and professionals alike.

Literature review
Chinese international education: opportunities and challenges
The landscape of Chinese international education has evolved significantly, with an increasing number of students pursuing education abroad. This trend presents a unique set of opportunities and challenges, which are crucial to understand from an individual perspective (Xu and Shubo, 2023).

Educational migration offers Chinese students unique opportunities for personal and professional development. As found in the work of Bodycott and Lai (2012), studying abroad significantly enhances language proficiency, cultural understanding, and employability in the global job market. Furthermore, Wei (2013) highlights that international exposure broadens students' worldviews, fostering critical thinking and adaptability.

However, this journey is not without its challenges (Chen et al., 2022). According to Spencer-Oatey and Xiong (2006), cultural adaptation remains a significant hurdle for many Chinese students, often leading to social isolation and academic challenges. This cultural adjustment is compounded by the language barrier, as identified by Sherry et al. (2010), which can impede both academic performance and social integration. The mismatch between the educational experiences abroad and the expectations within the Chinese job market can affect career trajectories of returning students (Guo et al., 2020).

The literature reveals a complex picture of opportunities and challenges faced by Chinese international students. The benefits of personal growth and enhanced employability coexist with the difficulties of cultural adaptation and educational recognition. This understanding is vital for stakeholders in international education to develop support systems that cater to the needs of these students.
Intrapreneurial self capital: key resource for educational success
Intrapreneurial Self Capital (hereafter, ISC) is an emerging construct in the field of educational psychology and career development, representing a holistic blend of psychological resources (Di Fabio and Kenny, 2018). Di Fabio and colleagues offer a comprehensive view of ISC, describing it as a higher-order construct comprising seven specific constructs: core self-evaluation, resilience, creative self-efficacy, grit, goal mastery, determination, and attentiveness (Di Fabio, 2014). This multifaceted nature underscores its importance in navigating complex educational environments and adapting to dynamic academic and career landscapes. As outlined by Di Fabio and colleagues, the seven sub-dimensions of this higher-order construct are as follows. Core self-evaluation reflects an individual's fundamental appraisal of their own abilities and worth. Resilience involves the capacity to recover quickly from challenges and adapt to change; in the context of ISC, it is the ability to bounce back from setbacks and adapt to changing circumstances and, as per Luthans and colleagues, a vital component in maintaining positive educational outcomes in the face of adversity (Luthans et al., 2006). Creative self-efficacy refers to the belief in one's ability to produce creative outcomes; creativity in ISC is not just about novel ideas but also the application of these ideas in problem-solving, and Amabile (1997) emphasizes the role of creativity in enhancing personal and professional effectiveness in various settings, including education. Grit is defined as perseverance and passion for long-term goals, contributing to sustained effort and interest over years despite failure, adversity, and plateaus in progress. Goal mastery is the ability to effectively set and achieve personal and academic goals; Locke and Latham (2002) discuss how goal-setting theory plays a significant role in personal development and success. Determination entails a strong commitment to achieve despite obstacles and setbacks. Attentiveness is the ability to maintain focus and be vigilant in one's pursuits.

The validity of the construct and its psychometric properties have been empirically explored in western (McIlveen and Di Fabio, 2018; Palazzeschi et al., 2019; Puigmitja et al., 2019) and eastern countries (Bee Seok et al., 2019, 2020; Malekiha, 2020), showing its cross-cultural significance and relevance for academic contexts.

The study of ISC's impact on educational success is gaining momentum. McIlveen and Di Fabio (2018) suggest that students with higher ISC levels exhibit better adaptability to academic challenges and engagement in their studies. They also note the relevance of ISC to career readiness, emphasizing its importance in the transition from education to the workforce (Di Fabio, 2021; Rosen and Di Fabio, 2023). A study found a significant correlation between ISC and academic achievement, indicating that the diverse components of ISC, such as resilience and determination, contribute to better academic outcomes and underscoring its value for students preparing for professional life (Di Fabio et al., 2017).
In conclusion, Intrapreneurial Self Capital is a key resource in educational contexts, contributing significantly to academic success and career preparedness (Singh, 2021; Gutiérrez-Carrasco et al., 2022; López-Núñez et al., 2022). Its multifaceted nature, encompassing elements like proactivity, resilience, and creativity, makes it an essential construct for students navigating the complex demands of modern education and career landscapes (Henter and Nastasa, 2021; Iliashenko et al., 2023).

More than academic performance: students' personal flourishing
In the contemporary educational discourse, there has been a paradigm shift toward a more holistic understanding of students' success. Traditionally, academic performance, measured through grades, test scores, and degree attainment, has been the primary indicator of success. However, this narrow focus overlooks the multifaceted nature of student development and well-being. The concept of personal flourishing comes into play as a more comprehensive measure, encompassing not only academic achievement but also psychological, emotional, and social well-being (Seligman and Csikszentmihalyi, 2000).

Personal flourishing refers to a state where individuals experience a high level of well-being and life satisfaction. This concept, rooted in positive psychology, extends beyond the absence of mental health issues to include the presence of positive emotions, engagement, relationships, meaning, and accomplishment (Seligman, 2011). In the context of education, personal flourishing is characterized by students experiencing growth in various dimensions of their life, not limited to academic achievements (Gan and Cheng, 2021).

While academic performance is a significant aspect of student life, it is not the sole determinant of personal flourishing. The interplay of mental health and academic performance is also critical. Studies have demonstrated that higher levels of mental health, including lower instances of anxiety and depression, are associated with better academic outcomes (Stallman, 2010). This relationship underlines the importance of addressing mental health as part of comprehensive educational success strategies. Flourishing students are those who engage with their studies in a manner that promotes their overall well-being. This includes developing resilience, fostering a growth mindset, and maintaining a healthy balance between academic and personal life.

Psychological and emotional well-being are critical components of personal flourishing. Students who are flourishing demonstrate resilience, coping effectively with stress and setbacks. They also exhibit higher levels of self-efficacy and self-esteem, which contribute to their overall sense of well-being (Bandura et al., 1999). This theory has been substantiated by studies showing a strong correlation between self-efficacy and academic achievement (Pajares, 1996; Zimmerman, 2000).

Social relationships and community engagement are also integral to personal flourishing. Positive relationships with peers, faculty, and the wider community can enhance students' sense of belonging and purpose, thereby contributing to their overall well-being (Putnam, 2000). Active involvement in extracurricular activities and community service can further foster a sense of connection and accomplishment.
In conclusion, redefining student success to include personal flourishing represents a more inclusive and holistic approach to education (Di Fabio et al., 2017).By focusing on the overall wellbeing of students, educational institutions can contribute to the development of individuals who are not only academically proficient but also psychologically robust, emotionally balanced, and socially engaged, equipping them with the necessary tools to thrive in all aspects of their lives. Mediating role of cultural intelligence in the relationships between ISC and international students' flourishing The predictive value of Intrapreneurial Self Capital in the context of international students' academic and career success is an area of growing interest.Intrapreneurial Self Capital, with its components such as resilience, creative self-efficacy, and determination, appears to play a crucial role in the success of these students who face unique challenges in a foreign educational environment (Palazzeschi et al., 2019).In particular, the attributes of Intrapreneurial Self Capital can be instrumental in helping international students navigate cultural, linguistic, and academic barriers, thereby enhancing their overall educational experience and personal flourishing.As some studies showed, self-efficacy, a key component of Intrapreneurial Self-capital, emerges as a prominent factor in explaining the Cultural Intelligence (MacNab and Worthley, 2012). Moreover, the relationship between Intrapreneurial Self Capital and international students' success may be further elucidated through the mediating role of Cultural Intelligence.Cultural Intelligence could serve as a bridge linking the internal resources of Intrapreneurial Self Capital with the external challenges faced in international education settings.For instance, a high level of resilience and goal mastery (components of Intrapreneurial Self Capital) might enable a student to adapt more effectively to a new cultural environment, a process potentially mediated by their level of Cultural Intelligence.This suggests that while Intrapreneurial Self Capital equips students with internal psychological resources, Cultural Intelligence enables them to apply these resources effectively in culturally diverse settings (Dolce and Ghislieri, 2022). Empirical research is needed to explore this potential mediating role of Cultural Intelligence in the relationship between Intrapreneurial Self Capital and the success of international students.Such research could provide valuable insights into how educational institutions and policymakers can better support international students by not only fostering Intrapreneurial Self Capital but also enhancing their Cultural Intelligence, thereby ensuring a more holistic approach to international education. This perspective opens up new avenues for understanding the multifaceted challenges faced by international students and highlights the importance of an integrated approach that considers both personal attributes (like Intrapreneurial Self Capital) and the ability to navigate cultural differences (through Cultural Intelligence) in ensuring their academic and career success in international contexts. 
The moderating role of gender in the relationship between intrapreneurial self-capital and cultural intelligence among Chinese international students The investigation into the moderating effects of gender on the relationship between Intrapreneurial Self Capital and Cultural Intelligence in Chinese international students offers a novel perspective in understanding how these elements interact within a cross-cultural context.This analysis is particularly relevant given the sparse research addressing the direct impact of gender differences on Cultural Intelligence levels and their subsequent influence on career success and cultural adaptability. Research to date has frequently treated gender as a control variable rather than a focal point of study.For instance, Aslam et al. (2016) and Jyoti and Kour (2017) included gender as a demographic variable in their studies but did not delve deeply into its specific impacts.In a study involving 335 global managers in India, Jyoti and Kour (2017) found that gender had an insignificant impact on the primary variables of Cultural Intelligence, job performance, and cross-cultural adjustments.This finding suggests that the influence of gender on Cultural Intelligence and related outcomes may be more nuanced than previously assumed. However, there are indications that gender differences do exist in specific dimensions of Cultural Intelligence.Zhou and Charoensukmongkol (2022) noted that there is a moderating role of gender in the effectiveness of cultural intelligence on customer qualifications skills.Conversely, (Khodadady and Ghahari (2012) observed that females exhibited higher levels of metacognitive CQ, suggesting a greater propensity for reflective and adaptive thinking in unfamiliar cultural settings (Khodadady and Ghahari, 2012). An interesting dimension to this discourse is presented by Mandell and Pherwani (2003), who found that females generally exhibited higher emotional intelligence than males.This aspect of emotional intelligence could potentially contribute to higher motivational scores in Cultural Intelligence, as motivation can be partially driven by emotional responses to environmental stimuli.The ability to harness emotions effectively may offer females an advantage in adapting motivationally in diverse cultural contexts. Given these disparate findings, there is a clear need for more focused research examining the relationship between Cultural Intelligence and gender, particularly among specific groups such as Chinese international students.Such studies would contribute significantly to our understanding of how gender influences the development and application of Cultural Intelligence in multicultural environments. In conclusion, while existing literature provides some insights into the complex relationship between gender, Intrapreneurial Self Capital and Cultural Intelligence, it also highlights a significant gap in our understanding.More nuanced and focused research is required to unpack the layers of this relationship, particularly in the context of Chinese international students, who navigate unique cultural and educational landscapes.The exploration of how Intrapreneurial Self Capital, as a composite of personal and professional skills, interacts with gender to influence Cultural Intelligence could provide valuable insights into tailoring educational and training programs for international students and professionals. Following the previously revised evidence, the present study proposes this research model displayed in Figure 1. 
The following hypotheses are proposed: H1: Intrapreneurial Self-capital will predict Flourishing among Chinese international students. H2: Cultural intelligence will mediate the relationship between Intrapreneurial Self-capital and Flourishing among Chinese international students. In more detail: H2a: Metacognitive CQ will mediate the relationship between Intrapreneurial Self-capital and Flourishing among Chinese international students. H2b: Cognitive CQ will mediate the relationship between Intrapreneurial Self-capital and Flourishing among Chinese international students. H2c: Motivational CQ will mediate the relationship between Intrapreneurial Self-capital and Flourishing among Chinese international students.H2d: Behavioral CQ will mediate the relationship between Intrapreneurial Self-capital and Flourishing among Chinese international students. H3: Chinese international students' gender will moderate the relationship between Intrapreneurial Self-capital and Cultural intelligence. In more detail: H3a: Chinese international students' gender will moderate the relationship between Intrapreneurial Self-capital and Metacognitive CQ. H3b: Chinese international students' gender will moderate the relationship between Intrapreneurial Self-capital and Cognitive CQ. H3c: Chinese international students' gender will moderate the relationship between Intrapreneurial Self-capital and Motivational CQ. H3d: Chinese international students' gender will moderate the relationship between Intrapreneurial Self-capital and Behavioral CQ. Method Participants, procedure, and statistical analyses This study involved a sample of 508 Chinese international students.These participants were contacted through various social networks, which provided a convenient and efficient means of reaching a broad and diverse group within this demographic (WeChat, Sina Weibo, QQ, Douyin, Zhihu, Linkedin, and Renren).The selection of these platforms was based on their prevalence and usage patterns among Chinese international students.Participants were 103 males (20.8%), (Mean age = 23.23 years; S.D. = 4.80), 57.8% were enrolled in Senior secondary education while 30.5% were enrolled in Master's and 21.7% in Ph.D. programs.Regarding the discipline of the studies, 8.1% were in health-related studies, 7.9% education, 39.7% in engineering, 35.2% in banking, finances, and economic studies, 6.5% in technologies, 2.5% commerce and tourism. 
Invitations to participate in the study were disseminated through these networks, outlining the purpose of the research and assuring confidentiality and anonymity.The survey was administered using Qualtrics, a robust online survey platform known for its user-friendly interface and advanced data collection capabilities.Participants were given a unique link to access the survey, which remained open for a period of 4 weeks, between March and April 2023.During this time, two reminders were sent out via the same social networks to encourage participation and ensure a comprehensive response rate.The Ethical Committee of the Donghua University provided approval for the present study.Prior to participation, all respondents were required to give their informed consent, which was facilitated through an electronic form on the Qualtrics platform.This form detailed the purpose of the study, the voluntary nature of participation, and the measures in place to safeguard participant privacy and data confidentiality.No personal data have been collected, more than the start date and finish date and percentage of completion of the survey.Those participants that do not reach the 100% of survey completion have been eliminated (response rate 80.2%).Data have been examined with SPSS version 25 software and Jamovi version 2.4.(Jamovi, 2023) for correlational analyses and Confirmatory Factor Analyses.PROCESS macros for SPSS (Hayes) Model 7 was used for the moderated mediation model. Instruments Intrapreneurial self capital This variable was assessed with the Intrapreneurial Self Capital Scale (Di Fabio, 2014), including 28 items measured via a 5-point Likert scale (ranging from 1 = strongly disagree to 5 = strongly agree).some item examples are "Sometimes when I fail, I feel worthless" (Core Self-Evaluation), "Planning in advance can help avoid most future problems" (Grit), "I'm able to solve problems creatively" (Creative Self-Efficacy), "I'm able to achieve objectives despite obstacles" (Resilience), "One of my goals in training is to learn as much as I can" (Goal Mastery), "It's simple for me to decide" (Determination), "When I must to take a decision, I like to stop and consider all possible options" (Attentiveness).Cronbach's alpha coefficient in previous studies was 0.84 (Di Fabio, 2014), while in the present study is: 0.89 for the global scale.The McDonalds' omega coefficient for the global scale is 0.97.The omegas' values for the factors ranged from 0.67 for Resilience to 0.88 for Core Self-evaluation.Cronbach's Alphas for the separate factors ranged from 0.65 for Resilience to 0.85 for Goal Mastery.Average Extracted Variance for the nine factors ranged 0.40 to 0.67.Adaptations of this scale in non-western societies, as Malaysia and Iran, have demonstrated adequate psychometric properties, indicating its potential crosscultural applicability and relevance (Bee Seok et al., 2019Seok et al., , 2020;;Malekiha, 2020). Cultural intelligence The Cultural Intelligence Scale (CQ) used in this study was developed by Van Dyne et al. 
(2015), in the Chinese version published by Schlägel and Sarstedt (2016). The CQS has 20 items divided into four dimensions: four items for metacognitive CQ, six for cognitive CQ, five for motivational CQ, and five for behavioral CQ. All items were rated on a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree). The reliability of the scale in the present study was adequate (Cronbach's alpha = 0.91; McDonald's omega = 0.94). The omega coefficients for the CQ factors ranged from 0.78 for Metacognitive CQ to 0.88 for Cognitive CQ. The Average Variance Extracted for the factors ranged from 0.50 for Metacognitive CQ to 0.57 for Behavioral CQ.

Flourishing

The Flourishing Scale Spanish version (FS-SV) (Ramírez-Maestre et al., 2017) of Diener et al.'s (2010) Flourishing Scale was used. This scale measures critical aspects of psychosocial functioning through eight items that provide a single well-being score. The instrument showed a reliability value of 0.89 in previous studies. Because all constructs were assessed by self-report, the design is exposed to common method bias (Hulland et al., 2018); we therefore applied procedural remedies, administering a web-based survey that displays the scales for the different variables on separate web pages (Memon et al., 2023). This procedure avoids placing the items of the different scales in close proximity to one another.

Confirmatory factor analysis

For the Cultural Intelligence Scale, there is some controversial evidence about its structural invariance (Lin et al., 2012; Schlägel and Sarstedt, 2016). We therefore conducted a Confirmatory Factor Analysis, which reveals significant findings across its four dimensions. Each dimension's indicators are statistically significant (p < 0.001) with varying standardized estimates, indicating a robust association with their respective constructs. Metacognitive CQ items 2 and 3, Cognitive CQ items 7, 8, and 10, Motivational CQ items 12 and 13, and Behavioral CQ item 19 are particularly notable for their high standardized estimates.

The overall model fit was assessed through several fit indices. The chi-square test (χ2 = 798, df = 164, p < 0.001) suggests a significant discrepancy between the observed and model-implied covariance matrices, which is common in large samples. The RMSEA (Root Mean Square Error of Approximation) value of 0.0500 (90% CI: 0.0820-0.0881) indicates an acceptable model fit. The CFI (Comparative Fit Index) and TLI (Tucker-Lewis Index) values of 0.885 and 0.867, respectively, though slightly below the preferred threshold of 0.90, still suggest a reasonable fit. The SRMR (Standardized Root Mean Square Residual) value of 0.0943, close to the conventional 0.08 cut-off, is also consistent with an acceptable fit. Overall, while the fit indices indicate that the model could benefit from further refinement, the strong indicator relationships suggest that the Cultural Intelligence Scale has substantial construct validity in its current form. The Confirmatory Factor Analysis results and factor loadings are displayed in Table 1.
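For readers who wish to reproduce internal-consistency figures of the kind reported above, a minimal Python sketch of the Cronbach's alpha computation is given below. The item-level data frame and the column names (cq1 ... cq20) are hypothetical placeholders, not the study's actual dataset, and the synthetic data will of course not reproduce the reported coefficients.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # items: one column per scale item, one row per respondent
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical example: 20 CQ items rated on a 1-7 Likert scale by 508 respondents
df = pd.DataFrame(np.random.randint(1, 8, size=(508, 20)),
                  columns=[f"cq{i}" for i in range(1, 21)])
print(round(cronbach_alpha(df), 2))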
Descriptive statistics and Pearson's correlation matrix Intrapreneurial Self-Capital is moderately correlated with Motivational and Behavioral Cultural Intelligence, and less so with Metacognitive and Cognitive Cultural Intelligence (CQ), as Table 2 illustrates.There is a strong correlation between Intrapreneurial Self-Capital and Flourishing, suggesting a significant link.Metacognitive CQ correlates strongly with Behavioral CQ and moderately with Motivational CQ and Flourishing.The correlation between Metacognitive and Cognitive CQ is weaker, suggesting a more complex relationship.Cognitive CQ has moderate correlations with Motivational and Behavioral CQ, but a weaker correlation with Flourishing, indicating a less direct impact on well-being.Motivational CQ and Behavioral CQ are strongly related, highlighting their close Moderated mediation analyses The results from the Model 7 involve Flourishing as the outcome variable, Intrapreneurial Self Capital as the independent variable, four mediators (Metacognitive CQ, Cognitive CQ, Motivational CQ, Behavioral CQ), and Gender as the moderating variable, with a sample size of 498. First, being the outcome Metacognitive CQ, the model summary shows a moderate R-squared value of 0.1922.The coefficients for Intrapreneurial Self Capital and Gender are significant, as is their interaction term.This interaction indicates that the effect of Intrapreneurial Self Capital on Metacognitive CQ varies depending on the students' gender.For the conditional effects of Intrapreneurial Self Capital at different gender groups, we see a significant effect for males, but not for females.The moderation graph is provided in Figure 2. Moving on to Cognitive CQ, the model again shows a significant overall fit with an R-squared value of 0.0733.Similar to Cognitive CQ, the interaction between Intrapreneurial Self Capital and Gender is significant.The conditional effects reveal that Intrapreneurial Self Capital significantly predicts Cognitive CQ for males, but not for women, indicating a gender-specific effect (Figure 3). Lastly, for Motivational CQ, the model demonstrates a stronger relationship with an R-squared value of 0.2617.The interaction between Intrapreneurial Self Capital and Gender is again significant, with Intrapreneurial Self Capital being a significant predictor of Motivational CQ for both genders, although the effect size varies.The Moderation graph for this relationships is displayed in Figure 4. Overall, these results suggest that the impact of Intrapreneurial Self Capital on various cognitive qualities (Metacognitive CQ, Cognitive CQ, Motivational CQ) is moderated by gender, highlighting the importance of considering gender differences in this context.The significant interaction effects across different outcome variables underscore the nuanced relationship between the independent variable and mediators, shaped by the moderating influence of gender. Regarding the Behavioral CQ, the model summary indicates a reasonably strong relationship, with an R-squared value of 0.2334.In the model, the constant is not significant, which is typical as it represents the expected value of Behavioral CQ when all other variables are zero.The coefficient for Intrapreneurial Self Capital is significant, indicating a strong positive relationship between Intrapreneurial Self Capital and Behavioral CQ. 
The coefficient for Gender is also significant and positive, and the interaction term is significant and negative. In the analysis of conditional effects, for males the effect of Intrapreneurial Self Capital on Behavioral CQ is positive and strong, whereas for females the effect remains significant but weaker. This indicates that Intrapreneurial Self Capital is a more potent predictor of behavioral cultural intelligence among males than among females. The moderation graph for this relationship is displayed in Figure 5.

FIGURE 5 Moderation by Gender of the relationship between Intrapreneurial Self-capital and Behavioral CQ.

Finally, for the prediction of Flourishing, the R-squared value is 59.87%. The direct effect of Intrapreneurial Self Capital on flourishing is strong. However, the mediating roles of the Metacognitive, Cognitive, and Behavioral aspects of Cultural Intelligence show different levels of influence. Metacognitive CQ demonstrates minimal and statistically insignificant mediating effects for both genders; the index of moderated mediation is −0.0191 (95% CI [−0.0507; 0.0166]), indicating that this moderating effect of gender might not be statistically significant. Similarly, for Cognitive CQ the index of moderated mediation is −0.0077 (95% CI [−0.0253; 0.0092]), again suggesting no statistically significant moderated mediation.

FIGURE 2 Moderation by Gender of the relationship between Intrapreneurial Self-capital and Metacognitive CQ.

Discussion

The present study examined the relationships between these factors, revealing significant insights into the dynamics of intrapreneurial capabilities, cultural adaptability, and well-being in a cross-cultural context. The strong direct effect of Intrapreneurial Self-capital on flourishing underscores the importance of intrapreneurial qualities in enhancing well-being. This finding, supporting Hypothesis 1, aligns with existing literature emphasizing the role of self-initiative, resourcefulness, and resilience in promoting psychological health and adaptability, especially in cross-cultural environments. The substantial influence of Intrapreneurial Self-capital on flourishing indicates that fostering intrapreneurial skills may be crucial for the well-being of international students. This finding is in line with previous research showing the role of students' Intrapreneurial Self-capital in predicting flourishing, academic performance, and life satisfaction in other Eastern countries, such as Malaysia (Bee Seok et al., 2020).

The mediating roles of the four dimensions of Cultural Intelligence (Metacognitive, Cognitive, Motivational, and Behavioral) presented a complex picture. First, Metacognitive and Cognitive CQ both showed minimal and statistically insignificant mediation effects. This suggests that the reflective-thinking and knowledge aspects of cultural intelligence might not significantly influence how Intrapreneurial Self-capital translates into flourishing. While these cognitive aspects are essential components of cultural intelligence, they may not be the primary mechanisms through which
intrapreneurial self-capital contributes to students' well-being.This research is according to other findings about the role of emotional constructs in explaining the impact of Intrapreneurial Self-capital on outcomes (Di Fabio and Saklofske, 2019).Then, motivational and Behavioral CQ, in contrast, demonstrated more substantial mediating effects.Particularly, the Behavioral aspect of Cultural Intelligence, especially among males, played a significant role in mediating the relationship between Intrapreneurial Self-capital and flourishing.This implies that the ability to adapt behaviorally in a culturally diverse environment is a crucial factor in leveraging intrapreneurial skills for enhancing well-being.The motivational aspect, though smaller in effect, still indicates the importance of motivation in cultural adaptation as a pathway linking ISC to flourishing.These findings partially support Hypothesis 2, highlighting the nuanced roles different aspects of Cultural Intelligence play in this context.Our findings are aligned with other studies that showed the impact of Cultural Intelligence beyond the educative context on workers' burnout when interacting with immigrants, highlighting the relevance of cultural competence in professional success (Puzzo et al., 2023). The study's exploration into the moderating effect of gender revealed significant insights.The impact of Intrapreneurial Self-capital on various dimensions of Cultural Intelligence was found to be moderated by gender.Specifically, Behavioral and Motivational CQ showed a more pronounced gender difference in their mediating effects.For instance, Behavioral CQ's role as a mediator was stronger for males, suggesting that gender plays a crucial role in how behavioral adaptation in a new cultural environment influences the relationship between ISC and flourishing.These findings suggest that gender differences should be considered in understanding how intrapreneurial capabilities interact with cultural intelligence to impact well-being.The significant moderation by gender, especially in the Behavioral and Motivational dimensions of Cultural Intelligence, underscores the need for gender-sensitive approaches in facilitating the flourishing of international students.This supports Hypothesis 3 and its sub-components, highlighting the importance of considering gender in the dynamic interplay between intrapreneurial self-capital and cultural intelligence.This finding is in line with previous studies that have stated gender differences on the Cultural Intelligence-related outcomes (Davis, 2013;Maeland and Wattenberg, 2017).Hereafter, we need to recognize that other relevant variables, not taken into account in the present research, could be considered.Among the predictors of Cultural Intelligence, some studies have considered personality traits and its relationships with cultural adaptation via Cultural Intelligence (Ward and Fischer, 2008;Shu et al., 2017;Chiesi et al., 2020).In this sense Mao and Liu (2016) provided evidence on the moderator role of social support into the relationships between Cultural intelligence and adaptation of Chinese international students. 
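As a methodological footnote to the analyses discussed above, the moderated mediation logic of PROCESS Model 7 can be approximated with ordinary regressions plus a bootstrap for the index of moderated mediation. The sketch below is a schematic illustration only, with hypothetical column names (isc, gender, cq_beh, flourish); it is not the study's code, data, or exact estimator.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def index_of_moderated_mediation(df: pd.DataFrame, n_boot=5000, seed=1):
    # Model 7 logic: mediator ~ X * W ; outcome ~ X + mediator
    # index = a3 * b, where a3 is the X-by-W interaction and b the mediator effect.
    rng = np.random.default_rng(seed)
    idx = []
    for _ in range(n_boot):
        s = df.sample(len(df), replace=True, random_state=int(rng.integers(1 << 31)))
        a3 = smf.ols("cq_beh ~ isc * gender", data=s).fit().params["isc:gender"]
        b = smf.ols("flourish ~ isc + cq_beh", data=s).fit().params["cq_beh"]
        idx.append(a3 * b)
    lo, hi = np.percentile(idx, [2.5, 97.5])
    return float(np.mean(idx)), (float(lo), float(hi))  # estimate and 95% bootstrap CI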
Limitations of the present study and implications for future research The recruitment of participants through various social networks, while efficient in reaching a broad and diverse group of Chinese international students, introduces a potential selection bias.This approach might have limited the sample to students who are active on these platforms, potentially excluding those who do not use these social networks or have different usage patterns.Consequently, the findings may not fully represent the entire population of Chinese international students, affecting the generalizability of the results.Additionally, the response rate seems robust, but the exclusion of participants who did not complete 100% of the survey may lead to response bias.This criterion could overlook perspectives of students who chose not to complete the entire survey, possibly due to different experiences or views. Furthermore, the cross-sectional design of the study limits the ability to establish causality between intrapreneurial self-capital, cultural intelligence, and flourishing.A longitudinal approach would be more suitable to understand how these relationships develop over time, particularly as students adapt to new cultural environments.Following this idea, some previous studies tested the predictive validity of Cultural Intelligence over time stating the importance of Motivational CQ as negative predictor of psychological problems during international adaptation (Ward et al., 2011). The reliance on self-report measures for key constructs like Intrapreneurial Self-capital, Cultural Intelligence, and flourishing, despite using reliable scales, raises concerns about social desirability bias and individual differences in self-awareness.This may affect the accuracy of the responses. The specific focus on Chinese international students also means that the findings might not be applicable to students from other cultural backgrounds or to Chinese students within their own country, limiting the study's broader applicability.In this sense, some previous studies showed a relevant role of perceived cultural distance in the complex pattern of relationships between Cultural Intelligence and international students' adjustment (Malay et al., 2023). Moreover, the use of translated versions of scales, such as the Flourishing Scale Spanish Version, requires careful consideration.On the one hand, translation can sometimes alter the meaning of items, potentially impacting their interpretation and the validity of the results.On the other hand, some of the scales used in the present research have demonstrated their cross-cultural equivalence with different samples, included Chinese participants (Bücker et al., 2016;Schlägel and Sarstedt, 2016).These limitations highlight the need for cautious interpretation of the findings and suggest areas for improvement in future research. 
Despite the recognized shortcomings, the study's findings have important implications for the support and development programs for international students.Similar initiatives focused on the Intrapreneurial Self-capital training have been recently applied (McIlveen and Di Fabio, 2018).This kind of initiatives have proven efficacy in different contexts, as educative or professional fields (Azevedo and Shane, 2019;Zhang et al., 2022).Educational institutions and policymakers should consider tailoring their initiatives to enhance ISC and CQ, taking into account the different ways these factors interact based on gender.Furthermore, the nuanced roles of various CQ dimensions suggest that interventions should be multifaceted, focusing not only on knowledge and awareness but also on behavioral adaptation and motivational factors. Future research could explore these relationships in different cultural contexts and among students from other backgrounds to understand the universality and specificity of these findings.Additionally, longitudinal studies could provide insights into how these relationships evolve over time, especially as students adjust to new cultural environments. Societal and policy implications Firstly, the strong direct effect of Intrapreneurial Self-Capital on flourishing underscores the importance of educational policies that support the development of intrapreneurial skills such as selfinitiative, resourcefulness, and resilience in international students.These skills are essential not only for academic success but also for psychological well-being in a cross-cultural setting.Educational institutions should integrate programs that foster these qualities, recognizing their role in enhancing the overall well-being of students. Furthermore, the nuanced roles of different dimensions of Cultural Intelligence (Metacognitive, Cognitive, Motivational, and Behavioral) in this context suggest that educational and social programs need to be multifaceted.The substantial mediating effects of Motivational and Behavioral CQ, particularly among males, highlight the critical role of adaptive behavior and motivation in cultural environments.Educational programs should thus tailor their approaches to cultivate these specific aspects of Cultural Intelligence, recognizing their differential impact on student flourishing. The study's findings on the moderating effect of gender in the relationship between Intrapreneurial Self-Capital and Cultural Intelligence also have important implications.The pronounced gender differences, especially in Behavioral and Motivational CQ, suggest the need for gender-sensitive approaches in educational and social interventions.Policies and programs designed to facilitate the flourishing of international students should consider these genderspecific dynamics to be more effective. In addition, the potential influence of other variables, such as personality traits and social support, on Cultural Intelligence and adaptation, underscores the complexity of these dynamics.Future research and policy development should consider these broader factors to fully understand and support the adaptation and flourishing of international students in cross-cultural environments. 
Conclusion

In conclusion, this study contributes to a deeper understanding of the factors contributing to the flourishing of Chinese international students. It highlights the critical role of intrapreneurial self-capital, the nuanced mediating roles of different dimensions of cultural intelligence, and the significant moderating effect of gender. These insights are invaluable in guiding efforts to support the well-being and adaptation of international students in multicultural settings. Overall, this study not only contributes to academic understanding but also provides practical insights for policymakers and educators in shaping supportive environments for international students. By acknowledging and addressing the complex interplay of intrapreneurial skills, cultural intelligence, and gender, policies can be more effectively tailored to enhance the well-being and success of this important student population.

TABLE 1 Factor loadings for the Cultural Intelligence Scale.
RSA Cryptosystem Speed Security Enhancement (Hybrid and Parallel Domain Approach) : Encryption involves every aspect of working with and learning about codes. Over the last 40 years, it has grown in prominence to become a prominent scholarly discipline. Because most interactions now take place online, people require secure means of transmitting sensitive information. Several modern cryptosystems rely on public keys as a crucial component of their architecture. The major purpose of this research is to improve the speed and security of the RSA algorithm. By employing Linear Congruential Generator (LCG) random standards for randomly generating a list of large primes; and by employing other selected algorithms, such as the Chinese Remainder Theorem (CRT) in decryption, exponent selection conditions, the Fast Exponentiation Algorithm for encryption, and finally, a comparison of the enhanced RSA versus the normal RSA algorithm that shows an improvement will be provided. Introduction With the use of cryptography, sensitive information may be concealed from prying eyes. The protection provided by urban touchable and physical devices against data access by unauthorized parties is insufficient. Therefore, specialists and developers need to build and extend safety mechanisms to protect data and prevent attacks from starting at such a crucial point. For this reason, "encryption" was borrowed from elsewhere; it is a crucial component of any adequate safety system and a prerequisite for any practical means of influencing or creating such a system. Skill is required to keep the secret from random people. The importance of data encryption over simple message transport is growing as our collective knowledge base expands. Cryptography is used to protect this information, and it may be broadly classified into two subfields: secret-key and public-key cryptography [1]. Today's cryptography relies significantly on the principles of mathematics and computer science. Due to the computational hardness assumptions used in their creation, cryptographic algorithms are very difficult for an adversary to crack in reality. It is conceivable in theory to crack such a system, but doing so in practice is currently impossible. Therefore, we call these methods "computationally and parallel domain approach, the study contribution increases the speed and security of the RSA algorithm. Organization This research study is based on the speed and security of the RSA cryptosystem algorithm in hybrid and parallel domain techniques. The paper is broken into sections. The introduction, problem statement, outcomes, and contributions are all included in Part 1. Section 2 addresses the technical background of cryptography; Section 3 examines the literature review, and Section 4 provides an indepth look at the RSA. Section 5 comprises all of the implemented methods and mathematical methodologies for the improved RSA algorithm. Section 6 contains the improved RSA algorithm. Section 7 delves deeper into the results and debate. Finally, Part 8 will summarize the research with recommendations for future work, and the appendix section provides all of the Python codes used for creating our modified RSA and results on GitHub [5]. Background Cryptography ensures data integrity in information security, providing assurance that data has not been altered with hash algorithms. One other benefit of using digital signatures is that they may be used for non-repudiation or authentication of the parties involved [6], they are two types of cryptography. 
Symmetric key cryptography Symmetric cryptography Figure 1 shows the simplest type of encryption when both parties use the same secret key to encrypt and decrypt data. They are two distinct categories of symmetric key cryptography designated by the quantity of data they are able to process: block ciphers and stream ciphers. The block cipher organizes the input data into blocks or groups of consistent size (within a few bytes), facilitating efficient encryption and decryption, while the stream cipher converts the data format suitable for encryption and decryption. Examples of symmetric encryption algorithms include Blowfish, AES, RC4, DES, RC5, and RC6. One of the primary advantages of symmetric algorithms is their ability to be used in real-time systems, which asymmetric algorithms cannot because each user needs their own unique key. Asymmetric key cryptography The use of two distinct keys is at the heart of asymmetric cryptography, commonly known as public-key cryptography. When encrypting plain text with asymmetric encryption, two keys are used a public one that may be shared openly and a private one that shouldn't be revealed under any circumstances. Asymmetric cryptography provides not just the privacy that encryption does but also using digital signatures it is possible to acquire authentication, non-repudiation, and integrity. The context of a signed message is compared to the input used to generate the signature [7]. To counter the fact that anybody in possession of the secret key may read an encrypted communication, asymmetric encryption makes use of a pair of keys that are mathematically connected to one another RSA, DSA, and Diffie-Hellman are all examples of asymmetric algorithms. Using symmetric cryptography with a straightforward and safe key exchange system enabled by an asymmetric key method makes it possible to have a quick technique that generates compact cipher texts [8]. Related work Many enhancements were made to the RSA algorithm by many researchers [4]. Some relevant studies are included below: For the hybrid domain, which is the improvement of Classical RSA and one or two additional algorithms. Rebalanced RSA and Multi-Prime RSA were proposed by Boneh and Shacham [9]. A strategy for integrating two previously improved RSA versions was developed by Alison et al. [10]. To increase the security levels and overall execution speed of the method, Gupta and Sharma [11] integrated RSA with the Diffie-Hellman public key cryptographic technique. Based on an additive homomorphic property, Dhakar et al. [12] introduced the MREA (Modified RSA Encryption Algorithm), and they demonstrated that it is far more secure than the regular RSA and extremely resistant to brute force attacks. The RSA and El-Gamal cryptosystems were joined by Ahmed et al. [13] using the Discrete Logarithm Problem (DLP) for El-Gamal and the Integer Factorization Problem (IFP) for RSA. For asymmetric cryptosystems, the pairing of IFP with DLP gave a reasonable processing speed. The El-Gamal and RSA algorithms were less effective than the indicated system computations as a result. In order to secure the upload of data to the cloud, Mahalle and Shahade [14] presented an RSA variation that makes use of the AES method and three keys (a public key, a private key, and a secret key). The authors' conclusion is that this AES-RSA hybrid will effectively give cloud users data security. In order to make factorization more difficult overall, Arora and Pooja [15] proposed a novel algorithm in 2015. 
This algorithm uses a hybrid of RSA and the El-Gamal algorithm. AES + RSA and Twofish + RSA, a hybrid implementation of one symmetric algorithm with another asymmetric algorithm, were recently proposed by Jintcharadze and Iavich [16]. They came to the conclusion that RSA + Twofish outperforms the abovementioned hybrid algorithms in terms of speed and memory use. Alamsyah and others [17] to increase the security of two-factor authentication, combined RSA and the one-time pad approach. For multi-threading or parallel techniques. C. W. Chiou [18] compares a fresh modular exponentiation technique to reduce the time it takes for modular exponentiation to execute in 1993. Because of fewer operations, this method offered a roughly 33% greater throughput. Ayub et al. [19] developed an OpenMP-based parallel CPU-based RSA method implementation that parallelizes the algorithm's exponentiation phase to aid in speedy encryption and decryption. Their analysis concludes that the program's execution time has been improved. Rahat et al. [20] improved efficiency by using a unique parallel data structure termed a Concurrent Indexed List of character blocks in 2019. The essay presented three different simultaneous RSA implementations. With a possible speed-up factor of 4.5. For the CRT enhancements domain. Wu et al. [21] presented a CRT-RSA in 2001. Their proposal use Montgomery's algorithm rather than CRT, yielding better results for decryption and digital signatures. Blomer et al. [22] presented another CRT-RSA for using CRT to solve fault attack vulnerabilities on RSA-based signature algorithms. According to them, CRT-RSA is widely employed in smart card transactions. Sony et al. [23] proposed using multiple keys and CRT to improve data transmission security by increasing processing time and algorithm security. Quisquater and Couvreur [24] proposed a rapid decryption technique in 1982. Based on previously reported Standard RSA weaknesses, they developed an RSA deciphering method that employs an enhanced modular exponentiation methodology and is based on the CRT, with the goal of increasing overall performance time. Finally, Aiswarya et al. [25] proposed a novel Binary RSA Encryption Algorithm (BREA) encryption method in 2017. Its security is further increased by converting the encrypted cipher text generated by the Modified RSA Encryption Algorithm (MREA) into binary code format. As a result, the intruder will struggle to decrypt the data. In the same year, Sahu et al. [26] presented a more secure approach than the original by making modulus n private as well. RSA Algorithm In 1977, Ron Rivest, Adi Shamir, and Leonard Adleman of the Massachusetts Institute of Technology revealed the public description of the Rivest Shamir Adleman (RSA) algorithm [27]. The system's safety comes from the difficulty of factoring the products of two large prime numbers. This difficulty is the foundation of the one-way RSA core function, which is straightforward to calculate in one direction but prohibitively so in the other. Consequently, RSA is secure since it is mathematically impossible to obtain such numbers or it would take too much time to do so, regardless of the available computing power. In addition, the encryption's safety is directly proportional to the size of the key. An algorithm's effectiveness increases exponentially when its size is doubled. Typical bit lengths for RSA keys are 2048 or 4096. 
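As a concrete illustration of this determinism (not taken from the paper; toy key values are used purely for demonstration), the short sketch below shows that textbook RSA maps equal plain texts to equal cipher texts, which is exactly what a chosen-plaintext comparison exploits.

# Textbook RSA is deterministic: the same plain text and key always give the
# same cipher text. Toy parameters for illustration only; real moduli are 2048+ bits.
p, q = 999983, 1000003
n, e = p * q, 65537                        # public key only is needed by the attacker

pool = [42, 1000, 123456]                  # attacker's guesses for the plain text
intercepted = pow(123456, e, n)            # cipher text observed on the wire
matches = [m for m in pool if pow(m, e, n) == intercepted]
print(matches)                             # -> [123456], identified without decrypting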
RSA isn't just used for encryption; it's also the basis for digital signatures, in which only the owner of a private key may "sign" a message, but anybody can check its authenticity using the public key. Several protocols, including SSH, OpenPGP, S/MIME, and SSL/TLS, rely on RSA signature verification [28]. Many firms utilize RSA for personnel verification. Cryptographic techniques are used in chip-based smart cards to ensure security by verifying the PIN code [29]. Pretty Good Privacy (PGP), a freeware program that provides encryption and authentication for e-mail and file storage applications, also uses RSA for key transfer. Furthermore, SSL provides data security by establishing an RSA key exchange during the SSL handshake, authenticating client and server at the boundary between the TCP/IP transport protocol and application protocols such as HTTP, Telnet, Network News Transfer Protocol (NNTP), or File Transfer Protocol (FTP). The determinism of the original RSA encryption means that the same plain text will always yield the same cipher text when using the same key pair. Due to this feature, the technique is susceptible to a chosen-plaintext attack, in which an attacker encrypts candidate plain texts from a known pool and checks whether the results are equal to previously observed cipher texts. By making this comparison, an adversary can learn about the original data without having to decrypt it [3].

Structure of Classical RSA algorithm
• Generate large prime numbers P and Q.
• Calculate the modulus N = P · Q.
• Calculate Euler's totient Φ(N) = (P − 1)(Q − 1).
• Choose a public exponent e with 1 < e < Φ(N) and gcd(e, Φ(N)) = 1.
• Compute the private exponent d = e^(−1) mod Φ(N).
• Encryption: C = M^e mod N.
• Decryption: M = C^d mod N.

Implementation of Algorithms

We used many theories and algorithmic techniques to improve the total performance of the RSA cryptosystem in our modified RSA algorithm, from prime number creation through key generation, encryption, and decryption. All of the approaches applied for the enhancement are listed here.

Linear Congruential Generator

Pseudo-random numbers are created deterministically (meaning that they can be replicated) and must look independent, with no visible patterns. We utilized this approach to quickly construct a large list of odd integers for a subsequent primality test. The algorithm's Python code is on my GitHub [5]. The generator is defined by

Xn+1 = (P1 · Xn + P2) mod m    (1)

where Xn are the random integers generated, P1 is the multiplier, P2 is the increment, m is the modulus, and X0 is the initial seed value of the series. As in [29], the linear congruential generator of Equation (1) has a full period (cycle length of m) if and only if the following conditions hold:
• gcd(P2, m) = 1;
• P1 ≡ 1 (mod p) for every prime p dividing m;
• P1 ≡ 1 (mod 4) if 4 divides m.

With m = 2^k, P1 = 4b + 1, and P2 an odd number, where b, k > 0, we get a full period of length m. For our RSA prime generation approach, the modulus m is taken as the nearest known prime above 2^(n+1), n being the key size in bits; this value is fixed (constant) in our approach.

Miller-Rabin Primality Testing

The Miller-Rabin primality test [31] is a probabilistic test that determines whether a given number is likely to be prime; it is among the simplest and fastest tests known and builds on Fermat's Little Theorem [2]. We need to generate two huge prime numbers during the RSA key generation process, and afterwards we must ensure that they are indeed prime. Let us verify the primality of an odd number n.
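Before stating the test itself, a minimal sketch of the candidate-generation front end described in Section 5.1 is given below. The parameter choices (seed, multiplier, increment) are illustrative assumptions, and a power-of-two modulus is used for simplicity rather than the prime modulus fixed in our approach; candidates produced this way would then be passed to Algorithm 1.

# LCG-driven stream of odd, full-size candidates for a later primality test.
# Constants below are illustrative only, chosen to meet the full-period conditions.
def lcg_odd_candidates(n_bits, seed=12345, count=10):
    m = 1 << (n_bits + 1)       # power-of-two modulus (simplification)
    p1 = 4 * 987654321 + 1      # multiplier of the form 4b + 1
    p2 = 2 * 1234567 + 1        # odd increment, so gcd(p2, m) = 1
    x, out = seed, []
    while len(out) < count:
        x = (p1 * x + p2) % m
        cand = x % (1 << n_bits)
        cand |= (1 << (n_bits - 1)) | 1   # force the bit length and oddness
        out.append(cand)
    return out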
Algorithm 1: Miller-Rabin primality test
1. Find integers k and m, with m odd, such that n − 1 = 2^k · m.
2. Choose a random integer a ∈ [2, n − 2].
3. Compute b0 = a^m mod n. If b0 ≡ ±1 (mod n), n passes this round.
4. Otherwise compute bi = b(i−1)^2 mod n for i = 1, ..., k − 1. If some bi ≡ −1 (mod n), n passes this round and is probably prime; if a bi ≡ 1 (mod n) appears without a preceding −1, or no −1 ever appears, n is composite.

Repeat the test "i" times with fresh random bases. The probability of a composite n passing i rounds is at most (1/4)^i; for our implementation the probability is 2^(−128). The GitHub repository [5] has Python code for this algorithm.

Extended Euclidean algorithm

Euclid's algorithm: if gcd(a, b) = d, then there exist integers x, y such that ax + by = d. The first step of the Euclidean method is to divide the larger integer a by the smaller integer b to obtain the quotient q1 and the remainder r1 (less than b):

a = q1·b + r1

Next, we apply the same step to b and r1, and so on:

b = q2·r1 + r2
r1 = q3·r2 + r3
...
r(n−2) = qn·r(n−1) + rn
r(n−1) = q(n+1)·rn + 0

The computation of the greatest common divisor is complete once rn | r(n−1). Working back up the chain, any common divisor d of a and b divides r1, then r2, and so on down to rn; hence rn is the greatest common divisor of a and b, and back-substitution through the same equations yields the coefficients x and y.

Fermat's Little Theorem

Let a ∈ N and let p be a prime number. Then

a^p ≡ a (mod p)    (6)

and, when gcd(a, p) = 1,

a^(p−1) ≡ 1 (mod p)    (7)

Chinese Remainder Theorem (CRT)

A system of two or more linear congruences does not necessarily have a solution, even though each of the individual congruences does [33]. The Chinese Remainder Theorem (CRT) describes the solutions of a system of simultaneous linear congruences: x ≡ a (mod m) and x ≡ b (mod n) have a unique solution modulo m·n if the moduli are pairwise relatively prime, that is, gcd(m, n) = 1.

Fast Exponentiation Algorithm

The speed with which we can calculate M^e (mod N) for integers of this magnitude is a significant aspect of public-key cryptography; the integers used in modern RSA keys are at least 1024 bits long. The traditional method for raising to a power, say x^8, is

x → x^2 → x^3 → x^4 → x^5 → x^6 → x^7 → x^8

that is, one squaring followed by six successive multiplications, a total of seven operations. In RSA, a naive method of this kind would require about 2^1024 multiplications for a 1024-bit exponent, which is not at all practical. The square-and-multiply algorithm is the fastest elementary way to perform this exponentiation. For the same example, x^8 is computed as

x → x^2 → x^4 → x^8

using only 3 squaring operations (note that a squaring and a multiplication have the same time complexity). To know the number of squaring and multiplying operations required, we convert the exponent to its binary equivalent: for every bit equal to "1" we square and then multiply, in that order, and for every bit equal to "0" we apply the square operation only. This method is called the fast exponentiation algorithm [34]. See Algorithm [3].

Recommendations to select the value "e"

We recommend using a very small public key value "e", less than √Φ(n). Here the trapdoor value is Φ(n) = (p − 1)(q − 1); with p and q primes of at least 1024 bits, the size of Φ(n) will be approximately equal to that of n, a little smaller but of the same magnitude (number of bits).
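A quick numeric check of this size relationship, with toy primes chosen purely for illustration (not the recommended key sizes; pow(e, -1, phi) requires Python 3.8+):

# With a small public exponent e, the private exponent d usually comes out
# with nearly as many bits as the modulus n. Toy primes for illustration only.
p, q = 999983, 1000003
n = p * q
phi = (p - 1) * (q - 1)
e = 257                                   # small Fermat prime F3, e < sqrt(phi)
d = pow(e, -1, phi)                       # modular inverse of e modulo phi
print(n.bit_length(), d.bit_length())     # d's bit length is close to n's
assert (e * d) % phi == 1

The reasoning behind this size effect is developed in the next paragraphs.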
Since the public key "e" and the private key "d" are inverses in the field Φ (n), that is, d = inv [e, Φ (n)] then we get the following relation e· d (mod Φ (n)) = 1, for this equality to hold, the product "e· d" must leave the field Φ (n) at least once so that the operation in that module returns the value 1. In other words, it will be true that e· d = k· Φ (n) + 1, with k = 1, 2, 3, 4... For this product to come out at least once from the body Φ (n), that is with k = 1, given that the public key "e" has for example 17 bits, the value of the private key d should be at least greater than 1007 bits, for the hypothetical case (and with almost null probability) that the equation is fulfilled for k = 1. In practice, that value of k = 1 or a low value of k, will be very unlikely and, therefore, we can expect a private key "d" very close to or equal in bits to the value of n as it happens in the practice. In other words, it will be computationally difficult to guess the value of the private key d, since finding a number within a 1024-bit body means an intractable computation time, with an average of 21023 attempts [35]. Forcing then the public key e to being a small value, less than 20 bits within a body of 1024 bits or greater guarantees that the private key d is a very large value and, therefore very secure since it is computationally intractable to find it by brute force. In our model, we will choose value "e" from Fermat's numbers less than√Φ (n), and since all Fermat's numbers are prime numbers we won't necessarily need to check if gcd(e, ϕ(n)) = 1 which saves time. This relatively small value of "e" forces the private key "d" to have a size similar to the modulus n and makes a brute force attack impractical. For example the value 65537 a known prime number as Fermat number (F4) was used to create the SSL certificates See Figure 3. [Fermat's prime] Fn = 2 2 In addition to the advantages mentioned, Fermat primes such as F4 have a significant feature that is worth highlighting. The binary representation of F4 has only two one bits equal to 65, 53710 = 100000000000000012 = 01000116. This fact has great utility in exponentiation computational efficiency. Paired private and public keys When an RSA key is generated, Euler's thotient function Φ (N) is used as a trap to calculate the private key "d" knowing the public key "e". Since (e, Φ(N)) = 1, it is guaranteed that the inverse "d" exists and that it is the only inverse of "e" in that field Φ(N). The encryption is done afterward in the public body N so that anyone can use it. And in said body N it is no longer satisfied that the only inverse of the public key "e" is the private key "d". There is at least one other value other than a "d" that allows deciphering what is encrypted with the public key. These keys are called paired private keys. It has been said that an asymmetric cipher system has a single public key, and therefore also a single private key. For the RSA cryptosystem, this has turned out to be false. An example will better illustrate this. That is, for the ciphered body N = 2109 with public key e = 13, the numbers 349, 853, and 1357 are paired private keys d0 that fulfill my function more than the private key d. Every RSA key will have at least one matching private key. The number of even private keys depends strongly on the primes P and Q. Unencryptable messages One of the security vulnerabilities are non-recommended keys that either do not encrypt the information to protect or do so in a predictable way. 
For example, in the symmetric algorithm DES (Data Encryption Standard) there were weak or semi-weak keys, which did not satisfy the principles enunciated by Shannon and gave solutions known as false solutions. Something similar happens in RSA, where there are unencryptable messages, or rather, unencryptable numbers. For any exponent e,

0^e mod N = 0 and 1^e mod N = 1    (10)

Another value that is transmitted in the clear is (N − 1), since for odd e,

(N − 1)^e mod N = N − 1    (11)

Example: Consider the RSA key with n = 17 · 29 = 493 and e = 11. In addition to these three numbers (0, 1, and 492), in this example there are another 6 numbers that are not encrypted: 86, 203, 204, 289, 290, and 407. Locating these unencryptable numbers requires a brute-force encryption attack on the space of the primes p and q, verifying the values of X that satisfy X^e (mod p) = X for 1 < X < p − 1 and X^e (mod q) = X for 1 < X < q − 1. Keys of 1024 bits or more make calculations within the primes p and q computationally intractable if each has at least 512 bits; therefore, for those keys, it will not be possible to find the remaining unencryptable numbers. The equations to calculate the unencryptable numbers are shown below. The number of unencryptable numbers σn within a field n is

σn = [1 + gcd(e − 1, p − 1)] · [1 + gcd(e − 1, q − 1)]    (12)

and the unencryptable numbers are

N = [q · (inv(q, p)) · Np + p · (inv(p, q)) · Nq] mod n    (13)

where Np are the solutions of N^e mod p = N and Nq are the solutions of N^e mod q = N. As can be seen, the only complicated calculation occurs in the last two equations, which amounts to attacking by brute force all values of N that are candidates to be non-cipherable numbers, with 1 < N < (p − 1) for prime p and 1 < N < (q − 1) for prime q.

Optimised RSA algorithm

For our modified RSA, we drew the following conclusions concerning the sizes of the operands:
• Carefully select the prime numbers p and q such that factoring them would be computationally unfeasible. As a rule of thumb, the bit lengths of these primes should be roughly comparable. For example, if the number n is 1024 bits in length, then the appropriate sizes for p and q are approximately 512 bits.
• The exponent is typically small in order to speed up the exponentiation.

RSA Modified
• Generate large prime numbers P and Q: generate a list of unpredictable random numbers using the LCG of Section 5.1, then test for primality with the Miller-Rabin probabilistic test of Section 5.2.
• Calculate the modulus N = P · Q. The key length, which is commonly stated in bits, determines its size. We recommend the Karatsuba algorithm of Section 5.3 for the computation of N; the Karatsuba approach begins to pay off as the number of digits increases, as it can multiply numbers of hundreds of digits faster than the standard technique.
• Calculate X to replace N: X = N − (P + Q).
• Calculate Euler's totient function as Φ(N) = X + 1, which replaces the multiplication (P − 1)(Q − 1) by a subtraction, since (P − 1)(Q − 1) = N − (P + Q) + 1.
• Select a small public exponent e (a Fermat prime less than √Φ(N), as recommended above) and compute the private exponent d = e^(−1) mod Φ(N) with the Extended Euclidean algorithm.
• Encryption: cipher text C = M^e (mod N), computed with the fast exponentiation algorithm.
• Decryption: plain-text message M = C^d (mod N). We proposed to use the CRT (Section 5.6) and Fermat's Little Theorem for fast computation while maintaining security.

RSA Modified Example

We begin by walking through how to generate the RSA encryption and decryption keys; after that, we go through a basic example to see how encryption and decryption work in practice.
• Two prime numbers P and Q are chosen. In practice such numbers must be of the order of 1024 bits at least; for our example we use small primes of roughly 16 bits for illustration (a 16-bit number has about 16 · log10 2 ≈ 5 decimal digits). Using Equation (5), P = 23321 and Q = 67699, so N = P · Q = 1,578,808,379 and Φ(N) = N − (P + Q) + 1 = 1,578,717,360.
• A Fermat value of e < √Φ(N) ≈ 39733 is chosen. We will take Fermat number three.
e = F3 = 2^(2^3) + 1 = 257 < 39733; since 257 is prime and does not divide Φ(n), gcd(257, 1578717360) = 1.
• Calculate d, the multiplicative inverse of e modulo Φ(n). Since gcd(e, Φ(n)) = 1, we apply the Extended Euclidean algorithm [1] to express 1 as a linear combination of e and Φ(n); this gives d = 116,714,513, since 257 · 116,714,513 = 19 · Φ(n) + 1.
• Encryption: C = M^e mod N with M = 123456, computed with the fast exponentiation algorithm.
Step 1: Compute the binary representation of the exponent 257: 257_10 = 100000001_2.
Step 2: Read the binary representation of the exponent from left to right: 100000001_2 = b0 b1 b2 b3 b4 b5 b6 b7 b8.
Step 3: For every bit bi = 1 apply a squaring and a multiplication, and for every bit bi = 0 apply only a squaring. This completes the encryption process.
• Decryption with the CRT: the residues of the recovered message are Mp = 6851 (mod P) and Mq = 55757 (mod Q), and the CRT recombination gives
(6851 · 67699 · 9075) + (55757 · 23321 · 41355) = 57,983,316,650,610, and 57,983,316,650,610 mod (23321 · 67699) = 123456.
Plain-text message M = 123456, as expected.

We also want to look into the time complexity of each method. In this situation, we are interested in the algorithm's efficiency, that is, how long it takes the function to generate at least two prime integers. As shown in Table 2, traditional prime generation always yields a single prime number by randomly selecting an odd number and testing for primality, whereas our suggested LCG prime generation method yields a list of primes in a short time. Figure 4 illustrates this with a graph.

Key Generation

Consider n as the number of bits of P or Q, both prime, and N as the product of P and Q.

Modulus computation

The time complexity of computing the modulus N as the product of two big primes P and Q of the same bit size is quadratic in the operand size, that is, O(n^2) in big-O notation. By using the Karatsuba algorithm, Section [5.3], the time complexity goes down to O(n^(log2 3)), approximately O(n^1.58).

Totient computation

The time complexity of computing Φ(N) as the product of two large integers (P − 1) and (Q − 1) of the same bit size is likewise O(n^2). By substituting X = N − (P + Q) and Φ(N) = X + 1, the time complexity goes down to O(n).

Making the exponent smaller vs Classical RSA

In making the algorithm faster, we can use a small exponent and RSA will still be secure. In Table 3, by writing the public key exponent e in binary representation we can tell the number of operations required by the square-and-multiply exponentiation algorithm; fast encryption is possible using this small exponent and, in short, RSA remains secure. The public exponent e can be smaller in this case, but d must then be as big as N, so we applied the Chinese Remainder Theorem (CRT) to accelerate decryption. This technique reduces one n_bits modular exponentiation to two n/2_bits modular exponentiations plus the CRT steps described above, while values such as the inverse of P modulo Q can be precomputed and saved. The Chinese Remainder Theorem, according to Table 4, is substantially faster at the expense of system parameters and memory, and it does not require a general modular inversion implementation at run time, saving development costs and memory. As illustrated in Figures 6, 7 and 8, both the CRT and decreasing the exponent need less processing time than standard RSA. Because it adds randomness to the cryptographic method, the improved RSA increases both speed and security.

Comparison of Total Time Complexity

The time complexity of the enhanced RSA method and of the traditional RSA algorithm will now be described using big-O notation. For the time complexity of basic operations, we have the following properties, as described by [36] and [37]:

Sum (x + y): the sum of two n-bit values has a time complexity of O(n).
Comparison of Total Time Complexity

The time complexity of the enhanced RSA method and of the traditional RSA algorithm will now be described using big-O notation. For the individual operations we have the following costs, as described by [36] and [37]:
- Sum (x + y): the sum of two n-bit values has a time complexity of O(n).
- Subtraction (x − y): an addition of a negative integer, with asymptotic time complexity O(n).
- Multiplication (x · y): the product of two n-bit values has a time complexity of O(n²).
- Division (x / y): a multiplication by an inverted integer, with asymptotic time complexity O(n²).
- Modulo reduction (A mod N): given an n-bit A, the algorithm's time complexity is O(n²).
- Modular exponentiation (A^B mod N): time complexity O(n³).
- Extended Euclidean algorithm: using the rule gcd(a, b) = gcd(b, a mod b), Euclid's method computes the gcd of two integers a and b and returns the last value of a as the gcd when b reaches zero. To analyse it, assume a and b of n bits and note that a mod b < a/2 at each iteration; the algorithm therefore makes only O(n) recursive calls (each division removes at least one bit), and a division of complexity O(n²) is performed at every recursive step to obtain the new argument b for the following iteration. Adding up the whole running time we obtain O(n) · O(n²) = O(n³).
- Multiplicative inverse: time complexity O(n³), owing to the use of the extended Euclidean algorithm.
- Primality test: the key N is n bits long, while the prime p is n/2 bits long. In practice the primality test is performed with the probabilistic Miller-Rabin test (Algorithm 5.2). The most expensive operation in the Miller-Rabin technique, given a number p of n bits, is the modular exponentiation, with O(n³) complexity. Furthermore, to reach a high level of confidence the test is repeated about ln(p)/2 times [38], which contributes a factor of O(n). The final complexity is therefore O(n³) · O(n) = O(n⁴).

From all of the above time-complexity properties we can compare the overall performance of standard RSA with the proposed RSA, as presented in Table 5.

Conclusion

This study proposed a combination of the Chinese Remainder Theorem (CRT), a fast exponentiation algorithm, a small public exponent and a Linear Congruential Generator (LCG) with the standard RSA algorithm for fast and secure communication over large volumes of data. We applied the proposed theorems, multiplication algorithms and randomness techniques to redesign the execution of the RSA cryptosystem, improving speed on the encryption side while data security was maintained and strengthened with the CRT on the decryption side. This work includes a detailed overview of the most successful classical cryptographic technique, RSA. The study proposed an RSA variant with the Chinese Remainder Theorem and a small value of the exponent, which helped enhance the cryptographic algorithm. Another contribution is the discussion of different multiplication algorithms and their pseudo-codes. Future work will discuss the practical implementation of the modified RSA algorithm against security attacks and threats; but unless we encounter a situation that puts our data in danger, we will continue to use what we have.
v3-fos-license
2017-10-17T05:26:12.170Z
2012-01-01T00:00:00.000
22349580
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201226935182813&method=download", "pdf_hash": "2b7ab28097c4196a039cae53019d53fb8805f675", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2741", "s2fieldsofstudy": [ "Medicine" ], "sha1": "2b7ab28097c4196a039cae53019d53fb8805f675", "year": 2012 }
pes2o/s2orc
Houttuynia cordata Thunb Fraction Induces Human Leukemic Molt-4 Cell Apoptosis through the Endoplasmic Reticulum Stress Pathway

Introduction

Apoptosis is a programmed cell death found in both physiological and pathological processes such as sloughing off of the gastrointestinal epithelium, neurological degenerative diseases, autoimmune diseases and infection. The mechanism of apoptosis can be divided into two main pathways, viz. the death receptor and mitochondrial pathways. Fas or tumor necrosis factor receptors are bound by their ligands and become trimerized. FADD or TRADD then form a complex with the receptors and activate procaspase-8 or -10 to its active form. Ionizing radiation, oxidative stress and DNA damage can trigger the mitochondrial pathway. Bax and Bak are pivotal effectors of the mitochondrial apoptosis pathway, as either Bax or Bak is needed to permeabilize the mitochondrial outer membrane. Bax and Bak, proapoptotic proteins in the Bcl-2 family, form homodimers at the mitochondrial membrane and cause the release of cytochrome c, Smac/Diablo, and apoptosis inducing factor (AIF). The expression of anti-apoptotic proteins in the Bcl-2 family, such as Bcl-xl, Mcl-1, Bcl-2, and Bcl-w, is reduced in apoptotic cells. Cytochrome c forms a complex with Apaf-1 and procaspase-9, called the apoptosome, which activates procaspase-9 to active caspase-9; this in turn activates the executioner caspases-3, -6, and -7 to cause apoptotic cell death (Hengartner, 2000; Galluzzi et al., 2012). The endoplasmic reticulum (ER) is an organelle responsible for the posttranslational modification of proteins. It controls protein folding by chaperones and the process of protein glycosylation. ER stress results from the accumulation of misfolded proteins. ER stress leads cells to activate self-protective mechanisms: (1) transcriptional up-regulation of ER chaperones and folding enzymes; (2) translational attenuation to limit further accumulation and aggregation of misfolded proteins; and (3) ER-associated degradation (ERAD), which eliminates misfolded proteins from the ER (Shiraishi et al., 2006; Hussain & Ramaiah, 2007).
Three types of ER membrane receptors, ATF6, IRE1 and PERK, sense the stress in the ER, and eventually activate transcription factors for induction of ER chaperones, such as GRP78, for inhibition of synthesis of new proteins. These processes are called unfolded protein response (UPR) (Imaizumi et al., 2001). It has been found that ER stress pathway involves the mechanism of apoptotic cell death. Apaf-1 and the mitochondrial pathway of apoptosis play significant roles in ER stress-induced apoptosis (Shiraishi et al., 2006). Houttuynia cordata Thunb (HCT), which is in the family of Saururaceae, is a commonly used herb in traditional Asian medicine. It has been reported to have various bioactivities to counteract with oxidative stress (Kusirisin et al., 2009), cancer (Tang et al., 2009;Lai et al., 2010), allergy (Han et al., 2009) and inflammation (Li et al., 2011). The water extract of HCT protects rat primary cortical cells from Abeta25-35-induced neurotoxicity via regulation of calcium influx and mitochondria-mediated apoptosis (Park & Oh, 2012). We have previously reported that fermented HCT extract induces human leukemic HL-60 and Molt-4 cell apoptosis via oxidative stress and mitochondrial pathway (Banjerdpongchai & Kongtawelert, 2011). However, in the process of identifying active compound(s) in HCT, silica gel column chromatography was performed and six fractions were obtained. The aims of this study were to determine the cytotoxic effect of six HCT fractions on Molt-4 cells, the mode of cell death and the mechanism involved. In the present study, HCT fraction 4 could induce human leukemic Molt-4 cell apoptosis via the ER stress and coactivated through the mitochondrial pathway indicated by the increase expression of GRP78, Bax and Smac/Diablo, mitochondrial transmembrane permeability alteration, and the reduction in protein expression of Bcl-xl. Further study is to purify the active compound(s) in fraction 4, which is (are) responsible for the apoptotic inducing property of HCT. It will provide new drug development from this medicinal herb. Plant material, extraction and isolation The Houttuynia cordata Thunb whole plants were collected in June, 2009 from Chiang Mai province, Thailand. The plant was authenticated and a voucher specimen (QBG42697) has been deposited at the Queen Sirikit Botanic Garden, Chiang Mai, Thailand. The whole plants were fermented with yeast and ethanol. One kilogram air-dried and finely powdered of HCT had been percolated 5 times with 10 liters of ethanol for 4 days at room temperature. The extracts were combined and evaporated to dryness under reduced pressure to afford a crude ethanolic extract. The ethanolic extract was separated by column chromatography over silica gel (Merck No. 7734,. Elution started with hexane, gradually enriched with ethylacetate in hexane up to 20% ethylacetate and methanol. Fractions (300 ml each) were collected, monitoring by TLC behavior and combined. The solvent was evaporated to dryness to afford six fractions (F1-F6). Fraction 4 was further processed through high performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) to obtain the chromatogram and NMR spectrum, respectively. Cell culture Human acute T lymphoblastic leukemic Molt-4 cells were gifts from Dr. Watchara Kasinroek. The cells were cultured in 10% fetal bovine serum in RPMI-1640 medium supplemented with penicillin G (100 units/ml) and streptomycin (100 μg/ml) at 37 °C in a humidified atmosphere containing 5% CO 2 . 
The human leukemic cells (1x10 6 ) were treated with the fractions at indicated concentrations and durations. Peripheral blood mononuclear cells (PBMCs) were isolated from heparinized blood obtained from adult volunteers by density gradient centrifugation using Histopaque according to standard protocols. Cells were cultured in RPMI-1640 medium supplemented with 10% heat-inactivated fetal bovine serum, 2 mM glutamine, 100 U/ml penicillin and 100 μg/ml streptomycin. PBMCs (3x10 6 ) were treated with the fractions at 10, 20, 40, 80 μg/ml for indicated times. Cytotoxicity test Following six fraction treatments for indicated times, cell viability was assessed by MTT (3-(4,5-dimethyl)-2,5-diphenyl tetrazolium bromide) assay (Su et al., 2000). This method is based on the ability of viable cells to reduce MTT and form a blue formazan product. MTT solution (sterile stock solution of 5 mg/ml) was added to cell suspension at final concentration of 100 μg/ml and the solution incubated for 4 h at 37 °C in a humidified 5% CO 2 atmosphere. The medium was then removed and cells were treated with DMSO for 30 min. The optical density of the cell lysate was measured at 540 nm with reference wavelength of 630 nm using microtiter plate reader (Biotek, USA). Number of viable cells was calculated from untreated cells, and the data were expressed as percent cell viability. Determination of phosphatidylserine externalization in apoptotic cells Treated cells were washed once in phosphate-buffered saline solution, centrifuged at 200 x g and the cell pellet was suspended in 100 μl of binding buffer provided by the annexin V-FITC reagent kit. Annexin V-FITC (2 μl) and PI (2 μl) were added and the cell suspension was left at room temperature for 15 min in the dark. Finally 900 μl of binding buffer were added. Analysis was conducted using FACScan (Becton Dickinson, USA). Cells that were stained with annexin V-FITC, and annexin V-FITC together with PI, were designated as early and late apoptotic cells, respectively. Determination of mitochondrial transmembrane potential (MTP) For MTP determination, 5x10 5 cells were treated with the HCT fraction 4 at IC 10 , IC 20 and IC 50 for indicated times, harvested and re-suspended in a PBS containing 40 nM of DiOC 6 (Li et al., 2007). Then the cells were incubated for 15 min at 37 °C before cells were subjected to flow cytometer (Becton Dickinson, USA). Western blot analysis The fraction 4-treated cells were washed once in ice cold PBS and incubated at 4 °C for 10 min with ice-cold cell lysis buffer (250 mM sucrose, 70 mM KCl, 0.25% Triton X-100 in PBS containing complete mini protease inhibitor cocktail). Following centrifugation at 20,000 x g for 20 min, supernatant (50 μg, determined by Bradford method) was separated by 17% SDS-PAGE and transferred onto nitrocellulose membrane. After treating with 5% non-fat milk in PBS containing 0.2% Tween-20, membrane was incubated with rabbit polyclonal antibody to GRP78 or Bcl-xl, mouse monoclonal antibody to Bax, or rabbit monoclonal antibody to Smac/Diablo, followed by appropriate horseradish peroxidase (HRP)-conjugated secondary antibodies (1:20,000). Protein bands were visualized on X-ray film with SuperSignal West Pico Chemiluminescent Substrate. Statistical analysis Results are expressed as mean ± S.D. Statistical difference between control and treated group was determined by one-way ANOVA (Kruskal Wallis analysis) at limit of p < 0.05 from 3 independent experiments conducted in triplicate. 
For comparison between two groups, data were analyzed using Student's t-test. Cytotoxicity of HCT fractions Six HCT fractions were cytotoxic to human leukemic Molt-4 cells dose dependently as shown in Figure 1. The fraction 4 was the most toxic to Molt-4 cells with IC 50 value of 15.5 μg/ml. Therefore HCT fraction 4 was selected to study further for the effect on normal human PBMCs and determine the mode of cell death by using IC 10 , IC 20 and IC 50 concentrations of 5, 8.5 and 15.5 μg/ml, respectively. The HCT fraction 4 was less toxic to PBMCs with the IC 50 more than 70 μg/ml (data not shown). There are various modes of cell death, which includes apoptosis, necrosis, and autophagic cell death. Molt-4 cells were induced to undergo apoptosis as shown in Figure 2, by the flip-flop out of the phosphatidylserine to the outer layer of cell membrane. Percentage of Molt-4 cells with early apoptosis (right lower quadrant) was significantly different in the HCT fraction 4 at the dose of IC 50 for 4 hours of incubation compared to without treatment (p < 0.05). Mitochondrial transmembrane potential HCT fraction 4 reduced mitochondrial transmembrane potential (MTP) as shown in Figure 3. DiOC 6 is cationic lipophilic fluorochrome specific for mitochondrial membrane. In apoptotic cells, DiOC 6 leaks into cytoplasm compared to viable cells, in which accumulation in the mitochondria occurs. Cells with reduction of MTP increased significantly at the dose of IC 50 (p < 0.05). The expression of Bcl-2 family and Smac/Diablo proteins by Western blot In the incubation of Molt-4 cells with HCT fraction 4 for various times, the expression of anti-apoptotic Bcl-xl protein decreased whereas that of Bax protein increased (Figure 4). It indicates the involvement of mitochondrial pathway. Therefore, the Smac/Diablo protein, which is released from mitohcondria and inhibits the inhibitors of apoptotic proteins (IAPs), was determined by immunoblot. Since Smac/Diablo inhibits the IAPs, which blocks caspase-3 activity, the high cytosolic Smac/Diablo level indicates the mitochondrial pathway activation (Hengartner, 2000). As shown in Figure 4, Smac levels slightly increased at 6 and 12 h. Altogether, HCT fraction 4 induced human leukemic cell apoptosis via the mitochondrial or intrinsic pathway. ER stress protein expression Glucose regulated kinase/immunoglobulin heavy chain binding protein (GRP78/Bip) acts as an ER chaperone in stress response. This protein expression increases in ER stress-induced late apoptosis in HL-60 cells treated with curcumin (Pae et al., 2007). HCT fraction 4 treatment in human leukemic Molt-4 cells induced a time dependent increase of GRP78 expression as shown in Figure 4. A molecular chaperone inducer, such as Bip inducer X (BIX) protects neurons from ER stress in the treatment of cerebral disorders associated ischemia (Kudo, 2008). Unfolded protein response (UPR) has dual roles on cell survival and death, depending on the type of tumor. Compounds either inducing ER stress and cell death, or blocking the cytoprotective function of the altered UPR of cancer cells, could be used either alone or in combination (Verfaillie et al., 2010). HCT fraction 4 induced human leukemic cell apoptosis via the increased expression of transcription factor Bip/GRP78, which reduces transcription and translation. 
High performance liquid chromatogram of fraction 4 Even though fraction 4 was pooled from several continuing individual isolates appearing as a single band on thin layer chromatography, HCT fraction 4 from the silica gel column chromatography and thin layer chromatography (TLC) was not pure. Since various small peaks appeared in high performance liquid chromatogram ( Figure 5), fraction 4 is composed of several compounds confirmed by nuclear magnetic resonance (NMR) spectrum (data not shown). Further purification of fraction 4 is required to obtain the active compound(s) and assess for the cytotoxic effect on cancer cells. Taken together, HCT fraction 4 induced human lymphoblastic T leukemic Molt-4 cells to undergo apoptosis via the intrinsic and endoplasmic reticulum stress pathways.
v3-fos-license
2023-05-24T06:17:49.397Z
2023-05-22T00:00:00.000
258843795
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1038/s41598-023-35342-x", "pdf_hash": "d757a5f77f98fb874251423f721c171eab5b950c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2743", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "76138156fcddb4afd02ab5765a62fc87e374e0c3", "year": 2023 }
pes2o/s2orc
The change in metabolic activity of a large benthic foraminifera as a function of light supply We studied metabolic activity of the symbiont-bearing large benthic foraminifer Heterostegina depressa under different light conditions. Besides the overall photosynthetic performance of the photosymbionts estimated by means of variable fluorescence, the isotope uptake (13C and 15N) of the specimens (= holobionts) was measured. Heterostegina depressa was either incubated in darkness over a period of 15 days or exposed to an 16:8 h light:dark cycle mimicking natural light conditions. We found photosynthetic performance to be highly related to light supply. The photosymbionts, however, survived prolonged darkness and could be reactivated after 15 days of darkness. The same pattern was found in the isotope uptake of the holobionts. Based on these results, we propose that 13C-carbonate and 15N-nitrate assimilation is mainly controlled by the photosymbionts, whereas 15N-ammonium and 13C-glucose utilization is regulated by both, the symbiont and the host cells. Scientific Reports | (2023) 13:8240 | https://doi.org/10.1038/s41598-023-35342-x www.nature.com/scientificreports/ plasticity. At this point it should be mentioned, that also some miliolids are part of the LBFs. In contrast to the adaptive strategy to different light conditions from rotalids, miliolids establishing symbioses with a variety of algal symbionts 10 . Pecheux 18 measured test sizes of LBFs collected from different water depths (20-130 m) and found that their size is directly related (negative) to light supply. The importance of irradiance for symbiont bearing foraminifera is obvious and was already observed by earlier studies 19 . However also other factors might be significant for the abundance of LBFs: Nobes et al. 20 found that irradiance flux only explained a small proportion of foraminifera distribution (based on the observation of large rotalids). Contrarily, the distance from the coast turned out to be the most important factor for LBF occurrence, whereby potentially also the nutrient flux will play a role in the foraminiferal distribution, but this aspect was not clarified by Nobes et al. In laboratory experiments, the same authors also found that the growth of the LBF Heterostegina depressa increased significantly at reduced light supply under continuous irradiance supply; therefore this taxon is considered a low light species. High irradiance of ~ 1200 µmol photons m −2 s −1 leads to increased mortality (50%) within a few weeks, whereas low light supply (60 µmol photons m −2 s −1 ) turned out as the light optimum for H. depressa 20 . These results fit to the findings of Röttger 21 , who postulated highest growth rates of H. depressa at low light supply. H. depressa is a species which is obligatorily dependent on the metabolic by-products of their symbionts and therefore shows a mixotrophic life style (= host cells are heterotrophic but obtain metabolites from their autotrophic symbionts) like other LBFs 22 . Because of the direct dependency on irradiance supply, this species is used for paleo-reconstruction of past water depths by analysing the occurrence of fossil LBFs 23 . Though some studies 18,20 have been conducted on the growth and size distribution of LBFs related to irradiance supply, no study has dealt with nutrient uptake of LBFs as dependent on light supply to our knowledge. 
We assume that the utilization of certain carbon-and nitrogen-related compounds is conducted by the symbionts or is enhanced by their presence under light. However, other compounds, like dissolved organic material, will also be taken up and assimilated by the foraminifera itself or by osmosis where the symbionts are not involved. For that purpose, we measured nutrient uptake (nitrate, ammonium, carbonate and glucose) during prolonged darkness and compared it with foraminifera grown at a diurnal light cycle. In addition, pulse amplified modulated fluorescence analyses were conducted with an imaging fluorescence instrument to study potential effects of irradiance supply and prolonged darkness on symbiont performance. With this study we want to clarify several aspects. First, it should be observed whether foraminifera absorb dissolved components of carbon and nitrogen in complete darkness. Based on this observation, further experiments with a normal daylight rhythm will be carried out to investigate the proportion of the up taken amount of elements by the symbionts. Finally, since LBFs are often used as model organisms, a statement should be obtained about which isotopes are best suited for further laboratory cultivation experiments. These results contribute to a better understanding of the host-endosymbiont relationship between foraminifera and diatoms and clarify which nutrients are more likely to be taken up by the diatoms and which by the foraminifera itself. In addition, these results can also be used for paleontological studies. Since foraminiferal assemblages are often used as proxies for the reconstruction of paleoenvironments, light-factor experiments in particular provide new data on the distribution patterns of certain species. Material and methods Main culture. We used individuals of a permanent culture of H. depressa, hosted at the Department of Palaeontology at the University of Vienna. All selected foraminifera had a diameter of approximately 1250 µm. The main culture is maintained in an aquarium at 25 °C and 30 µmol photons m −2 s −1 photosynthetically active radiation (PhAR). Photosynthetic performance of the photobiont. Experiments were performed in six-well plates with placing a single individual in each well. The specimens were covered with 5 ml sterile filtered artificial seawater and were incubated at 25 °C. Six individuals were each incubated in total darkness or under a light:dark-cycle of 16:8 h at 30 µmol photons m −2 s −1 , respectively (12 specimens in total). Photosynthetic performance of the photobiont symbionts of LBFs was measured several times during a period of 15 days using maximum variable chlorophyll fluorescence imaging of photosystem II (PSII; Imaging PAM Microscopy Version-Walz GmbH; excitation at 625 nm). Both, dark and light incubated foraminifera were measured at day 1, 3, 5 and 7 (Fig. 1). For this purpose, the same 12 individuals were measured every timepoint. The measured variable fluorescence as a proportion of maximum fluorescence yield (Fv/Fm) describes the difference between maximum fluorescence and minimum fluorescence (variable fluorescence), divided by maximum fluorescence, which is used as a measure of the maximum potential quantum efficiency of photosystem II 24 . Fv/Fm serves as a proxy for the integrity and physiological activity of the photosymbionts, ranging between 0.79 and 0.84, lower value indicating photobiont stress 24 . 
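As a minimal illustration of the quantity described above, Fv/Fm can be computed directly from the minimum (F0) and maximum (Fm) fluorescence readings. The numerical values below are invented purely for illustration and are not measurements from this study.

```python
def fv_over_fm(f0: float, fm: float) -> float:
    """Maximum potential quantum efficiency of PSII: Fv/Fm = (Fm - F0) / Fm."""
    return (fm - f0) / fm

# Hypothetical fluorescence readings (arbitrary units), not study data:
print(round(fv_over_fm(f0=300.0, fm=1500.0), 2))  # 0.8 -> within the healthy 0.79-0.84 range
```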
The PAM-images were evaluated using the software WinControl-3 (Walz GmbH); the photosynthetic area of each specimen was calculated with the software Image J (version 1.53 k, Java). Isotopic uptake experiments. Foraminifera were incubated for 1, 3 and 7 days in crystallisation dishes filled with 280 ml sterile filtered artificial seawater. Six foraminifera were placed into each dish, supplied separately with either isotopically enriched Na 15 NO 3 , 15 NH 4 Cl, NaH 13 CO 3 or 13 C-glucose to a final concentration of 0.2 mM each. One set of foraminifera was incubated at a light: dark-cycle of 16:8 h at 30 µmol photons m −2 s −1 , a second one in continuous darkness. In total 6 × 4 × 2 (number of replicates × isotopically enriched compounds × light conditions) foraminifera were incubated for this experiment. After the respective incubation times, the foraminifera were collected from each irradiance treatment and nutrient addition. For each treatment, 6 replicates were analysed individually. After incubation, the foraminifera were rinsed with distilled water and Statistics. The following hypothesis should be testes: different lighting conditions affect the activity of the symbionts (PAM experiments). Additionally, the hypothesis that the activity of the foraminifera is influenced by different chemical nitrogen or carbon sources components will also be investigated (isotopic uptake experiments). For statistical analysis, repeated measurement one-way ANOVA (level of significance = 95%) was performed for the PAM experiments over time to test if, prolonged darkness significantly altered the overall photosynthetic performance of the photobiont compared to natural irradiance supply. Two-way ANOVAs were used for the isotopic uptake to test if light supply and time affected the uptake of enriched 13 C-and 15 N-compounds. We used the software Past 4.03 and set the level of significance to 95%. Results Performance of the photosymbionts. The results of the PAM observations after experimental start and after day 1, 3, 7 and 15 are shown in Fig. 1 (values are provided in the supplementary file). During the whole experiment, Fv/Fm of all individuals was in the range between 0.6 and 0.8, which indicates a healthy state of the photosymbionts. Two-way ANOVA of the photosynthetic area over a period of 15 days, between the darkand light-incubated foraminifera show a significantly difference between the light cycle (p = 0.027) and time (p < 0.001) and also their interaction (p < 0.001). Within the dark incubated foraminifera, we observed no significant change in photosynthetic area over 7 days (rm-ANOVA, p = 0.110). Just a significant increase (rm-ANOVA, p < 0.001) of the photosynthetic area was observes from day 7 to 15. Isotopic uptake experiments. The rate of isotope incorporation differed significantly (one-way ANOVA) depending on the type of offered carbon form (carbonate > glucose, p < 0.001) and nitrogen form (nitrate > ammonium, p < 0.001). Two-way ANOVA (cycle × time) was performed to see if there are differences in the uptake of isotopes during light exposure and over time. Natural light supply in contrast to complete darkness, highly significantly increased the uptake of carbonate, nitrate and ammonium (p < 0.001) and significantly for glucose (p = 0.048). The interaction between cycle and time was significant for glucose (p = 0.020), carbonate (p < 0.001) and nitrate (p < 0.001), but not for ammonium (p = 0.164). 
Tracer uptake increased with time (Table 1) for all compounds under light conditions, except ammonium. Under dark conditions tracer uptake only increased for carbonate and ammonium, but not for glucose (p = 0.087) and nitrate (p = 0.376) ( Table 1). For nitrate, ammonium and carbonate, the element uptake during darkness was negligible (Fig. 2). Nitrate and carbonate uptake were higher than that of ammonium and glucose under natural light conditions (16:8 h light:dark). The uptake of nitrate and carbonate in the light was approximately twice compared to ammonium and glucose, respectively. In prolonged darkness, a substantial uptake of tracer was only recorded for glucose. Discussion Heterostegina depressa is known as a low light species (oligophotic), i.e. well-adapted to grow under low light conditions 20 . We found that the photosynthetic area of the foraminiferal symbionts remained constant over 7 days of continuous darkness and show a slightly increase between 7 and 15 days (Fig. 1). This means that even after 15 days without light, the photosymbionts of H. depressa were alive and adapt to these conditions. Interestingly, there was no uptake and assimilation of carbonate and inorganic nitrogen during this time (Fig. 2). Past experiments with dissolved carbonate show that LBFs can take it up by diffusion 26 . This uptake then follows a linear increase in the C concentration in the cytoplasm of the foraminifera as a function of time. www.nature.com/scientificreports/ However, we were only able to record a linear increase of the C concentration in the foraminifera during the experiments, which were carried out under light exposure. The dark incubated foraminifera show no uptake of carbon, which suggests that the C uptake does not take place by diffusion but by enzymatic activity as already suspected by Ter Kuile et al. 26 . During prolonged darkness, foraminifera operate purely heterotrophic, as shown by the uptake of dissolved glucose, which was likely metabolized for energy generation in the absence of any transfer of photosynthates and other metabolites from the photosymbionts. Glucose uptake might also be promoted by the presence of bacteria, since some foraminifera also contain heterotrophic bacteria as symbionts, which can quickly digest glucose 12 . Another explanation could be an active uptake and digestion of enriched bacteria-its presence cannot be ruled out during an experiment for more than 3 days. Röttger et al. 27 reported, that H. depressa can active feed on algae, but this food uptake just play a minor role in the energy budget of the foraminifera. It can therefore also be hypothesized that the uptake of glucose is caused indirectly by the phagocytosis of bacteria that have previously enriched themselves with 13 C. The bacteria uptake and the so called "bacteria farming" is a widely known strategy of small benthic foraminifera 28 . At the moment, this feeding strategy was only observed from non-symbiont bearing foraminifera. However, it cannot be ruled out that bacteria settle on the surface of the foraminifera, which then metabolize glucose. The 13 C-enriched metabolites of the bacteria can then be released into the culture water and absorbed through close contact by the foraminifera. In order to understand this more Table 1. One-way ANOVA of the isotopic uptake with time (n = 6, Df = 2, significant p values are in bold). www.nature.com/scientificreports/ closely, further studies using TEM or NanoSIMS must be carried out. 
These studies would also clarify whether this species is able to uptake glucose via osmotrophy. The isotope incorporation increased with time under natural light conditions (16:8 h light: dark). There was the same rising pattern for nitrate, ammonium, carbonate and glucose incorporation, which was fundamentally different from the pattern under continuous darkness. Although glucose uptake was similar, nitrate, ammonium, and carbonate uptake increased under irradiances supply substantially already after 7 days. We assume that glucose uptake is mostly driven by the heterotrophic foraminiferal host cell or by bacterial symbionts 29 and used to generate energy and carbon skeletons for physiological processes. Under light supply, additional energy and organic compounds for foraminiferal growth are generated by the photobionts. Lintner et al. 25 investigated the element uptake of the obligatory heterotrophic Cribroelphidium selseyense. During the first 7 days, they found only a marginal assimilation of carbonate and nitrate 25 , which however increased afterwards probably because of symbiotic bacterial activity. At the same time, C. selseyense showed a continuous uptake of ammonium during the whole experiment 25 . We conclude from our data, that assimilation of carbonate, nitrate and ammonium are light-dependent and triggered by the activity of the phototobionts, while glucose uptake is continued in darkness thus maintaining the holobionts metabolism. This helps the foraminifera to survive prolonged dark phases. We studied nitrogen using the two inorganic compounds ammonium and nitrate and were able to prove a much higher nitrate uptake (Fig. 2). For both, nitrate and ammonium, there was no uptake in the dark, which implies that inorganic nitrogen assimilation was performed by only the photosymbionts, mediated under light supply. Interestingly, during the experiments, which were carried out completely in the dark, no uptake of nitrate was recorded (see Fig. 2). From studies with marine diatoms, however, it is known that diatoms accumulate nitrate in the cells during darkness 30 . This now suggests that the foraminifera itself is not active even in the dark, nor does it have osmotrophy during this period, allowing dissolved nitrate to be carried to the symbionts. Such behaviour could be compared to dormancy in foraminifera 31 . Dormancy can be caused by exogenous factors such as stressful environmental conditions (here lack of light during dark conditions) and leads to a strong reduction in metabolism. This hypothesis can now be reconciled very well with our results. It now appears that the here investigated foraminifera goes into a kind of dormancy during complete darkness and reduced the metabolism to an absolute minimum. However, since there is no uptake of any isotope during total darkness, this strategy applies not only to the foraminifera but also to their photosymbionts. In general, both inorganic nitrogen forms (nitrate and ammonium) can be used by photoautotrophs (algae and higher plants) as nitrogen source 32 . For metabolic pathways (amino acids, proteins, nucleic acids and else), both inorganic nitrogen forms first need to be incorporated into amino acids, which in the case of nitrate requires additional reduction equivalents, energy and enzymatic reactions 33 . For many photoautotrophic organisms a mixture of both compounds led to the highest nitrogen uptake in plants 30 . Kronzucker et al. 32 reported that nitrate uptake and assimilation is inhibited at high ammonium concentrations. 
This aspect can be excluded for our results since we incubated the foraminifera separately with nitrate and ammonium. Further, Dortch 34 postulated that the preferred nitrogen source of phytoplankton is ammonium, which does not fit to our results. These differences can probably be explained by the positive effect of nitrate uptake on the cation-anion balance of phototrophic organisms (phytoplankton), allowing higher nitrogen uptake and growth rates with nitrate than with ammonium 33 . In the past, some cultivation experiments were carried out with foraminifera, which had either light or temperature as a stress factor 35 . However, the temperature effect on LBFs is species specific and it has been shown that temperatures above 31 °C lead to a rapid death of the photosymbionts in H. depressa 36 . Since the light condition was constant in the experiments by Schmidt et al. 36 and the temperature in our experiments, it cannot be stated which parameter has a stronger effect on the foraminifera. In order to examine this aspect more closely, crossdesign experiments with 2 variables (temperature × light supply) must be carried out in the future. It was shown that the availability of light is the essential factor for the distribution of foraminifera with depth 35 . Presumably not only the daylight but also the moonlight plays a role here. Observations showed that LBFs grown in the natural environment have oscillations in their chamber volume, which is probably caused by lunar and tidal cycles 37 . It is assumed that the lunar cycle influences the productivity of the photosymbionts in LBFs and thus has a positive effect on the activity of the symbionts at full moon night 37 . However, the light intensity of moonlight is much lower than that of sunlight, only about 0.0024 μmol m −2 s −138 which is around 12.5 k times lower than that in our experiment. It should be noted, that in sunlight all visible wavelengths are relatively equally present, whereas in moonlight the wavelengths are generally cantered around 400 nm 38 . If this wavelength-dependent irradiation affects the metabolism of H. depressa or not has not yet been investigated and could certainly shed more light on whether moonlight has an effect on the LBF symbionts. Laboratory experiments have shown that H. depressa is a low light species 21 and can therefore survive even in very poor light conditions. However, based on the results of our study, it can be clearly shown that in complete darkness the foraminifera do not absorb any essential nutrients. Recent studies have even shown that sequestered chloroplasts in foraminifera degrade within a few days when exposed to high light conditions and also have a photobleaching effect 39 . Even foraminifera, which have neither photosymbionts nor sequestered chloroplasts, can cope better with less light than with high light intensities 40 . All of these results and the data from this study suggest that high light intensities has a significant negative effect on their metabolism, but light is an essential factor for foraminifera with photosymbionts to survive. Conclusion The uptake of carbonate, nitrate, ammonium and glucose in H. depressa is highly dependent on the availability of light. Under dark conditions, the organisms take up mainly glucose to provide energy for maintaining the metabolic processes. If foraminifera are exposed to light, the photosymbionts are primarily responsible for uptake www.nature.com/scientificreports/ and assimilation of carbonate, nitrate and ammonium. 
Based on these results, in future uptake experiments with H. depressa it is recommended to enrich the culture water with carbonate and ammonium nitrate in order to offer best conditions to study the activity of the foraminifera and their symbionts with changing environmental conditions.
v3-fos-license
2018-05-26T08:15:32.056Z
2018-05-24T00:00:00.000
44109142
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://substanceabusepolicy.biomedcentral.com/track/pdf/10.1186/s13011-018-0159-0", "pdf_hash": "b0fcc4db3e0f0e602dcc50cfd3fc06cfa53d636d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2744", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "b0fcc4db3e0f0e602dcc50cfd3fc06cfa53d636d", "year": 2018 }
pes2o/s2orc
Financial hardship and drug use among men who have sex with men Background Little is known about the role of financial hardship as it relates to drug use, especially among men who have sex with men (MSM). As such, this study aimed to investigate potential associations between financial hardship status and drug use among MSM. Methods We conducted a cross-sectional survey of 580 MSM in Paris recruited using a popular geosocial-networking smartphone application (GSN apps). Descriptive analyses and multivariate analyses were performed. A modified Poisson model was used to assess associations between financial hardship status and use of drugs (any drugs, tobacco, alcohol, marijuana, inhalant nitrites, and club drugs). Results In our sample, 45.5% reported that it was somewhat, very, or extremely difficult to meet monthly payments of bills (high financial hardship). In multivariate analyses, a high level of financial hardship was significantly associated with an increased likelihood of reporting use of any substance use (adjusted risk ratio [aRR] = 1.15; 95% CI = 1.05–1.27), as well as use of tobacco (aRR = 1.45; 95% CI = 1.19–1.78), marijuana (aRR = 1.48; 95% CI =1.03–2.13), and inhalant nitrites (aRR = 1.24; 95% CI = 1.03–1.50). Conclusions Financial hardship was associated with drug use among MSM, suggesting the need for interventions to reduce the burden of financial hardship in this population. Introduction Gay, bisexual, and other men who have sex with men (MSM) are more likely to use illicit drugs compared to general population [1][2][3][4][5], perhaps given that they are more vulnerable to negative experiences in their daily lives. These experiences, described by Meyer's minority stress model, include rejection, stigmatization, discrimination, and social isolation which is ultimately due to their sexual orientation [6,7]. In 2016, data from a national, population-based sample from Australia showed that gay and bisexual men were significantly more likely than heterosexual men to use an illicit drug during their lifetime [8]. Additionally, in a longitudinal, community-based cohort comprising 13,519 US adolescents, gay males were shown to be at higher risk for concurrent polysubstance use than completely heterosexual individuals in repeated measures analyses [9]. Similarly, a study in the UK revealed that recreational drug use was greater among MSM; the rates of lifetime and past-month use of drugs, including mephedrone, ketamine, volatile nitrites, sildenafil, gamma hydroxybutyrate (GHB), and gamma-butyrolactone (GBL), were significantly higher in the MSM group than a non-MSM group [10]. Drug use, specifically among MSM, can motivate risky sexual behaviors, which in turn leads to negative health outcomes. In a cohort study from1998 to2008, Ostrow et al. [11] reported that a specific combination of sex-drugs contributed to the majority of HIV seroconversions among a sample of MSM (n = 1667) in the United States [11]. In addition, drug use can relate to poor mental health among MSM, including depressive symptoms [12]. Risk factors related to increased prevalence of drug use among MSM are multifaceted and complex. Previous research suggested that rejection, stigma, discrimination, and social isolation due to their sexual orientation are potential risk factors for drug use among MSM [13][14][15]. 
In addition to these factors, emerging research has explored the associations between financial hardship (when one has insufficient financial resources to adequately meet household needs) and drug use among sexual minority groups. Wong et al. [16] found that financial hardship was associated with illicit drug use in a sample of young MSM (n = 526) in Los Angeles, California. To the best of our knowledge, this is one of few studies conducted on financial hardship and drug use in MSM and this aforementioned study utilized experiences of childhood financial hardship as an indicator of socioeconomic status; thus, it does not necessarily represent recent financial hardship status. While evidence for a relationship between financial hardship and drug use remains scant, MSM groups are more likely to experience financial hardships [17,18], which may be associated with increased likelihood of drug use. MSM individuals suffer from an average wage penalty of approximately − 6.5% when compared to heterosexual men in France [19]. Furthermore, it is important to note that, despite the economic growth in Western Europe and France, gaps in income inequality have widened and the unemployment rate in France is estimated to be above 10% [20,21]. The objective of this study was to examine the association between financial hardship and drug use among a sample of gay, bisexual, and other MSM in the Paris (France) metropolitan area who were recruited from a popular geosocial networking application for MSM. We focus on MSM in France because gaps in income inequality have widened and the unemployment rate in France is approximated to be above 10% [20,22]. Such an increase in income inequality suggests that MSM who were previously experiencing financial hardship may continue to do so. Study participants For this study, a popular geosocial networking smartphone application (app) for MSM was used to recruit participants by means of a broadcast advertisement. The advertisements were limited to users in Paris (France) metropolitan area. Consistent with previous studies [23], users were shown an advertisement with text encouraging them to click through the advertisement to complete an anonymous web-based survey. To encourage participation, the advertisement stated that users who completed the survey would have a chance of winning €65 (approximately $70). Upon clicking the advertisement, users were directed to a landing page where they provided informed consent and initiated a 52-item online survey. Details of the study design and methods have been presented previously [24]. Briefly, the survey was offered in both French and English. The survey was translated by three native French speakers, and subsequently reviewed and adjudicated by a fourth native French speaker. A fifth French speaker and health researcher pretested and finalized the survey by back-translation. The majority of respondents (94.3%) took the survey in French, and the survey took an average 11.4 min (SD = 4.0) for users to complete. Among 5206 users who clicked on the advertisement and reached the landing page of the survey, 935 users provided informed consent and began the survey, and 580 users signed informed consent and completed the survey. Thus, the overall response rate was 11.1% and the completion rate of 62.0%. The protocols were approved by the New York University School of Medicine Institutional Review Board before data collection. All participants reported being at least 18 years old at the time of survey administration. 
Financial hardship

Financial hardship was measured using the previously reported question [25, 26]: "How difficult is it for you to meet monthly payments on bills?" Response options included: "not at all difficult"; "not very difficult"; "somewhat difficult"; "very difficult"; and "extremely difficult". The following binary variable was created: high financial hardship (somewhat difficult, very difficult, and extremely difficult) and low financial hardship (not at all difficult and not very difficult), consistent with prior research [26]. A trichotomous measure of financial hardship was also analyzed: high ("very difficult" and "extremely difficult"), medium ("somewhat difficult") and low ("not at all difficult" and "not very difficult").

Tobacco, alcohol and drug use

Participants were asked about their use of drugs during the prior 3 months. The substances included were cigarettes, electronic cigarettes or nicotine vapes, alcohol (five or more drinks in one sitting), marijuana, synthetic cannabinoids ("synthetic marijuana"), cocaine, ecstasy (3,4-methylenedioxymethamphetamine [MDMA]), ketamine, GHB and GBL, methamphetamine, heroin, prescription stimulants, prescription benzodiazepines, inhalant nitrites, other inhalants (e.g., glue, solvents, and gas), non-medical use of prescription opioids, psychedelics (e.g., lysergic acid diethylamide [LSD] and psilocybin mushrooms), new psychedelics (e.g., psychedelic phenethylamines and N,N-dimethyltryptamine [DMT]), synthetic cathinones (e.g., bath salts), and anabolic steroids. For analytic purposes, composite variables were created. Overall use was defined as the use of any substance described above. Any drug use was defined as the use of any product except tobacco (cigarettes and e-cigarettes) and alcohol. Tobacco use included traditional cigarettes and electronic cigarettes (nicotine vapes). Club drugs included ecstasy (MDMA), ketamine, GHB and GBL. Alcohol, marijuana and inhalant nitrite use were also included in the analyses as separate, distinct variables.

Statistical analyses

Data were analyzed using descriptive statistics by drug use. Multivariate analyses were conducted to examine the association between financial hardship status and use of drugs (any drug, tobacco, alcohol, marijuana, inhalant nitrites, and club drugs) after adjustment for socio-demographic covariates. The modified Poisson model (a generalized linear model [GLM] with Poisson family and log link), suggested by Zou [27], was used to calculate adjusted relative risks (aRRs) and corresponding 95% confidence intervals (CIs) because of the convergence issues of the log-binomial model. Data analysis was performed using Stata version 14.0 (StataCorp, College Station, TX). A two-sided p-value < 0.05 was considered to indicate statistical significance.

Results

The socio-demographic and financial hardship characteristics of the study participants are shown in Table 1. Of the 580 MSM, the mean age of the sample was 35.24 ± 9.94 years, with a median of 35 years (range: 18-66 years); 63% were less than 40 years old. More than 65% of the participants reported that they were employed and not currently in a relationship (e.g., single). More than 45% reported a high level of financial hardship.
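To make the modelling step concrete, the sketch below shows how a modified Poisson regression of this kind can be fitted in Python with statsmodels. The data are simulated purely for illustration and the variable names are ours; the study itself fitted these models in Stata 14.

```python
# Illustrative sketch of a modified Poisson regression (Zou, 2004) in Python.
# Simulated data and assumed variable names only -- not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 580
df = pd.DataFrame({
    "high_hardship": rng.integers(0, 2, n),     # 1 = high financial hardship
    "age": rng.integers(18, 67, n),
    "employed": rng.integers(0, 2, n),
})
# Simulated binary outcome loosely depending on hardship (illustration only).
p_use = 1 / (1 + np.exp(-(-0.6 + 0.4 * df["high_hardship"])))
df["tobacco_use"] = rng.binomial(1, p_use)

# Poisson family (log link is its default) plus robust "sandwich" errors gives
# adjusted risk ratios for a binary outcome without log-binomial convergence issues.
fit = smf.glm("tobacco_use ~ high_hardship + age + employed",
              data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params))       # adjusted risk ratios (aRR)
print(np.exp(fit.conf_int()))   # 95% confidence intervals
```

The robust variance estimator is what distinguishes the modified Poisson approach from an ordinary Poisson model and keeps the confidence intervals valid when the outcome is binary.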
Discussion

To the best of our knowledge, this is the first study to have examined the association between financial hardship and drug use among MSM in the European Union, as well as one of few studies examining the association between recent financial hardship and drug use in any MSM population. The results demonstrate that almost half of the participants had experienced financial hardship (46%) and that the majority (83.7%) had used at least one type of drug in the past 3 months. In addition to alcohol use, inhalant nitrites were the most commonly reported drug in this sample, consistent with other studies among MSM [28, 29]. Although the effect sizes were of relatively small magnitude, our findings suggest that higher levels of financial hardship are significantly associated with overall drug use, as well as the use of tobacco, marijuana and inhalant nitrites, after adjusting for covariates. This result is meaningful, as effect sizes can be affected by sample characteristics and should be interpreted in a research-specific context [30, 31]. Wong et al. [16] report a similar result: childhood financial hardship was associated with an increased risk of recent drug use among young MSM in Los Angeles. However, no significant associations were observed between financial hardship and alcohol or club drug use in the present study. Meyer [6] proposed the minority stress perspective to conceptualize the association between increased levels of stress, due to exposure to victimization, discrimination, rejection, hostility and negative attitudes about homosexuality, and greater levels of drug use among sexual minority populations. Since financial hardship could be both a cause and a consequence of discrimination [32], there is a possibility that experiencing financial hardship acts as an additional minority stressor that contributes to drug use in this population.

Furthermore, there were several limitations to this study that are important to mention. First, there was a difference between the exposure and outcome measurements with regard to the time period included; financial hardship was measured at the time of the survey, while drug use was measured based on the past 3 months. Therefore, participants may have reported financial hardship during any period of their lives, including current or past hardships, or hardships spanning the lifetime. In addition, no causal inferences can be drawn due to the cross-sectional design of the study; reverse causality and a potential bidirectional relationship cannot be ruled out (e.g., consumption of drugs may contribute to financial hardship). Also, our study relies on a single item to measure financial hardship, as in other previous studies [26, 33]; future studies involving multiple scales or indicators of financial hardship are warranted. Moreover, self-reporting was used to collect data, which could have introduced social desirability bias, reporting bias, or recall bias, particularly among MSM [34]; we may therefore have underestimated the exact prevalence of drug use, for example. Because some study variables were not collected, with the aim of maximizing the participation rate, there may have been residual confounding from other unmeasured covariates (e.g., income, education status, race/ethnicity and binge drinking) related to substance use. Finally, we focused on MSM in the Paris metropolitan area who used a single geo-social networking application, and the relatively low response rate precludes generalization of our results. Despite these limitations, this study adds to the body of literature, highlighting general drug use in addition to specific drug use, and describes the association of financial hardship and drug use among MSM. Future research with MSM should utilize longitudinal and qualitative designs to better understand causal relationships and identify mechanisms for the association found.
This research can provide better direction of structural interventions and policies to reduce health inequalities, identify factors (e.g., social exclusion, discrimination) associated with financial hardship and provide drug screening services to MSM. Conclusions We found that financial hardship was associated with overall drug use, tobacco, marijuana and inhaled nitrites among a sample of MSM in the Paris metropolitan area. Future studies should investigate the causal pathways that may link financial hardship to drug use. Our findings will be of value in developing effective prevention measures that address drug use among MSMs.
v3-fos-license
2020-06-25T14:30:39.419Z
2020-06-25T00:00:00.000
220050805
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00415-020-10009-z.pdf", "pdf_hash": "20ae01027320cfbc78631d34664707c480aa53d0", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2745", "s2fieldsofstudy": [ "Medicine" ], "sha1": "20ae01027320cfbc78631d34664707c480aa53d0", "year": 2020 }
pes2o/s2orc
Relationship between stroke etiology and collateral status in anterior circulation large vessel occlusion Background and purpose Clinical outcome after mechanical thrombectomy (MT) for large vessel occlusion (LVO) stroke is influenced by the intracerebral collateral status. We tested the hypothesis that patients with preexisting ipsilateral extracranial carotid artery stenosis (CAS) would have a better collateral status compared to non-CAS patients. Additionally, we evaluated MT-related adverse events and outcome for both groups. Methods Over a 7-year period, we identified all consecutive anterior circulation MT patients (excluding extracranial carotid artery occlusion and dissection). Patients were grouped into those with CAS ≥ 50% according to the NASCET criteria and those without significant carotid stenosis (non-CAS). Collateral status was rated on pre-treatment CT- or MR-angiography according to the Tan Score. Furthermore, we assessed postinterventional infarct size, adverse events and functional outcome at 90 days. Results We studied 281 LVO stroke patients, comprising 46 (16.4%) with underlying CAS ≥ 50%. Compared to non-CAS stroke patients (n = 235), patients with CAS-related stroke more often had favorable collaterals (76.1% vs. 46.0%). Recanalization rates were comparable between both groups. LVO stroke patients with underlying CAS more frequently had adverse events after MT (19.6% vs. 6.4%). Preexisting CAS was an independent predictor for favorable collateral status in multivariable models (Odds ratio: 3.3, p = 0.002), but post-interventional infarct size and functional 90-day outcome were not different between CAS and non-CAS patients. Conclusions Preexisting CAS ≥ 50% was associated with better collateral status in LVO stroke patients. However, functional 90-day outcome was independent from CAS, which could be related to a higher rate of adverse events. Introduction Mechanical thrombectomy (MT) is the recommended treatment for acute ischemic stroke due to large vessel occlusion (LVO) of the anterior cerebral circulation [1]. With increasing experience and technical advances, successful recanalization can nowadays be achieved in up to 90% of all thrombectomy cases. However, successful recanalization does not always entail a favorable outcome after endovascular stroke treatment. In this context, the extent of leptomeningeal collateral perfusion has been identified as a major determinant of patients' clinical prognosis [2]. Unfavorable collateral status on preinterventional angiography has been related to larger final infarct volumes and consequently a worse clinical outcome after MT [2,3]. Eva Hassler, Markus Kneihsl contributed equally to the manuscript. Previously, it has been assumed that chronically developing extracranial carotid artery stenosis (CAS) could enhance cerebral collateral flow, which might be related to a more favorable prognosis in acute stroke patients [4]. This raised the question, whether an underlying CAS would also lead to better leptomeningeal collaterals in patients with acute anterior circulation LVO stroke compared to those with more abrupt vessel occlusion due to proximal embolism (e.g. cardiogenic embolism from atrial fibrillation). 
While small previous investigations showed inconsistent results on the predictive value of stroke etiology on collateral status in LVO patients, [5][6][7] a very recent subanalysis of the MR-CLEAN Registry reported higher collateral recruitment in acute stroke patients with an underlying atherosclerotic carotid artery stenosis [8]. However, for yet unknown reasons, this might not translate into a higher chance for CAS patients to achieve functional independency after stroke [8]. We aimed at investigating preinterventional leptomeningeal collateral status in acute LVO patients according to the underlying putative stroke mechanism (atherosclerotic CAS ≥ 50% versus patients without significant ipsilateral CAS) and how this affects postinterventional adverse events and clinical outcome. Patient selection and data collection For the present study, we identified all consecutive ischemic stroke patients aged ≥ 18 years, who were treated by MT for acute anterior circulation LVO (i.e. occlusion of the intracranial internal carotid artery or middle cerebral artery in the M1 or M2 segment) between 2010 and 2017 at our primary and tertiary care university hospital. Clinical data including demographics, cerebrovascular risk factors, stroke etiology, characteristics of the endovascular procedure and outcome were retrieved from our prospectively collected electronical thrombectomy database [9]. Patients were divided into those with an underlying atherosclerotic extracranial ipsilateral carotid artery stenosis ≥ 50% (CAS) and those without an indication of preexisting significant carotid steno-occlusive disease (non-CAS). The cut-off was chosen because it (1) corresponds to recent guideline recommendations for diagnosing symptomatic carotid artery stenosis [10] and (2) was also used in prior studies on this topic [8]. Presence and degree of stenosis was determined on preinterventional computed tomography (CT) or magnetic resonance imaging (MRI) based contrast enhanced (CE) angiography and confirmed by digital subtraction angiography during the thrombectomy procedure using the North American Symptomatic Carotid Endarterectomy Trial (NASCET) criteria [11]. Patients with extracranial carotid artery occlusion were excluded from the study, as it was not possible to determine whether they had a preexisting stenosis (Fig. 1). Mechanical thrombectomy was conducted by interventional radiologists using stent retrievers and/or aspiration systems. If ipsilateral CAS was present, acute stenting procedures were performed at the discretion of the treating physician depending on morphology and grade of carotid stenosis (e.g. high-risk stenosis due to ulcerated plaque, visible residual thrombi or filiform stenosis). Imaging work-up and analyzes All included patients underwent preinterventional brain imaging including intra-and extracranial CT or MRI based CE angiography (CT angiography: ≈ 90%). Postinterventional control brain imaging (predominantly MRI) was routinely performed 24 h after thrombectomy or at any time in case of clinical deterioration. All images were retrospectively analyzed by two experienced neuroradiologists (E.H., M.M.), who were blinded to clinical and outcome data. Leptomeningeal collateral status on preinterventional CT-or MR-angiography was categorized according to the collateral score by Tan et al. 
into scores 0: absent collateral supply of the affected MCA territory, 1: collateral supply filling ≤ 50%, 2: collateral supply filling > 50% but < 100%, and 3:100% collateral supply of the occluded MCA territory [12]. Postinterventional adverse events and outcome Postinterventional brain scans were reviewed to identify intracranial hemorrhage (ICH). ICH after MT was defined according to the Heidelberg Bleeding Classification and deemed symptomatic if a deterioration of patient's clinical symptoms was observed (defined as a National Institutes of Health Stroke Scale [NIHSS] score increase of > 2 points in one category or > 4 points in total) [13,14]. Treatmentrelated arterial re-occlusion or dissection was diagnosed by digital subtraction angiography in synopsis with postinterventional CT/MR-angiography and color-coded duplex sonography of the extra-and intracranial vessels. Functional neurological outcome according to the modified Rankin Scale (mRS) was assessed by a neurologist with special expertise in stroke in a personal visit at the stroke outpatient department or if not possible in a telephone interview at 90 days poststroke. Statistics Statistical analyses were performed using IBM SPSS Statistics, version 23. The association between stroke etiology (CAS versus non-CAS) and preinterventional leptomeningeal collateral status was investigated. Chi square test or Fisher's exact test was used for the comparison of dichotomous variables. Parametric continuous variables were compared using the Student's t test. For nonparametric data, the Mann-Whitney U Test was utilized. In addition, we calculated a multivariable binary logistic regression model with favorable collaterals as the target variable. Besides CAS, it also contained age, sex and other previously identified predictors of collateral status [hypertension, M2-occlusion, baseline NIHSS and baseline Alberta Stroke Program Early CT Score (ASPECTS)] [5][6][7]. A p value less than 0.05 was considered statistically significant. The study was approved by the ethics committee of the Medical University of Graz. Anonymized datasets generated during this study are available from the corresponding author upon reasonable request. Results Over the study period, 346 patients had undergone MT for anterior circulation LVO. Of those, 65 patients were excluded due to missing CE angiography data or insufficient imaging quality (n = 44), ipsilateral carotid artery dissection (n = 9) or extracranial occlusion (n = 12) (Fig. 1). None of the studied patients had a significant intracranial artery stenosis. Unfavorable collateral status and stroke etiology 138 patients had unfavorable collaterals on pre-treatment angiography (49.1%). Of those, only 11 patients had CAS-associated stroke (8.0%). Compared to non-CAS patients with an unfavorable collateral status (n = 127), Discussion This study shows that the presence of preexisting ipsilateral CAS ≥ 50% is associated with more favorable collateral status in acute LVO stroke patients. However, this does not translate into a better functional outcome at 90 days, which might be attributed to a higher rate of adverse events after MT. Although occurring rarely, CAS patients with unfavorable collaterals on pretreatment angiography face a particularly high risk of poor outcome three months after the intervention (≈ 90%). Preinterventional leptomeningeal collaterals have a significant impact on patients' clinical prognosis after MT. 
In line with earlier investigations, this study also shows that favorable collateral status on pretreatment angiography was associated with smaller postinterventional infarct size and a better functional outcome at 90 days after the intervention [5][6][7]. Conditions that could predict favorable leptomeningeal collaterals after acute cerebral artery occlusion are therefore of interest. In this context, chronic cerebral hypoperfusion was associated with improved cerebral collateral flow in experimental rat models and in patients with stenoocclusive disease of the carotid vasculature; and the effect increased with the degree of stenosis [4,15,16]. Moreover, repeated arterio-arterial (micro)embolism proceeding from aggressive carotid plaques could lead to recurrent and clinically silent cerebral ischemia, which might trigger better collaterals due to ischemic preconditioning [17]. Although recent studies only addressed primary collateral pathways in the circle of Willis, it seems plausible that CAS patients with acute intracranial LVO stroke would also be associated with favorable leptomeningeal collateral recruitment, which could further impact patients' clinical prognosis. While two small studies presented divergent results on that topic, [6,7] a very recent MR-CLEAN Registry subanalysis showed that CAS-related LVO stroke patients had a better collateral status than those with cardioembolic stroke, [8] which is in line with our work. In contrast to our findings, the latter study showed slightly better median 90-days mRS scores in their CAS subgroup. This might be attributed to methodological differences between both studies: From a pathophysiological perspective, we decided to include all patients without an indication of symptomatic carotid stenosis ≥ 50% (according to the NASCET criteria) in our non-CAS group. As we did not observe patients with ulcerated carotid plaques, most non-CAS patients should have had a proximal (cardio)embolic stroke etiology, which is frequently missed on routine stroke work-up (e.g. in case of paroxysmal atrial fibrillation). However, such initially cryptogenic stroke patients are younger and have less comorbidities compared to the standard cardioembolic stroke patients [18]. The exclusion of such patients as it was done by the MR-CLEAN investigators might therefore explain the reported baseline imbalances in their subgroup analysis [8]. In contrast, our study provides comparable subgroups (CAS versus non-CAS) regarding age, medical history and prestroke mRS. Of note, the effect on outcome presented in the MR-CLEAN Registry subanalysis remained relatively low as there was no statistically significant benefit for CAS patients in terms of 90-day post-stroke dependency or mortality rates [8]. The authors concluded that larger thrombi and difficulties in gaining intracranial access due to proximal stenosis might have caused longer thrombectomy procedures compromising the positive impact of good collaterals on clinical prognosis. Our study also shows a trend towards longer interventions in CAS-related strokes, but additionally draws attention to postinterventional adverse events (predominantly vessel re-occlusions and symptomatic ICH) in CAS-related thrombectomy patients. The high percentage of vessel re-occlusion (11%) we detected in our CAS subgroup might be also a result of more complex endovascular procedures (balloon dilatation: 43%, stenting of CAS: 24%) and residual stenosis leading to recurrent arterio-arterial embolism. 
Although data on vessel reocclusion in the early phase after MT are scarce, this finding is in line with a small retrospective study, which has shown rather high rates of postinterventional re-stenosis/occlusion and a poor prognosis in LVO stroke patients with tandem pathologies [19]. Symptomatic intracerebral bleeding was the second most common adverse event in our study and might be partly attributed to extracranial artery stenting in the CAS subgroup, which required more intense antiplatelet therapy (i.e. dual antiplatelets) after the intervention. In addition, carotid artery stenting might enlarge reperfusion injury, which seems generally more pronounced in chronically ischemic tissues due to disturbances of cerebral autoregulation [9]. Of note, this is the first study that casts light on acute LVO patients with unfavorable collaterals despite underlying CAS, as they are at very high risk for postinterventional adverse events (37%) and functional dependency at 90 days (91%). Although the number of patients in this subgroup was rather small, this finding should be considered in the clinical management of such patients. The major limitation of this study is the retrospective design and the fact that no blinding regarding stroke etiology and collateral status was possible. However, during rating, neuroradiologists were blinded to clinical information including adverse events and outcome. We decided to abstain from volumetry, but instead used pre-and postinterventional CT or MRI based ASPECT Scores to estimate the acute and final infarct, which is more feasible in daily clinical routine. Moreover, we did not analyze the Circle of Willis for anatomical variants, which could have affected our results and should be addressed in future studies on cerebral collaterals. Another restriction was, that not all patients underwent multi-phase CT/MR-angiography (≈ 34%). If collateral filling in the post venous phase occurs, collaterals could be underestimated when using the Tan Score in single-phase angiography. However, as this is not a frequent finding and collaterals were comparable to those of earlier investigations using multi-phase angiography, it should not have influenced our results to a major extent [20]. Finally, we cannot totally exclude that in few cases emboli broke off from the carotid stenosis leaving it less than 50%. However, patients with minor carotid artery stenosis < 50% did not meet the established ultrasound criteria for aggressive plaques (i.e. ulceration, intraplaque hemorrhage, etc.), which should exclude a major effect on the results of this investigation.
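For readers who want to reproduce the statistical workflow summarised in the Statistics section, the sketch below shows one possible Python implementation of the univariate tests and of the multivariable logistic regression with favorable collaterals as the target variable. It is only an illustration of the model structure: the original analysis was run in IBM SPSS Statistics (version 23), and the data file and column names used here (e.g. cas, favorable_collaterals) are hypothetical.

```python
# Illustrative re-implementation of the reported analysis plan (not the original SPSS code).
# Assumes a patient-level table with hypothetical column names.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("thrombectomy_cohort.csv")  # hypothetical file

# Dichotomous variables: chi-square test (Fisher's exact test would be used for small cells)
table = pd.crosstab(df["cas"], df["favorable_collaterals"])
chi2, p_chi2, _, _ = stats.chi2_contingency(table)

# Continuous variables: Student's t test (parametric) or Mann-Whitney U test (non-parametric)
p_age = stats.ttest_ind(df.loc[df["cas"] == 1, "age"],
                        df.loc[df["cas"] == 0, "age"]).pvalue
p_nihss = stats.mannwhitneyu(df.loc[df["cas"] == 1, "nihss_baseline"],
                             df.loc[df["cas"] == 0, "nihss_baseline"]).pvalue

# Multivariable binary logistic regression with favorable collaterals as the target,
# adjusted for the covariates named in the text
model = smf.logit(
    "favorable_collaterals ~ cas + age + sex + hypertension"
    " + m2_occlusion + nihss_baseline + aspects_baseline",
    data=df,
).fit()
odds_ratios = np.exp(model.params)
print(odds_ratios, model.pvalues, sep="\n")
```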
v3-fos-license
2018-04-04T00:05:05.634Z
2018-04-18T00:00:00.000
206139715
{ "extfieldsofstudy": [ "Materials Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2018/sm/c8sm00018b", "pdf_hash": "8fa6b9697a4a1834ebdb0a17d10e3ee831ab52b4", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2747", "s2fieldsofstudy": [ "Materials Science", "Biology" ], "sha1": "84c8ac6bf021af0d96a3e0c0e826d7ea9d4ffe12", "year": 2018 }
pes2o/s2orc
Guiding 3D cell migration in deformed synthetic hydrogel microstructures The ability of cells to navigate through the extracellular matrix, a network of biopolymers, is controlled by an interplay of cellular activity and mechanical network properties. Synthetic hydrogels with highly tuneable compositions and elastic properties are convenient model systems for the investigation of cell migration in 3D polymer networks. To study the impact of macroscopic deformations on single cell migration, we present a novel method to introduce uniaxial strain in matrices by microstructuring photo-polymerizable hydrogel strips with embedded cells in a channel slide. We find that such confined swelling results in a strained matrix in which cells exhibit an anisotropic migration response parallel to the strain direction. Surprisingly, however, the anisotropy of migration reaches a maximum at intermediate strain levels and decreases strongly at higher strains. We account for this non-monotonic response in the migration anisotropy with a computational model, in which we describe a cell performing durotactic and proteolytic migration in a deformable elastic meshwork. Our simulations reveal that the macroscopically applied strain induces a local geometric anisotropic stiffening of the matrix. This local anisotropic stiffening acts as a guidance cue for directed cell migration, resulting in a non-monotonic dependence on strain, as observed in our experiments. Our findings provide a mechanism for mechanical guidance that connects network properties on the cellular scale to cell migration behaviour. Introduction Multicellular organisms consist of a composite of cells and the extracellular matrix (ECM) that forms the scaffolding in which cells live and move. In animal tissue, the ECM is composed of a complex network of various biopolymers, including collagen and proteoglycans. The migration of cells in this environment is important to a variety of physiological processes, including the immune response, embryogenesis, and cancer metastasis. [1][2][3] To navigate such a complex environment, cells employ a multitude of biochemical signalling pathways. However, cells also make use of the available mechanical information, by probing the surrounding matrix via integrin-mediated adhesions. Integrins couple the ECM to the acto-myosin machinery of the cell, thereby enabling the transmission of forces between the cell and its environment. 4 This coupling equips the cell with mechanosensitive capability. Indeed, the structure of focal adhesions and the cytoskeleton can be altered by changing the ECM stiffness, with the formation of larger adhesion complexes and more pronounced actin structures on stiffer substrates. 5 This mechanosensitivity also affects migration: most cells typically migrate from the soft to the stiff side of a substrate, a phenomenon called durotaxis. 6,7 While much is known about the mechanosensitive signalling and response pathways in cells, 8 it is still unclear how mechanical cues such as deformations and heterogeneities in the matrix affect migration in 3D substrates. [9][10][11] Synthetic hydrogels have been introduced to study cell migration in 3D environments with highly controllable mechanical properties. These hydrogels can be composed of polyethylene glycol (PEG) with exact molecular composition such that the mesh size and the mechanical properties of the matrix can be precisely tuned for bioengineering applications and in vitro experiments. 
[12][13][14][15][16] For example, the viability of primary cells and specific signalling pathways important for angiogenesis can be enhanced by altering the concentration and availability of integrin binding sequences containing the RGD-peptide-motif in synthetic hydrogels. 17,18 Furthermore, to enable cell migration in gels with small mesh sizes, the presence of proteinase-sensitive cross-linkers in PEG-based hydrogels and adhesion mediating peptide sequences are crucial. 19,20 Even though the physical properties of these hydrogels can be tailored, this often only applies to the macroscopic properties. However, cells interact with their surrounding on a scale of a few tens of micrometers. 21 It is therefore important to not only tune the macroscopic properties of the matrix, but to also consider the network properties on the microscale and below. 9 Macroscopic deformations in the ECM could alter the network arrangement on the cellular scale. Interestingly, cells can reorient to strains applied to the substrate. [22][23][24] Indeed, a number of studies with 2D or 3D substrates show that the cell orientation direction varies when changing the matrix composition and dimensionality or when using different strain patterns. 11 In collagen gels, for example, the protease activity varies when the fibres are strained 25 and cells embedded in deformed collagen matrices orient parallel to the strain direction. 26 The response of cells to a static external strain in biopolymer gels like collagen has been attributed to the induced alignment of fibres in the direction of the strain [27][28][29] or to the strain stiffening behaviour of these extracellular fibre networks. 22,30 However, cell reorientation and directed migration in a strain field was also observed in synthetic hydrogels, which do not exhibit strong macroscopic strain stiffening. 6,31 Furthermore, highly cross-linked synthetic hydrogels do not contain large fibres, which could align to a strain. Thus, the underlying mechanism of how cells sense and react to strain in synthetic hydrogels remains unclear. Here we investigate, using both experiments and theoretical modelling, how deformations in matrix metalloproteinases (MMP) degradable and RGD functionalized PEG-based hydrogels affect the migration of embedded motile cells. We fabricate small strips of hydrogel photo-polymerized inside a microchannel slide, which results in anisotropic swelling in the direction of the strip width, straining the network uniaxially. We find that HT-1080 cells embedded in such gel strips exhibit a preferred migration direction parallel to the strain direction. However, the anisotropy of the cell migration reveals a nonmonotonic dependence on the magnitude of the strain. To understand this striking phenomenon, we introduce a computational model of a proteolytically active cell, which performs durotactic migration in a strained 2D network. The experimentally observed migration behaviour is reproduced by our model and can be explained by a local stiffening mechanism at the cellular scale. This anisotropic stiffening thereby provides a physical mechanism to explain the non-monotonicity of anisotropic cell migration with strain. Our study demonstrates that the microscopic properties of cell matrices are crucial to elucidate how mechanical cues can manipulate cell migration behaviour. 
Results A synthetic hydrogel with embedded cells serves as a model system for cell matrices with tuneable degradability To analyse cell migration in a simplified and highly controllable environment, we use a synthetic PEG-based material to encapsulate HT-1080 cells in a thick slab of hydrogel. These HT-1080 cells represent a well characterised fibrosarcoma cell line that expresses matrix metalloproteinases (MMPs) to digest the ECM, and is widely used in 3D migration experiments. The gel consists of 4-armed PEG-norbornene (PEG-NB), which is cross-linked by a peptide sequence that is cleavable by MMPs, as depicted in Fig. 1A. We add the peptide sequence CRGDS to promote cell adhesion via integrins to the otherwise bio-inert PEG backbone. A radical photo-initiator is included in the pre-polymer solution to initiate the thiol-ene polymerization reaction. To obtain isotropically swollen gels, we polymerize the hydrogel by homogeneous illumination of the entire polymer solution and allow the gel to float after polymerization. The floating gel swells isotropically and is subsequently immobilised on a micro-well surface. 32 The photo-induced polymerization is biocompatible and yields matrices with a storage modulus of 20-70 Pa (ESI, † Fig. S1) and mesh sizes of a few tens of nm's after swelling. 20,[33][34][35] Because of this small mesh size, cells can only migrate through the network if they are able to digest the cross-links with MMPs. 19, 36 We did not observe significant deformations of the cell body when it squeezes through the mesh, which was observed in prior work. 37,38 Note, however, that the pores in our hydrogels are orders of magnitude smaller than the minimal cell diameter that can be achieved by HT-1080 cells, and therefore matrix digestion by MMPs is necessary for cell migration in these hydrogels. 37 We observe that cells move through the hydrogels with a rounded morphology and small protrusions at the leading edge ( Fig. 1B), as previously described for HT-1080 cells embedded in synthetic hydrogels and dense collagen networks. 34 The trajectories of the cells inside the hydrogel appear to be random and isotropic (Fig. 1C). We can influence the migratory behaviour of these cells by substituting a fraction of the MMP cleavable peptide cross-linker by a non-cleavable PEG-dithiol linker to reduce the overall degradability of the gel. Even in gels where only 40% of the cross-links are cleavable, the cells still migrate, but their overall displacement decreases significantly in comparison to cells migrating in a completely degradable gel for the same amount of time, as illustrated in Fig. 1C (see also ESI, † Movies S1-S3). Under all these conditions, we observe that the cells migrate isotropically through the gels (ESI, † Fig. S2). Uniaxial strain by anisotropic swelling of confined microstructured hydrogels induces anisotropic cell migration Next, we sought to investigate how network deformations affect cell migration. To induce a strain in 100% degradable gels, we form hydrogel microstructures under confinement. Small strips with a high aspect ratio are polymerized inside 400 mm high channel slides by photolithography, as illustrated in Fig. 2A. After flushing the system with cell culture media, the hydrogel strips swell. We only analyse hydrogel swelling and cell migration in the middle 20% of the longitudinal section of the strips to avoid edge effects of the strip ends. 
To investigate the direction of swelling in our confined geometry in detail, we embed small fluorescent beads in a range of hydrogels polymerized with different compositions. To control the hydrogel composition, we vary both the overall PEG-NB monomer concentration and the cross-linker to monomer ratio. We monitor the movement of the tracer beads inside the hydrogel throughout the swelling process (see ESI, † Movie S4) and analyse their trajectories with particle image velocimetry (PIV) to obtain velocity fields that quantify the swelling behaviour. After 2 h no further bead movement in the gel is detectable, indicating a stable swelling of the hydrogel structures. The accumulated velocity fields of tracer beads within the first 2 h is displayed in Fig. 2B, showing bead movement mostly in the direction along the short axis of the strip. This anisotropic swelling behaviour is present in all the gels we tested (ESI, † Fig. S3 and S4). For smaller cross-linker ratios, we measure higher overall velocities demonstrating a higher degree of swelling. Hence, by varying the cross-linker ratio of the hydrogel, we can tune the swelling and thereby the uniaxial strain induced in the gel. To quantify the strain in the hydrogel due to the anisotropic swelling, we compare the width of hydrogel strips with embedded cells after completed swelling (W f ) with the initial strip width of 400 mm (W 0 ). We investigate how this swelling strain, g s , is affected by the gel composition, by varying the PEG-NB monomer concentration as well as the cross-linker ratio. We observe that the measured swelling strain increases almost linearly with decreasing crosslinker ratio, up to high strain values of roughly 1.4 (Fig. 2C). By contrast, the concentration of monomer in the gel does not significantly influence the magnitude of swelling. We exclude hydrogel strips with cross-linker ratios below 0.525 and 0.475, for 2 mM and 3 mM PEG-NB gels respectively. Such hydrogels exhibit high strains, but they are not stable over longer time periods, and are therefore unsuitable for cell migration studies. Thus, by constructing the gel with high enough cross-linker ratio in confined microstructures, we are capable of inducing uniaxial strain in hydrogels with values ranging from 0.4 to 1.4. To analyse how cells migrate in a uniaxially strained network, we embed HT-1080 cells in hydrogel strips and monitor their migration for 24 h starting 3 h after encapsulation (see ESI, † Movies S5 and S6). With increasing cross-linker ratio, the percentage of migrating cells in the strips decreases to the point where motility is completely inhibited (ESI, † Fig. S5). To illustrate the migratory behaviour of cells in hydrogels, we show a phase-contrast image of the analysed hydrogel area overlaid with tracked cell trajectories in Fig. 2D. Cells in this gel exhibit a highly anisotropic migration, with the main migration direction oriented parallel to the swelling direction. This observation is consistent with prior experiments showing that fibroblasts preferentially migrate parallel to an applied static strain inside 3D substrates. 31 Interestingly, however, when we compare the trajectories of cells migrating in hydrogels with different strains in our experiments, we observe a gradual shift from anisotropic migration in networks with moderate strains to a more isotropic mode of migration with higher strains ( Fig. 2E and ESI, † Fig. S6). 
This migration behaviour is surprising, because the migration anisotropy decreases with increasing strain anisotropy. Our observation suggests that the strain in the hydrogel is perhaps not the only relevant factor for cell guidance. Model of a durotactic motile cell with proteolytic activity in an elastic network To elucidate the basic principles of cell migration in strained networks, we develop a simple theoretical model. Specifically, we aim to capture the basic physical processes of mechanosensing and cell migration using a system-level description of a cell moving through an elastic network. In our model, we explicitly describe the matrix through which the cell moves using a coarse-grained model of a triangular spring network. 39 To introduce the intrinsic disorder of a real hydrogel, we randomly delete a fraction of the bonds in this network. The cell can mechanically interact with the network and move between lattice nodes. To develop a simple description of how the cell interacts with the polymer meshwork, we briefly summarise the key aspects of cell migration in such environments. In general, cells adhere to and contract the matrix, which allows the cell to mechanically probe its surroundings and generate a force to move the cell body as a whole. Cells typically move from the soft side of a substrate towards stiffer regions -a phenomenon called durotaxis. 6 Furthermore, to move through a dense 3D network, cells have to digest the matrix using proteinases. Interestingly, experiments have revealed that cells do not digest the matrix where cell adhesion and force-generation occurs. 40,41 Instead, matrix proteolysis is locally separated from force generation and is mostly localised behind the leading edge and near the cell body for HT-1080 cells in collagen networks, 40 as illustrated schematically in Fig. 3A. To capture these aspects of proteolytic cell migration, we propose a minimal model with the following steps: (i) Contraction: the cell pulls on the nearby lattice nodes in the network (yellow dots in Fig. 3B), thereby deforming the network. (ii) Mechanosensing: we calculate the local stiffness of the deformed nodes on which the cell pulls. (iii) Local durotaxis: the cell centre moves to the neighbouring node (shown in grey in Fig. 3B) with the highest local stiffness. (iv) Proteolysis: to capture MMP activity, we allow the cell to digest lattice bonds at a fixed rate. Importantly, this MMP activity only acts locally. Therefore, only bonds near the cell body can be cleaved by the cell (marked in grey in Fig. 3B). By repeating these four basic steps of this cell migration cycle (Fig. 3C), we simulate cell movement on a 2D lattice. For simplicity, we do not include cell polarization in our migration model (see ESI † for a model extension with time-averaged mechanosensing as polarization factor). To model externally deformed matrices, we stretch the spring network uniaxially up to a given strain under fixed boundary conditions before cell migration is simulated. Examples of simulated cell trajectories in strained and unstrained networks are shown in Fig. 3D and ESI, † Movies S7-S9. Our simple model enables us to simulate proteolytic cell migration on a 2D elastic lattice. To verify our model, we first compare the migratory behaviour of cells in our simulation with experimental data in isotropic networks. In both the model and our experiments, we observe that the mean squared displacement (MSD) increases with time as an approximate power law with an exponent of roughly 1.4-1.8 (Fig. 
4A). This dependence indicates super-diffusive behaviour. Interestingly, if we increase the proteolysis rate in the model, the exponent of the apparent power law increases. This suggests that the migration of the cell becomes more persistent with increasing proteolytic activity (see ESI, † Fig. S7 for MSD comparison in strained systems). To further quantify the statistics of cell migration, we determine the velocity autocorrelation function (VACF) of the migration velocity. In our experiments, we find that the VACF decays with a characteristic time that appears to decrease only weakly with the degree of degradability of the hydrogels. Note that the simulated VACF is qualitatively similar to the experimental result even though we did not include cell polarization in this model. Cell polarization would imply an intrinsic persistence time for the migratory behaviour. By contrast, the persistence of cell migration in our model is an emergent phenomenon, which derives from the dilution of the network due to proteolytic digestion. Cells digest the network around their own body, so that the regions they have already visited become locally softer and are avoided by subsequent durotactic steps, which renders the migration persistent. Non-monotonic response of cell migration to external strain can be explained by anisotropic geometric strain-stiffening on the microscale After verifying our simple migration model in isotropic systems, we next sought to investigate cell migration in uniaxially strained networks by comparing simulated and experimental cell trajectories. To quantify the degree of anisotropy of the cell trajectories, we calculate the Anisotropic Migration Index (AMI) by comparing the total distance travelled by cells parallel, D∥, and perpendicular, D⊥, to the main strain direction, AMI = (D∥ − D⊥)/(D∥ + D⊥) (eqn (2)). When AMI > 0, cells migrate preferentially parallel to the strain, while AMI < 0 indicates migratory behaviour preferentially oriented perpendicular to the applied strain. Experimentally, we observe that the AMI of cells in isotropically swollen 2 mM PEG-NB gels is close to zero, indicating isotropic migration (Fig. 5A). As the strain is increased, the AMI increases to values as high as 0.6, indicating highly anisotropic cell migration along the strain direction. However, when the strain is increased further, the migration of cells becomes more isotropic again, as was already suggested by the raw trajectories in Fig. 2E. This non-monotonic response of the anisotropy of cell migration to an increasing strain is surprising and suggests that the strain triggers an additional mechanism in the matrix that guides cell migration depending on the strain magnitude. In our model, an externally applied strain similarly leads to an anisotropic migration oriented preferentially parallel to the strain. Furthermore, we also observe a non-monotonic relationship between anisotropic migration and strain amplitude in the simulations, in accord with our experimental results. However, the overall anisotropy and corresponding AMI values are smaller in the simulated data (Fig. 5A). Nonetheless, our model is able to qualitatively capture the non-monotonic anisotropic migration response of cells migrating in deformed hydrogels. Furthermore, by including a simple cell polarization mechanism in the migration model, we can quantitatively reproduce the maximal AMI observed in our experiments (ESI, † Fig. S9). To understand the origins of the non-monotonic dependence of the anisotropic cell migration on external strain, we use our model to investigate the local matrix stiffness.
Recall, in our model the cell performs local durotactic migration and is therefore guided by local stiffness differences in the matrix, always moving in the direction of highest local stiffness. To investigate the local ''stiffness landscape'', we analyse the node stiffness in different orientations relative to the external strain direction. Even though the springs in our network model are linear, we observe that the local stiffness depends on the external strain, but in a distinct way for different orientations (Fig. 5C). This stiffness is measured before cell migration in the network is simulated, therefore representing an intrinsic matrix property (see also ESI, † Fig. S8). The initial local matrix stiffness perpendicular to the deformation axis increases linearly with strain amplitude (Fig. 5C). This perpendicular stiffening is a direct result of the tension in the springs along the strained directions. Conceptually, this is a simple geometric effect similar to the greater stiffness experienced when plucking a string under increasing tension. By contrast, the local stiffness parallel to the deformation axis exhibits a fast initial stiffening, followed by saturation to a constant value at higher strains. This stiffening mechanism has previously been observed for the macroscopic response of the network. [42][43][44] Briefly, intrinsic heterogeneities in the network with a reduced local connectivity introduce softer elastic modes in the system, which get pulled out by the macroscopic strain. Thus, even though the springs that describe the network elasticity are linear, geometric effects induce stiffening of the local environment. In addition, the mean node stiffness measured by cells, which migrate through strained networks is also affected by the proteolytic digestion of the matrix by the cell (see ESI, † Fig. S8). The nonlinear effects described above may thus be enhanced by the proteolytic digestion of matrix bonds, which lowers the local network connectivity and thus introduces heterogeneity. Geometric stiffening effects lead to a local anisotropy in network stiffness that depends non-monotonously on the strain magnitude. Because of this anisotropic stiffening, we expect that at small strains a durotactic cell will migrate preferentially parallel to the deformation axis, where the node stiffness is highest, while at higher strain values, the cell will tend to steer away from the deformation axis. Thus, the local orientationdependent stiffening of the matrix can account for the nonmonotonic behaviour of the anisotropic cell migration in gels with increasing strain. Discussion Here we use a hydrogel system 34,45 to investigate how uniaxial deformations in synthetic hydrogels influence the migration of embedded HT-1080 cells. In this system, cell migration is dependent on the proteolytic digestion of matrix cross-links, because the mesh size of the gel is on the order of tens of nanometres, considerably smaller than the cell diameter. 19 We confirmed the importance of proteolytic activity by showing that cell migration is hampered when the fraction of degradable cross-links in the gel is too low (Fig. 1C). The linear elastic properties of the PEG-based matrix, as well as the defined composition and adjustability of the matrix properties provide a simplified, controllable environment for embedded cells. 46,47 Defined matrix properties enable the detection of fundamental guidance principles in such systems. 
48,49 By using uniaxially swollen PEG-based hydrogels and our minimal cell migration model, we showed that HT-1080 cells preferentially migrate parallel to the main strain direction with the degree of anisotropy of migration depending non-monotonically on the strain magnitude. To analyse cell migration in deformed matrices, we established a new set-up to induce strain in synthetic, photopolymerizable hydrogels by microstructuring strips into channels. These hydrogels are confined in the z-direction of the channel. Because of this axial confinement and the high aspect ratio of the strips, the inherent swelling of the hydrogel only occurs in the direction of the short axis of the strip. Since the strain is induced by this uniaxial swelling process and not through mechanical stretching, no compression in the direction perpendicular to the strain occurs. The resulting uniaxial strain field offers an advantage over other straining devices where a mechanical stretch often results in complex strain fields, complicating the interpretation of experimental results. 23,24,50,51 Another advantage of our system is the excellent optical accessibility due to the use of commercially available channel slides of high optical quality. However, a draw-back of our system is that we regulate the degree of swelling and therefore the strain of our system by changing the cross-linker amount in the matrix (see Fig. 2). Thus, both the rigidity and the mesh size of the gel differ for different deformations generated in the matrix. Several studies have analysed how cells respond to external strains in the matrix, depending on the temporal and spatial properties of the imposed strain as well as the rigidity and composition of the matrix. 11,24,26,31 However, the underlying mechanism for the observed cell alignment remains unclear. 24,31 In naturally derived gels such as collagen or Matrigel, the strong strain stiffening response of these non-linear elastic materials can generate a macroscopic stiffness anisotropy in the gels. 52,53 Furthermore, multiaxial rheological experiments revealed that under strain, biopolymer matrices not only stiffen when strained but also show a softening in the compressed direction. 54 Such stiffness anisotropies can provide durotactic cues to cells, which together with the alignment of fibres in strained collagen networks has been suggested as possible mechanism to explain the preferential orientation of cell trajectories along the main strain direction. 28,55 However, cell alignment was also observed in synthetic hydrogels that show linear macroscopic elastic properties when stretched. 6,31 To explain the cell alignment in direction of the external strain, a theoretical model was introduced, 56 showing the alignment of static cells to the main strain direction, but the actual proteolytic migration of cells in deformed networks was not considered in this model. Our experimental analysis of cell migration directionality in strained matrices shows a preferred migration of HT-1080 cells parallel to the external strain direction. Importantly, this preferred migration along the deformation axes shows a nonmonotonic dependence on the strain magnitude, with the strongest alignment at intermediate strain levels. To further elucidate cell behaviour in strained matrices, we developed a minimal model of a cell migrating in a strained network. In our model, the cell is assumed to migrate in the direction of highest local stiffness and randomly dilutes cross-links locally around the cell body. 
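To make the migration cycle of this model concrete, the sketch below implements a stripped-down version of it in Python on a triangular lattice. This is not the spring-network code used for the simulations: the local stiffness function is a crude placeholder (a count of intact bonds with a strain-dependent weighting) standing in for the stiffness measured on the deformed elastic network, and all parameter values are arbitrary.

```python
import random

def build_triangular_lattice(nx, ny):
    """Nodes on a triangular lattice (axial coordinates); bonds to up to 6 neighbours."""
    nodes = {(i, j) for i in range(nx) for j in range(ny)}
    nbr_offsets = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    bonds = set()
    for (i, j) in nodes:
        for di, dj in nbr_offsets:
            n = (i + di, j + dj)
            if n in nodes:
                bonds.add(frozenset({(i, j), n}))
    return nodes, bonds, nbr_offsets

def local_stiffness(node, bonds, strain_y=0.0):
    """Placeholder stiffness proxy: number of intact bonds at `node`, with bonds that
    have a component along the strained y-direction weighted slightly higher.
    The published model instead measures stiffness on the deformed spring network."""
    s = 0.0
    for b in bonds:
        if node in b:
            a, c = tuple(b)
            s += 1.0 + strain_y * abs(a[1] - c[1])
    return s

def migration_step(pos, nodes, bonds, nbr_offsets, digest_rate, strain_y, rng):
    # (i)-(ii) contraction and mechanosensing: probe the neighbouring nodes
    candidates = [(pos[0] + di, pos[1] + dj) for di, dj in nbr_offsets
                  if (pos[0] + di, pos[1] + dj) in nodes]
    if not candidates:
        return pos
    # (iii) local durotaxis: step to the stiffest neighbour (ties broken randomly)
    best = max(local_stiffness(c, bonds, strain_y) for c in candidates)
    stiffest = [c for c in candidates if local_stiffness(c, bonds, strain_y) == best]
    new_pos = rng.choice(stiffest)
    # (iv) proteolysis: digest bonds touching the cell body at a fixed rate
    for b in [b for b in bonds if new_pos in b]:
        if rng.random() < digest_rate:
            bonds.discard(b)
    return new_pos

rng = random.Random(0)
nodes, bonds, offsets = build_triangular_lattice(40, 40)
pos, track = (20, 20), [(20, 20)]
for _ in range(200):
    pos = migration_step(pos, nodes, bonds, offsets, digest_rate=0.05, strain_y=0.5, rng=rng)
    track.append(pos)
print("net displacement (x, y):", track[-1][0] - track[0][0], track[-1][1] - track[0][1])
```

Because the placeholder stiffness monotonically favours bonds oriented along the strained direction, such a walk only shows a bias of migration parallel to the strain; reproducing the non-monotonic strain dependence requires the local stiffness computed from the deformed spring network, as described above.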
The modelled cell migration captures our experimental observation: cells preferentially migrate along the deformation axis, but the degree of alignment along this axis depends non-monotonically on the strain magnitude. Furthermore, our model reveals a mechanism that gives rise to such a nonmonotonic dependence: the network of linear springs in our model locally stiffens due to the network strain, but this nonlinear effect is itself anisotropic. Indeed, the stiffness of the network nodes probed in different orientations to the strain direction depend in different ways on the strain magnitude (Fig. 5C). This nonlinear anisotropy can therefore account for the non-monotonicity of the cell migration behaviour with applied strain (Fig. 5A). The overall anisotropy of the migration directionality in the model is smaller than our experimental results. This small quantitative discrepancy could arise because we neglect the effects of cell polarization in our model. Nonetheless, cell polarization will not affect the magnitude of migration anisotropy, unless the polarization itself is strain or stiffness sensitive. Indeed, if cells migrate more persistently along the stiffer, strained direction this would result in higher values of the AMI. We explored this idea by considering an extension of our model where mechanosensing is performed based on time-averaging local stiffness measurements. In this extended model, the cell performs durotactic migration that is not only based on the current stiffness gradient, but also on a limited number of previously encountered gradients. Such a sensing memory can increase the maximal anisotropic migration in our model, achieving a better quantitative agreement with the observed experimental AMI values (see ESI, † Fig. S9). Furthermore, the proteolysis step in our model is assumed to be random and thus does not depend on the amount of strain applied to a cross-link. In experiments, however, the susceptibility of strained collagen fibrils to digestion by MMPs or other collagenases was observed to be influenced by applied strains. 25,57-59 Therefore, the strain in our experiments may affect matrix proteolysis as well as mesh size differences in different directions of the matrix, thereby favouring the migration parallel to the external strain and leading to higher AMI values. Thus, an important future direction is to understand how these factors compete with the nonlinear anisotropic effects in guiding cell migration in strained 3D environments. Conclusion By combining an experimental approach to study cell migration within a reduced 3D matrix with theoretical modeling, we have shown that cells in uniaxially strained hydrogels migrate preferentially parallel to the external strain direction, but with the degree of alignment depending non-monotonically on the strain magnitude. The non-monotonicity of the Anisotropic Migration Index (AMI) can be explained by a model of durotactic cell migration, in which the local anisotropic geometric stiffening of the matrix guides cell migration. Multiple studies on naturally derived gels, such as collagen and Matrigel, have suggested that stiffness acts as a guidance cue for cell migration in strained 2D and 3D matrices. 52,53 Here, we propose that local strain stiffening also occurs in cross-linked synthetic hydrogels, and that the resulting stiffness anisotropies in the matrix influence cell migration directionality. 
Such local changes of the mechanical properties of synthetic hydrogels should therefore also be considered when using similar hydrogels in implants or as tissue substitutes. Indeed, the non-monotonic local stiffening of the network indicated by our model may act as a regulator for cell migration directionality in synthetic hydrogels. Materials and methods Cell culture HT-1080 cells (DSMZ) are cultured in normal growth medium consisting of Dulbecco's modified Eagle's medium (Sigma) supplemented with 10% foetal bovine serum (Sigma). For experiments, 1% penicillin/streptomycin (Sigma) is added to the normal culture medium. HT-1080 LifeAct-TagGFP2 cells (ibidi) are cultured in normal growth medium with addition of 0.75 mg ml⁻¹ Geneticin (Gibco) as selective antibiotic to maintain transgene expression. All cultures are incubated at 37 °C and 5% CO₂. Preparation of the pre-polymer solution Pre-polymer solution is prepared in PBS containing 2-3 mM 20 kDa 4-armed PEG-norbornene (PEG-NB, JenKem Technology), an off-stoichiometric amount of dithiol-containing, MMP-degradable cross-linking peptide (KCGPQGIWGQCK, Iris Biotech), 1 mM CRGDS-peptide (Iris Biotech) and 3 mM of the photo-initiator lithium phenyl-2,4,6-trimethylbenzoylphosphinate (LAP, synthesized as previously described 35,60). To decrease the degradability of the network, parts of the peptide cross-linker are substituted by a 1 kDa PEG chain that contains a thiol group at each end (PEG-dithiol, Sigma). To encapsulate cells in the gel, HT-1080 cells, suspended in PBS, are added to the pre-polymer solution at a final concentration of 6.7 × 10⁵ cells per ml. To tune the gel composition, we can vary the amount of PEG-NB monomer, as well as the amount of cross-linker. The cross-linker ratio r_c is defined according to eqn (3), as the ratio of the concentration of functional groups of the cross-linker (two thiol groups in each cross-linker) to the concentration of functional groups of the PEG-NB monomer (4 norbornene groups on each monomer) in the pre-polymer solution. Preparation of freely swollen hydrogel Small amounts of pre-polymer solution and air are alternately aspirated with a pipette and injected into a silicone tubing (Tygon) with an inner diameter of 1.6 mm to form small gel slabs. The tube is illuminated with a collimated 365 nm LED light source (Rapp OptoElectronic) of 10 mW cm⁻² for 30 s. The polymerized gels are pushed out of the tube into normal growth media by air pressure. They float in cell culture media and are allowed to swell for 2 h under standard conditions (37 °C, 5% CO₂, 100% humidity). To enable long-time microscopic observation of cell migration inside the gel, without displacement of the gel itself, the hydrogel has to be fixed to a surface after swelling. Therefore, the bottom of an uncoated µ-slide angiogenesis (ibidi) is functionalized with PEG-NB. A mixture of 5 mM PEG-NB with 3 mM of the photo-initiator 4-benzoyl-benzylamine hydrochloride (Fluorochem) is illuminated through the slide bottom with 302 nm light (Blak-Ray XX-15M, UVP) for 30 min. After washing the surface with PBS, a mixture of 20 mM PEG-dithiol and 10 mM LAP is illuminated with 365 nm light for 20 s, which yields a thiol-presenting surface. After washing with PBS, 1 µl of 0.5 mM LAP in PBS is added to the functionalized surface and a gel slab is placed on top of the droplet. Illumination with 365 nm for 5 s covalently binds the gel to the surface. The wells are washed with culture media after illumination.
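As a small numerical illustration of the cross-linker ratio r_c defined above (eqn (3) itself is not reproduced in this text, so the ratio of thiol to norbornene group concentrations described in the pre-polymer subsection is assumed), the helper below computes r_c for a hypothetical pre-polymer mix:

```python
def crosslinker_ratio(c_crosslinker_mM, c_pegnb_mM,
                      thiols_per_crosslinker=2, norbornenes_per_monomer=4):
    """Cross-linker ratio r_c: thiol groups of the cross-linker divided by
    norbornene groups of the 4-armed PEG-NB monomer (cf. eqn (3))."""
    return (thiols_per_crosslinker * c_crosslinker_mM) / (
        norbornenes_per_monomer * c_pegnb_mM)

# Hypothetical example: 2 mM PEG-NB combined with 2.2 mM dithiol cross-linker
print(round(crosslinker_ratio(2.2, 2.0), 3))  # -> 0.55, within the range used above
```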
High resolution microscopy To retrieve high resolution images of HT-1080 cells embedded in synthetic hydrogel, gel slabs are prepared as described in the Section "Preparation of freely swollen hydrogel" with one exception. Wild type HT-1080 cells are substituted by HT-1080 LifeAct-TagGFP2 to visualize the actin structure of the cells. A Zeiss Cell Observer SD equipped with a Zeiss Plan Apochromat 63× oil objective is used for spinning disc confocal microscopy. While imaging, samples are kept at 37 °C and 5% CO₂ atmosphere. z-Stacks with a distance of 1 µm are recorded over the whole cell body height and projected with the image processing program ImageJ (ImageJ 1.50b) to a single image using the Max projection option. Preparation of hydrogel microstructures inside channel slides To microstructure hydrogels in confinement, pre-polymer solution containing HT-1080 cells is injected into the channels of a µ-slide VI 0.4 uncoated (ibidi) and illuminated at 10 mW cm⁻² for 20 s with collimated 365 nm light through a custom-made chrome mask (structures: 400 µm strip width, 600 µm spacing, 5 mm strip length, channel height 400 µm). After polymerization, the channels are washed with culture medium and incubated under standard conditions (37 °C, 5% CO₂, 100% humidity). Particle image velocimetry analysis to visualize the swelling behaviour Hydrogel strips are polymerized as described in the Section "Preparation of hydrogel microstructures inside channel slides". However, in addition, fluorescent latex beads with a diameter of 1.1 µm (Sigma) are added to the pre-polymer solution at a final concentration of 9 × 10⁸ beads per ml. Directly after illumination, slides are mounted on a Nikon Eclipse Ti-E inverted microscope. Channels are washed with PBS and time-lapse imaging with 2 min interval for 3 h is started directly afterwards. Particle image velocimetry analysis (PIV) of the data is performed with the MatPIV toolbox for MatLab (J Kristian Sveen: http://folk.uio.no/jks/matpiv/, GNU general public license) with a slightly customized script. Changes in the script include a smallest interrogation window size of 64 × 64 pixels with a 50% overlap, a filtering process with signal-to-noise ratio filter, a global histogram operator and a local filter. All vectors which are removed by the filtering process are replaced by a linear interpolation from the neighbouring vectors if at least 5 surrounding vectors remain. Setting this minimal limit ensures a localization of the vector field inside the gel strip and prevents the propagation of the field beyond the strip edges. Measurement of the swelling strain Hydrogel microstructures of various composition are prepared as described in the paragraph "Preparation of hydrogel microstructures inside channel slides". Completely swollen gels with embedded cells are imaged 3 h after polymerization on an Olympus CKX41 inverted microscope equipped with a gas incubation and heating system (ibidi) to maintain standard incubation conditions while imaging. To determine the swelling strain, γ_s, the strip width in the middle of the longitudinal section of the structure is measured with ImageJ. The swelling strain is defined by comparing the structure width after swelling (W_f) with the initial structure width (W_0), according to eqn (1). The initial structure width after polymerization is 400 µm. Results are displayed as mean value with standard deviation.
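The swelling strain itself is a one-line computation. Since eqn (1) is not reproduced in this text, the sketch below assumes the conventional definition γ_s = (W_f − W_0)/W_0, which is consistent with the reported strain range of roughly 0.4-1.4; the final widths in the example are hypothetical.

```python
def swelling_strain(w_final_um, w_initial_um=400.0):
    """Swelling strain gamma_s = (W_f - W_0) / W_0, widths in micrometres;
    the initial strip width after polymerization is 400 um."""
    return (w_final_um - w_initial_um) / w_initial_um

# Hypothetical strip widths measured after 3 h of swelling
for w_f in (560.0, 800.0, 960.0):
    print(f"W_f = {w_f:.0f} um -> gamma_s = {swelling_strain(w_f):.2f}")
```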
Migration studies of HT-1080 cells inside gels Hydrogel strips or openly swollen gel slabs are prepared as described in the paragraph "Preparation of hydrogel microstructures inside channel slides" or "Preparation of freely swollen hydrogel", respectively. The strips are imaged on an Olympus CKX41 inverted microscope and a Nikon Eclipse Ti-E inverted microscope, respectively. Both are equipped with a gas incubation and heating system (ibidi) to maintain standard incubation conditions. The medium in the reservoirs of the µ-slide VI 0.4 is overlaid with Anti-Evaporation Oil (ibidi) as described in the slide instructions, to avoid medium evaporation. Time-lapse series with 10 min intervals for 24 h are recorded starting 3 h after polymerization. Cells are tracked with the ImageJ plug-in 'Manual Tracking'. Cells which migrated a distance smaller than 40 µm are considered non-migrating. For each condition, 3-5 biological replicates are performed with 2-3 positions each. 25 cells per position are randomly selected for analysis. For the hydrogel slabs, a static structure in the gel is tracked for every position to correct cell migration tracks, due to slight overall movements of the gel. Analysis of the cell migration behaviour in isotropic hydrogels To verify the theoretical model, an analysis of the basic migration parameters is performed for the experimental and simulated data of cells moving in an isotropic network. The autocorrelation function of the cell migration velocity (VACF) is given by VACF(τ) = ⟨v(t + τ) · v(t)⟩, where v(t) are the velocity vectors of a cell at times t, and the brackets indicate a time average at fixed lag time τ. The Mean Squared Displacement (MSD) is calculated using MSD(τ) = ⟨|r(t + τ) − r(t)|²⟩, where the position vectors r(t) are the position of the cell at time t, and the brackets indicate a time average at fixed lag time τ. Quantification of the anisotropic cell migration For each tracked cell the coordinates at every time point are recorded (x_t, y_t). The direction of uniaxial strain in the hydrogel strips, as well as in our simulations, is parallel to the y-direction. The cumulated covered distance perpendicular and parallel to the strain (D⊥ and D∥, respectively) is calculated separately for the x and y direction according to D⊥ = Σ_t |x_{t+1} − x_t| and D∥ = Σ_t |y_{t+1} − y_t|. The Anisotropic Migration Index (AMI) is defined by comparing the covered distances perpendicular and parallel to the strain direction according to eqn (2). An AMI of 0 indicates isotropic migration and a value of 1 is reached for cell movement completely parallel to the deformation. For simulated data, the displacement parallel to the strain is corrected by the applied strain γ (eqn (7)) before the AMI is calculated. Without correcting for the changed node-to-node distance upon deformation of the modelled system, the retrieved AMI would be biased towards positive values: D∥,simulated = D∥,measured/(γ + 1). Furthermore, the calculated AMI for the simulated data is normalized to the maximal AMI that can be reached in the model. Because the lattice axis is not aligned with the strain direction, a simulated cell that moves from node to node can never only migrate parallel to the strain, but always has a displacement perpendicular to the strain as well. Therefore, an ideal AMI of 1 cannot be reached. To quantitatively compare experimental and simulated AMI, the simulated AMI is normalized with the maximal AMI possible for simulated data, AMI_norm = AMI/AMI_max. An AMI_max of 0.577 is calculated for an angle of 15° between the strain direction and the lattice axis that is best aligned with the external strain.
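The trajectory statistics defined in the last two subsections can be computed with a few lines of NumPy. The functions below follow the definitions given in the text (time-averaged MSD and VACF, cumulated per-axis distances, the AMI of eqn (2), and the (γ + 1) correction of eqn (7)); the example track at the end is hypothetical.

```python
import numpy as np

def msd(positions, dt):
    """Mean squared displacement <|r(t + tau) - r(t)|^2>, time-averaged at fixed lag tau."""
    r = np.asarray(positions, dtype=float)
    lags = np.arange(1, len(r))
    out = [np.mean(np.sum((r[k:] - r[:-k]) ** 2, axis=1)) for k in lags]
    return lags * dt, np.array(out)

def vacf(positions, dt):
    """Velocity autocorrelation <v(t + tau) . v(t)>, time-averaged at fixed lag tau."""
    v = np.diff(np.asarray(positions, dtype=float), axis=0) / dt
    lags = np.arange(0, len(v))
    out = [np.mean(np.sum(v[k:] * v[:len(v) - k], axis=1)) for k in lags]
    return lags * dt, np.array(out)

def ami(positions, strain=None):
    """Anisotropic Migration Index (D_par - D_perp) / (D_par + D_perp), strain axis along y.
    For simulated tracks, D_par is first divided by (strain + 1) as in eqn (7)."""
    r = np.asarray(positions, dtype=float)
    steps = np.abs(np.diff(r, axis=0))
    d_perp, d_par = steps[:, 0].sum(), steps[:, 1].sum()
    if strain is not None:
        d_par /= (strain + 1.0)
    return (d_par - d_perp) / (d_par + d_perp)

# Hypothetical 2D track (um) sampled every 10 min
track = [(0, 0), (5, 12), (8, 30), (6, 44), (12, 60)]
lag_times, msd_values = msd(track, dt=10.0)
print("AMI:", round(ami(track), 3))
print("MSD at first lag:", msd_values[0])
```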
Theoretical modelling and local stiffness calculations A description of the theoretical model can be found in the ESI, † together with the calculation of the local node stiffness in the simulated network. Conflicts of interest There are no conflicts to declare.
v3-fos-license
2020-10-24T15:35:27.370Z
2020-10-09T00:00:00.000
234358174
{ "extfieldsofstudy": [ "Geology" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://se.copernicus.org/articles/12/119/2021/se-12-119-2021.pdf", "pdf_hash": "15d14fd9f1cb3eca074c4b1fb63c9ebb9cfacd8d", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2750", "s2fieldsofstudy": [ "Geology" ], "sha1": "15d14fd9f1cb3eca074c4b1fb63c9ebb9cfacd8d", "year": 2021 }
pes2o/s2orc
Reproducing pyroclastic density current deposits of the AD 79 eruption of the Somma-Vesuvius volcano using the box-model approach In this study we use PyBox, a new numerical implementation of the box-model approach, to reproduce pyroclastic density current (PDC) deposits from the Somma-Vesuvius volcano (Italy). Our simplified model assumes inertial flow front dynamics and mass deposition equations, and axisymmetric conditions inside circular sectors. Tephra volume and density, and the Total Grain Size Distribution of EU3pf and EU4b/c, two well-studied PDC units from different phases of the AD 79 Pompeii eruption of Somma-Vesuvius (Italy), are used as input parameters. Such units correspond to the deposits from variably dilute, turbulent PDCs. We perform a quantitative comparison and uncertainty quantification of numerical model outputs with respect to the observed data of unit thickness, inundation areas, and grain size distribution as a function of the radial distance to the source. The simulations that we performed with PyBox were done considering: (i) polydisperse conditions, given by the total grain size distribution of the deposit, or monodisperse conditions, given by the mean Sauter diameter of the deposit; (ii) round-angle axisymmetrical collapses or collapses divided into two circular sectors. We obtain a range of plausible initial volume concentrations of solid particles from 2.5% to 6%, depending on the unit and the circular sector. Optimal modelling results of flow extent and deposit thickness are reached for the EU4b/c unit in a polydisperse and sectorialized situation, indicating that using a total grain size distribution and particle densities as close as possible to the real conditions significantly improves the performance of the PyBox code. The study findings suggest that the simplified box-model approach adopted has promising applications in constraining the plausible range of the input parameters of more computationally expensive models. Although 1D kinetic approaches cannot capture the multidimensional features of the dynamics, they represent an important tool for several purposes. Firstly, it is practical to rely on simplified and fast numerical codes, which can be run 10⁴-10⁶ times without an excessive computational expense, in order to produce statistically robust probabilistic hazard maps (Neri et al., 2015; Bevilacqua et al., 2017; Aravena et al., 2020). Furthermore, since 2D or 3D multiphase models require high computational times, often on the order of days or weeks for a single simulation, it is convenient to use simplified approaches, such as the box model, in order to constrain the input space (Ogburn and Calder, 2017; Bevilacqua et al., 2019a). Finally, extensively testing the numerical models in a statistical framework, and evaluating the difference between model outputs and actual observations, also allows estimating the effect of the various modelling assumptions under uncertain input conditions (e.g., Patra et al., 2018, 2020; Bevilacqua et al., 2019b). Model uncertainty is probably the most difficult class of epistemic uncertainty to evaluate robustly, but it is indeed a potentially large component of the total uncertainty affecting PDC inundation forecasts.
In this paper, we test the suitability of the box-model approach (described in section 2.1 and in Appendix A), as implemented numerically in the PyBox code (Biagioli et al., 2019), by quantifying its performance when reproducing some key features of the remarkably well-known PDC deposits from one of the best studied and documented volcanic events, the AD 79 eruption of the Somma-Vesuvius (SV) volcano (detailed in section 2.2). It is indeed accepted that the box model is able to describe the main features of large-volume (VEI 6 to 8; Newhall and Self, 1982), low-aspect-ratio ignimbrites, whose dynamics is dominantly inertial (Dade and Huppert, 1996). However, the model has never been tested against PDCs generated by VEI 5 Plinian eruptions. The procedure involves the calculation of the difference between model output and field data in terms of i) thickness profile, ii) areal invasion overlap and iii) grain size (GS) volume fractions at various distances from the source. Similar approaches have been adopted in the literature (Dade and Huppert, 1996; Kelfoun, 2011; Charbonnier et al., 2015). Tierz et al. (2016a, b) and Sandri et al. (2018) proposed a quantification of the uncertainty derived from the energy cone approach that relies on the comparison between the invaded area and maximum runout of model output and field data. Our approach aims at a more detailed comparison of physical parameters (especially thickness and grain sizes, section 5), which allows a further investigation of the strengths and limitations of the PyBox model when used to simulate different PDC types (section 6). The PyBox code is a numerical implementation of the box-model integral formulation for axisymmetric gravity-driven particle currents, based on the pioneering work of Huppert and Simpson (1980). The theory is detailed in Bonnecaze et al. (1995) and Hallworth et al. (1998). The volume extent of gravity currents is approximated by an ideal geometric element, called "box", which preserves its volume and geometric shape class, and only changes its height/base ratio through time (see Figure 1). The box does not rotate or shear, but only stretches out as the flow progresses. In this study the geometric shape of the box is assumed to be a cylinder, or cylindrical sector, i.e. we assume axisymmetric conditions. The model describes the propagation of a turbulent particle-laden gravity current, i.e. a homogeneous fluid with suspended particles. Inertial effects are assumed to have a leading role with respect to viscous forces and particle-particle interactions.
Thus, the particle sedimentation is modelled and modifies the current inertia during propagation. In this study we assume the classical dam-break configuration, in which a column of fluid instantaneously collapses and propagates, under gravity, in a surrounding atmosphere with uniform density ρ_atm. Other authors (Bonnecaze et al., 1995; Dade and Huppert, 1995a, b, 1996) have instead considered gravity currents produced by the constant-flux release of dense suspension from a source. Our approach differs from considering a constant stress acting on the basal area (Dade and Huppert, 1998). Constant-stress dynamics have been further explored in the literature, and they can lead to different equations if the basal area grows linearly or with the square of the radius (Kelfoun et al., 2009; Kelfoun, 2011; Ogburn and Calder, 2017; Aspinall et al., 2019). PDCs are driven by their density excess with respect to the surrounding air: the density of the current ρ_c is defined as the sum of the density of an interstitial gas, ρ_g, and the bulk densities of the pyroclasts carried by the flow, (ρ_s,i)_{i=1,…,N}. In this study we assume ρ_atm ≠ ρ_g, i.e. the interstitial gas is hotter than the surrounding atmosphere, differently from Neri et al. (2015) and Bevilacqua et al. (2017). The code allows ρ_atm > ρ_g, but thermal properties remain constant for the duration of the flow, and in this study we assumed ρ_atm = ρ_g. The thermodynamics of cooling effects are explored in Bursik and Woods (1996). A proper way to express the density contrast between the current and the ambient fluid is given by the reduced gravity g′, which can be rewritten in terms of the densities and the volume fractions described above (see Biagioli et al., 2019). That said, we make some additional simplifying hypotheses. First of all, we assume that the mixture flow regime is incompressible and inviscid, since we assume that the dynamics of the current is dominated by the balance between inertial and buoyancy forces. The assumption of incompressibility implies that the initial volume V_0 remains constant. Moreover, we assume that, within the current, the vertical mixing due to turbulence produces a vertically uniform distribution of particles. The particles are assumed to sediment out of the current at a rate proportional to their constant terminal (or settling) velocity (w_s,i)_{i=1,…,N}. Once deposited, they cannot be re-entrained by the flow; the converse is explored in Fauria et al. (2016). Finally, surface effects of the ambient fluid are neglected. Under these hypotheses, the box model for particle-laden gravity currents states that the velocity of the current front (u) is related to the average depth of the current (h) by the von Kármán equation for density currents, u = Fr (g′h)^{1/2}, where Fr is the Froude number, a dimensionless number expressing the ratio between inertial and buoyancy forces (Benjamin, 1968; Huppert and Simpson, 1980), and g′ is the reduced gravity. In addition, we assume that particles can settle to the ground, and this process changes the solid particle fractions (ε_i)_{i=1,…,N}. Our model consists of a set of ordinary differential equations that provide the time evolution of the flow front distance from the source, l(t), together with the current height h(t) and the solid particle volume fractions (ε_i)_{i=1,…,N}, with N being the number of particle classes considered; the volume fractions refer to a constant volume of the mixture flow, not reduced by the deposition. The box model for axisymmetric currents thus reads as Eq. (1). By solving these equations, we computed the amount of mass lost by sedimentation, per unit area, per time step, for each particle class. Hence, the thickness profile of the i-th particle class is the ratio of the i-th deposited mass to the product of the i-th solid density and the packing fraction α measured in the deposit. More details on the numerical solver are provided in Appendix A.
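As an illustration of the formulation just described, the following is a minimal, monodisperse Python sketch of an axisymmetric box-model front integrator. It is not the PyBox implementation: the explicit time stepping, the stopping criterion, the reduced-gravity definition and the exact equations assumed (dl/dt = Fr·sqrt(g′h), volume conservation V_0 = π l² h, dε/dt = −w_s ε/h) are stated assumptions consistent with the description above, and all names are illustrative.

import math

def box_model_axisymmetric(V0, eps0, rho_s, rho_g, rho_atm, w_s,
                           Fr=1.18, g=9.81, eps_stop_ratio=1e-3,
                           dt=0.5, l0=100.0, t_max=3.6e4):
    """Monodisperse, axisymmetric box model for a particle-laden gravity current.

    V0   : total collapsing volume of the gas-particle mixture [m^3]
    eps0 : initial solid volume fraction [-]
    rho_s, rho_g, rho_atm : particle, interstitial gas and ambient densities [kg/m^3]
    w_s  : particle settling velocity [m/s]
    Returns the front position history and, per step, the mass deposited per unit area
    tagged with the front radius (a crude proxy for a thickness profile).
    """
    l, eps = l0, eps0
    h = V0 / (math.pi * l**2)                 # volume conservation: V0 = pi * l^2 * h
    t = 0.0
    front = [(t, l)]
    deposit = []                              # (front radius, mass per unit area in this step)
    while eps > eps_stop_ratio * eps0 and t < t_max:
        rho_c = eps * rho_s + (1.0 - eps) * rho_g      # bulk density of the current
        g_prime = g * (rho_c - rho_atm) / rho_atm      # one common reduced-gravity definition
        if g_prime <= 0.0:                             # buoyancy reversal: lift-off
            break
        u = Fr * math.sqrt(g_prime * h)                # von Karman front condition
        d_eps = -w_s * eps / h * dt                    # sedimentation: d(eps)/dt = -w_s*eps/h
        dm_per_area = w_s * eps * rho_s * dt           # solid mass settled per unit area
        deposit.append((l, dm_per_area))
        eps += d_eps
        l += u * dt
        h = V0 / (math.pi * l**2)                      # the box stretches, V0 is conserved
        t += dt
        front.append((t, l))
    return front, deposit

Binning the deposited mass per unit area into radial annuli and dividing by rho_s times the packing fraction α would give a crude thickness profile comparable to the ones discussed in section 5.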
In the calculation of the region invaded by a PDC, first we calculate the maximum flow runout over a flat topography, i.e. the distance at which ρ_c = ρ_atm. The flow stops propagating when the solid fraction becomes lower than a critical value, and the remaining mixture of gas and particles lifts off, possibly generating a phoenix cloud if hot gas is assumed. In the case of monodisperse systems there are analytical solutions for the maximum flow runout (Bonnecaze et al., 1995; Esposti Ongaro et al., 2016; Bevilacqua, 2019). Then, once a vent location is set, we assess the capability of topographic reliefs to block the current. In particular, the invasion areas are obtained by using the so-called energy-conoid model, based on the assumption of a nonlinear, monotonic decay of flow kinetic energy with distance (Neri et al., 2015; Bevilacqua, 2016; Esposti Ongaro et al., 2016; Bevilacqua et al., 2017; Aspinall et al., 2019; Aravena et al., 2020). In more detail, first we determine the maximum height h_max of an obstacle the flow can overcome. Then we compare the kinetic energy of the current front and the potential energy associated with the obstacle top. In this approach we neglect returning waves. When investigating the current flow on complex topographies, we finally consider that the flow may start from a positive elevation or encounter upward slopes after downward slopes. In this case, we compare h_max at a given distance from the vent and the difference in level experienced by the current between the previous and the present sampled positions. In the PyBox code, the main input parameters are summarized by: a) the total collapsing volume (expressed in terms of the dimensions of the initial cylinder/rectangle, with height = h_0 and radius/base = r_0); b) the initial concentration of solid particles, subdivided (for polydisperse simulations) into single-particle volumetric fractions (ε_0), with respect to the gas; c) the density of the single particles ρ_s; d) ambient air density (ρ_atm = 1.12 kg/m^3) and gravity current temperature; e) Froude number of the flow, experimentally measured by Esposti Ongaro et al. (2016) as Fr = 1.18; g) gravity acceleration (g = 9.81 m/s^2). With respect to points b) and c), more details are provided in section 3.2. The PDC deposits of the EU3pf unit record the phase of total column collapse closing the Plinian phase of the eruption. These are ca. 1 m thick on average, radially dispersed (up to 10 km from the vent area) and moderately controlled by local topography, resulting in a complex vertical and lateral facies variability (Gurioli, 1999; Gurioli et al., 1999) possibly related to local variations in turbulence, concentration and stratification of the current. Median clast size tends to diminish gradually from proximal to distal locations, and the coarsest deposits (generally present as breccia lenses in the EU3pf sequence) are located within paleodepressions. Gurioli et al. (1999) pointed out that: i) in the southern part of the SV area the relatively smooth paleo-topography controlled only locally the overall deposition of this PDC; ii) in the eastern sector of SV, the interaction of the current with the ridge representing the remnants of the old Mount Somma caldera (Fig.
2a) possibly triggered a general increase of the current turbulence and velocity and a more efficient air ingestion, which resulted in the local deposition of a thinly stratified sequence; iii) in the western sector of SV, the presence of a breach in the caldera wall and of an important break in slope in the area of Piano delle Ginestre (Fig. 2a) possibly increased deposition from the PDC, producing a large, several metres thick depositional fan toward the sea-facing sectors (like in Herculaneum - Fig. 2a); iv) in the northern sector of SV, the deeply eroded paleotopography (with many valleys cut on steep slopes) favoured the development within the whole current of a fast-moving, dense basal underflow, able to segregate the coarse, lithic material and to deposit thick lobes in the main valleys, and of a slower and more dilute portion travelling and depositing thin, stratified beds also on morphological highs. The AD 79 EU4 marks the reappraisal of the eruption after the end of the Plinian phase and was related by Cioni et al. (1999) to the onset of the caldera collapse. This complex unit has been subdivided into three distinct layers (Cioni et al., 1992): a thin basal fallout layer ("EU4a"), a PDC deposit derived from the collapse of the short-lived column that emplaced the EU4a layer ("EU4b"), and the products of the co-ignimbritic plume mainly derived by ash elutriation from the current depositing EU4b ("EU4c"). Gurioli (1999) illustrates how the EU4 unit is furthermore complicated, since it actually presents a second fallout bed ("EU4a2") interlayered within the level "EU4b". This fallout bed can be clearly recognized only in distal sections of the southern sector, while in the north and in the west it is represented by a discontinuous level of ballistic ejecta. Level "a2" divides level "b" into two parts, which are approximately 2/3 (the lower one) and 1/3 (the upper one) of the total thickness of level "b" (Gurioli, 1999). The runout of the EU4b PDC is one of the largest observed for the SV PDCs; it was maximum toward the south (up to about 20 km from the vent area; Gurioli et al., 2010). This unit has been extensively studied by Gurioli (1999), who highlighted that the high shear rate exerted by the EU4b is clearly evidenced by the formation of "traction carpets" and local erosion of the pumice-bearing layer of the underlying EU4a. The EU4b deposit can be interpreted as derived from a short-lived, sustained, unsteady, density-stratified current. From a sedimentological point of view, EU4b shows clear vertical grain size and textural variations, from cross-bedded, fine lapilli to coarse ash laminae at the base up to a massive, fine ash-bearing, poorly sorted, matrix-supported bed at the top (Gurioli, 1999). During deposition of EU4b, ash elutriated from the current formed a convective plume dispersed by the prevailing winds in a south-eastern direction, which deposited EU4c mainly by fallout. The clear field association of these two deposits (indicated as EU4b/c) gives here the uncommon possibility to evaluate with larger accuracy two of the most important PDC source parameters: erupted volume and TGSD. Model input parameters and field data for comparison The main properties of the EU3pf and EU4b/c units, i.e. thicknesses/total volume, maximum runout and total grain size distribution (TGSD), have been calculated in Cioni et al. (2020) and partially processed to fit the PyBox input requirements (see sections 3.1.1 and 3.1.3). Densities of single grain sizes (Barberi et al., 1989; section 3.1.2) and emplacement temperatures of PDCs (T = 600 K for both EU3pf and EU4; Cioni et al., 2004) are derived from the literature.
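Before turning to the field data, the topographic blocking rule summarized in section 2.1 (the front passes an obstacle only if its kinetic energy exceeds the potential energy of the obstacle top) can be sketched as follows. The u^2/(2g) form of h_max, the profile format and the function name are illustrative assumptions, not the energy-conoid routine implemented in PyBox.

def truncate_runout(profile, front_speed, flow_start_elev, g=9.81):
    """Energy-conoid style truncation along one radial topographic profile.

    profile         : list of (distance_from_vent_m, ground_elevation_m), with increasing distance
    front_speed     : callable distance -> front velocity u(d) [m/s], e.g. from the box model
    flow_start_elev : elevation of the flow starting point [m]
    Returns the distance at which the flow is stopped by topography,
    or the last profile distance if it is never blocked.
    """
    prev_elev = flow_start_elev
    for dist, elev in profile:
        u = front_speed(dist)
        h_max = u**2 / (2.0 * g)     # max obstacle height the front can climb (energy argument)
        if elev - prev_elev > h_max:  # upward step larger than what the front can overcome
            return dist
        prev_elev = elev              # compare with the previously sampled position
    return profile[-1][0]

In practice, the front velocity would be taken from the box-model solution along each radial direction, and the check would be repeated over many azimuths to build the inundation map.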
In summary, the total volume, the TGSD, the densities and the temperature obtained from the field data are used as the main inputs of PyBox. Thus, the model produces several outputs: (i) mean unit thickness as a function of the radial distance from the source, (ii) inundated area, (iii) grain size distribution as a function of the radial distance from the source. All these outputs are finally compared to the corresponding field data. The initial volumetric fraction ε_0 of the solid particles over the gas is the main tuning parameter that is explored to fit the outputs with the field data. This procedure is repeated under monodisperse and polydisperse conditions, and by performing round-angle axisymmetrical collapses or sectorialized collapses, i.e. collapses divided into two circular sectors with different input parameters. Thickness, maximum runout and volumes Cioni et al. (2020) recently revised and elaborated a large amount of field data from EU3pf and EU4b/c (106 and 102 stratigraphic sections, respectively), tracing detailed isopach maps and defining the maximum runout distance (the ideal 0 m isopach) and the related uncertainty. Given the objective difficulty of tracing the exact position of a 0 m isopach for the deposit of a past eruption, Cioni et al. (2020) proposed to define three different outlines of PDC maximum runout, namely the "5th percentile", "50th percentile" and "95th percentile" (called Maximum Runout Lines, MRLs), based on the uncertainty associated with each segment of the proposed 0 m isopach. The MRLs of EU3pf and EU4b are shown in Figure 3c and 3d, respectively. Cioni et al. (2020) also calculated the volumes of both EU3pf and EU4b/c, using these maps to derive a digital elevation model of the deposits with the triangular irregular network (TIN) method (Lee and Schachter, 1980). In this study, we considered the volume estimations (Table 1) related to the MRL 50, i.e. the 50th percentile of the maximum runout distance. Given the asymmetric shape of unit EU4b/c and, partially, of unit EU3pf, we also calculated the volumes by dividing each unit into two circular sectors: N and S for EU3pf, NW and SE for EU4b/c. These subdivisions have also been used to calculate the related TGSDs (see section 3.1.3) and to perform sectorialized simulations (see section 4). Density data In order to provide density values for each GS, we used the mass fractions of the different components (juveniles, lithics and crystals; see Table S1 of the Supporting Information) calculated by Gurioli (1999). Such values were associated with the averaged density measurements for these three components presented in Barberi et al. (1989), through which we extrapolated the weighted (with respect to mass fraction) mean density of each grain size class for both the EU3pf and EU4b/c units (Table 2). Table 2. Calculated mean densities for each grain size for both the EU3pf and EU4b/c units. Grain size data: total grain size distribution (TGSD) and mean Sauter diameter (MSD) TGSD estimations are necessary to perform simulations under polydisperse conditions. The present version of PyBox takes as input the volumetric TGSD (i.e. in terms of volumetric percentages), while the TGSD data from Cioni et al. (2020) are in weight percentages. These latter values have therefore been converted into volumetric percentages by considering the abovementioned densities (section 3.1.2). Figure 4 displays the volumetric TGSDs employed for EU3pf (Total, N and S) and EU4b/c (Total, NW and SE). In the simulations under monodisperse conditions, we used the value of the mean Sauter diameter (MSD) of the volumetric TGSD (e.g., Neri et al., 2015). According to Fan and Zhu (1998), the Sauter diameter of each particle class is also called d_32 (see also Breard et al., 2018), and it is the diameter of a sphere having the same ratio of external surface to volume as the particle, which is given by:
d_32 = 6V/S = d_v^3/d_s^2 (Eq. 4), where V is the particle volume, S is the particle surface, d_v is the diameter of a sphere having the same volume as the particle and d_s is the diameter of a sphere having the same external surface as the particle. In order to obtain a value for the MSD instead, given a deposit sample divided into N grain size classes, we initially calculated the number of particles of each grain size i = 1,…,N, that is n_i = V_i / ((4/3) π r_i^3) (Eq. 5), where V_i is the cumulative volume of the i-th grain size class, and r_i is the radius of the i-th grain size. The mean MSD is finally derived as MSD = Σ_i n_i d_i^3 / Σ_j n_j d_j^2 (Eq. 6), where d_i and d_j are the diameters of, respectively, the i-th and j-th grain sizes. Table 3 summarizes the calculated MSDs for the studied units (in Φ), along with the corresponding density values (obtained by interpolating those in Table 2). Comparison between field data and simulation outputs Since the PyBox code assumes axisymmetric conditions, the thickness outputs are equal along all the radial directions of the collapse, and only vary as a function of the distance to the source. These output data were compared with the mean radial profiles of unit thickness (for both EU3pf and EU4b/c) as derived from the digital models of the deposits in Cioni et al. (2020). For building the radial profiles, the average thickness was estimated over concentric circles drawn with a 100-m step of distance. The radial thickness profiles were drawn starting from a distance of 3 km from the vent, as no thickness data are available for sites closer than 3 km. Due to the lack of reliable data, we have moreover excluded from our analyses the portions of the circles located in marine areas. In order to describe the variation range of the thicknesses of the deposits, we provide the minimum and maximum thicknesses along each circle in Appendix B (Fig. A1). Concerning the inundation area, the methodology adopted is similar to the one used by Tierz et al. (2016b) and relies on the approach described by Fawcett (2006) and implemented by Cepeda et al. (2010) for landslide deposit back-analysis. This method is based on the quantification of the areal overlap between the measured deposit (true classes) and the modelled deposit (hypothesized classes) (Figure 5). In particular, we quantify: a) the areal percentage of the model intersecting the actual deposit (true positive - TP); b) the areal percentage of the model overestimating the actual deposit (false positive - FP); c) the percentage of the model underestimating the actual deposit (true negative - TN). The False Negative case (neither deposit nor simulation) has obviously not been calculated. In statistical literature, the True Positive value is also called the Jaccard Index of similarity (Tierz et al., 2016b; Patra et al., 2020). While the TP/TN/FP approach, and in general the Jaccard Index, focus on the areal overlap, other metrics can specifically focus on the distance between the boundaries of the inundated areas, i.e. the Hausdorff distance, detecting and comparing channelized features in the deposit (Aravena et al., 2020). However, PyBox is not specifically aimed at the replication of such features, and we focus on the areal overlap properties. The scarcity of stratigraphic sections in the N sector (for the EU3pf unit) and the NW sector (for the EU4b/c unit) negatively affects the availability of comparisons with respect to volume fractions, which are forcedly limited to sections at 4 km of distance from the hypothetical vent area, most of which have been collected at the bottom of paleovalleys. Moreover, for the EU3pf unit, even in the S sector the available samples are mostly concentrated in the area of Herculaneum (5 samples).
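As a compact illustration of the grain-size bookkeeping of sections 3.1.2-3.1.3 and Eqs. (4)-(6), the sketch below converts a weight-percent TGSD to volume percent with per-class densities and then computes the mean Sauter diameter. The Φ classes, fractions and densities are made-up illustrative values, and the helper names are assumptions rather than PyBox routines.

import math

def phi_to_diameter_m(phi):
    """Krumbein phi scale: d [mm] = 2**(-phi); returned here in metres."""
    return 2.0 ** (-phi) * 1e-3

def wt_to_vol_fractions(wt_fracs, densities):
    """Convert weight fractions to volume fractions using per-class densities."""
    vols = [w / rho for w, rho in zip(wt_fracs, densities)]
    total = sum(vols)
    return [v / total for v in vols]

def mean_sauter_diameter(vol_fracs, phis):
    """Mean Sauter diameter: sum(n_i d_i^3) / sum(n_j d_j^2),
    with n_i = V_i / ((pi/6) d_i^3), the number of equivalent spheres per class."""
    diams = [phi_to_diameter_m(p) for p in phis]
    n = [v / (math.pi / 6.0 * d**3) for v, d in zip(vol_fracs, diams)]
    num = sum(ni * d**3 for ni, d in zip(n, diams))
    den = sum(ni * d**2 for ni, d in zip(n, diams))
    return num / den                      # in metres

# illustrative 4-class example (phi classes with made-up fractions and densities)
phis = [-2, 0, 2, 4]
wt = [0.10, 0.30, 0.40, 0.20]
rho = [2500.0, 1800.0, 1400.0, 2600.0]
vol = wt_to_vol_fractions(wt, rho)
print("MSD [m]:", mean_sauter_diameter(vol, phis))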
Results The results of 6 simulations (4 for the EU3pf unit and 2 for the EU4b/c unit) are discussed here (see Table 4 for the main input parameters). These simulations are the result of an extensive investigation in which a wide range of different values of ε_0 has been tested, following a trial-and-error procedure aimed at reproducing more closely the thickness profile of the deposit. In particular, we performed several simulations varying ε_0 between 0.5% and 6% (for EU3pf) and between 0.1% and 5% (for EU4b/c). The values in Table 4 represent the optimal combinations. We adopted a simplified version of the paleotopography prior to the AD 79 eruption, starting from the 10-m resolution Digital Elevation Model of Tarquini et al. (2007) and from the reconstruction given in Cioni et al. (1999) and Santacroce et al. (2003) (Fig. 8). In particular, the present Gran Cono edifice and part of the caldera morphology have been replaced with a flat area, and a simplified reconstruction of the southern part of the Mount Somma scarp has been inserted. However, simulations performed using the unmodified DEM did not produce major differences. In the EU3pf case study, we performed both axisymmetrical simulations over a round angle (given the quasi-circular shape of the deposit) and axisymmetrical-sectorialized simulations, to investigate possible sheltering effects of the Mount Somma scarp (Fig. 2a). In particular, we performed two distinct column collapses, one to the N and the other to the S, each of which has a collapsed volume corresponding to the actual deposit volume in that sector. In the EU4b/c case study, we performed only axisymmetrical-sectorialized simulations, to reproduce more closely the dynamics of the related collapse, as indicated by the different dispersal in the NW and SE sectors of the PDC deposit. In particular, we performed two distinct collapses for the same simulation, one to the NW and the other to the SE. In summary, we provide: a) the thickness comparison between the deposit and the modelled results (Figure 6) and between simulations done with different initial volumetric fractions of solid particles (ε_0 - Figure 7); b) the inundation areas, including the quantitative matching of simulations and actual deposit (Figure 8 and Table 5); c) the grain size distribution comparison between deposit and modelled values, i.e. the volume fractions of ash vs lapilli (Figure 9) and of all the grain size classes (Figure 10). General considerations Testing PyBox with respect to field data is aimed at two main objectives: i) quantifying the degree of reproduction of the real PDC deposits of Plinian eruptions in terms of thickness, inundation area and grain size, and ii) evaluating the reliability of the code when considering different assumptions, i.e. polydisperse vs. monodisperse situations, and 360° axisymmetric conditions vs. division into circular sectors. Before commenting on our results, two main general considerations, in common for both EU3pf and EU4b/c, deserve a special discussion. Runout truncation and non-deposited material We recall that the PyBox code produces the map of the inundated area (Neri et al., 2015; Bevilacqua, 2016) by truncating the runout wherever the kinetic energy of the flow is lower than the potential energy associated with a topographic obstacle (Section 2.1 and Appendix A). In this way, however, the material that lies beyond the truncation is neither redistributed nor considered any more. However, depending on the topography, in our case study this amount of material is not extremely high. For instance, EU4_poly_AS (Table 4), in its SE part, has several truncations due to the intersection of the decay function of kinetic energy with several topographic barriers, i.e. the Apennines to the ENE and the Sorrentina Peninsula to the SE (Figs.
2 and 7). For the whole SE part of the deposit, the topographic barriers are located between 11.85 km and 19.25 km from the vent area, with a mean value of 15 km. If we truncate the PyBox deposit at these three limits, the non-deposited volume is between 3.46×10^6 m^3 (cut at 19.25 km) and 2.3×10^7 m^3 (cut at 11.85 km), with a mean value of 1.27×10^7 m^3 (cut at 15 km). Considering that the volume collapsed to the SE is 1.5×10^8 m^3, the non-deposited volume therefore corresponds to a value between 2% and 15%, with a mean of 8%. The amount of volume effectively "lost" is relatively small, also considering that the total volume of the collapsing mixture is inclusive of the EU4c unit (the co-ignimbritic part). However, further development of the code might consider a strategy to redistribute this non-deposited material (e.g., Aravena et al., 2020). Initial volumetric fraction of solid particles The value of the initial volumetric fraction of solid particles (ε_0) in the PDC represents one of the most uncertain parameters, for which few constraints exist. Recently, Valentine (2020) performed several multiphase simulations using mono- or bidisperse distributions to investigate the initiation of PDCs from collapsing mixtures, and to derive criteria to determine when either a depth-averaged model or a box model is best suited for hazard modelling purposes. The author concluded that, among other factors (e.g. impact speed or the relative proportion of fine to coarse particles), a volumetric concentration of particles of around 1% (slightly lower than those used in this paper) is generally capable of producing a dense underflow and a dilute, faster overriding flow. For such cases, and considering an impacting mixture consisting of at least ca. 50% coarse particles (> 1 cm diameter) relative to fines (< 1 cm diameter), Valentine (2020) suggests that a depth-averaged granular flow model well approximates such PDCs, and could be reasonably used for hazard assessment purposes. For the units studied here, the sedimentological features show clear evidence of the formation of a dense underflow in, respectively, the N part of the Somma-Vesuvius volcano (EU3pf unit; Gurioli et al., 1999) and in correspondence of the urban settlements of Herculaneum and Pompeii (EU4 unit; Cioni et al., 1999; Gurioli et al., 2002). We however think that the employment of a box model is justified at least for the unit EU4b/c, which can be considered intermediate between a dilute, turbulent current and a granular, concentrated current, in the sense of Branney and Kokelaar (2002), but closer to the dilute end-member type. In this view, the box model can be effectively employed to describe the overriding dilute part of units similar to EU4, following a two-layer approach (Kelfoun, 2017; Valentine, 2020). For the box model used here, it should be kept in mind that the variation of the ε_0 value might have an important effect on the simulated deposit thicknesses, as seen in Figure 7. In both units, in fact, the model results for thickness at the beginning of the simulated area (i.e. 3 km from the vent area) vary from ca. 1 to ca. 2 m (for EU3pf) or from ca. 1.2 m to ca. 3.6 m (for EU4b/c) if ε_0 is varied, respectively, from 1.5% to 6% and from 0.5% to 5%. Thickness comparison The first parameter that we compare between the deposit and the modelled results is the thickness variation with the distance to the source, an approach already adopted, for instance, by Dade and Huppert (1996).
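Given the sensitivity to ε_0 just noted (Figure 7), the trial-and-error exploration described above can be automated as a simple grid search over ε_0 that minimizes the misfit between modelled and observed thickness profiles, which is the quantity compared in this section. The sketch below is illustrative only: the RMSE objective, the ε_0 grid and the dummy forward model are assumptions, not the authors' procedure.

import numpy as np

def calibrate_eps0(forward_model, radii_m, observed_thickness_m,
                   eps0_grid=np.linspace(0.005, 0.06, 12)):
    """Pick the initial solid fraction eps0 that best reproduces a thickness profile.

    forward_model : callable eps0 -> modelled thickness array (same length as radii_m)
    Returns (best_eps0, best_rmse).
    """
    best = (None, np.inf)
    for eps0 in eps0_grid:
        model = np.asarray(forward_model(eps0), dtype=float)
        rmse = np.sqrt(np.mean((model - observed_thickness_m) ** 2))
        if rmse < best[1]:
            best = (float(eps0), float(rmse))
    return best

# toy usage with a dummy forward model (thickness decaying with distance, scaled by eps0)
radii = np.arange(3000.0, 10000.0, 100.0)             # 3-10 km, 100 m step
observed = 1.5 * np.exp(-(radii - 3000.0) / 4000.0)   # fake observed profile [m]
dummy = lambda eps0: 40.0 * eps0 * np.exp(-(radii - 3000.0) / 4000.0)
print(calibrate_eps0(dummy, radii, observed))

In a real application the forward model would be the box-model run (or PyBox itself) sampled on the same radii as the field profile.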
Our comparison focuses on the average thickness calculated over concentric circles drawn with a 100-m step of distance. However, the thickness variation of the deposit in different radial directions describes two different situations for the EU3pf and EU4b/c units and deserves a brief discussion, detailed in Appendix B. The mean profile of the EU3pf deposit thickness initially shows an increasing trend (between 3 and 4 km to the N and between 3 and 6 km to the S - Fig. 6a), followed by a slow, constant decrease. This situation could highlight a lower capability of the current to deposit in more proximal areas, allowing the mass to be redistributed toward more distal sections. This could also be explained by a spatial variation of the PDC flux regime, which was more turbulent in proximal areas than in distal ones, as also testified by the abundance of lithofacies typical of dilute and turbulent PDCs (//LT to xsLT; see Fig. 2b and Gurioli et al., 1999). Instead, the spatial homogeneity of lithofacies for the EU4b/c unit (Cioni et al., 1992) suggests a higher uniformity of its parent PDC. Moreover, the trend of the mean deposit thickness profile has a steep and rapid decrease of thickness up to 5-6 km, followed (after a break in slope) by a "tail" with an increasingly gentle decrease of thickness. This peculiar trend is in agreement with the lithofacies association in the unit EU4b/c (Cioni et al., 1992), which indicates a progressive dilution of the current through time and a progressive aggradation of the deposit. That said, the degree of matching between the modelled and the real thickness of the EU3pf unit is less accurate than in the EU4b/c case study. However, the mean thickness profile of the actual deposit is roughly parallel to the model in some parts. Under polydisperse conditions, PyBox does not improve its performance in replicating the thickness profile of EU3pf. The difficulty of PyBox in reproducing the average thickness profile testifies to the strong interaction of the EU3pf unit with the non-homogeneous topography (see also Gurioli, 1999; Cioni et al., 2020) and the likely dominant role of density stratification and granular transport in the deposition process. To the North there was in fact an extremely rough topography, similar to the present one, where the interaction of the PDC with the surface produced largely variable lithofacies. To the South, instead, there was a gentler topography, with a topographic high on which the town of Pompeii (see Fig. 2a) was built. This latter aspect is also evident from Vogel and Märker (2010), who reconstructed the pre-AD 79 paleotopography of the plain to the SE of the SV edifice. From this work, it is possible to appreciate how the modelled depth of the pre-AD 79 surface is 0-1 m lower with respect to the present surface in correspondence of the present town of Pompeii and the ancient Pompeii excavations (due to the presence of piles of tephra fallout deposits up to 2 m thick), while it is up to 6-7 m deeper to the NW of these sites. The thickness comparison of the EU4b/c unit, on the contrary, suggests that this unit was likely deposited under inertial flow conditions, dominated by turbulent transport. The SE "tail" part of the deposit is particularly well reproduced by the polydisperse simulations, where the simulated profile is almost coincident with the deposit profile (Fig. 6b, right).
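For reference, the radially averaged profiles used in this comparison (mean thickness over concentric circles at a 100-m step, starting 3 km from the vent and excluding marine areas, as described in section 3.2) can be computed from a gridded deposit model along the lines of the sketch below; the array layout, the NaN convention for excluded cells and the function name are assumptions rather than the actual processing chain of Cioni et al. (2020).

import numpy as np

def radial_mean_thickness(thickness, x, y, vent_xy,
                          r_min=3000.0, r_max=20000.0, step=100.0):
    """Average a gridded thickness map [m] over concentric annuli around the vent.

    thickness : 2D array, NaN where no data is available (e.g. marine areas)
    x, y      : 2D coordinate arrays (same shape as thickness), in metres
    Returns (annulus mid-radii, mean thickness per annulus).
    """
    r = np.hypot(x - vent_xy[0], y - vent_xy[1])
    edges = np.arange(r_min, r_max + step, step)
    mids, means = [], []
    for r0, r1 in zip(edges[:-1], edges[1:]):
        mask = (r >= r0) & (r < r1) & np.isfinite(thickness)
        mids.append(0.5 * (r0 + r1))
        means.append(thickness[mask].mean() if mask.any() else np.nan)
    return np.array(mids), np.array(means)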
Conversely, to the NW the modelled thickness in the initial part slightly overestimates the real deposit (Fig. 6b). The polydisperse simulations (blue dashed lines in Fig. 6b) are much closer to the measured trend of the mean thickness profile than those under monodisperse conditions (i.e. MSD), demonstrating the key role of the grain size distribution in gas-particle turbulent transport. Comparison of inundated areas The areal overlap between the model output area and the actual deposit (True Positive - TP) is discussed together with the quantification of model overestimation (False Positive - FP) and underestimation (True Negative - TN). In Table 5 we also provide the TP/FP/TN estimates for the 5th and 95th percentiles of the maximum runout lines (MRLs), i.e. a measure of the spatial uncertainty affecting the actual deposit. We remark that the TN instances could be interesting from a hazard point of view, because they actually represent the underestimation of the model: a conservative approach is therefore to use the lowest value of the TN instances as a threshold to evaluate the reliability of a model. As said above, the polydisperse simulations of the EU3pf unit poorly fit the deposit thickness, and the inundated area is significantly larger than the deposit area. Thus, they are not included in the quantitative estimation of area match/mismatch. For instance, while the maximum runouts of the deposit are on the order of 8-10 km, the maximum runout given by the model (in the absence of topography) is ca. 13-15 km. The monodisperse simulations perform better in this sense, and their maximum runouts differ only slightly (ca. 7-10 km) from the real ones: for this reason, only the monodisperse simulations for the EU3pf case have been considered in Figure 8 and Table 5. More precisely, the axisymmetrical (EU3pf_mono_AX) and the sectorialized (EU3pf_mono_AS) simulations share a similar degree of TP instances (between 63% and 75% - Table 5), but have opposite properties for what concerns overestimation/underestimation. EU3pf_mono_AX has in fact a higher tendency to underestimate (FP < TN - Table 5), while EU3pf_mono_AS tends to overestimate the actual deposit (FP > TN - Table 5). For what concerns the EU4b/c simulations (Fig. 8), we report the quantitative matching of the simulations under both polydisperse and monodisperse conditions. The most striking feature that can be seen from Figure 8 is that, while to the SE a good match is obtained, to the NW the polydisperse simulation tends to appreciably overestimate the inundation area. Conversely, the monodisperse simulation is more balanced between NW and SE. This could be explained, for the SE part, by the surrounding morphology of the Sorrentina Peninsula and the Apennines, which acts as a natural barrier, and, for the NW sector, by the absence of morphological constraints, especially to the N. The results presented in Fig.
8 and Table 5 show that the TP values for the simulation EU4_poly_AS are in the interval 61%-73%, while TN values range from 0.7% to 2%, and FP values range from 24% to 38%. Thus, while the degree of overlap between model and deposit is at an acceptable value and the percentage of model underestimation is below 2%, the model tends to appreciably overestimate the median outline of the deposit. On the contrary, the simulation EU4_mono_AS shows the highest TP values (73%-80%) and the lowest FP (3%-8%). Despite these better performances, it should always be kept in mind that the thickness profile is less accurate under monodisperse conditions. We remark that, beyond 14 km (ca. 2-3 km beyond the deposit MRL 95), the thickness provided by the model under polydisperse conditions is < 1 mm (see Figure 8c). Shallow deposits might possibly be affected by erosion, and the actual deposit in the NW sector might in fact resemble the PyBox results. We also remark that the MRLs defined by Cioni et al. (2020) have been defined up to the 95th percentile, meaning that there is still a 5% chance that the actual MRL could be placed further away from the source. This is very significant in the NW part of the EU4b/c deposit, where no or very few outcrops can be found beyond 5-6 km from the vent area. Grain size comparison Finally, we consider the volume fractions of the grain sizes of the actual deposits versus those derived from PyBox. We present the results in two different ways. Firstly, we provide a general overview of the relative proportions of ash/lapilli with distance to the source (Fig. 9), and then we provide more complete volumetric grain size comparisons for each Φ unit (Fig. 10). We note that this comparison is one of the most uncertain because of some inherent epistemic uncertainties in the data, which are: i) the complete lack of ultra-proximal sites, possibly enriched in coarse-grained particles, which influenced the calculated TGSD; ii) the fact that the sections used for TGSD calculation and data comparison are (for both units) located mainly along the aprons of the volcano, in many cases in correspondence of the lower parts of valleys or paleovalleys. This could have led to an under-representation of the finer-grained deposits located in high or paleo-high morphological locations. The data presented in Fig. 9 confirm the differences between EU3pf and EU4b/c. EU3pf (Fig. 9a) shows that the simulated and real volumetric contents of ash/lapilli are similar only up to 4 km (both to the N and to the S). Then, the relative proportions of ash/lapilli in the simulations indicate that, after 6 km, the simulated grain sizes are made up almost entirely (> 90%) of ash, with an appreciable difference with respect to field data (only to the S, as to the N there are no available measurements). The most extreme situation can be seen at 9 km, where the modelled grain sizes are composed for > 80% in volume of the two finest ones (4Φ-5Φ), while deposit data indicate a more even distribution of grain sizes. In Fig. 10a we observe that at 4 km (both N and S) the grain size distributions are similar between the actual deposit and the model, although there is a shift of ca. 2 Φ toward the finer grain sizes in the modelled data. For the EU4b/c unit, we observe that the general proportions between ash and lapilli (Fig.
9b) are more similar between the model and the deposit (especially at 4 km from the vent area to the N). However, in Fig. 10b we see that at 4 km to the N the situation is opposite to EU3pf, since the modelled grain size is richer in coarse particles than the actual deposit. Such a difference might be explained by the above-mentioned roughness of the topography, which might favour the deposition of coarser particles at locations < 4 km. In the SE sector the differences between modelled and observed grain sizes are lower at 6 km and 9 km distance to the source, while they are greater at 14 km and 20 km, where the 2 finest modelled grain sizes account for > 80% of the volume. Conclusions We have evaluated the suitability of the box-model approach implemented in the PyBox code to reproduce the deposits of EU3pf and EU4b/c, two well-studied PDC units from different phases of the AD 79 Pompeii eruption of Somma-Vesuvius (Italy). The total volume, the TGSD, the grain densities, and the temperature obtained from the field data are used as the main inputs of PyBox. The model produces several outputs that can be directly compared with the inundation areas and the radially averaged PDC deposit features, namely the unit thickness profile and the grain size distribution as a function of the radial distance to the source. We have performed simulations under either polydisperse or monodisperse conditions, given by, respectively, the total grain size distribution and the mean Sauter diameter of the deposit. We have tested axisymmetrical collapses either over a round angle or divided into two circular sectors. The initial volumetric fraction ε_0 of the solid particles over the gas is the main tuning parameter (given its uncertainty) that is explored to fit the outputs with the field data. In this study, we obtained the best fit of the deposit data with a plausible initial volume concentration of solid particles from 3% to 6% for EU3pf (depending on the circular sector) and of 2.5% for EU4b/c. These concentrations optimize the reproduction of the thickness profile of the actual deposits. Concerning the EU3pf unit: 1) the average thickness of the EU3pf deposit initially shows an increasing trend, from 3 to 4 km to the N and from 3 to 6 km to the S, followed by a slow, constant decrease. The simulated thickness poorly resembles the actual deposit, although the maximum values are comparable and the two profiles are roughly parallel in some parts. Under polydisperse conditions, PyBox does not improve its performance in reproducing the thickness profile of EU3pf; 2) in the monodisperse simulations of EU3pf the maximum runouts are slightly different from the real ones, but overall consistent. The round-angle and sectorialized simulations share a similar degree of TP instances (between 63% and 75%), but have opposite properties for what concerns overestimation/underestimation. The round-angle axisymmetric simulation underestimates the actual deposit (FP < TN), while the sectorialized simulation overestimates the actual deposit (FP > TN); 3) the simulated and real volumetric contents of ash/lapilli in EU3pf are similar only up to 4 km. Then, the relative proportions
of ash/lapilli in the simulations indicate that the simulated grain sizes are made up almost entirely (> 90%) of ash, with an appreciable difference with respect to field data after 6 km. We observe that at 4 km the grain size distributions are similar between the actual deposit and the model, although there is a shift of ca. 2 Φ toward the finer grain sizes in the modelled data. Concerning instead the EU4b/c unit: 1) this unit has a steep and rapid decrease of thickness up to 5-6 km, followed, after a break in slope, by a "tail" with a gentler decrease of thickness. The polydisperse box-model simulations are much closer to the measured trend of the mean thickness profile than those under monodisperse conditions. The SE thickness profile of the polydisperse simulation is almost coincident (within the uncertainty range) with the corresponding part of the deposit (specifically after 6 km, and with a ca. 0.5 m overestimation between 3.5 and 6 km), while to the NW the modelled thickness slightly overestimates the real deposit in the initial part (up to ca. 6 km); 2) in the simulations of EU4b/c, a good match of the inundated area towards the South-East is obtained. Towards the North-West the polydisperse simulation appreciably overestimates the inundation area. On the contrary, the simulation under monodisperse conditions shows the highest TP values (73%-80%) and the lowest FP (3%-8%). However, the thickness profile is less accurate under monodisperse conditions. Moreover, shallow deposits in the NW sector might possibly be affected by erosion, and the actual deposit in the NW sector might in fact resemble the PyBox results obtained under polydisperse conditions; 3) the general proportions between ash and lapilli in EU4b/c are similar between the model and the deposit. However, at 4 km to the N the situation is opposite to EU3pf, since the modelled grain size is richer in coarse particles than the actual deposit. In the SE sector the differences between modelled and observed grain sizes are lower at 6 km and 9 km distance to the source, while they are greater at 14 km and 20 km, where the 2 finest modelled grain sizes account for > 80% of the volume; 4) in the SE sector, because of model runout truncation, we evaluated an average non-deposited volume of 1.27×10^7 m^3 (cut at 15 km). Considering that the volume collapsed to the SE is 1.5×10^8 m^3, the average non-deposited volume therefore corresponds to a value of 8%. Thus, the amount of volume effectively "lost" with the PyBox approach is relatively small, also considering that the total volume of the collapsing mixture is inclusive of the co-ignimbritic part.
Pyroclastic density currents generated by Plinian eruptions span a wide range of characters, and can display very different behaviour and interaction with the topography. During the AD 79 eruption of Somma-Vesuvius, two PDC units, despite both being emplaced after column collapses, display significantly different sedimentological features and should likely be better described by different models. The study findings indicate that the box model, which is suited to describe turbulent particle-laden inertial gravity currents, describes the EU4b/c PDC unit well but is not able to accurately capture some of the main features of the EU3pf unit. This is probably due to its strongly stratified features, which make the interaction of the basal concentrated part of the flow with the topography a controlling factor in the deposition process. In particular, the results highlight again the key role of the grain size distribution in the description of inertial PDCs: while the final runout is mostly controlled by the finest portion of the distribution, the total grain size distribution strongly affects the thickness profile (e.g., Fig. 6b) and is an essential ingredient for proper modelling of the PDC dynamics. Figure 1. a) Schematic diagram of an inertial gravity current with a depth h_c, flow front velocity u_c and density ρ_c in an ambient fluid of density ρ_0 (modified from Roche et al., 2013); b) evolution of channelized currents through a series of equal-area rectangles, according to the model (hence the name "box model"). Figure 2. a) Location of the Somma-Vesuvius volcano. Coordinates are expressed in the UTM WGS84-33N system; b). Figure 3. Thicknesses and isopach lines for the a) EU3pf and b) EU4b/c units; MRLs of the c) EU3pf and d) EU4b/c units. Inferred position of the AD 79 vent (red triangle) and SV caldera outline (dark orange dashed line) after Tadini et al. (2017). Light green dashed lines delimit the sectors (N-S for EU3pf and NW-SE for EU4) of the different column collapses. Background DEM from Tarquini et al. (2007).
Figure 4. Volumetric total grain size distributions for the EU3pf and EU4b/c units. Figure 5. Sketch representing the three areas used for the validation procedure (the model output outline is drawn as a dashed black line). Figure 6. Mean thickness comparison between the simulations (dashed lines) and the actual deposit (solid line) of the a) EU3pf and b) EU4b/c units. Different boxes concern different circular sectors. Figure 7. Comparison between simulations (dashed lines) assuming different initial volumetric fractions of solid particles (ε_0), and the actual deposit (solid line), of the a) EU3pf unit S and b) EU4b/c unit SE. In (b), the inset is a magnification of the thicknesses further than 9 km from the vent. Figure 8. Inundation area of the simulations of the EU3pf (a-b) and EU4b/c (c-d) units. The dashed lines represent the theoretical isopachs (in m) of the simulated deposit. Vent location (red triangle), vent uncertainty area (red line) and SV caldera (orange dashed line) as in Tadini et al. (2017). MRLs as in Figure 3. The DEM used in the simulations and as a background derives from Tarquini et al. (2007), according to the modifications explained in section 4. Figure 9. Volumetric content of ash/lapilli of model/deposit with distance to the source, for the units a) EU3pf N/S (left and right, respectively) and b) EU4b/c NW/SE (left and right, respectively). Figure 10. Comparison of volumetric grain sizes of the a) EU3pf and b) EU4b/c units. Different boxes concern different distances to the source. Table 1. Volume of the EU3pf and EU4b/c units. Table 3. MSD values and related densities for the different units studied. Table 5. True Positive (TP), False Positive (FP) and True Negative (TN) instances of the simulations in Figure 8.
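As a concrete illustration of how areal percentages such as those reported in Table 5 can be obtained, the sketch below computes TP/FP/TN from two boolean rasters on a common grid. The normalization by the union of the two footprints (so that TP coincides with the Jaccard index and TP + FP + TN sums to 100%) is an assumption consistent with the description in section 3.2, not necessarily the exact convention used by the authors.

import numpy as np

def overlap_metrics(model_mask, deposit_mask):
    """TP/FP/TN areal percentages for two boolean rasters on the same grid.

    TP: model AND deposit; FP: model only (overestimation);
    TN: deposit only (underestimation, in the terminology of the paper).
    All values are percentages of the union of the two footprints,
    so TP equals the Jaccard index of similarity.
    """
    model = np.asarray(model_mask, dtype=bool)
    deposit = np.asarray(deposit_mask, dtype=bool)
    union = np.count_nonzero(model | deposit)
    tp = np.count_nonzero(model & deposit) / union * 100.0
    fp = np.count_nonzero(model & ~deposit) / union * 100.0
    tn = np.count_nonzero(~model & deposit) / union * 100.0
    return tp, fp, tn

# toy example: two overlapping discs on an arbitrary grid
yy, xx = np.mgrid[0:200, 0:200]
deposit = (xx - 90) ** 2 + (yy - 100) ** 2 < 60 ** 2
model = (xx - 110) ** 2 + (yy - 100) ** 2 < 65 ** 2
print(overlap_metrics(model, deposit))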
v3-fos-license
2017-06-23T03:18:48.485Z
2012-08-04T00:00:00.000
12390140
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/1756-0500-5-407", "pdf_hash": "bf1dd27f50fb73810271aa0e62dde1de28e0ee5c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2751", "s2fieldsofstudy": [ "Medicine" ], "sha1": "4b090f579b40c626722504fc18fbd9216d008535", "year": 2012 }
pes2o/s2orc
Intra-articular use of a medical device composed of hyaluronic acid and chondroitin sulfate (Structovial CS): effects on clinical, ultrasonographic and biological parameters Background This pilot open noncontrolled study was designed to assess the efficacy of intra-articular injections of a solution combining hyaluronic acid (HA) and chondroitin sulphate (CS) in the treatment of outpatients affected by knee osteoarthrosis. Findings Thirty patients with knee OA were included. The primary objective was to assess clinical efficacy as measured by pain and Lequesne's index. Secondary objectives were to assess the potential effect of the treatment on ultrasound parameters, safety, and biomarkers of cartilage metabolism and joint inflammation. After a selection visit (V1), the study treatment was administered 3 times on a weekly basis (V2, V3, V4). Follow-up was planned 6 (V5) and 12 weeks (V6) after the first intra-articular injection. Efficacy results showed a reduction in mean pain at V3 and V6 and in functional impairment, the most marked changes being measured at the two follow-up visits (V5 and V6). Although statistical significance was not achieved due to the small sample size, a clear tendency towards improvement was detectable for ultrasound assessments as well as biomarkers. Except for a mild injection site hematoma for which a drug causal relationship could not be excluded, no adverse effect of clinical relevance was recorded during the study. Conclusion Although this pilot study was performed according to an open design only, the ultrasound as well as biomarker changes strongly suggest a non-placebo effect. These preliminary results now call for a randomized controlled study to confirm the clinical relevance of the observed results. Trial registration #ISRCTN91883031 Findings In the treatment of osteoarthritis (OA), it is now agreed that surgical procedures should be at least delayed, and even avoided inasmuch as possible. Hyaluronic acid (HA) is a component of the synovial fluid, the lubricating effect of which is related to its viscoelastic properties. There is broad agreement that early manifestations of OA are related to changes in the viscoelasticity of the synovial fluid which account for a decrease in the protective action of the cartilage: such deterioration appears mainly due to a decrease in the concentration and molecular weight of synovial HA. HA injections into the joint may compensate for this deficit in elasticity, thereby improving articular lubrication. There is a large body of data regarding HA biocompatibility, its toxicology as well as its metabolism [1][2][3][4]. Regarding HA clinical efficacy in knee OA, a number of studies are available, some of them performed according to a double-blind placebo-controlled design [5][6][7]. According to the European League Against Rheumatism (EULAR) recommendations published in 2003, « there is evidence to support the efficacy of HA in the management of knee OA both for pain reduction and functional improvement », which may induce pain relief « for several months » [8]. Structovial CS (Pierre Fabre Médicament) is a medical device combining chondroitin sulphate (CS) (30 mg/mL) and HA (12 mg/mL) to treat knee OA. The biocompatibility of both products has been assessed during Structovial CS development.
The role of CS is twofold: i) optimizing HA's rheological behaviour, due to specific interactions [9,10]; ii) regulating cartilage metabolism, as a substrate for polysulphated glycosaminoglycan synthesis as well as an inhibitor of catabolic cytokine and metalloproteinase synthesis [11][12][13]. The primary objective of this study was to document clinical, sonographic and biologic parameters of 3 weekly intra-articular injections of HA/CS in knees affected by OA, over a period of 12 weeks. Secondary objectives were to: i) assess the treatment effect on ultrasound (US) parameters; ii) analyze biomarkers known to be related to cartilage metabolism and to joint inflammation; and iii) assess the treatment safety. Methods This was a single-centre, open-label, uncontrolled study (Trial registration #ISRCTN91883031) designed to assess intra-articular injections of HA/CS in knee OA. Patients Inclusion criteria were: male or female patients aged ≥ 45 and ≤ 80 years; suffering from internal and/or external femoro-tibial OA: meeting the criteria of the American College of Rheumatology (ACR) [14] (pain of the knee and crepitus on active motion, or morning stiffness < 30 minutes, or age > 50 years); lasting for at least 6 months; pain ≥ 40 mm as measured on a visual analogue scale (VAS); stage II or III within the previous year according to the radiological classification of Kellgren and Lawrence [11]; OA deemed to justify a treatment with intra-articular HA according to the investigator; and the patient's written, informed consent. Non-inclusion criteria were related to any circumstances likely to interfere with the study treatment, namely: symptomatic femoro-patellar arthrosis or hip arthrosis on the same side; concomitant skeletal disease (Paget disease, rheumatoid arthritis, ankylosing spondylitis...); former or concomitant treatment (intra-articular corticosteroids, topical or oral NSAIDs, slow-acting anti-arthritis treatment, recent surgery...); and individual characteristics incompatible with a drug trial (pregnancy or lack of contraception, serious concomitant disease, participation in a clinical trial within the preceding 30 days...). Participation in the study could be prematurely withdrawn at the patient's or investigator's initiative, e.g. in case of a significant adverse event. Patients were not allowed to take any pain relief medication (e.g., NSAIDs, analgesics) or any OA therapy (e.g., diacerein, glucosamine, CS). In the event of severe pain, and if necessary, patients were permitted to take 1-gram tablets of acetaminophen, 1 at a time, up to 4 times per day, with a minimum of 4 hours between tablets. If the recommended dosage of acetaminophen was insufficient, it was permitted to take an NSAID. Study schedule The selection period ran from Day -21 to Day -1 (V1). The patients participated in the study from Day 0 to Day 84. The investigational drug was a sterile solution of HA/CS for intra-articular injection: each 2 mL injection contained 24 mg of HA and 60 mg of CS. It was injected on a weekly basis, on Days 0 (V2), 7 (V3, one week), and 14 (V4, 2 weeks). Then, Days 42 (V5, 6 weeks) and 84 (V6, 12 weeks) were for follow-up and end-of-study assessments, bringing the total number of scheduled visits throughout the study to 6.
Study parameters Clinical parameters The main recorded parameters were a Visual Analog Scale (VAS) to measure spontaneous pain (from 0 = no pain to 100 = maximum pain), Lequesne's Algo-Functional Knee Index [12], concomitant medication, as well as adverse events if any, on V1 (first injection) and V6 (end-of-study follow-up). Overall improvement was assessed by the patient and by the investigator, using a VAS (from 0 = worsening to 100 = improvement). The clinical response was assessed at V5 and V6 using the criteria defined by the Osteoarthritis Research Society International (OARSI) [13]. On V6, the patients were asked about their satisfaction regarding the treatment. Ultrasound parameters A US examination of the target knee was performed with a Logic 9 (GE) device using a 10-15 MHz high-resolution transducer. Joint fluid was assessed by a longitudinal scan of the suprapatellar recess: grade 0 = no fluid; grade 1 = fluid detected only when an isometric quadricipital contraction is performed by the patient; grade 2 = fluid present even at rest [14]. Synovial thickness was measured on a longitudinal image of the suprapatellar recess with an extended knee, with a knee flexed at 545°, and on a transversal scan of the lateral recess. The value used was the sum of the 3 measurements. Any popliteal cyst was detected (and quantified, in cc, when positive): 0 = no, 1 = yes. Biomarker assays Several biomarkers were directly measured in the serum using immunoassays, following the manufacturer's instructions: inflammation markers [...]. Statistical and ethical considerations As there was no control group, the efficacy analysis was mainly descriptive and there was no primary efficacy parameter. All tests performed were exploratory. All analyses were made using the statistical analysis software (SAS®) version 9.1.3 on the UNIX operating system. AEs were coded using MedDRA version 10.1. Quantitative parameters were described using the following descriptive statistics: number of patients, arithmetic mean, standard deviation (SD), minimum, median and maximum values, and first and third quartiles. Qualitative parameters were described using frequencies and percentages. Efficacy parameters (absolute change from baseline) were analyzed by linear regression on baseline values. As there was only one treatment group, all analyses were exploratory. For the statistical analysis, the date of the first dose of study drug was considered relative Day 0 and the day before the first dose of study drug was considered Day -1. Relative days for assessments before, on, or after the first dose of study drug were calculated as follows: Relative Day = Date of Assessment - Date of First Dose (Day 0). A sample size of 30 patients was considered sufficient as the study was exploratory. The study protocol was approved on January 18, 2008 by the Ethics Committee of Erasme Hospital, University of Brussels. The study was conducted in compliance with the Declaration of Helsinki and its amendments, Good Clinical Practice (GCP 1996), and the ISO 14155 standard. Disposition and description of patients From March 10, 2008 to October 13, 2008, a total of 31 patients were screened/selected at the Hôpital Erasme in Brussels, Belgium. Of these, 30 patients were included in the study and were treated with HA/CS: all of them were included in the safety and efficacy analysis sets. One patient, having completed Visit 5 (6 weeks), withdrew from the study on Day 101 for personal convenience.
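To make the exploratory analysis described above concrete (absolute change from baseline analyzed by linear regression on the baseline values), here is a minimal sketch using simulated numbers; the original analysis was run in SAS, so the use of scipy, the variable names and the simulated data are all illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# simulated data: baseline VAS pain (0-100 mm) and week-12 values for 30 patients
baseline = rng.uniform(40, 90, size=30)
week12 = baseline - 36 + rng.normal(0, 20, size=30)   # illustrative mean change of about -36 mm
change = week12 - baseline                            # absolute change from baseline

# regression of the absolute change on the baseline value
res = stats.linregress(baseline, change)
print(f"slope={res.slope:.2f}, intercept={res.intercept:.1f}, p={res.pvalue:.4f}")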
No major protocol violation was reported within this study. The sex ratio of the included patients was 8 M/22 F, with a mean age (±SD) of 61.5 ± 9.4 years. Demographic data and baseline characteristics of the included patients are summarized in Table 1. Regarding the patients' joint condition, the median [range] duration of knee OA was 28 [5-195] months; most patients (20 [66.7%]) were assessed as Kellgren-Lawrence Grade II on the basis of their most recent X-ray. Knee OA history is summarized in Table 2. No other medical history or concomitant disease was identified as significant enough to interfere with the study assessments. Efficacy parameters Pain intensity decreased during the study: as compared to baseline, the change (mean ± SD) was -23.3 ± 22.51 at Visit 3 (one week) and -36.1 ± 28.54 at Visit 6 (12 weeks). Linear regressions of the absolute changes were performed on the baseline values: the most significant changes from baseline were measured at Visit 5 (6 weeks) (p = 0.0008) and at Visit 6 (12 weeks) (p = 0.0042). The evolution of pain throughout the study is summarized in Table 3. Likewise, functional impairment as assessed by Lequesne's index decreased during the study: as compared to baseline, the change (mean ± SD) was -1.34 ± 3.472 at Visit 3 and -3.40 ± 4.193 at Visit 6 (12 weeks). Linear regressions of the absolute changes were performed on the baseline values: the most significant changes from baseline were measured at Visit 5 (6 weeks) (p = 0.0031) and at Visit 6 (12 weeks) (p = 0.0012). The evolution of Lequesne's algo-functional knee index is summarized in Table 4. The patient and investigator assessments of global improvement changed only marginally throughout the study. The biggest difference in the VAS scores, for both the patients and the investigators, was measured one week after the first study injection, but these differences were not significant. Regarding ultrasound parameters, the results are summarized in Table 5. A reduction of the synovial thickness was found from Visit 2 (baseline) to Visit 6 (12 weeks), especially in patients displaying articular fluid at baseline; however, statistical significance was not achieved, probably because of the small sample size. Likewise, fewer patients showed articular effusion at Visit 6 (12 weeks) (n = 13) as compared to Visit 2 (baseline) (n = 18), but the difference was not statistically significant. The results obtained for the biomarkers are summarized in Table 6. Mean values of Coll2-1, Coll2-1NO2 and CPII decreased between Visit 2 (baseline) and Visit 6 (end of the study, 12 weeks). To measure the linear dependency between biomarkers and pain intensity, correlation coefficients were computed between the absolute change of each biomarker and the absolute change of pain from Visit 2 (baseline) to V6 (12 weeks). The correlation coefficients were mostly negative, indicating that the greater the change in biomarker level, the smaller the change in pain. Of note are the results observed for IL-6, which showed a dramatic reduction from 5825 ± 21720 pg/mL (baseline) to 162 ± 405 pg/mL. Safety parameters No severe adverse event was reported throughout the study. Of the 30 patients included in the safety analysis, 4 reported an adverse event: injection site haematoma (n = 1, 3.3%), wrist fracture (n = 1, 3.3%), arthralgia (n = 1, 3.3%), and venous stasis (n = 1, 3.3%). Of mild intensity, the haematoma was the only reported adverse event for which a drug causal relationship was not excluded by the investigator. 
No abnormality of clinical relevance was reported in the vital or physical signs monitored during the study. Discussion The purpose of this open study was to assess Structovial CS (Pierre Fabre Médicament), a solution combining chondroitin sulphate (CS) (30 mg/mL) and HA (12 mg/mL) administered by intra-articular injections, in 45- to 80-year-old patients suffering from femoro-tibial OA. All enrolled patients received 3 intra-articular injections of a solution of HA/CS over a 3-week period, and were assessed over 6 clinic visits, up to 10 weeks after their last injection. Efficacy was assessed through measures of pain, functional impairment, clinical response, ultrasound and biomarkers. Both pain intensity and functional impairment decreased during the study. The most significant changes for both parameters were observed at 6 and 12 weeks after the first study injection. The patient and investigator assessments of global improvement changed only marginally throughout the study. The biggest difference in the VAS scores, for both the patients and the investigators, was measured one week after the first study injection, but statistical significance was not achieved. The majority of patients exhibited a clinical response to treatment at 6 weeks (79.3%) and at 12 weeks after the first study injection (73.3%). No statistically significant changes in ultrasound parameters were seen throughout the study, although an improvement was found in terms of a reduced number of effusions and of synovial thickness. With a larger sample size, this probable effect on synovial inflammation might have been demonstrated. The 5 measured biomarkers displayed a high variability, although they tended to decrease in a consistent way throughout the study. No serious adverse events, no adverse events leading to study discontinuation, and no deaths were reported during the study. A total of 4 adverse events (AEs) were reported in 4 patients (13.3%) throughout the study: injection site haematoma, wrist fracture, arthralgia, and venous stasis. The injection site haematoma was of mild intensity; for this AE, the investigator did not exclude a relation to the study drug. No other change of clinical relevance was observed in physical examination or vital signs. On the basis of these results, the interpretation should be balanced. In this non-controlled study, the improvement in clinical parameters (pain intensity, functional impairment) was not clearly greater than that which could be induced by a placebo in a controlled study. On the other hand, the structural as well as the biomarker changes suggest a non-placebo effect, as the lack of statistical significance for both "objective" parameters is most probably a consequence of the small sample size. In particular, the biomarker changes appeared quite consistent, although not statistically significant, with a decrease in Coll2-1 (a degradation marker) and in IL-6 and Coll2-1NO2 (markers of oxidative stress and of inflammation). In a previous study, Hosigawa et al. had already shown that intra-articular injection of hyaluronan was associated with a reduction in biomarkers in synovial fluid, suggesting that HA could help maintain normal cartilage metabolism, at least in patients at an early stage of OA and with limited synovitis [15]. 
These biomarkers reflect cartilage degradation and are therefore directly correlated with disease activity (i.e. inflammation and pain). The change in biomarker levels induced by the medical device over time can thus be linked to the change in disease activity: the greater the change in biomarkers, the greater the effect on pain. Overall, the results of this pilot study are consistent with a favourable benefit/risk ratio of the medical device used, but they strongly call for a randomized clinical trial with the required statistical power.
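As a concrete illustration of the exploratory analysis described in the statistical section (the relative-day convention, descriptive statistics, and linear regression of the absolute change from baseline on the baseline value), a minimal sketch with invented numbers is given below; it is not the SAS program actually used in the study.

```python
# Minimal illustration (invented numbers, not the study's SAS programs) of the
# exploratory analysis: relative-day computation and a linear regression of the
# absolute change from baseline on the baseline value.
from datetime import date

import numpy as np
from scipy import stats

def relative_day(assessment: date, first_dose: date) -> int:
    """Relative Day = Date of Assessment - Date of First Dose (Day 0)."""
    return (assessment - first_dose).days

# Example: an assessment on 21 April 2008 for a patient first dosed on 10 March 2008.
print(relative_day(date(2008, 4, 21), date(2008, 3, 10)))  # -> 42 (Visit 5 window)

# Hypothetical VAS pain scores (0-100 mm) at baseline (V2) and at 12 weeks (V6).
baseline = np.array([62.0, 55.0, 71.0, 48.0, 80.0, 66.0, 59.0, 73.0])
week_12  = np.array([30.0, 40.0, 35.0, 20.0, 45.0, 38.0, 25.0, 50.0])
change = week_12 - baseline  # absolute change from baseline

# Descriptive statistics, as listed in the protocol.
print(f"mean change = {change.mean():.1f}, SD = {change.std(ddof=1):.2f}")

# Exploratory linear regression of the absolute change on the baseline value.
reg = stats.linregress(baseline, change)
print(f"slope = {reg.slope:.2f}, p-value = {reg.pvalue:.4f}")
```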
v3-fos-license
2017-08-03T01:52:12.982Z
2016-02-20T00:00:00.000
12711367
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://jbiomedsci.biomedcentral.com/track/pdf/10.1186/s12929-016-0247-2", "pdf_hash": "8dc2255ccc7fe1c8602cae692e920714b67546f3", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2754", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "2733ba06c1b4c1bafd5806b371ab13a248b3333e", "year": 2016 }
pes2o/s2orc
CD44-mediated monocyte transmigration across Cryptococcus neoformans-infected brain microvascular endothelial cells is enhanced by HIV-1 gp41-I90 ectodomain Background Cryptococcus neoformans (Cn) is an important opportunistic pathogen in the immunocompromised people, including AIDS patients, which leads to fatal cryptococcal meningitis with high mortality rate. Previous researches have shown that HIV-1 gp41-I90 ectodomain can enhance Cn adhesion to and invasion of brain microvascular endothelial cell (BMEC), which constitutes the blood brain barrier (BBB). However, little is known about the role of HIV-1 gp41-I90 in the monocyte transmigration across Cn-infected BBB. In the present study, we provide evidence that HIV-1 gp41-I90 and Cn synergistically enhance monocytes transmigration across the BBB in vitro and in vivo. The underlying mechanisms for this phenomenon require further study. Methods In this study, the enhancing role of HIV-1 gp41-I90 in monocyte transmigration across Cn-infected BBB was demonstrated by performed transmigration assays in vitro and in vivo. Results Our results showed that the transmigration rate of monocytes are positively associated with Cn and/or HIV-1 gp41-I90, the co-exposure (HIV-1 gp41-I90 + Cn) group showed a higher THP-1 transmigration rate (P < 0.01). Using CD44 knock-down HBMEC or CD44 inhibitor Bikunin in the assay, the facilitation of transmigration rates of monocyte enhanced by HIV-1 gp41-I90 was significantly suppressed. Western blotting analysis and biotin/avidin enzyme-linked immunosorbent assays (BA-ELISAs) showed that Cn and HIV-1 gp41-I90 could increase the expression of CD44 and ICAM-1 on the HBMEC. Moreover, Cn and/or HIV-1 gp41-I90 could also induce CD44 redistribution to the membrane lipid rafts. By establishing the mouse cryptococcal meningitis model, we found that HIV-1 gp41-I90 and Cn could synergistically enhance the monocytes transmigration, increase the BBB permeability and injury in vivo. Conclusions Collectively, our findings suggested that HIV-1 gp41-I90 ectodomain can enhance the transmigration of THP-1 through Cn-infected BBB, which may be mediated by CD44. This novel study enlightens the future prospects to elaborate the inflammatory responses induced by HIV-1 gp41-I90 ectodomain and to effectively eliminate the opportunistic infections in AIDS patients. Background Cryptococcus neoformans (Cn) is an important pathogenic fungus with capsule and causes severe meningitis and disseminated infections, especially in patients with defective cellular immunity, such as AIDS patients [1,2]. Cryptococcosis is the most common opportunistic fungal infection and one of the major causes of death in AIDS patients (mortality rate~30 %) [3,4]. Despite major advances in the treatment of HIV-1 infection with Highly Active Antiretroviral Therapy (HAART), cryptococcosis remains prevalent even in developed countries [5][6][7][8][9]. Cn infects mainly through the respiratory tract, spreads from the pulmonary circulation to the brain tissues, resulting in meningitis [10,11]. The pathogenesis of cryptococcal meningitis (CM) is still largely unknown, while it is well known that crossing the BBB is the pivotal step leading to the development of meningitis. The damage of the BBB is generally induced by the interactions between pathogens and brain microvascular endothelial cells (BMECs), which leads to edema and increased permeability, and subsequently facilitate more interactions between the immune cells and BMECs [12]. 
Previous research had shown that Cn is able to cause considerable morphological changes and actin reorganization in HBMEC [1]. Many signaling molecules, including CD44, caveolin-1, PKCα, endocytic kinase DYRK3, in lipid rafts have been characterized and shown to play an important role during the Cn internalization [2,[13][14][15][16]. Cryptococcosis is one of the most fatal co-morbidity factors of AIDS. The interrelationship between HIV-1 and Cn is intriguing and intricate, as both pathogens cause severe neuropathological complications. The details of how HIV-1 virotoxins, including gp120 and gp41, enhance Cn invasion of the BBB are still largely unknown. Our recent study has shown that HIV-1-gp41-I90 has a remarkable effect in promoting the adhesion and invasion of Cn [17]. Through construction of a recombinant protein, HIV-1 gp41-I90, which is the ectodomain of gp41 (amino acid residues 579-611), we have shown that HIV-1 gp41-I90 ectodomain could activate many molecular events including up-regulation of ICAM-1 on the HBMEC, redistribution of CD44 and βactin on the lipid rafts and induction of membrane ruffling on the surface of HBMEC. These events could enhance brain invasion by Cn and eventually can lead to severe HIV-1-associated CM [17,18]. CD44 is a cellsurface glycoprotein involved in cell-cell interactions, cell adhesion and migration, which is widely distributed in a variety of endothelial cells, including HBMEC [19]. The interaction between hyaluronic acid (HA) on the Cn and its receptor CD44 on the surface of HBMEC is the initial step in cryptococcal brain invasion [13]. The role played by CD44/HA in the interaction between BMECs and leukocytes and the exudation of leukocyte is previously characterized [13]. CD44 has also been proposed to play an important role in Cn infection-induced adhesion and transmigration activities of leukocyte. It is reasonable to speculate that CD44 could also be important for HIV-1 gp41-I90 ectodomain mediated brain invasion of Cn. Delineating the mechanism of Cn transmigration across the BBB is essential to explore the potential of HIV-1 in enhancing the brain invasion by Cn. Many research groups have suggested three possible routes of Cn transmigration across the BBB: (1) Trans-cellular passage through endothelial cells by a specific ligand-receptor interaction [1,20], this mode of invasion has been observed for Escherichia coli [21][22][23], group B Streptococcus [24], Listeria monocytogenes [25], Neisseria meningitides [26] and the fungal pathogen Candida albicans [27]; (2) Paracellular penetration after mechanical or biochemical disruption of the BBB [1,28,29], just like the protozoan Trypanosomasp [30,31]; (3)"Trojan horse" method, in which the infected immune cells, such as monocytes carry the pathogen through the BBB, a method of infection by HIV-1 and simian immunodeficiency virus [32][33][34]. The existence of a Trojan horse method of crossing the BBB by Cn has been proved in a study by Caroline Charlier et al. [35]. Through infecting bone marrow-derived monocytes (BMDM) with Cn in vitro, the authors showed that fungal loads in brain of mice treated with Cn-infected BMDM were much higher than the control group. Accumulating evidence shows that Cn can use multiple means of transmigration and disruption of the BBB. Previous research had shown that HIV-1 infection is able to increase the monocyte capacity to migrate across the BBB [36]. 
As existence of a Trojan horse method of crossing the BBB by Cn, it is reasonable to speculate that HIV-1 enhanced transmigration activity of monocytes might be responsible for severe brain disorder caused by Cn. In present study, through performing transmigration assays, we found that Cn and HIV-1 gp41-I90 could synergistically enhance monocytes transmigration across the BBB. Our findings provide a new idea for understanding the interrelationship between HIV-1 and Cn during the pathogenic progress of HIV-1-associated CM. Fungi strains, cell lines and cultures Cn wild strains B-4500FO2 was a generous gift from A Jong (University of Southern California, Los Angeles, USA). Yeast cells were grown aerobically at 30°C in 1 % yeast extract, 2 % peptone and 2 % dextrose (YPD broth). Cells were harvested at early log phase, washed with phosphatebuffered saline (PBS) and resuspended. The yeast cell number was determined by direct counting from a hemocytometer [17]. Heat-inactivated Cn (H-Cn) was obtained by heating the microorganisms three times at 121°C for 15 min [38]. Only batches that showed no re-growth in YPD broth were employed. HBMEC were isolated and cultured as described previously [39][40][41], which were grown in RPMI 1640 medium supplemented with 10 % heatinactivated fetal bovine serum, 10 % Nu-serum, 2 mM glutamine, 1 mM sodium pyruvate, nonessential amino acids, vitamins, penicillin G (50 μg/ml) and streptomycin (100 μg/ml) at 37°C in 5 % CO2. Cells were detached by trypsin-EDTA and subcultured on collagen-coated Transwell (3 μm pore size, 6.5-mm diameter) (BD Biosciences, San Jose, CA, USA) from T-25 flasks when~70 %-80 % confluent. HBMEC monolayers on Transwell filters were monitored by measuring trans-endothelial electrical resistance (TEER) changes across the endothelial cell monolayer using an End Ohm epithelial voltohmeter (World Precision Instruments, Sarasota, FL, USA) [1,27]. The cells are positive for factor VIII and fluorescently labeled acetylated low-density lipoprotein (Dil-AcLDL) uptake, demonstrating their endothelial origin and also express gamma glutamyl transpeptidase (GGT) and carbonic anhydrase (CA) IV, indicating their brain origin [42]. HBMEC are polarized and exhibit an average TEER value of 250-300Ω/cm 2 [1]. The cells also exhibit the typical characteristics for brain endothelial cells expressing tight junctions and maintaining apical-to-basal polarity. THP-1 cells were purchased from the cell bank of Chinese Academy of Sciences and grown in RPMI 1640 medium supplemented with 10 % heat-inactivated fetal bovine serum, penicillin G (50 μg/ml) and streptomycin (100 μg/ml) at 37°C in 5 % CO 2 . Mice The C57BL/6 background mice (6 weeks of age) were brought from Animal Experimental Center of Southern Medical University (Guangzhou, China) and kept in the animal facility. They were raised in plastic cages and given food and water ad libitum. All experiments were approved by the ethics committee of Southern Medical University. CRISPR/Cas9-Mediated knockdown-CD44 The CRISPR-Cas9 system was used in our study to mediate down-regulated expression of CD44 in HBMEC. Human CD44 cDNA sequence was obtained from Gen Bank (NM_000610) and two pairs of single guide RNA (sgRNA) sequences (named CD44-1 and CD44-2, as below) were designed online (http://www.e-crisp.org/E-CRISP/designcrispr.html). The underlined sequences targeted the CD44 gene, and the bold italic letters indicate the BsmBI site. 
A 20 bp scrambled sequence (see below) was defined as a scramble control which was marked with "SC" in the text. THP-1 adhesion assay THP-1 adhesion assays were performed as described by Che et al. [43]. Briefly, confluent HBMEC monolayers on 24-well plates were stimulated with different concentrations of Cn (10 5 -2 × 10 7 CFU/ml) or gp41-I90 (0.02-20 μM) for 6 h. For the time-course study, confluent HBMEC monolayers were stimulated at different time intervals (0-24 h) with a single dose of Cn (5 × 10 6 CFU/ml) or HIV-1 gp41-I90 (2 μM). After the incubation, monolayers were washed with PBS for four times. Each well was added with 1 × 10 6 THP-1 and incubated with 90 min at 37°C. Then, cells were washed for 5 times and fixed with 4 % paraformaldehyde in PBS. Assays were performed in triplicate wells. Fifteen microscope fields were randomly selected from three wells for each treatment to count the number of adherent monocytes and the data were analyzed using analysis of variance (ANOVA). THP-1 transmigration assay THP-1 transmigration assays were performed as described previously [44,45] with modification. HBMECs or KD-CD44 HBMECs were cultured in trans-well filters (3 μm pore size, 6 mm diameter, Millipore). In order to exclude the possibility that the monocytes migration elicited was due to destruction of HBMEC, the integrity of the monolayer was inspected by TEER and microscopy before the start of the assay. For HBMEC stimulation, different doses of Cn or HIV-1 gp41-I90 were added to the upper chambers with 0.8 ml EM (EM; containing 49 % M199, 49 % Ham's F12, 1 mM sodium pyruvate and 2 mM L-glutamine) for 6 h. For the time-course study, HBMEC were stimulated at different time intervals (0-24 h) with a single dose of Cn (5 × 10 6 CFU/ml) Cn or HIV-1 gp41-I90 (2 μM). After stimulation, THP-1 (1 × 10 6 cells in 0.2 ml of EM) were added to the upper chamber and allowed to migrate over for 4 h (Dose response and kinetic assays were performed in advance to determine the optimized concentration and migration duration). At the end of the incubation, migrated THP-1 cells were collected from the lower chamber and counted in a blinded-fashion using a hemacytometer [43]. Final results of THP-1 transmigration were expressed as the percentage of THP-1 across the BMEC monolayers. For Bikunin treatment, BMEC were incubated with Bikunin (Gen-Script Corp., catalog no. 300233) in both upper and lower chambers for 1 h before stimulation [16]. The pre-treating time of bikunin was determined according to kinetic assays. The Bikunin was present throughout the monocytes transmigration experiment until the end. Assays of surface expression of CD44 and ICAM-1 As ICAM-1 and CD44 play a role in the leukocyte transmigration process during inflammatory, we next performed BA-ELISAs to measured the expression of CD44 and ICAM-1 on HBMEC. Before the assays, ICAM-1 and CD44 antibody were biotinylated with biotin using a biotinylation kit as described by the manufacturer. The methods for ELISAs were similar to those described previously [43]. HBMEC monolayers which grown on Transwell were treated with Cn (5 × 10 6 CFU/ml) and HIV-1 gp41-I90 (2 μM) alone or joint use of them and incubated for 6 h. Treated monolayers were washed three times with PBS, fixed with 4 % paraformaldehyde and blocked for 30 min with PBS containing 5 % BSA. Biotin conjugated ICAM-1 antibody or CD44 antibody were added immediately after the blocking step. Incubation was carried out for 1 h at 37°C. 
Cells were washed five times with PBS added 1 % BSA and incubated with peroxidase-conjugated avidin for 45 min at 37°C. After the avidin incubation, cells were washed five times and liquid TMB substrate was added. The liquid was transferred to an ELISA plate after 15 min. Equal volume stop solution was added, and optical density at 450 nm was read. For each ELISA, an isotype-matched control antibody was used in place of the primary antibody in three wells, and this background was subtracted from the signal. Preparation of membrane lipid rafts from HBMECs Lipid rafts were extracted using Caveolae/Rafts Isolation kit as described previously [13]. For each sample, HBMECs were grown in a 6 well plates for 2 days. On the day of the experiment, the cells were individually incubated with either PBS (control), or 2 μM HIV-1 gp41-I90, or 5 × 10 6 CFU/ml Cn or 5 × 10 6 CFU/ml Cn + 2 μM HIV-1 gp41-I90 individually for 6 h in the experimental medium. After incubation, the cells were washed with PBS three times, scraped in PBS and spun down at 750 g at 4°C. Cell pellets were lysed in 200 μl of TN solution [25 mM Tris/HCl (pH 7.5), 1 mM DTT (dithiothreitol), a cocktail of protease inhibitors, 10 % sucrose and 1 % Triton X-100] on ice, and incubated for 30 min on ice. Samples were mixed with 1.16 ml of ice-cold OptiPrep TM , transferred into SW40 centrifuge tubes and overlaid with 2 ml each of 30, 30, 25, 20 and 0 % OptiPrep TM in TN buffer. The gradients were spun at 35000 r.p.m. in an SW40 rotor for 5 h at 4°C. Nine fractions were collected from the top to the bottom of centrifuge tubes. For western blotting, equal amounts of proteins from each fraction were used. Rabbit anti-CD44 Ab (Abcam, 1:5000 dilution) and anti-rabbit-HRP conjugate (1:500 dilution) were used in these experiments. Mouse cryptococcal meningitis model All the animal experiments were performed strictly according to the guidelines for animal care in Southern Medical University (China). Our protocols were approved (Approval No. 2014A016) by the School of Public Health and Tropical Medicine of Southern Medical University, which obtained the permission for performing the research protocols and all animal experiments conducted during the present study from the ethics committee of Southern Medical University. All surgery was performed under anesthesia with ketamine and lidocaine, and all efforts were made to minimize suffering. For study the role of HIV-1 gp41-I90 on Cn-caused monocyte recruitment into the CNS of mice, mouse cryptococcal meningitis model was established as described previously [17]. 6 weeks-old C57BL/6 mice (6 mice each group) were intravenously injected with 10 6 Cn cells via the tail vein, with or without HIV-1 gp41-I90 (10 μg/g mouse weight). After 24 h injection, mice were anaesthetized with ketamine and lidocaine, and blood samples were collected from heart puncture for isolation and purification of mouse brain microvascular endothelial cells. After perfusion from heart puncture with 20 ml PBS, the skull was opened. CSF samples were collected by washing the brain tissues with 100 μl of PBS, and then by washing the cerebral ventricles and cranial cavity with another 100 μl of PBS. CSF samples containing more than 10 erythrocytes per μl were discarded as contaminated samples. As the expression level of CD14 is very low in mouse monocytes, anti-Ly6C Ab was used to determine monocyte in CSF [46]. 
Monocytes were stained with a PE-conjugated rat anti-mouse Ly6C Ab (eBiosciences, CA, USA) and counted under the fluorescence microscope. Isolation and purification of mouse brain microvascular endothelial cells Recently, we have demonstrated that circulating BMECs (cBMECs) can be used as potential novel cell-based biomarkers for indexing of the BBB injury [47]. This technology was used by us to explore whether HIV-1 gp41-I90 is able to increase Cn-associated BBB damages in our study. Briefly, beads were prepared according to the manufacturer's instructions (Invitrogen) and resuspended in Hanks' balanced salt solution (HBSS, Invitrogen Corp., Carlsbad, CA, USA) plus 5 % fetal calf serum (HBSS + 5 %FCS) to a final concentration of 4 × l0 8 beads/ml. The cBMECs were prepared as described previously [47,48]. Endothelial cells from blood samples were isolated by absorption to Ulex-coated beads [49] and detached from the beads by fucose. Detached endothelial cells were adhered again to MFSD2a-coated beads. To counting the cBMECs from blood samples, cells adhered to MFSD2a-coated beads were labeled with PE-conjugated CD146 antibody and transferred to glass splices by cytospin for counting under a fluorescence microscope. These endothelial cells were positive for CD146 [47], demonstrating their endothelial origin, and also expressed MFSD2a [50], indicating their brain origin. Total cBMECs were identified based on their CD146 (endothelial cell marker) + /DAPI (nuclei) + phenotypes. Histopathology and immunohistochemistry Mouse brain tissue was fixed in 4 % phosphate-buffered paraformaldehyde and was paraffin-embedded. Immunohistochemistry was performed on 5 μm paraffin tissue sections. Mouse monocytes were identified with anti-Ly6C (1:100; Abcam). To detect primary Abs, a goat anti-rabbit antibody conjugated with horseradish peroxidase was used with 50 mM Tris · HCl buffer (pH 7.4) containing DAB and H 2 O 2 , and the sections were lightly counterstained with hemotoxylin. Statistical analysis Data are shown in mean ± standard deviation and analyzed by one-way ANOVA tests. All statistical analysis was carried out at 5 % level of significance and P value less than 0.05 was considered to be significant. SPSS software (version 13.0) was used for statistical analysis. The synergistic enhancing effect on joint use of Cn and HIV-1 gp41-I90 was analyzed using the CalcuSyn Software (Biosoft). Results Effect of Cn and HIV-1 gp41-I90 on adhesion and transmigration of THP-1 Recruitment of monocytes into CNS plays an important role in the inflammatory response induced by fungal factors [51]. To determine the role of Cn and HIV-1 gp41-I90 on transmigration of monocytes, we first evaluated the effect of Cn and HIV-1 gp41-I90 on monocytes adhesion to HBMEC at different yeast doses (10 6 -2 × 10 7 CFU/ml) and time intervals (0-24 h). Individually, as shown in Fig. 1a-d, Cn and gp41-I90 not only could dose-dependently induce adhesion of monocytes to HBMEC, but also it is timedependent. Next, we performed transmigration assays to test whether Cn and HIV-1 gp41-I90 could induce monocytes transmigration across the BBB in vitro at a manner similar to adhesion. As we expected, Cn and HIV-1 gp41-I90 could also induce monocytes transmigration across the BBB in vitro in a dose-and time-dependent manner ( Fig. 2a-d). 
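The rates reported here follow the bookkeeping described in the Methods above: THP-1 recovered from the lower chamber are counted and expressed as a percentage of the 1 × 10 6 cells loaded into the upper chamber, and groups are compared by one-way ANOVA. A minimal sketch of that calculation, using hypothetical counts rather than the study's data, is given below.

```python
# Hypothetical counts only (not the study's data): computing the transmigration
# rate (migrated THP-1 as a percentage of the cells loaded) and comparing the
# treatment groups by one-way ANOVA.
import numpy as np
from scipy import stats

THP1_LOADED = 1_000_000  # THP-1 cells added to the upper chamber of each Transwell

# Assumed migrated-cell counts from the lower chamber (triplicate wells per group).
migrated = {
    "PBS":           [52_000, 61_000, 57_000],
    "Cn":            [148_000, 171_000, 160_000],
    "gp41-I90":      [175_000, 190_000, 182_000],
    "Cn + gp41-I90": [300_000, 325_000, 312_000],
}

# Transmigration rate (%) per well, then mean +/- SD per group.
rates = {g: 100.0 * np.asarray(c) / THP1_LOADED for g, c in migrated.items()}
for group, r in rates.items():
    print(f"{group:>14}: {r.mean():.1f} +/- {r.std(ddof=1):.1f} %")

# One-way ANOVA across the four groups (P < 0.05 considered significant).
f_stat, p_value = stats.f_oneway(*rates.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4g}")
```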
In order to exclude the possibility that the increased transendothelial migration by Cn or HIV-1 gp41-I90 was due to disruption of the BBB, the integrity of the monolayer was inspected by determining the TEER across the monolayer. As shown in Fig. 5c, the TEER only declined to <8 % of the starting value after incubation with indicated doses of Cn and HIV-1 gp41-I90 or joint use of them. These results suggest that Cn and gp41-I90 could induce monocyte adhesion to and transmigration across the HBMEC monolayers. HIV-1 gp41-I90 and Cn synergistically enhance the adhesion and transmigration activity of monocytes In this assay, the adhesion rate of THP-1 was measured in four groups: PBS, Cn, HIV-1 gp41-I90 and joint use of Cn and HIV-1 gp41-I90 (Fig. 3a). Compared with the control group, all other groups showed significant increase in the adhesion rates, among which the HIV-1 gp41-I90 + Cn group was the highest. For the time-course study of THP-1 transmigration, as shown in Fig. 3b, when the incubation time was increased to 24 h, the transmigration rate of HIV-1 gp41-I90 + Cn group increased to 32 % compared to the Cn group (19.2 %) and HIV-1 gp41-I90 group (21.8 %). Therefore, we concluded that the co-exposure of HIV-1 gp41-I90 and Cn in HBMEC has a significant time effect in transmigration of THP-1 cells. Moreover, the coexposure group showed a higher rate initially, and the pro-migration effect was more durable as well. Determination of a synergistic effect of Cn and HIV-1 gp41-I90 combination was performed according to the median effect principle using the CalcuSyn Software (Biosoft) as described previously [52]. The CI values for the combination treatment of Cn and HIV-1 gp41-I90 were less than 1, suggesting that the combination is highly synergistic. These results suggested that HIV-1 gp41-I90 and Cn was able to synergistically enhance the adhesion and transmigration activity of monocytes. Specificity of synergistically enhanced transmigration activity of monocyte by Cn and HIV-1 gp41 In Fig. 2, we showed evidence of a dose-and timedependent increase in monocyte transmigration activity following BMEC treatment with Cn and HIV-1 gp41. However, it is not clear whether these enhancing effects are specific to Cn and gp41. In order to further investigate this issue, heat-inactivated Cn, HIV Tat and p24 were used in transmigration assays. Briefly, HBMECs cultured in trans-well filters were treated with either PBS (control), Cn (1 × 10 6 CFU/ml), H-Cn (1 × 10 6 CFU/ml), HIV-1 gp41 (0.2 μM), HIV Tat (0.2 μM) or HIV p24 (0.2 μM) for 6 h. THP-1 transmigration assays were performed as described as Methods section. Like Cn and HIV-1 gp41, as shown in Fig. 4a, H-Cn and HIV Tat could also increase monocytes transmigration across BBB. Among these stimulations, the HIV Tat molecule contributes a higher enhancement of monocytes transmigration across to BBB. Next, we performed transmigration assays again to further examine whether H-Cn and HIV-1 gp41 or Cn and HIV Tat could also synergistically enhance the transmigration Fig. 4a-c, none of them could synergistically enhance the transmigration activity of monocyte. The synergistic effect was determined using the CalcuSyn Software as described above. The enhancement of Cn and HIV-1 gp41-I90 in transmigration of monocytes across the BBB is closely related to CD44 The HIV-1 envelope glycoprotein gp41 could up-regulate CD44 in AIDS patients with CM, which ultimately enhances the adhesion and invasion of Cn to BMECs [18,53]. 
In order to examine whether Cn and HIV-1 gp41-I90 enhance the transmigration of monocytes across the BBB is mediated by CD44, two different blockage approaches, genetic knockdown (KD-CD44 HBMEC) and chemical inhibition (CD44 inhibitor Bikunin) were used. KD-CD44 HBMEC was generated by the CRISPR Cas 9 genome editing technique, which is an effective way to down-regulate expression of protein in a broad variety of mammalian cells [54,55]. The down-regulating effect of Cas9 was measured at the protein level by Western blotting, approximately 77 % knock-down was achieved (Fig. 5a, b). In order to ensure that the barrier remains intact in the absence of CD44, the integrity of the barrier of KD-CD44 HBMEC was evaluated by TEER. As shown in Fig. 5c, stimulation of Cn and HIV-1 gp41-I90 alone or together has no significant effect on integrity of the barrier. Furthermore, we also performed Western blotting to examine the effect of down-regulated CD44 expression on tight junction protein ZO-1. As shown in Fig. 5d, the absence of CD44 has no effect on ZO-1 expression in HBMEC. THP-1 transmigration assays were performed with HBMEC, SC (scramble control) HBMEC and KD-CD44 HBMEC as described as Method section. As shown in Fig. 5e, significant reduction of THP-1 transmigration was observed in the KD-CD44 HBMEC groups. Bikunin is a serine protease inhibitor, which was confirmed to have an inhibitory effect on CD44 [56,57]. As shown in Fig. 5f, when the dosage of Bikunin was raised to 1 nM, it showed a significant inhibition on the enhancement of monocytes transmigration rate in Cn infected-HBMEC. Comparing to the control group (17.4 %), the monocytes transmigration rates of Bikunin group was down to 11 and 7.8 %, respectively with dosage 5 nM and 20 nM (Fig. 5f). Similar, Bikunin could also remarkably block enhancement of HIV-1 gp41-I90 in transmigration of monocytes across BBB. Hence, we concluded that, HIV-1 gp41-I90 and Cn enhance the monocyte transmigration across BBB is mediated by CD44. After demonstrating the effect of Cn and HIV-1 gp41 in monocytes transmigration across BBB in vitro, we focused on the role of Cn and HIV-1 gp41 in up-regulation of endothelial adhesion molecules that might be involved in monocytes transmigration. Surface expression of endothelial adhesion molecules was studied by using two approaches, western blotting and whole-cell BA-ELISA. As shown in Fig. 6a, b, following 6 h exposure of HBMEC to Cn and HIV-1 gp41-I90, significantly increase in surface expression of ICAM-1 and CD44 was observed. The data of BA-ELISAs were similar to Western blotting, as shown in Fig. 6d, e, the highest expression level of ICAM-1 and CD44 were observed in HBMEC treated with Cn in combination with HIV-1 gp41-I90. Interestingly, as shown in Fig. 6c, expression of CD44 in KD-CD44 HBMEC was up-regulated significantly following co-exposure to Cn and HIV-1 gp41-I90, although there was a slight increase in CD44 upon treatment of Cn or HIV-1 gp41-I90 alone. The threshold of induced monocytes transmigration and up-regulated CD44 expression by HIV-1 gp41 In the process of studying the effect of HIV-1 gp41-I90 on the transmigration of monocytes across the BBB in vitro, we were able to observe there is a limitation in the induced monocytes transmigration across BBB by HIV-1 gp41-I90. As shown in Fig. 7a, there was very little increase in transmigration rate 26.24 to 26.81 %, when the concentration of HIV-1 gp41-I90 was increased from 20 to 25 μM. 
This indicates a saturation level of monocytes transmigration, when the HIV-1 gp41-I90 concentration is approaching 25 μM. We further performed a BA-ELISA to confirm the biological relevance of this finding. HBMECs were treated with different dose of HIV-1 gp41-I90 (from 2 to 25 μM), the whole-cell BA-ELISA were performed as described Methods section to assess the expression of CD44 on the surface of HBMEC. As we expected, there is also a limitation of CD44 expression when the concentration of HIV-1 gp41-I90 was increased from 20 to 25 μM (Fig. 7b). Taken together, the results clearly demonstrate that there is a threshold in the enhancement of monocytes transmigration and over-expression of CD44 induced by HIV-1 gp41-190. Redistribution of CD44 to membrane rafts of HBMEC during Cn and HIV-1 gp41-I90 exposure Adhesion molecules recruited to specialized microdomains of lipid rafts is important to regulate intracellular signaling and leukocyte transendothelial migration [58]. Thus, we tested whether Cn and/or gp41-190 could induce CD44 redistribution to the membrane lipid rafts of HBMEC. As CD44 could be a membrane receptor on HBMEC, we used density gradient centrifugation to fractionate membrane rafts. HBMECs treated by Cn and/or Down-regulating effect of the CRISPR Cas9 system was measured by Western blotting. β-actin was used as a loading control for each sample. Results showed a significant decrease (77 %) in the CD44/β-actin optical density ratio (***P <0.001). c Effect of Cn and HIV-1 gp41 on the BBB permeability, as evaluated by TEER. HBMEC, SC HBMEC or KD-CD44 HBMEC cultures were treated with Cn (2 × 10 7 CFU/ml), HIV-1 gp41 (20 μm) alone or Cn in combination with HIV-1 gp41 for 6 h. Either Cn or HIV-1 gp41 alone or joint use of them had no significant effect on the permeability of HBMEC, SC HBMEC and KD-CD44. d Effect of down-regulated CD44 expression on ZO-1 in HBMEC. e Transmigration assays were performed with HBMEC, SC HBMEC and KD-CD44, a significant suppression of transmigration was observed in the KD-CD44 group. f Bikunin inhibits Cn-and HIV-1 gp41-I90-induced THP-1 transmigration across HBMEC in a dose-dependent manner. An uninfected BMEC as a negative control was designed in the assay. Results are expressed as the mean and standard deviation of quadruplicate assays. (*P < 0.05, **P <0.01, ***P < 0.001) HIV-1 gp-41 were lysed in a buffer containing 1 % Triton X-100. The fractionation was performed in OptiPrep TM density gradient centrifugation. After centrifugation, detergent-insoluble membrane lipid raft fractions floated to the interphase between 0 % and 20 % OptiPrep TM layers, peaking at fraction 2 in our study (asterisk in Fig. 8). The loading buffer floated to the top (fraction 1), but soluble proteins or cytoskeleton associated, detergent-insoluble proteins remained in the bottom fractions of the gradient (fractions 3-9). Protein blotting of each fraction was used to examine the distribution of protein components from HBMEC extracts of the Cn and/or gp41-treated samples. As shown in Fig. 8, in untreated HBMEC, CD44 was primarily associated with soluble fractions (fractions 6-9). For the cells treated with either Cn and/or HIV-1 gp41-I90, a significant portion of CD44 had apparently relocated to the membrane rafts as observed in the fraction 2. 
The result suggested that there was a reorganization of membrane rafts taking place during exposure to Cn and/or HIV-1 gp41-I90, and CD44 became enriched in these membrane rafts on the surface of HBMEC, which facilitates the monocytes transmigrate across the BBB. 6 Cn and HIV-1 gp41 enhanced expression of CD44 and ICAM-1 on the HBMEC. Western blotting analyses were performed to measure the up-regulating effect of CD44 (a, c) and ICAM-1 (b) on HBMEC or KD-CD44 HBMEC following exposure to Cn and/or HIV-1 gp41. The β-actin was used as a loading control for each sample. Results showed a significant increase in the CD44/β-actin optical density ratio (P < 0.01compared with control). Expression of ICAM-1 (d) and CD44 (e) was analyzed by BA-ELISAs. Assays were performed in triplicates. Results were expressed as an n-fold increase of protein expression, taking the control as 1. The significant differences between the treatment and control were marked with asterisks (*P < 0.05, **P < 0.01, ***P < 0.001) HIV-1 gp41-I90 increased Cn-induced monocyte transmigration, the BBB permeability and injury in vivo To further validate the biological relevance of the in vitro assays, the role of HIV-1 gp41-I90 in the monocyte transmigration across the BBB induced by Cn was tested in the mouse model, as described in Methods section. Animals of the same age were injected with Cn (10 6 cells) or HIV-1 gp41-I90 (10 μg/g mouse weight) alone or Cn in combination with HIV-1 gp41-I90. Three indexes, monocytes transmigration, EB concentration in brain tissue and number of cBMEC in blood were used to evaluate the pathogenicities of CM. As shown in Fig. 9a-c, all above indexes shown highest mean in the mice injected with Cn in combination with HIV-1 gp41-I90. These results show that HIV-1 gp41-I90 and Cn could synergistically facilitate the monocyte transmigration, increase the BBB permeability (increased EB concentration in brain) and injury (increased cBMEC in blood). Since most monocytes are recruited into brain parenchyma adjacent to blood vessels during the cryptococcal meningitis, next, we examined the effect of Cn and/or HIV-1 gp41-I90 on recruitment of monocytes into the brain parenchyma of mice. C57BL/6 mice were intravenously injected with Cn and/or HIV-1 gp41-I90 via the tail vein. After 24 h injection, mice were anaesthetized with ketamine and lidocaine, and the brains were removed and fixed in 4 % neutralbuffered formalin. Immunohistochemistry analysis was performed as described as Methods section. As shown in Fig. 10, expose to Cn and/or HIV-1 gp41-I90 was able to significantly increase monocytes transmigration across the BBB. These data suggested that Fig. 8 Redistribution of CD44 to membrane rafts during HBMEC exposure to Cn and/or HIV-1 gp41-190. HBMECs were treated with either PBS or Cn, or HIV gp41-I90 or both of them for 6 h. The cells were then lysed in buffer containing 1 % Triton X-100 on ice. Fractionation was performed in OptiPrep TM gradients, and nine fractions were collected. The lipid raft fractions are indicated by * in fraction 2. An equal volume of each sample was analysed by dot blots using antibodies against CD44. Redistribution of CD44 on the membrane rafts was observed in Cn and/or HIV-1 gp41-I90 treated HBMECs Fig. 9 Effects of HIV-1 gp41-I90 on Cn-increased monocyte transmigration, the BBB permeability and injury. a CSF concentration of monocytes in mice treated with 10 6 CFU/ml Cn, 10 μg/g HIV-1 gp41-I90 or both of them. 
b Concentration of EB in the brain of mice treated with 10 6 CFU/ml Cn, 10 μg/g HIV-1 gp41-I90 or both of them. c Peripheral blood concentration of cBMEC in mice treated with 10 6 CFU/ml Cn, 10 μg/g HIV-1 gp41-I90 or both of them. Mice were divided into 4 groups (6 mice/group). Each experiment was performed three times. *P < 0.05, **P < 0.01, ***P < 0.001. Exposure to HIV-1 gp41-I90 thus increased Cn-induced monocyte recruitment into the CNS. Discussion Cn is an opportunistic pathogen, which causes fatal meningoencephalitis, especially in AIDS patients. In order to cause meningoencephalitis, Cn must cross the BBB. A great deal of evidence supports the existence of the Trojan horse model of BBB transmigration by Cn: (1) Cn can survive in phagocytic cells via active phagosomal extrusion and spread to the phagocytes [59,60]; (2) the incidence rate of fungemia and meningoencephalitis is higher in HIV-1-infected patients than in HIV-1-negative patients because HIV-1 can cause severe monocyte dysfunction in the host [61][62][63]; (3) Cn was carried and transported by circulating phagocytes in a murine model of cryptococcosis in a previous study by Chrétien F. et al. [64]; and (4) Cn is a facultative intracellular pathogen and has been shown to survive and multiply inside phagocytes in vitro [65]. Previous research had shown that HIV-1 infection is able to increase the capacity of monocytes to migrate across the BBB [36]. In the present study, we have provided evidence that Cn and/or HIV-1 gp41-I90 is able to enhance the transmigration activity of monocytes across the BBB, using in vitro and in vivo BBB models [66]. Importantly, we found that HIV-1 gp41-I90 was able to synergistically enhance the transmigration activity of monocytes in HBMEC infected with Cn and in mice with Cn-caused meningoencephalitis. Thus, we have demonstrated for the first time the relationship between HIV-1, Cn and monocytes, which points out a new potential mechanism of invasion of this pathogenic fungus into the brain tissues of HIV-1-infected patients. Initially, we demonstrated that the transmigration of monocytes across the BBB in vitro could be synergistically enhanced by the HIV-1 gp41 protein and Cn. The specificity of the synergistic effect was further confirmed by transmigration assays. Two experiments were designed. In the first experiment, we used H-Cn to examine whether H-Cn and HIV-1 gp41 could synergistically enhance the transmigration ability of monocytes. Our results have shown that there is no synergistic effect on the transmigration of monocytes with a combination of H-Cn and gp41. Interestingly, we found that H-Cn could also increase monocyte transmigration ability. In the second experiment, HIV Tat and p24 proteins were used. HIV Tat is a regulatory protein that enhances viral transcription and replication and plays a multifaceted role in the pathogenesis of HIV infection, including favouring viral infection, contributing to inflammatory responses and inducing monocyte invasion into the brain [67][68][69][70]. Notwithstanding, we found there is no synergistic effect on the enhancement of monocyte transmigration upon treatment with a combination of Cn and the HIV-1 Tat protein. Similarly, HIV p24, which is a component of the HIV particle capsid, had no synergistic effect on the Cn-mediated enhancement of monocyte transmigration. (Fig. 10: Recruitment of monocytes into the brain parenchyma of mice treated with Cn and/or HIV-1 gp41-I90. C57BL/6 mice were intravenously injected with Cn and/or HIV-1 gp41-I90 via the tail vein. After 24 h of injection, mice were anaesthetized with ketamine and lidocaine, and the brains were removed and fixed in 4 % neutral buffered formalin. Immunohistochemistry analysis was performed as described in the Methods section. a Normal brain. b Brain of mice infected with Cn. c Brain of mice treated with HIV-1 gp41-I90. d Brain of mice treated with Cn + HIV-1 gp41-I90. Arrows indicate infiltrating monocytes. Images are at 400× magnification.) Taken together, these results suggest that the synergistic enhancement by the HIV-1 gp41 protein of monocyte transmigration across the Cn-infected BBB is viral factor-dependent. This is most likely due to the fact that both HIV-1 gp41 and Cn may elicit a similar signal, such as up-regulating CD44 and ICAM-1 expression (Fig. 6), activating membrane lipid rafts (Fig. 8) and NF-κB [44], to facilitate the transmigration of monocytes. Thus, we speculate that the ectodomain of HIV-1 gp41 may play a role as a trans-predilection factor for cryptococcal CNS invasion, suggesting that HIV-1 fusion inhibitors targeting gp41, such as T20 and C34, may be helpful in the prevention and treatment of cryptococcal meningitis in HIV/AIDS patients. CD44 is a well-known type I transmembrane glycoprotein and functions as the major hyaluronan receptor; it is widely distributed in a variety of endothelial cells, mesenchymal cells, hematopoietic stem cells and mesodermal cells and tissues. Although alternative splicing can produce a large number of different isoforms, they all retain the hyaluronan-binding link-homology region and a common transmembrane and cytoplasmic domain [19]. Recent studies have demonstrated that the gene encoding capsule hyaluronic acid synthase is a key virulence gene of Cn. The transmigration of Cn across the BBB relies on HA binding to the BMEC receptor CD44, which activates host signalling pathways to induce the cytoskeleton rearrangement required for Cn invasion [71,72]. In the present study, we used the CRISPR-Cas9 system and a CD44 inhibitor to examine whether the enhancement by Cn and HIV-1 gp41-I90 of the transmigration of monocytes across the BBB is related to CD44. Indeed, our results revealed that CD44 was involved in the enhancement of monocyte transmigration across the BBB by Cn and HIV-1 gp41. Besides the effect of inducing monocyte transmigration across the BBB in vitro, in the present study we also found that Cn and/or HIV-1 gp41 could enhance CD44 redistribution to the membrane lipid rafts and up-regulate the expression levels of ICAM-1 and CD44, two major endothelial adhesion molecules long known for their importance in facilitating leukocyte transmigration. These findings indicate that Cn and HIV-1 gp41 induced the migration of monocytes across BMEC in coordination with the up-regulation of ICAM-1 and CD44. Hence, we conclude that HBMEC co-exposed to Cn and HIV-1 gp41 exhibit re-distribution of CD44 and over-expression of CD44 and ICAM-1, which leads to enhancement of the adhesion and transmigration rates of monocytes and facilitates cerebral invasion by Cn. During the process of studying the effect of HIV-1 gp41-I90 on the transmigration of monocytes across the BBB, we found that the facilitation of monocyte transmigration induced by HIV-1 gp41-I90 is dose-dependent. When the concentration of HIV-1 gp41 was raised to a certain level, the facilitation became subdued, which suggests that there is a threshold in the over-expression of CD44 induced by HIV-1 gp41-I90.
In order to test the above assumption, different doses of HIV-1 gp41 (2-25 μM) was added to the HBMEC monolayers to observe the transmigration activities of monocyte. These results showed that the facilitation induced by HIV-1 gp41-I90 was significantly saturated with the higher concentrations of the recombinant protein (Fig. 7a). Furthermore, we performed BA-ELISAs to examine whether the over-expression of CD44 induced by HIV-1 gp41-190 is also dose-dependent. As we expected, the expression level of CD44 on HBMEC could became saturated when the concentration of HIV-1 gp41-I90 was increased from 20-25 μM (Fig. 7b). These results have profound clinical significance in antiretroviral therapies for HIV-associated Cryptococoal meningoencephalitis, as it suggests that adherence to antiretroviral therapies may minimize the risk of Cryptococoal neurologic disease. Conclusions In conclusion, HIV-1 gp41-I90 and Cn is able to promote the adhesion and transmigration activities of monocyte, and the co-exposure of HIV-1 gp41-I90 and Cn further accelerate the adhesion and transmigration activities of monocyte. This may result in a deteriorating cryptococcosis in the infected host. The details for how the HIV-1 enhances cryptococcal invasion into the human brain remain unclear. However, our studies provide the enlightenments to establish the exact mechanism of inflammatory responses induced by the HIV-1 gp41-I90 ectodomain often co-morbid with Cn that lead to HIV-1-associated CM, and provide a theoretical basis for new ways to effectively combat opportunistic infections of the central nervous system in AIDS patients.
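The synergy statements in this work rest on the combination index (CI) computed with CalcuSyn, where CI < 1 indicates synergy. The sketch below illustrates the underlying median-effect calculation with made-up dose-effect numbers; it is not the CalcuSyn implementation, and the fitted parameters and combination point are purely illustrative.

```python
# Rough sketch of the median-effect combination-index (CI) calculation behind the
# synergy calls (CI < 1 = synergy). Not the CalcuSyn implementation; all
# dose-effect numbers below are invented for illustration.
import numpy as np
from scipy import stats

def median_effect_fit(doses, fa):
    """Fit fa/(1 - fa) = (D/Dm)^m on a log-log scale; return (m, Dm)."""
    x = np.log10(np.asarray(doses, dtype=float))
    y = np.log10(np.asarray(fa, dtype=float) / (1.0 - np.asarray(fa, dtype=float)))
    m, intercept, *_ = stats.linregress(x, y)
    return m, 10.0 ** (-intercept / m)

def dose_for_effect(fa, m, dm):
    """Dose of a single agent needed to reach the effect level fa."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

# Hypothetical single-agent dose-effect data (effect = fraction of maximal response).
cn_doses, cn_fa = [1e6, 5e6, 1e7, 2e7], [0.20, 0.45, 0.60, 0.75]   # CFU/ml
gp_doses, gp_fa = [0.2, 2.0, 10.0, 20.0], [0.25, 0.50, 0.65, 0.80] # uM

m1, dm1 = median_effect_fit(cn_doses, cn_fa)
m2, dm2 = median_effect_fit(gp_doses, gp_fa)

# Hypothetical combination point: Cn 5e6 CFU/ml + gp41-I90 2 uM giving fa = 0.70.
fa_combo, d1, d2 = 0.70, 5e6, 2.0
ci = d1 / dose_for_effect(fa_combo, m1, dm1) + d2 / dose_for_effect(fa_combo, m2, dm2)
print(f"Combination index CI = {ci:.2f} ({'synergy' if ci < 1 else 'no synergy'})")
```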
v3-fos-license
2021-07-27T00:05:07.840Z
2021-05-31T00:00:00.000
236414604
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1049/pel2.12151", "pdf_hash": "1f7f82975311fc34acedc79814e0fc80bd7e61e1", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2756", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "sha1": "72edc613d3720b0bba9b4211e2ba782a5f0b5fda", "year": 2021 }
pes2o/s2orc
A universal automatic and self-powered gate driver power supply for normally-ON SiC JFETs Normally-ON silicon carbide junction-field-effect transistors have a simple design and exhibit advantageous performance in terms of losses, elevated junction temperatures and high switching frequencies. However, under a loss of power to their gate, normally-ON junction-field-effect transistors are subject to a shoot-through situation, which might be severe for their survivability. This paper presents a universal concept for an automatic and self-powered gate driver power supply circuit for normally-ON silicon carbide junction-field-effect transistors employed in high input-impedance circuits. The power to the gate is supplied during start-up and steady-state operations through a mutually coupled inductor with the high input impedance inductor and by employing a typical low-voltage, power supply circuit. The performance of the proposed automatic and self-powered gate driver was evaluated on a DC/DC boost converter rated at 6 kW, as well as in a low-voltage solid-state DC circuit breaker. From experiments it is shown that using the proposed circuit, the start-up process requires approximately 350 𝜇 s, while the steady-state switching process of the junction-field-effect transistor during steady-state is also shown. Using the proposed circuit in a low-voltage solid-state DC breaker, a fault current of 68 A is cleared within 155 𝜇 s. INTRODUCTION Silicon Carbide SiC power switching devices exhibit lower power losses, enable utilisation of high switching frequencies and can operate at higher temperatures (> 200 • C) compared to state-of-the-art silicon (Si) counterparts [1][2][3][4][5][6][7][8][9][10][11][12][13]. Today, SiC power metal-oxide-semiconductor field-effect transistors (MOSFETs) [14][15][16] and the SiC junction-field-effect transistors (JFETs) [17][18][19][20][21] are available with voltage ratings in the range of 650-1700 V. SiC JFETs can be designed as either normally-OFF or normally-ON switches. From a converter performance point-of-view, the normally-ON SiC JFET exhibits a lower specific on-state resistance that results in lower conduction losses and a significantly higher saturation current [10,22]. Moreover, normally-ON JFETs have a lower temperature coefficient compared to normally-OFF counterparts. From the driving perspective, normally-OFF SiC JFETs require a significant gate current for if on-state losses need to be optimised. On the other hand, normally-ON JFET has a voltage-controlled gate. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. Applications such as current-source and impedance-source inverters [23][24][25], high-power modular multilevel converters [26] and DC circuit breakers can benefit from using normally-ON SiC JFETs [27]. However, the normally-ON characteristics of SiC JFETs impose severe driving challenges when employed in power converters. In particular, the greatest challenge of driving normally-ON SiC JFETs is associated with ensuring a safe turn-OFF of the device not only during switching at steady-state operation, but also at the start-up process [28]. Thus, a sufficiently negative voltage (more negative than the pinch-off voltage, V pi ) must always be present and supplied to the gate driver. 
Various circuit concepts to deal with the "Normally-ON problem" have been developed and applied either at gate-driver [29][30][31] or converter level [32][33][34]. However, these concepts require an external power supply for energising the various circuit components. The need for external power supplies has been eliminated in the self-powered gate driver for normally-ON SiC JFETs shown in [28]. This circuit concept is able to energise the gate-drive circuit without the need of an external power supply, both at start-up phase of the converter and at steady-state operation. Nevertheless, this concept can only be activated in low-input impedance circuits. This is due to the fact that the self-powered gate driver utilises the energy from the shoot-through current during the start-up phase. If the input impedance is high, the anticipated energy of the shoot-through current might not be sufficient to properly activate the circuit. For steady-state operation, the required power to the gate driver is supplied by the blocking voltage of the JFET via a low-power DC/DC forward converter. A similar circuit concept for normally-ON SiC JFETs has been proposed in [35], but it also needs an external power supply for steady-state operation. Normally-ON SiC JFET can also be perfect device candidates for solid-state DC breakers compared to other SiC active devices. One main reason for this is their lower expected temperature rise compared to SiC MOSFETs under a short-circuit condition [36]. In addition to this, the normally-ON nature of JFETs eliminates the need for continuously supplying a positive gate voltage (i.e. as in MOSFETs) or a substantially high base current (i.e. as in SiC bipolar junction transistors (BJTs)) during current conduction [13]. In particular, normally-ON SiC JFETs can conduct the load current without biasing the gate-source junction, unless further reduction of conduction losses is targeted by applying a low positive voltage. The absence of gate-oxide layer in normally-ON SiC JFET also induces advantages compared to SiC MOSFETs under repetitive short-circuits when operating in solid-state DC breakers [37]. Under a fault condition, the voltage drop across the normally-ON SiC JFET can be utilised and converted to a sufficiently negative gate voltage in order to turn-OFF the device [27]. However, this type of self-triggered JFET-based solid-state DC breaker requires a complicated circuitry and a sufficiently high voltage drop for proper activation, which might cause excessive heat dissipation in the JFET die. Similar circuit concepts have also been presented for bipolar-injection field effect-transistors (BIFETs) [38], which also exhibit similar disadvantages. This paper presents a generic circuit concept of a universal automatic and self-powered (UASP) gate driver supply for normally-ON SiC JFETs employed in high-input-impedance circuits. The proposed circuit concept utilises the voltage drop across the high input-impedance and supply a sufficiently negative voltage to control the gate. By using the proposed solution, the need for connecting additional circuit components across the SiC JFET is eliminated. The performance and applicability of the proposed automatic supply concept is demonstrated in a switch-mode power converter during start-up and steady-state operations, as well as in a low-voltage solid-state DC breaker employing normally-ON SiC JFETs. The paper is organised as follows. 
Section 2 shows the operating principle of the proposed circuit, while the design process and dimensioning of the circuit are presented in Section 3. The experimental investigation of the UASP circuit operated in a DC/DC boost converter is found in Section 4. Section 5 presents both simulation and experimental results of the UASP circuit employed in a low-voltage solid-state DC breaker. Last but not least, conclusions are given in Section 6.

OPERATING PRINCIPLE

The operating principle of the proposed circuit is based on utilising the voltage drop across the high input impedance and converting it to a sufficiently negative voltage suitable to control the gate. Figure 1 shows a block diagram of the proposed UASP circuit concept operating in a switch-mode converter, where the high input impedance, L1, is also depicted. In addition to this, the proposed circuit can also be employed to control the gate voltage in solid-state DC breakers utilising normally-ON SiC JFETs, as shown in Figure 2. For the latter case, the high impedance is basically the current-limiting inductor that is needed to limit the rate of rise of the fault current, as well as the peak value of the fault current. In both applications, a second inductor, L2, is magnetically coupled with L1 using a magnetic core and, by employing proper circuitry in the UASP circuit (e.g. a positive and a negative voltage regulator), the desired floating voltages can be generated and supplied to the gate.

UASP in switch-mode converters

In order to present the operating principle of the proposed UASP gate driver in a switch-mode converter, a non-isolated DC/DC boost converter is considered (Figure 3). L1 operates as the main inductor of the DC/DC converter and L2 is the auxiliary inductor, which is directly connected to a low-voltage diode rectifier, D_b. The direct output voltage of D_b, u_CDC, is fed to a pair of Zener diodes, D_Zp and D_Zn, that supply the positive and negative voltage to the gate. Any potentially high current through the Zener diodes is limited by a series-connected resistor, R_leak. This is basically one possible way to realise a power supply for the gate driver. A plethora of various low-voltage source concepts might also be employed in order to convert the voltage across L2 to suitable voltage levels for the gate driver. To complete the gate driver design, an optocoupler for signal isolation and a totem-pole integrated-circuit driver (IC-driver) are also employed. Last but not least, for controlling the switching speed of the JFET and preventing breakdown of the gate, a series-connected gate resistor R_g and a diode-resistor-capacitor (D_g R_p C_g) parallel network are connected at the output of the IC-driver [39]. A favourable characteristic of the proposed UASP circuit is the fact that the shoot-through current is limited only by means of L1, and thus the need for a start-up resistor to limit the shoot-through current is eliminated. However, the inductance of L1 must be properly chosen in order to set the peak value of the shoot-through current in such a way that it will neither thermally stress the SiC JFET nor saturate the channel of the JFET. Along with this, the range of the input voltages of the DC/DC boost converter must also be carefully set to ensure a successful start-up procedure.
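As a small numerical aside on the Zener-based supply just described, the sketch below sizes the series resistor that limits the Zener current once the rectified auxiliary voltage exceeds the Zener stack, and estimates its worst-case dissipation. Every number here is an assumption chosen only for illustration; it is not the authors' design.

```python
# Sketch: sizing the series resistor R_leak that limits the Zener current in
# the auxiliary supply. Illustrative values only; not the authors' design.

def r_leak_min(v_cdc, v_zener_stack, i_zener_max):
    """Smallest resistance that keeps the Zener current below i_zener_max."""
    return (v_cdc - v_zener_stack) / i_zener_max

def r_leak_power(v_cdc, v_zener_stack, r_leak):
    """Worst-case dissipation in R_leak."""
    i = (v_cdc - v_zener_stack) / r_leak
    return i * i * r_leak

if __name__ == "__main__":
    v_cdc = 50.0     # V, assumed rectified auxiliary voltage
    v_stack = 32.5   # V, e.g. a 30 V negative rail plus a 2.5 V positive rail
    r = r_leak_min(v_cdc, v_stack, i_zener_max=20e-3)
    print(f"R_leak >= {r:.0f} ohm, dissipation ~ {r_leak_power(v_cdc, v_stack, r):.2f} W")
```

In practice the resistor value would be revisited once the full range of rectified voltages over the input-voltage window is known.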
Additionally, in steady-state operation of the converter, the design of the inductance L1 must take into account the switching frequency of the SiC JFET, as well as the input voltage and the load current. Then, the inductance L2 will be defined accordingly, as mentioned above. It should be noted that a high switching frequency is anticipated due to the utilisation of a SiC power semiconductor device and, thus, L1 and L2 can be low, resulting in a high power density DC/DC converter design. However, at elevated frequencies, the power losses associated with the mutually coupled inductors, and more importantly the core losses, will also increase. In addition to this, operation at elevated switching frequencies imposes the need for more powerful gate drivers that are able to supply sufficient gate current peaks for fast switching and to also ensure a stable gate voltage supply.

When the start-up process is initiated, it is assumed that the input capacitor C_in is fully charged at the input voltage V_in. Moreover, it must be noted that the DC/DC converter is energised when the circuit breaker, CB, shown in Figure 4 closes, and thus C_in feeds the circuit. As soon as the CB is closed, and since there is no power to the gate, J_m is subjected to a short-circuit condition and the input voltage V_in appears across L1 and J_m. The derivative of the shoot-through current is determined by the values of L1 and V_in. A graphical representation of the current i_Jm flowing through J_m and the current i_L1 through L1 is given in the first and second waveforms in Figure 5. From this figure, it is clear that two current peaks are observed in both the i_Jm and i_L1 waveforms. The different time intervals which can be observed in Figure 5 (theoretical performance of the UASP circuit employed in the DC/DC boost converter) are analysed as follows.

• t0-t1: The first current peak in the interval t0-t1 is due to the charging current of the DC-link capacitor of the auxiliary circuit, C_DC (Figure 3). During this time interval, the voltage across L1, u_L1, is rising and has a maximum value of approximately V_in. In particular, the main part of V_in appears across L1, while the voltage across J_m is significantly lower due to its low ON-state resistance (fourth waveform in Figure 5). For simplicity, however, it is assumed that V_in completely appears across L1.
• t1-t2: When u_CDC reaches its steady-state value at t = t1, the current in the auxiliary winding L2 becomes zero. However, a low leakage current i_L2 still flows through L2 in order to compensate for the losses in the circuit (e.g. leakage currents in C_DC and the Zener diodes). The performance of u_CDC is shown in the bottom waveform in Figure 5.
• t2-t3: At the time instant t2, the negative supply voltage of the optocoupler and IC-driver, u_gs, is sufficiently low and, provided that the propagation of the pulse width modulation (PWM) signal to the optocoupler starts at t = t0, J_m is switching. However, J_m is operating in the active region until u_gs becomes more negative than the pinch-off voltage of J_m, V_pi. During this operating phase the drain-source voltage, u_ds, is switching between a high and a low positive voltage level and the SiC JFET might be overheated, unless a proper dimensioning of the UASP and the converter is made. As soon as u_gs exceeds V_pi at t = t3, the SiC JFET is switching normally in the saturation region.
Additionally, the voltage across the primary winding of the coupled inductors, u_L1, becomes negative and i_L1 starts decreasing. This is due to the fact that J_m is turned OFF (operation in the active region) and a high blocking voltage appears across the device. Furthermore, the output voltage of the DC/DC boost converter, V_out, equals the envelope of the switching waveform of u_ds. Thus, a negative voltage appears across L1. After t2, u_L1 starts to switch between a high negative and a low negative voltage level until u_gs < V_pi.

• t > t3: At t = t3, u_gs becomes lower than V_pi and hence the steady-state operation of the converter is reached. If the turns ratio between L1 and L2 is N1:N2, the value of u_L2 equals (N2/N1)·u_L1. This voltage is supplied to the single-phase diode rectifier of the auxiliary circuit and dictates the value of u_CDC. During steady-state operation of the DC/DC boost converter, the square-wave voltage of L1 is continuously transformed to u_L2 and energises the UASP.

UASP in low-voltage solid-state DC breakers

For presenting the operating principle of the UASP when it is employed in a solid-state DC circuit breaker with normally-ON SiC JFETs, the block diagram shown in Figure 6 will be considered. A vital component of a circuit breaker is the series-connected current-limiting inductor, L1, which limits the rate of rise, as well as the peak value, of the fault current. In addition to this, a metal-oxide varistor (MOV) is connected in parallel to the JFET for preventing destructive overvoltage conditions and breakdown. In order to ensure galvanic isolation in the fault line, a residual mechanical switch is also connected in series, which is able to open when the fault current has been cleared by the SiC JFET. Prior to the activation of the UASP, it is assumed that the solid-state circuit breaker conducts the direct line current, i_L1 = I_nom, which flows through L1 and the normally-ON SiC JFET and is supplied to the load. When a fault occurs (Figure 6), the line current increases rapidly because the voltage across L1 equals the direct voltage of the grid, V_DC. Similarly to the case of applying the UASP in a switch-mode converter, the voltage across L1 can be utilised by magnetically coupling a second inductor, L2. This inductor L2 feeds power to the UASP and, thus, the low-voltage and low-power circuit components contained in the UASP can be activated. However, in the case of a solid-state breaker, there is no need for switching operation and it is sufficient to only supply a negative gate-source voltage for turning OFF the SiC JFET. It should also be mentioned that a damping resistor, R_d, connected in series with the second winding of the coupled inductors must be considered in order to damp potential voltage oscillations between L2 and C_DC due to resonance. The expected theoretical performance of the UASP when employed in a JFET-based solid-state breaker is illustrated in Figure 7. The operation of the UASP at different stages during a short-circuit clearance can be seen in that figure and is analysed as follows.
• t0-t1: Prior to the time instant that the fault occurs (t < t1), a direct line current flows, the voltage across L1 is zero (i.e. the resistance of L1 is assumed to be negligible) and the UASP is inactive.
• t1-t2: The fault occurs at t = t1, and thus the line current starts rising with a slope determined by the values of V_DC and L1, as shown in the first plot in Figure 7. Beyond t1, the entire grid voltage V_DC appears across L1 (i.e. u_L1 = V_L1+ = V_DC) and a voltage u_L2 is also induced across L2, with a magnitude that is determined by the turns ratio N1:N2; in particular, u_L2 = (N2/N1)·V_DC. Considering the same implementation of the power supply shown in Figure 3, the induced voltage on L2 is rectified and appears across C_DC. As soon as u_CDC exceeds the sum of the breakdown voltages of the two Zener diodes, the negative and positive voltage supplies to the gate are regulated. However, by utilising the UASP in a JFET employed in a solid-state breaker, the need for a positive gate voltage supply could be omitted, unless conduction power losses are to be further reduced. On the other hand, if the optimisation of the conduction losses is of high design priority, an external positive voltage supply can be used.
• t2-t3: At t = t2, and after the voltage across C_DC has stabilised at V_CDC, u_gs becomes equal to the pinch-off voltage, V_pi, and therefore the voltage across the JFET, u_ds, starts rising, as illustrated in the bottom waveform (red line) in Figure 7.

The solid-state breaker employing a normally-ON SiC JFET can be designed to be either self-controlled or externally triggered. A self-controlled design means that the overall UASP design and dimensioning of the components are such that, when the fault current exceeds a predefined current threshold, the gate-source voltage becomes more negative than V_pi, and thus the JFET turns OFF. In the externally triggered design, a positive gate voltage can be supplied by an external voltage source, while the negative gate voltage can be generated by the UASP. However, in this case, an external signal to the optocoupler is needed for controlling the turn-OFF of the JFET. This is crucial when such a solid-state DC breaker operates in a multi-terminal grid, where selective protection might be required.

High-input-impedance converter case

During the start-up process in the high-input-impedance converter, the design of the UASP must be such that it will not cause an extensive discharge of C_in. This means that, along with the proper selection of L1, C_in must also be selected with respect to the allowed input voltage drop during the activation of the UASP. If V_in drops to very low values or to zero (full discharge of C_in), the voltage across L2 will also be either low or zero, and the UASP might not be activated. A generic schematic diagram showing the path of the shoot-through current in a converter is shown in Figure 8. In this figure, L_tot is the total inductance seen by the shoot-through current (i.e. the combination of the mutually coupled inductors L1 and L2). It is also assumed that the normally-ON SiC JFET J_m has an on-state resistance r_on. Based on Figure 8, Equation (1) gives the shoot-through current, i_st, as a function of the time t during the start-up phase. By solving this equation, the analytical expression for i_st can be derived. Thus, the energy released from C_in and dissipated in J_m can also be calculated using Equation (2). In this equation, I_st is the peak value of the shoot-through current and t_su is the time needed for J_m to start its switching process in a converter.
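As a purely illustrative companion to the start-up relations just discussed, the short sketch below steps the first-order R-L response of the shoot-through current, integrates the energy dissipated in the JFET channel up to an assumed turn-OFF instant, and then checks how far the input-capacitor voltage would sag. All component values are assumptions, the input voltage is treated as constant during the transient, and the dissipated energy is taken as the dominant draw from C_in.

```python
# Sketch of the start-up shoot-through transient in the high-input-impedance
# case: an R-L step response through the JFET on-resistance, numerically
# integrated to estimate the energy dumped in the die, followed by a check of
# the resulting input-capacitor voltage sag. All values are assumptions.
import math

def shoot_through_current(t, v_in, l_tot, r_on):
    """i_st(t) for a V_in step applied to L_tot in series with r_on."""
    tau = l_tot / r_on
    return (v_in / r_on) * (1.0 - math.exp(-t / tau))

def energy_in_jfet(v_in, l_tot, r_on, t_su, steps=20000):
    """Energy dissipated in the JFET channel up to the turn-OFF instant t_su."""
    dt = t_su / steps
    return sum(r_on * shoot_through_current(k * dt, v_in, l_tot, r_on) ** 2 * dt
               for k in range(steps))

def vin_after_startup(v_in, c_in, delta_e):
    """Input voltage left on C_in after delta_e has been drawn from it."""
    e0 = 0.5 * c_in * v_in ** 2
    return 0.0 if delta_e >= e0 else math.sqrt(2.0 * (e0 - delta_e) / c_in)

if __name__ == "__main__":
    v_in, l_tot, r_on = 50.0, 125e-6, 45e-3   # V, H, ohm (assumed)
    t_su, c_in = 350e-6, 2.2e-3               # s, F (assumed)
    i_peak = shoot_through_current(t_su, v_in, l_tot, r_on)
    e_jm = energy_in_jfet(v_in, l_tot, r_on, t_su)
    v_left = vin_after_startup(v_in, c_in, e_jm)
    print(f"I_st ~ {i_peak:.0f} A, E_Jm ~ {e_jm:.2f} J, V_in' ~ {v_left:.1f} V")
```

With these assumed values the voltage sag is small, which is exactly the situation the energy-release criterion discussed next is intended to guarantee.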
Equation (3) gives the energy, ΔE_in, released from the capacitor during the start-up phase. In this equation, V_in and V'_in are the voltages across C_in before the start-up process is initialised and when J_m is turned OFF, respectively. In case of dissipation of the entire energy stored in C_in, the voltage V'_in will drop to zero, which prevents the proper activation of the UASP. It is, therefore, necessary to set a criterion for the maximum allowed energy ΔE_in,allowed that can be released from C_in, as shown in Equation (4). This criterion dictates that ΔE_in,allowed must be significantly higher than the expected energy dissipation in the normally-ON SiC JFET. Thus, the anticipated voltage drop in C_in will also be kept low, which results in proper activation of the UASP. The criterion shown in Equation (4) can also be expressed in terms of the peak shoot-through current, I_st, as shown in Equation (5). In the calculation of I_st the required turn-OFF time, t_su, must also be taken into account. However, the various combinations of I_st and t_su are, to some extent, directly associated with the value of L_tot. On the other hand, t_su is also related to the activation time of the auxiliary gate driver power supply. In particular, a certain time is necessary in order for the gate driver to supply an adequately negative gate voltage which turns OFF J_m. This time is associated with the activation of the voltage regulators for V_p and V_n, the optocoupler and the IC-driver. If the input voltage V_in drops by more than the value set by the design limits, the voltage across the Zener diodes will not be adequately high to reverse-bias them. Consequently, the design of the coupled inductors L1 and L2 must also be done with respect to the range of the input voltage. The voltage at which C_DC will be stabilised, V_CDC, must be at least higher than the sum of the reverse breakdown voltages of the Zener diodes in order for them to operate as voltage regulators. During the start-up phase, V_CDC depends on the input voltage V_in and the turns ratio of the coupled inductors, N2/N1. On the other hand, in case the UASP is employed in a boost converter operating in CCM, during steady-state operation the voltage V_CDC depends on the input voltage V_in, the output voltage V_out and the duty ratio of the converter D, as expressed by Equation (6). The voltage given by Equation (6) must fulfil the criterion V_CDC ≥ V_n + V_p + V_Rleak (Equation (7)), where V_n and V_p are the absolute values of the negative and positive voltages supplied by the Zener regulators and V_Rleak is the voltage drop across R_leak.

Solid-state DC circuit breaker case

The design of the UASP for the fault-clearing process in a SiC-JFET-based breaker faces different challenges. In particular, the grid voltages are usually higher than the input voltage of a DC/DC boost converter and, hence, the discharge of C_in is not likely to occur under high input voltages V_DC. Therefore, the design of the proposed circuit will not take into consideration the input capacitance of the DC grid. On the other hand, the high grid voltage leads to the use of a different turns ratio of the coupled inductors compared to the design of the UASP for the case of a switch-mode converter. The voltage in the UASP circuitry, and particularly in C_DC, should be kept at much lower levels than the grid voltage, leading to the need for more turns in the primary inductor compared to the secondary side.
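The sketch below illustrates the breaker-side version of these supply checks: a step-down turns ratio is chosen so that the rectified C_DC voltage stays well below the grid voltage while still exceeding the Zener stack, and the series damping resistor is checked against the usual critical-damping bound for the series L2-C_DC loop. The grid voltage, turns ratio and component values are assumptions, and the critical-damping bound is a common textbook choice rather than the authors' exact criterion.

```python
# Sketch of the breaker-side auxiliary-supply checks: reflected C_DC voltage
# with a step-down turns ratio versus the Zener stack, plus a simple
# critical-damping bound for the series L2-C_DC loop. All values are
# assumptions for illustration, not the authors' design.
import math

def rectified_cdc_voltage(v_dc, n1, n2, diode_drop=1.4):
    """Approximate C_DC voltage while the full grid voltage sits across L1."""
    return (n2 / n1) * v_dc - diode_drop

def supply_sufficient(v_cdc, v_zener_neg, v_zener_pos, v_pinch_off):
    """The Zener pair regulates only if C_DC exceeds the stack; the negative
    rail must be more negative than the pinch-off voltage to ensure turn-OFF."""
    return v_cdc >= (v_zener_neg + v_zener_pos) and -v_zener_neg < v_pinch_off

def damping_resistor_ok(r_d, l2, c_dc):
    """At least critical damping of the series R-L-C loop: R_d >= 2*sqrt(L2/C_DC)."""
    return r_d >= 2.0 * math.sqrt(l2 / c_dc)

if __name__ == "__main__":
    v_dc, n1, n2 = 400.0, 10, 1          # V and turns (assumed step-down ratio)
    l2, c_dc, r_d = 2e-6, 0.47e-6, 10.0  # H, F, ohm (assumed)
    v_cdc = rectified_cdc_voltage(v_dc, n1, n2)
    print(f"u_CDC ~ {v_cdc:.1f} V, "
          f"supply ok: {supply_sufficient(v_cdc, 30.0, 2.5, -5.0)}, "
          f"damping ok: {damping_resistor_ok(r_d, l2, c_dc)}")
```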
In addition to that, the importance of the rate of rise of the short-circuit current, di_L1/dt, should be emphasised, since this may lead to high peak currents, which might heat up the JFET die excessively. The fault current rise in the circuit shown in Figure 6 is governed not only by L1, but also by the mutual inductance between L1 and L2, M = c·√(L1·L2), where c is the coupling coefficient of the coupled inductors. Additionally, as mentioned above, a damping resistor R_d must be considered. The possible oscillations between L2 and C_DC should be damped and, thus, a corresponding criterion must be set. Last but not least, the charging time of C_DC, t_ch, given approximately by 5·R_d·C_DC, should be set in such a way that the peak short-circuit current will be within an acceptable limit. The time t_ch indicates the start of the JFET turn-OFF and hence determines the peak short-circuit current. Therefore, a criterion relating the nominal line current, I_nom, and the maximum allowable fault current, I_SCmax, must also be set. It should also be mentioned that the capacitor voltage when the latter is fully charged, V_CDC, will be lower than (N2/N1)·V_DC, due to the voltage drop across the damping resistor R_d. At the same time, Equation (7) must hold true. Therefore, the choice of C_DC, R_d and L2, which sets N1/N2, is of great importance and these components must be defined precisely.

EXPERIMENTAL RESULTS FOR OPERATION IN A SWITCH-MODE POWER CONVERTER

The performance of the proposed UASP power supply for normally-ON SiC JFETs operating in switch-mode converters has been validated experimentally using a DC/DC boost converter rated at 6 kW. The lab prototype was designed using a 1200-V SiC JFET with an ON-state resistance of 45 mΩ at room temperature, a pinch-off voltage of −5 V and a chip area of approximately 9 mm². A photograph of the experimental DC/DC boost converter prototype employing the automatic start-up circuit is shown in Figure 9. In order to emulate the start-up process of the circuit, a circuit activation switch employing a silicon IGBT (IXYS IXA55I1200HJ) was connected between the pre-charged C_in and L1. However, in a realistic converter, the start-up switch might consist of a relay or a mechanical switch. In this paper, however, the main target is to demonstrate the operating principle of the proposed universal automatic and self-powered circuit, and thus the investigations are not expanded to the design and performance of the circuit activation switch. The design of the coupled inductors, L1 and L2, is crucial for the proper operation of the UASP circuit. Assuming the range of the input voltage to be 50-150 V, the steady-state peak-to-peak ripple of the inductor current to be kept lower than 8 A, and continuous conduction mode (CCM) for the converter, L1 was calculated to be 125 μH. Moreover, given that the voltages supplied by the Zener regulators equal V_n = −30 V and V_p = 2.5 V, and by taking into account a minimum input voltage of V_in,min = 50 V, the turns ratio must be equal to N1/N2 = 1:1. Thus, even if the lowest boundary of the input voltage (V_in,min = 50 V) is fed to the converter, the Zener voltage regulators will be activated properly. Table 1 shows the design parameters of the coupled inductors, which prevent magnetic saturation of L1.
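A quick numerical restatement of the two design checks just quoted is sketched below: the inductance implied by the ripple target at a few operating points, and the reflected auxiliary voltage at the minimum input voltage compared with the Zener stack. The switching frequency, output voltage and diode drop are assumptions; only the 50-150 V range, the 8 A ripple target, the 1:1 turns ratio and the ±30 V/2.5 V rails are taken from the text, so the computed inductance is only indicative and need not match the quoted 125 μH exactly.

```python
# Sketch of the two converter-prototype design checks: boost inductance from
# the ripple target, and the auxiliary voltage at minimum input versus the
# Zener stack. f_sw, V_out and the diode drop are assumptions.

def boost_inductance(v_in, v_out, f_sw, ripple_pp):
    """L1 for a peak-to-peak CCM ripple of ripple_pp at this operating point."""
    duty = 1.0 - v_in / v_out
    return v_in * duty / (f_sw * ripple_pp)

def aux_voltage_at_min_input(v_in_min, n1, n2, diode_drop=1.4):
    """Rectified auxiliary voltage while V_in_min appears across L1."""
    return (n2 / n1) * v_in_min - diode_drop

if __name__ == "__main__":
    f_sw, v_out = 50e3, 150.0                  # Hz and V, assumed
    for v_in in (50.0, 75.0, 100.0):
        L1 = boost_inductance(v_in, v_out, f_sw, ripple_pp=8.0)
        print(f"V_in = {v_in:.0f} V -> L1 >= {L1*1e6:.0f} uH")
    u_aux = aux_voltage_at_min_input(50.0, n1=1, n2=1)
    print(f"aux voltage at V_in,min ~ {u_aux:.1f} V vs Zener stack of 32.5 V")
```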
The parameters of the experimental setup are summarised in tabular form. This set of experiments was performed by setting the input voltage to V_in = 50 V. The complete start-up process of the DC/DC boost converter is shown in Figure 10. In this figure, the measured gate-source and drain-source voltages, as well as the drain current of J_m and the current flowing through L1, are illustrated (gate-source voltage of J_m: yellow line, 10 V/div; voltage across L1: purple line, 50 V/div; drain current I_Jm: green line, 100 A/div; inductor current I_L1: red line, 100 A/div; time base 200 μs/div). As expected, when the start-up process is initialised, the shoot-through current (either I_L1 or I_Jm) starts rising. The first current peak, due to the charging of C_DC, appears approximately 100 μs after the initialisation of the start-up process. After this, the shoot-through current continues rising until the auxiliary gate driver supply is activated. The term "activation" refers to the time point where the IC-driver is able to supply an adequately negative output voltage V_g, which is able to turn OFF the JFET. Considering that the PWM signal starts simultaneously with the activation of the converter, the switching process of J_m also starts as soon as the IC-driver is activated. This can be seen in Figure 10 approximately 350 μs after the initialisation of the start-up process. It must be noted that, in order to prevent large overvoltages at the output of the converter, the duty ratio is slowly increased from zero up to the steady-state value. Moreover, from Figure 10 it is clear that J_m is switching in the active region, because V_gs is lower than zero but less negative than the pinch-off voltage of the device (V_pi = −5 V). The switching operation in the active region can also be seen from the simultaneous stress of J_m with high values of blocking voltage (purple line in Figure 10) and current I_Jm (green line in Figure 10). The voltage across the inductor L1 is shown with the purple line in Figure 11. When the start-up process starts, this voltage is positive and causes a rising current that flows through L1 (Figure 11). When the PWM switching process starts, the voltage across L1 becomes negative and I_L1 starts to decrease. This is due to the fact that, when the switching operation starts, J_m conducts a high current and the device operates in the active region. Moreover, during the start-up process and before the switching process starts, the output voltage of the DC/DC boost converter, V_out, equals zero. As soon as V_ds starts rising, the output voltage of the DC/DC boost converter, V_out, also starts increasing and equals the envelope of the switching waveform of V_ds. Thus, V_L1 = V_in − V_out, which is a negative voltage, and I_L1 is decreasing. The steady-state operation of the DC/DC boost converter is reached a few milliseconds after the time instant at which the start-up process is initialised. This time interval depends on the values of the passive components of the power converter and the design of the UASP. A capture showing the normal switching operation of J_m at steady state is presented in Figure 12. From this figure, it is obvious that the converter is operating in CCM at a switching frequency of 50 kHz and a duty ratio slightly higher than 0.5.
Simulation results

The application and performance of the proposed UASP in a solid-state DC breaker employing a normally-ON SiC JFET have been investigated using simulations. For this purpose, a low-voltage DC breaker consisting of a 1200-V SiC JFET connected in a DC line has been modelled using LTspice. The SiC JFET is rated at 63 A and has an on-state resistance of 35 mΩ at room temperature. Since the focus of these investigations is to validate the operation of the UASP at device level, a Spice model of the device was employed. Table 3 summarises the design and modelling parameters for the breaker and the DC line. It is assumed that, during normal operation of the solid-state DC CB, the SiC JFET conducts a line current of 35 A, as shown in Figure 13 prior to t = 100 μs. At this time instant, t = 100 μs, a pole-to-pole fault occurs and, thus, the line current starts rising due to the positive voltage V_DC across L1. The induced voltage across L2 is rectified and the gate-source voltage, u_gs, starts to develop, as shown in Figure 13. The voltage across the JFET after the fault occurrence is clamped at the breakdown voltage of the MOV, which has been set to 900 V, as shown in Figure 13(b). As long as the residual energy from the line is dissipated in the MOV, the SiC JFET is blocking 900 V, whereas when the energy dissipation is complete, u_ds drops to the nominal grid voltage of V_DC. The performance of the various currents during a fault-clearing process is illustrated in Figure 14. From this plot, it is obvious that the line current i_L1 is equal to the sum of the JFET current i_d and the MOV current i_MOV. When the SiC JFET is turned OFF, the fault current commutates to the MOV, which dissipates the residual magnetic energy of the DC grid. The residual energy dissipation lasts for approximately 135 μs. In addition to that, the peak short-circuit current reaches approximately 68 A, which is within the limit set. A further observation relates to the choice of C_DC and its impact on the peak fault current. Figures 15 and 16 show the voltage across C_DC, the gate-source voltage and the anticipated line current for various values of C_DC. Two issues must be highlighted regarding these figures. Firstly, C_DC is charged to a higher voltage level when the capacitance is decreased, as illustrated in Figure 15(a). This holds true due to the shorter charging time t_ch at lower capacitances, along with the voltage drop across the damping resistor R_d. Secondly, the peak short-circuit current increases with increasing C_DC, as shown in Figure 16, because as the value of this capacitor becomes higher, a longer time interval is required for u_gs to reach V_pi and turn OFF J_m, as illustrated in Figure 15(b). High currents through the SiC JFET might result in extensive thermal stress and eventually thermal destruction of the device, unless its chip area is sufficiently large to withstand such high surge currents. On the contrary, a very low value of C_DC will trip the CB at very low values of fault current. This might cause breaker tripping under load variations, which is undesired in practical applications. Furthermore, the importance of the design of the secondary inductor, L2, can be seen in Figures 17 and 18. In particular, Figure 17 shows the voltage across C_DC, u_CDC, for three values of L2. A higher inductance of the secondary inductor leads to a smaller turns ratio and, thus, the voltage u_CDC becomes higher.
Therefore, the gate-source voltage reaches the pinch-off voltage V_pi sooner and, as a result, the short-circuit current is interrupted at a lower peak value, I_SCmax, as depicted in Figure 18. However, a significantly high inductance might cause a high current in the UASP circuitry, as well as breaker tripping under load variations, similar to the C_DC case. All in all, the choice of both C_DC and L2, as well as the overall design of the UASP, must be made based on the design and operating constraints of the specific application. More specifically, if the protected DC line feeds power to very critical and sensitive loads, or is supplied by sensitive power sources, it is necessary to tune the breaker and the UASP such that the fault is cleared as fast as possible. On the other hand, for less critical sources and loads, and especially for those exhibiting variations during normal operation, the tuning of the UASP could be more flexible.

Experimental results

The performance of the UASP circuit in a solid-state breaker has also been assessed experimentally using the test circuit illustrated in Figure 19. Similarly to the simulations presented in Section 5.1, a 1200-V, 63-A normally-ON SiC JFET with an ON-state resistance of 35 mΩ at room temperature from UnitedSiC (UJ3N120035K3S) has been used as the main breaker switch. Besides that, a 3.6-kV, 50-A IGBT (IXYS IXBX50N360HV) has been considered as an auxiliary switch, S1, which is used to initiate the fault condition. In particular, when S1 turns on, a fault line current flows through the solid-state breaker. A single-pulse test was performed, as Figure 19 shows. A photograph of the DC breaker prototype along with the UASP circuit is depicted in Figure 20. Tables 4 and 5 summarise the design parameters for the coupled inductors and the test circuit, respectively. On the other hand, Figure 22 shows similar results, but for the case of C_DC = 1 μF. The increase of the capacitance prolongs the turn-off process of the normally-ON SiC JFET and, hence, the line current increases accordingly. As a result, the peak current in that case reaches 33 A and the fault is cleared within 330 μs. The capacitor C_DC is charged in 20 μs, reaching a steady-state value of 27.5 V. The last case, with C_DC = 1 μF, corresponds well with the simulation results shown in Figure 14, where the fault current starts at 35 A and reaches a peak value of 68 A.

CONCLUSION

A universal automatic and self-powered circuit for normally-ON SiC JFETs employed in high-input-impedance circuits was proposed. The main concept of the proposed circuit is to supply an adequately negative gate voltage, using the voltage across the high-impedance component and an auxiliary coupled winding, during both the start-up process and steady-state operation. Apart from its applicability to switch-mode converters, the proposed UASP concept can also be utilised in a low-voltage solid-state circuit breaker. It has been experimentally shown that, applying the UASP in a switch-mode converter, the normally-ON SiC JFET starts switching approximately 350 μs after the start-up process is initialised. However, this time depends on the design of the gate driver supply circuit and the converter. In addition, the steady-state operation of the converter using the UASP circuit is also experimentally shown. Based on these experimental results, normal switching operation of the normally-ON SiC JFET at 50 kHz during steady state is observed.
The performance of the proposed UASP concept has also been validated in a low-voltage solid-state DC breaker employing a normally-ON SiC JFET, by means of both simulations and experiments. From simulations, it has been shown that the SiC JFET clears a fault current of 68 A within approximately 155 μs, while in the experiments the solid-state breaker interrupts a 33-A short-circuit current in 330 μs. However, a proper and application-oriented tuning procedure is necessary in order to set the tripping current level for the UASP, as well as the expected peak fault currents and thermal stress of the SiC JFET. It is clear that the design complexity of the proposed UASP gate driver is higher compared to a conventional voltage-source gate driver with an external power supply. However, normally-ON SiC JFETs exhibit a better power-loss performance in power converters compared to the normally-OFF counterparts. Not only the lower specific on-state resistance and the lower temperature coefficient, but also the voltage-controlled gate-source junction contribute to lower losses.
Coupling a Gas Turbine Bottoming Cycle Using CO2 as the Working Fluid with a Gas Cycle: Exergy Analysis Considering Combustion Chamber Steam Injection

Gas turbine power plants have important roles in the global power generation market. This paper, for the first time, thermodynamically examines the impact of steam injection for a combined cycle, including a gas turbine cycle with a two-stage turbine and carbon dioxide recompression. The combined cycle is compared with the simple case without steam injection. Steam injection's impact was observed on important parameters such as energy efficiency, exergy efficiency, and output power. It is revealed that the steam injection reduced exergy destruction in components compared to the simple case. The efficiencies for both cases were obtained. The energy and exergy efficiencies, respectively, were found to be 30.4% and 29.4% for the simple case, and 35.3% and 34.1% for the case with steam injection. Also, incorporating steam injection reduced the emissions of carbon dioxide.

Introduction

Energy and environmental impact analyses have gained importance in recent years due to increasing concerns over hydrocarbon fuel consumption and environmental pollution [1]. Recently, international agreements have attempted to decrease fuel consumption and environmental pollution, as well as retire many fossil fuel power plants [2]. The electricity production market is also changing. Between 2015 and 2035, nearly 90 GW of fossil fuel power plant capacity will be retired in the United States [3]. Meanwhile, natural gas power plants are gradually increasing in number. Gas turbines play a prominent role in electricity generation technology today, with the potential to grow. Nearly 80 GW of new gas turbine power plant capacity is predicted to enter the electricity generation market by 2035 [4].

Decreasing fuel consumption for a given output makes power plants operate more economically by reducing fuel consumption costs. However, a capital cost investment is normally required to obtain high efficiency and is offset by fuel cost savings. Gas turbine cycles can work on an extensive range of fuels comprising natural gas, which exhibits cleaner combustion than other fossil fuels [5-7]. In designing new gas turbine units, it is often advantageous to increase turbine inlet temperatures and pressure ratios. Other beneficial gas turbine modifications include the use of intercoolers and interstage turbine reheat [8-10]. Gas turbine power generation plants can also incorporate solid oxide fuel cells [11-13].

Nowadays, the utilization of gas turbine power plants incorporating steam injection to the combustor with natural gas is one of the most effective ways to reduce NOx emissions. Such plants also have relatively good energy efficiencies. Exhaust gases can be used to produce superheated steam, which is one of the most effective heat recovery methods [14].

A thorough review of wet gas turbine research [15] identified those cycles having the highest future potential. Romeliotis and Mathiodakis [16] analyzed the effect of water injection on engine efficiency and performance as well as on compressor behavior. Techniques were investigated for water injection through internal methods that ascertain water injection influences on the gas turbine and on compressor off-design performance. Enhanced performance for the gas turbine was demonstrated with water injection. Eshati et al.
[17] presented a model for industrial gas turbines to investigate the impacts of the air-water ratio on heat transfer and the cooling of turbine blades. It was shown that, with a rise in the air-water ratio, the cooling temperature of the blade inlet was reduced along the blade opening. The temperature of the blade metal in each part was reduced as the air-water ratio increased, and this also increased the creep life of the blade.

Renzi et al. [18] evaluated the effects of syngas (produced gas) and its performance in a gas microturbine with steam injection (SI). The results showed that the energy of the synthesis gas in the combustion chamber (CC) reduced NOx emissions by nearly 75%. In contrast, the CO emissions increased slightly with natural gas combustion. It was found that the maximum value of injected steam in the combustion chambers of the gas turbine system was 56 g/s. Mazzocco and Rukni [19] thermodynamically investigated a parallel analysis for solid oxide fuel cell plants, hybrid gas turbines with steam injection, gasification power plant combinations, and simple power plants. For the optimized power plants, the energy and exergy efficiencies were shown to be 53% and 43%, respectively, significantly more than the related values for conventional 10 MW power plants fed with biomass. A thermo-economic analysis identified the average cost of electricity for the arrangements with the best performance at EUR 6.4 and 9.4/kW, which is competitive in the marketplace.

Using energy, exergy, economic and environmental analyses, Amiri-Rad [20] investigated steam injection and heat recovery for a gas turbine having steam injection in addition to an anti-surge system. Waste heat recovery via a heat exchanger produced steam from the gas turbine exhaust. Finally, the employed method introduced the optimal steam injection conditions for the combustion chamber; for a relative humidity of 10% and an ambient temperature of 38 °C, the optimal steam temperature was observed to be 318.5 °C. Steam injection to the gas turbine with integrated thermal recovery at the optimal steam temperature reduced the cost of electricity production by 25.5% and increased the net generated power by 56 MW and the energy efficiency by 4.6%.

Ahmed [21] examined a modified gas turbine by injecting steam between the combustion chamber exit and the turbine entrance. Current optimized cycles having steam injection yield higher power output and efficiency, which results in lower specific costs. Bahrami et al. [22] improved gas turbine transient performance through steam injection during a frequency drop. A control system was presented that, during the frequency drop, utilized an auxiliary input of steam injection to enhance gas turbine transient performance. The control algorithm's performance was investigated at several conditions, demonstrating that steam injection increased the performance notably for the standard control algorithm, particularly near full-load conditions. Sun et al.
[23] performed energy, exergy, and exergoeconomic investigations of two systems using supercritical CO2 combined with a gas turbine. They considered the effects on energy efficiency of five parameters, including the temperature difference between the inlet and outlet of the exhaust gases, the pressure ratio, and the compressor inlet pressure. They also obtained values of the exergy efficiency and the cost per kilowatt hour. Comparing the traditional combined cycle and the proposed design, they reported that the S-CO2 cycle had competitive economic performance without any significant thermodynamic performance loss.

In the present work, a gas turbine cycle using carbon dioxide as the working fluid is examined, with steam injection to the combustion chamber (SIGTSC) and without (GTSC). Then, the cycle variations are compared. The novelty of this work lies mainly in (1) the proposal of a power generation system with two subsystems (a gas turbine cycle with steam injection and a two-stage turbine, and a SCO2 subsystem) in a combined form, and (2) ascertaining the effects of the steam injection percentage to the combustion chamber on the overall system performance with an in-depth analysis (considering ten combustion products) to elicit more realistic results. The steam injection also improves the system's environmental characteristics, like carbon dioxide emissions, which are important today.

Description of System

Figure 1 depicts the considered system, which consists of a SCO2 recompression bottoming cycle and a Brayton topping cycle. Air enters the air compressor at ambient atmospheric conditions; then the air, methane, and superheated steam flows mix at different conditions, and the combustion process occurs. Hot exhaust gases are conveyed to the two-stage gas turbine, where work is produced and the temperature decreases. The SCO2 subsystem utilizes the exhaust gases as a high-temperature heat source. The SCO2 cycle is described elsewhere [24,25]. After transferring heat from the output gases in the HEX heat exchanger, the cooled gases enter the HRSG and supply the superheated steam used by the combustion chamber. In this study, the efficiency was examined for the power generation system with two subsystems (gas turbine cycle with steam injection and two-stage turbine, and SCO2 subsystem) in a combined form, as were the effects on the whole system of the steam injection percentage to the combustion chamber. Various approximations and simplifications were invoked during the analysis:

• All gases were assumed ideal, with specific heat and enthalpy changes depending on the temperature, except for the injected steam.
• Nitrogen and oxygen compression factors were assumed to be ideal even at the lowest temperature and highest pressure of the analysis.
• Due to thermodynamic restrictions, the turbine inlet temperature could not exceed 1440 K.
• The air entering the compressor was considered completely dry, containing 21% oxygen and 79% nitrogen on a molar basis.
• The combustion chamber efficiency in gas turbines utilizing natural gas and methane in the gas phase is very high and, in most studies, a value of 99% has been considered.
• Combustion was considered to be steady, and the CC was considered a well-stirred reactor (WSR).
• The temperature of combustion was based slightly on the stoichiometric rich side. This was performed because Lefebvre [26] showed that, for a fixed enthalpy of reactants, the lower the average specific heat of the product mixture is, the higher the resulting flame temperature is, and a slightly rich mixture yields products with a lower average specific heat.
• In the Brayton subsystem of recompression of supercritical carbon dioxide, the system operated at steady flow, and variations in kinetic and potential energies could be disregarded [24,25].
• Pressure losses and heat losses in all heat exchangers and pipelines could be disregarded [24].

Energy Analysis

The first law of thermodynamics was employed to balance energy rates for the power generation components. Following conservation of mass principles, mass flow rates and molar flow rates of the working fluid streams were determined. For a control volume operating at steady state, the general rate balances for mass and energy, respectively, are

Σṁ_i = Σṁ_e,
0 = Q̇_cv − Ẇ_cv + Σṁ_i h_i − Σṁ_e h_e.

Here, Ẇ_cv and Q̇_cv, respectively, denote the power and the heat transfer rate into the control volume. For the simulation, EES software was used.

Combustion Modeling

Combustion Process with Steam Injection

In the present work, the incoming air from the compressor was mixed in the combustion chamber with fuel (methane), while superheated steam was injected through the process to control the emissions of pollutants to the environment. The chemical reaction occurring in the CC was written following [28,29] (Equation (1)). Here, ϕ and ε are the equivalence ratio and the molar air-fuel ratio, respectively, while x denotes the injection molar ratio of H2O. These quantities can be written, respectively, as in [28,29]. In Equation (4), s is the steam injection ratio. Usually, designs of gas turbines allow up to 5% of steam injection into the CC [30]. The molar balances for the 10 species in Equation (1) of the combustion reaction are related accordingly; for nitrogen, for example,

N: 1.58 = 2ν3 + ν10. (10)

Also, there are six chemical equilibrium balances among the species of the combustion products, according to [28]. The chemical equilibrium constants for the above reactions are obtained according to [28,29] as

ln K_s = −ΔG_s / (R̄ T_product), (17)

where T_product is the temperature of the combustion products. Also, ΔG_s denotes the variation in the Gibbs function of the chemical equilibrium reactions at atmospheric pressure. In these relations, g_i is the molar Gibbs function of species i in the exhaust gases, and the chemical equilibrium at atmospheric pressure is obtained following [31]. After determining the chemical equilibrium constants and solving the set of chemical equations of the combustion reaction, the numbers of moles of the products in the CC were determined.
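As a purely illustrative companion to the equilibrium step, the sketch below evaluates an equilibrium constant from the ln K = −ΔG/(R̄·T) relation reconstructed above and solves the resulting mole balance for a single dissociation reaction by bisection. The Gibbs-function value is a rough illustrative number, not the property data used in the paper, and only one of the six equilibrium reactions is shown.

```python
# Sketch of one chemical-equilibrium step of the combustion model: evaluate
# K_s = exp(-dG_s/(R*T_product)) and solve the CO2 dissociation balance by
# bisection. The Gibbs-function value is a rough illustrative assumption.
import math

R_U = 8.314  # J/(mol*K)

def equilibrium_constant(delta_g, t_product):
    """ln K_s = -dG_s / (R * T_product) for one equilibrium reaction."""
    return math.exp(-delta_g / (R_U * t_product))

def co2_dissociation_fraction(kp, p_rel=1.0, tol=1e-10):
    """Degree of dissociation a for CO2 <-> CO + 1/2 O2 at relative pressure p_rel."""
    def f(a):
        n_tot = 1.0 + a / 2.0
        y_co2, y_co, y_o2 = (1 - a) / n_tot, a / n_tot, (a / 2) / n_tot
        return y_co * math.sqrt(y_o2 * p_rel) / y_co2 - kp
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    t_product = 2000.0                              # K, assumed product temperature
    kp = equilibrium_constant(110.0e3, t_product)   # dG ~ 110 kJ/mol, illustrative
    print(f"K ~ {kp:.2e}, dissociation fraction ~ {co2_dissociation_fraction(kp):.4f}")
```

In the full model the same idea is applied simultaneously to all six equilibrium reactions together with the atom balances, which is why an equation-solving package such as EES is used.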
Combustion Process without Steam Injection

For the simple conventional gas turbine system without steam injection, the combustion process under complete chemical equilibrium conditions is written analogously. The molar balances for the species in this chemical equation are presented in Equations (32)-(35). The chemical equilibrium equations are exactly the same as in the steam injection mode (see Equations (11)-(29)).

Analysis of Expansion

For the high operating pressure associated with the proposed gas turbine system, a two-stage turbine was utilized in the configuration, as shown in Figure 1. The HPT and LPT pressure ratios can be written as in [32]. The energy balance equations of the components used in the proposed plant are presented in Table 1. The first law efficiency expressions for each subsystem of the plant, in both the steam injection and simple modes, follow from these balances.

Exergy Analysis

We now write exergy rate balances for the power generation system components in order to determine the irreversibility rate of each. For a control volume at steady state, a general exergy rate balance can be written as [33]

0 = Σ Q̇_j (1 − T0/T_j) − Ẇ_cv + Σ Ė_i − Σ Ė_e − İ_cv.

Here, Σ Q̇_j (1 − T0/T_j) represents the exergy rate associated with heat transfer, while T_j denotes the temperature at which heat is transferred. İ_cv represents the internal irreversibility rate, which is always a positive quantity. The working fluid's total exergy flow rate Ė is the sum of the thermodynamic and chemical flow exergy rates [33]. For a working fluid, the physical exergy flow rate can be written as [33]

Ė_i = ṁ_i [(h_i − h0) − T0 (s_i − s0)],

where ṁ_i, h_i, and s_i, respectively, denote the mass flow rate, specific enthalpy, and specific entropy of the working fluid at state i, and h0 and s0, respectively, are the specific enthalpy and entropy of the working fluid at the dead state. The chemical exergy flow rate for a mixture of ideal gases is expressible following [33]; here, y_i denotes the molar fraction of species i in the mixture, and e_ch,0,i is the standard chemical exergy of an ideal gas. According to Figure 1, exergy rate balance relationships are listed in Table 2 for each power generation system device. To examine the quality of the energy obtained from the power generation system, the exergy efficiency (sometimes referred to as second law efficiency) was used, and exergy efficiencies were evaluated for each of the existing subsystems, as well as for the overall system. The carbon dioxide emission index can also be determined following [11].

Combustion and Chemical Equilibrium Equation

To verify and validate the correctness of the numbers of moles obtained for the combustion products of the main combustor with steam injection, the results from the present analysis were contrasted with the results of reference [21]. Thermodynamic modeling of the CC was performed using the molar balances and the chemical equilibrium conditions of the combustion products, and the molar fractions of the resulting combustion gases are contrasted with the results in reference [21] in Table 3. Furthermore, for ϕ = 0.6, the adiabatic temperature for the current study was 1542 K, while for [26] it was 1542.4 K. For ϕ = 1.2, the adiabatic temperature for the current study was 1971 K, while for [26] it was 1972.6 K.

SCO2 Subsystem

Table 4 provides a validation of the current results via a comparison with the results of Ref. [24]. The results show the accuracy of the SCO2 cycle modeling.
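The short sketch below illustrates the exergy bookkeeping described above: the physical flow exergy, the ideal-gas mixture chemical exergy, and a carbon dioxide emission index taken here as the CO2 mass flow per unit of net electricity. The emission-index definition and all flow and property values are assumptions for illustration (the standard chemical exergies are typical tabulated values), since the paper's own expressions are not reproduced.

```python
# Sketch of the exergy bookkeeping: physical flow exergy, ideal-gas mixture
# chemical exergy and an assumed CO2 emission index (kg CO2 per kWh of net
# electricity). All numeric values are illustrative assumptions.
import math

R_U = 8.314      # J/(mol*K)
T0 = 298.15      # K, dead-state temperature

def physical_flow_exergy(m_dot, h, s, h0, s0):
    """E_ph = m_dot * [(h - h0) - T0*(s - s0)]"""
    return m_dot * ((h - h0) - T0 * (s - s0))

def mixture_chemical_exergy(n_dot, fractions, e_ch0):
    """E_ch = n_dot * [sum(y_i*e_ch0_i) + R*T0*sum(y_i*ln y_i)] for an ideal-gas mix."""
    mix = sum(y * e_ch0[sp] for sp, y in fractions.items())
    mixing = R_U * T0 * sum(y * math.log(y) for y in fractions.values() if y > 0)
    return n_dot * (mix + mixing)

def co2_emission_index(m_dot_co2, w_net):
    """kg of CO2 emitted per kWh of net electricity (assumed definition)."""
    return m_dot_co2 * 3600.0 / (w_net / 1e3)

if __name__ == "__main__":
    e_ph = physical_flow_exergy(m_dot=1.0, h=900e3, s=2.9e3, h0=300e3, s0=1.7e3)
    e_ch = mixture_chemical_exergy(
        n_dot=1.0,
        fractions={"CO2": 0.10, "H2O": 0.10, "N2": 0.72, "O2": 0.08},
        e_ch0={"CO2": 19870.0, "H2O": 9500.0, "N2": 720.0, "O2": 3970.0})  # J/mol
    print(f"E_ph ~ {e_ph/1e3:.0f} kW, E_ch ~ {e_ch/1e3:.1f} kW, "
          f"CO2 index ~ {co2_emission_index(0.8, 5e6):.3f} kg/kWh")
```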
Power Generation System Case Study

Results are given in Tables 5 and 6 for the GTSC and SIGTSC, respectively, following the power generation system input data of Table 7. Energy and exergy results are provided in Table 8 for both systems. Figure 2 depicts a system Sankey diagram, showing the exergy rate of each component flow for the case when the air pressure ratio was equal to 10, the percentage of steam injection was 5%, and the TIT was equal to 1300 K. Also, the pressure ratio in this figure for the SCO2 subsystem was 2.8. The equivalence ratio was considered to be 0.4017. Figure 3a demonstrates the rates of consumed or generated electric power of the components of the proposed systems. A negative value of produced power indicates components with power consumption. Component exergy destruction rates are also provided in Figure 3b. According to this figure, the highest and lowest exergy destruction rates were for the CC and the HEX, respectively (except that the pump exhibited the lowest exergy destruction rate for the SIGTSC).

Parametric Study

Figure 4a illustrates the impact of the equivalence ratio of the CC on the net output power. It is seen that, with a rising equivalence ratio, the net output power was augmented. Meanwhile, as the steam injection increased from zero to 10%, the net output power rose. Steam injection raised the mass flow rate of the cycle, increasing the net power generation. Raising the equivalence ratio boosted the fuel flow rate. Therefore, the flow rate of the output products also increased; thus, the output work rose. Also, at a specified equivalence ratio, the input flow increased with an increase in the amount of steam injection and, as a result, the output work increased.
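The small sketch below restates the link used in the discussion above: at a fixed air flow, raising the equivalence ratio raises the fuel flow and hence the heat input to the cycle. The stoichiometric air-fuel ratio and lower heating value are approximate textbook figures for methane, and the air flow is an assumption, so the numbers only show the scaling.

```python
# Sketch: fuel flow and heat input as the equivalence ratio is swept at fixed
# air flow. Stoichiometric figures are approximate values for methane; the
# air flow is an assumption for illustration.

AFR_STOICH_CH4 = 17.19   # kg air per kg CH4, approximate
LHV_CH4 = 50.0e6         # J/kg, approximate

def fuel_flow(m_dot_air, phi):
    """m_fuel = phi * m_air / AFR_stoich, from the equivalence-ratio definition."""
    return phi * m_dot_air / AFR_STOICH_CH4

if __name__ == "__main__":
    m_dot_air = 10.0  # kg/s, assumed
    for phi in (0.4, 0.6, 0.8, 1.0):
        mf = fuel_flow(m_dot_air, phi)
        print(f"phi = {phi:.1f}: fuel ~ {mf:.3f} kg/s, "
              f"heat input ~ {mf * LHV_CH4 / 1e6:.1f} MW")
```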
Figure 4b shows the influence of the equivalence ratio on the system exergy destruction rate. In this figure, an equivalence ratio rise increased the exergy destruction rate until the stoichiometric equivalence ratio, after which it decreased. Increasing the quantity of steam injection caused the exergy destruction rate to diminish for a specified equivalence ratio, highlighting the advantage of steam injection in gas cycles.

Figure 4c illustrates the variation of the CO2 emission index with the equivalence ratio. With a rise of the equivalence ratio, the mass flow rate of carbon dioxide increased until reaching the stoichiometric equivalence ratio and then decreased. Comparing the increasing trends of the carbon dioxide mass flow rate and the specific work, the increasing slope of the mass flow rate was higher than that of the specific work; as a result, the slope of the graph was increasing, but in the rich state the increasing slope of the specific work was higher than that of the carbon dioxide mass flow rate, which is shown in Figure 4a, and the general trend was decreasing. According to Figure 4c, the value of the CO2 emission index was reduced as the amount of injected steam into the CC increased.

Figure 4d illustrates the energy efficiency for the overall system as a function of the equivalence ratio. An equivalence ratio rise was seen to increase the fuel mass flow rate, lowering the overall energy efficiency. The energy efficiency rose with the steam injection to the CC.

Figure 4e depicts the influence of the equivalence ratio on the exergy efficiency of the overall system. The trends in exergy and energy efficiency mirrored each other, as described above.

The effects of the variations of the turbine inlet temperature (TIT) are shown in Figure 5a-e for five main performance parameters. Figure 5a illustrates the impact of varying TIT on the net output power. As the TIT increased, the specific work exhibited an upward trend. This reveals that, with an increase in temperature at the outlet, the enthalpy of the input gases to the turbine also increased and, as a result, the output work increased. Like the trend described above, the more steam that is injected into the CC, the greater the net output power derived is.

Impacts of the variations of the TIT on the total exergy destruction rate are illustrated in Figure 5b. Increasing the TIT was seen to decrease the total exergy destruction rate. As the temperature increased, due to approaching the adiabatic flame temperature, the resulting heat loss decreased, so the exergy destruction rate declined. According to Figure 5b, for a constant TIT, the exergy destruction rate diminished with increasing steam injection.
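A crude, constant-specific-heat estimate of how steam injection dilutes the combustion temperature is sketched below; it uses a single average specific heat for the products and approximate methane properties, so it only reproduces the qualitative trend, whereas the paper's model uses full equilibrium with ten product species.

```python
# Crude estimate of the adiabatic combustion temperature and of the dilution
# effect of steam injection, using one average specific heat for the products.
# All property values are rough assumptions; treating steam with the same cp
# as the other products understates the dilution somewhat.

AFR_STOICH_CH4 = 17.19   # kg air / kg CH4, approximate
LHV_CH4 = 50.0e6         # J/kg, approximate

def adiabatic_temperature(phi, m_dot_air, steam_ratio, t_in=700.0, cp_products=1300.0):
    """T_ad from m_fuel*LHV = (m_air + m_fuel + m_steam)*cp*(T_ad - T_in)."""
    m_fuel = phi * m_dot_air / AFR_STOICH_CH4
    m_steam = steam_ratio * m_dot_air
    m_products = m_dot_air + m_fuel + m_steam
    return t_in + m_fuel * LHV_CH4 / (m_products * cp_products)

if __name__ == "__main__":
    for s in (0.0, 0.05, 0.10):
        t_ad = adiabatic_temperature(phi=0.4, m_dot_air=10.0, steam_ratio=s)
        print(f"steam ratio {s:.2f}: T_ad ~ {t_ad:.0f} K")
```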
Effects of varying TIT on the carbon dioxide emission index are illustrated in Figure 5c. As shown in the previous section, the exergy destruction rate of the overall system rose with TIT; Figure 5b explains and justifies this behavior.

Figure 5d,e show, respectively, the effects of the variations of TIT on the energy and exergy efficiencies. As the TIT rose, the energy and exergy efficiencies exhibited similar upward trends, as anticipated.

Figure 6a demonstrates the impact of varying Pr1 on the system's net power output, which attained a maximal value at a specific value of Pr1 (around 5). As Pr1 increased, the power produced by the turbines increased. However, as Pr1 exceeded the optimal value, the system net power decreased because the power used by the compressor exceeded the power generated by the turbines.

Figure 6b presents, for the system (including all components), the impact of the pressure ratio on the total exergy destruction rate. With climbing pressure ratio, the exergy destruction rate was seen to rise. The system output work increased with the pressure ratio, increasing the exergy destruction rate.

The impact of varying pressure ratio on the carbon dioxide emission index is seen in Figure 6c. As depicted in Figure 6a, the value of the net output power first increased with Pr1 and then decreased; based on Equation (51), the trend of the carbon dioxide emission index was inverse to that of the net output power.

Figure 6d,e show the respective impacts of variations of Pr1 on the system energy and exergy efficiencies. The energy and exergy efficiencies were observed to increase and then to decrease, following the net output power (Equations (41) and (49)). Note that there was a direct relation between both the energy and exergy efficiencies and the net output power. Both efficiency trends were similar and had maximum points.
Figure 7 portrays how the system's bottom cycle pressure ratio affected the net output power. As the bottom cycle's pressure ratio rose, the net output power intensified. The subsystem pressure ratio had a small impact on the main parameters in both cases examined, so further attention was not placed on the phenomenon. As the quantity of injected steam rose, the net output power improved.

Conclusions

A combined cycle comprised of a gas turbine with two stages and steam injection coupled with a SCO 2 subsystem cycle was investigated, considering energy and exergy aspects. Furthermore, in the case study and parametric study, the behaviors of both GTSC and SIGTSC systems were assessed separately. For both cycles, the combustion chamber was examined in-depth so that the modeling was more realistic.

The main findings of the research and the conclusions drawn from them are as follows:
• Increasing the amount of steam injection improved the system net output power and lowered the exergy destruction rate. Moreover, it reduced the carbon dioxide emission index.
• Steam injection in the SIGTSC reduced the heat loss of the combustion chamber compared to the GTSC.
• Energy and exergy efficiencies of 35.3% and 34.1%, respectively, were obtained for the SIGTSC, which were greater than the corresponding values for the GTSC: 30.4% and 29.4%. Steam injection improved the thermodynamic efficiency.
• Due to the combustion chamber's design temperature limitations for this configuration, TIT could only vary within a certain range. In addition, at 1440 K, the CC was considered almost adiabatic.

Figure 2 depicts a system Sankey diagram, showing the exergy rate of each component flow for the case when the air pressure ratio was equal to 10, the percentage of steam injection was 5%, and the TIT was equal to 1300 K. Also, the pressure ratio in this figure for the SCO 2 subsystem was 2.8. The equivalence ratio was considered to be 0.4017.

Figure 3. (a) Rate of the generated electric power for components of both systems. (b) Exergy destruction rates of the components of both systems.

Figure 4. (a) Effect on the net output power production of the CC equivalence ratio. (b) Effect on the total exergy destruction rate of the equivalence ratio. (c) Effect on the CO 2 emission index of the equivalence ratio. (d) Effect on the system energy efficiency of the equivalence ratio. (e) Effect on the system exergy efficiency of the equivalence ratio.
Figure 5. (a) Effect on the system net output power of the variation of TIT. (b) Effect on the total system exergy destruction rate of the variation of TIT. (c) Effect of the variation of TIT on the CO 2 emission index of the system. (d) Effect of the variation of TIT on the energy efficiency of the system. (e) Effect on the system exergy efficiency of the variation of TIT.

Figure 6. (a) Effect on the system net power output of variation of Pr 1. (b) Effect on the system total exergy destruction rate of the variation of Pr 1. (c) Effect of the variation of Pr 1 on the CO 2 emission index of the system. (d) Effect of the variation of Pr 1 on the energy efficiency of the system. (e) Effect of the variation of Pr 1 on the exergy efficiency of the system.

Figure 7. Effect of variation on the system net output power of the bottom cycle pressure ratio.

Table 1. Energy rate balance relations for the components of the power generation system.

Table 2. Exergy rate balance relations for the components of the power generation system.

Table 3. Comparison of molar fractions of combustion products from the current study with reference [28].

Table 5. Thermodynamic properties of states of the GTSC power generation system.

Table 6. Thermodynamic properties of the states of the SIGTSC power generation system.

Table 7. Input data for modeling the considered power generation system.

Table 8. Thermodynamic performance in terms of efficiencies.
Proteomic Analysis of Brain Region and Sex-Specific Synaptic Protein Expression in the Adult Mouse Brain Genetic disruption of synaptic proteins results in a whole variety of human neuropsychiatric disorders including intellectual disability, schizophrenia or autism spectrum disorder (ASD). In a wide range of these so-called synaptopathies a sex bias in prevalence and clinical course has been reported. Using an unbiased proteomic approach, we analyzed the proteome at the interaction site of the pre- and postsynaptic compartment, in the prefrontal cortex, hippocampus, striatum and cerebellum of male and female adult C57BL/6J mice. We were able to reveal a specific repertoire of synaptic proteins in different brain areas as it has been implied before. Additionally, we found a region-specific set of novel synaptic proteins differentially expressed between male and female individuals including the strong ASD candidates DDX3X, KMT2C, MYH10 and SET. Being the first comprehensive analysis of brain region-specific synaptic proteomes from male and female mice, our study provides crucial information on sex-specific differences in the molecular anatomy of the synapse. Our efforts should serve as a neurobiological framework to better understand the influence of sex on synapse biology in both health and disease. Introduction Synapses are the key structures for signal transduction and plasticity in the vertebrate central nervous system [1,2]. They form the core components of neural circuits and networks, collectively referred to as the brain connectome [3]. Although synapses were originally considered to be simple connection sites between neurons, the identification of synaptic proteins using mass spectrometry Mice Male and female 6-week-old C57BL/6J mice (P42) were used for this study. They were housed under defined conditions at a 12-h light/dark cycle and had free access to tap water and food. Mice were euthanized with carbon dioxide and the brain regions prefrontal cortex, hippocampus, striatum and cerebellum were dissected and stored at −80 • C after snap-freezing in liquid nitrogen. Animal experiments were performed in accordance with the regulations of the German Federal/Saxony-Anhalt State Law, the respective EU regulations, and the NIH guidelines. For each brain region, we generated four biological replicates of both, male and female mice. For each biological replicate material from three animals was pooled. Subcellular Fractionation For preparation of protein samples enriched for synaptic membrane structures, tissue was homogenized in 300 µL of 0.32 M sucrose with 5 mM HEPES and Complete™ protease inhibitor cocktail (Roche, Basel, Switzerland). Samples were centrifuged at 12,000 × g for 20 min. The resulting pellets were re-homogenized in 1 mL of 1 mM Tris/HCl, pH 8.1 containing protease inhibitors and incubated for 30 min at 4 • C. After incubation, samples were centrifuged at 100,000 × g for 1 h. The resulting pellets were re-homogenized in 400 µL of 0.32 M sucrose with 5 mM Tris/HCl, pH 8.1 and loaded on a 1.0 M/1.2 M sucrose step gradient. After centrifugation at 100,000 × g for 1.5 h synaptic membranes were collected at the 1.0 M/1.2 M sucrose interface. For proteome analysis, samples were resuspended in PBS and pelleted to reduce sucrose levels. A detailed description of the different enrichment steps is compiled in Supplementary Figure S1. 
Moreover, by means of bioinformatic tools and immunoblot analysis we confirmed that our preparations are representative for synapse structures and synaptic substructures (Supplementary Figures S2 and S3). Proteolytic Digest of Enriched Synaptic Proteins After enrichment, synaptic proteins were dissolved in a buffer containing 7 M urea, 2 M thiourea, 5 mM dithiothreitol (DTT), 2% (w/v) CHAPS and disrupted by sonication at 4 • C for 15 min using a Bioruptor (Diagenode, Liège, Belgium). The protein concentration was determined using the Pierce 660 nm protein assay (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's protocol. 20 µg of total protein were subjected to tryptic digestion using a modified Filter Aided Sample Preparation (FASP) protocol as described in detail before [39,40]. In brief, proteins were transferred onto spin filter columns (Nanosep centrifugal devices with Omega membrane, 30 kDa MWCO; Pall, Port Washington, NY, USA) and detergents were removed washing the samples three times with a buffer containing 8 M urea. Proteins were reduced using DTT and alkylated with iodoacetamide (IAA). Afterwards, excess IAA was quenched with DTT and the membrane was washed three times with 50 mM NH 4 HCO 3 followed by overnight digestion at 37 • C with trypsin (Trypsin Gold, Promega, Madison, WI, USA). An enzyme-to-protein ratio of 1:50 (w/w) was used to digest the proteins. After digestion, peptides were recovered by centrifugation and two additional washes with 50 mM NH 4 HCO 3 . After combining the flow-throughs, samples were acidified with trifluoroacetic acid (TFA) to a final concentration of 1% (v/v) TFA and lyophilized. Purified peptides were reconstituted in 0.1% (v/v) formic acid (FA) for LC-MS analysis. Nanoscale Liquid Chromatography Mass Spectrometry (nanoLC-MS) Analysis Samples were analyzed by LC-MS using a Synapt G2-S HDMS mass spectrometer (Waters Corporation, Milford, MA, USA) coupled to a nanoAcquity UPLC system (Waters Corporation, Milford, MA, USA). Water containing 0.1% (v/v) FA, 3% (v/v) dimethyl sulfoxide (DMSO) was used as mobile phase A and acetonitrile (ACN) containing 0.1% FA (v/v), 3% (v/v) DMSO as mobile phase B [41]. Tryptic peptides (corresponding to 200 ng) were loaded onto an HSS-T3 C18 1.8 µm, 75 µm × 250 mm reverse-phase column from Waters Corporation in direct injection mode and were separated running a gradient from 5-40% (v/v) mobile phase B over 90 min at a flow rate of 300 nL/min. After separation of peptides, the column was rinsed with 90% mobile phase B and re-equilibrated to initial conditions resulting in a total analysis time of 120 min. The column was heated to 55 • C. Eluting peptides were analyzed in positive mode ESI-MS by ion-mobility separation (IMS) enhanced data-independent acquisition (DIA) UDMS E mode as described in detail before [40,42]. Acquired MS data were post-acquisition lock mass corrected using [Glu1]-Fibrinopeptide B, which was sampled every 30 s into the mass spectrometer via the reference sprayer of the NanoLockSpray source at a concentration of 250 fmol/µL. All samples (i.e., biological replicates) were analyzed by LC-MS in duplicates. Moreover, to monitor reproducibility and long-term stability of the LC-MS platform, we generated four sample pools, one for each brain region. Toward this end, equal amounts of the four female and four male biological replicates were mixed for each brain region. 
LC-MS analyses of the sample pools were scheduled between the actual sample runs resulting in up to six replicate measurements for the sample pools. Data Processing and Label-Free Quantification Analysis Raw data processing and database search of LC-MS data were performed using ProteinLynx Global Server (PLGS, ver.3.0.2, Waters Corporation, Milford, MA, USA). Data were searched against a custom compiled UniProt mouse database (UniProtKB release 2018_09, 16,991 entries) that contained a list of common contaminants. The following parameters were applied for database search: (i) trypsin as enzyme for digestion, (ii) up to two missed cleavages per peptide, (iii) carbamidomethyl cysteine as fixed, (iv) and methionine oxidation as variable modification. The false discovery rate (FDR) for peptide and protein identification was assessed using the target-decoy strategy by searching a reverse database. FDR was set to 0.01 for database search in PLGS. Post-processing of data including retention time alignment, exact mass retention time as well as IMS clustering, normalization and protein homology filtering was performed using the software tool ISOQuant ver.1.8 [40,42]. Algorithms and ISOQuant settings have been described in detail before [40,42]. For cluster annotation in ISOQuant, an experiment-wide FDR of 0.01 was applied at the peptide-level. To be included in the final list a peptide had to be identified at least four times across all runs. Only proteins that had been identified by at least two peptides with a minimum length of seven amino acids, a minimum PLGS score of 6.0 and no missed cleavages were used for quantification and included in the final dataset. For each protein absolute in-sample amounts were calculated using TOP3 quantification as described before [43]. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the PRIDE partner repository [44] with the dataset identifier PXD015610. Statistical analysis of the data was conducted using Student's t-test, which was corrected by the Benjamini-Hochberg (BH) method for multiple hypothesis testing (FDR of 0.05). T-tests were only calculated if a protein was identified at least in three biological replicates. R (version 3.6.1) was used for further analyses and to plot the data [45][46][47][48]. Functional annotation analysis of synaptic proteins that displayed significant changes between brain regions (after BH correction, log 2 fold change >1) was performed using the Gene Ontology (GO) knowledgebase (http://geneontology.org/) [49,50]. Differential Expression of Synaptic Proteins across Different Brain Regions To resolve the brain region and sex-specific mouse synaptic proteome, we enriched pre-and post-synaptic proteins from the (i) prefrontal cortex, (ii) hippocampus, (iii) striatum, and (iv) cerebellum of adult mice (P42). In total, four biological replicates (each pooled from three mice) of both, male and female animals, were collected for each brain region. Synaptic proteome samples were analyzed after tryptic digestion by DIA LC-MS ( Figure 1a). Combined label-free quantification analysis of all replicates revealed distinct, brain region-specific synaptic protein expression patterns (Figure 1b). Around 3000 proteins could be quantified in each brain region, with a total of 3173 proteins (corresponding to over 40,000 peptides) in the complete dataset (Supplementary Tables S1 and S2). 
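As a rough illustration of the per-protein testing scheme described in the Methods (Student's t-test with Benjamini-Hochberg correction at an FDR of 0.05, applied only to proteins quantified in at least three biological replicates, with a log 2 fold change cutoff of 1 for downstream annotation), the R sketch below shows how such a comparison could be run on a generic abundance matrix. The object names and data layout are illustrative assumptions; the actual analysis operated on the ISOQuant/TOP3 outputs described above.

```r
# Minimal sketch: per-protein two-group comparison with BH correction.
# 'abund' is assumed to be a matrix of TOP3 protein abundances
# (rows = proteins, columns = biological replicates), and 'group'
# a two-level factor giving the group of each column (e.g., male/female).
run_protein_tests <- function(abund, group, min_reps = 3) {
  groups <- levels(group)
  stopifnot(length(groups) == 2)
  res <- apply(abund, 1, function(x) {
    a <- x[group == groups[1]]
    b <- x[group == groups[2]]
    a <- a[!is.na(a)]; b <- b[!is.na(b)]
    # test only proteins quantified in >= min_reps replicates per group
    if (length(a) < min_reps || length(b) < min_reps) {
      return(c(p = NA, log2fc = NA))
    }
    c(p      = t.test(a, b, var.equal = TRUE)$p.value,
      log2fc = log2(mean(a) / mean(b)))
  })
  out <- as.data.frame(t(res))
  out$p_adj <- p.adjust(out$p, method = "BH")  # Benjamini-Hochberg FDR
  out
}

# Example call with a hypothetical abundance matrix:
# results <- run_protein_tests(abund, group)
# hits    <- subset(results, p_adj < 0.05 & abs(log2fc) > 1)
```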
Out of the 3173 proteins, 2896 proteins were identified in all four brain regions (Figure 2a). To assess the quality of our proteome analysis, we quantitatively compared protein abundances across the whole dataset. Between biological replicates, Pearson's correlation coefficients for protein abundances were between 0.86 and 0.99, demonstrating high technical and biological reproducibility (Supplementary Figure S4).

Almost all proteins in the present dataset showed brain region-dependent expression profiles and were significantly enriched in either a single or two brain regions (Figures 1b and 2b and Supplementary Table S3). Only a small subset of proteins (a total of 142) could not be assigned to distinct brain regions. Some of these proteins did not pass our filter criteria (i.e., were present in less than three biological replicates in the respective brain region(s), or variation between biological replicates was too high) or were enriched in three brain regions, i.e., displayed lower expression in a single region. In total, 24 proteins showed stable and similar protein levels across all replicates (i.e., were identified in all runs with a coefficient of variation (CV) for the protein abundance < 25%).

Figure 2. Composition of the synaptic proteome differs between brain regions: (a) Overlap of proteins identified at the synapses of the prefrontal cortex (PFC), hippocampus (Hip), striatum (Str) and cerebellum (Cer). Presence of the proteins was inferred following alignment between runs. (b) Number of significantly enriched proteins in one or two brain regions (BH corrected Student's t-test, p < 0.05). Proteins were always assigned to the group displaying the highest significance (i.e., lowest p-value). Transparent bars display numbers of all significant proteins and non-transparent bars proteins that are at least 2-fold enriched; (c,d) Gene Ontology (GO) enrichment analysis of synaptic proteins that are significantly associated with a certain brain region (Benjamini-Hochberg correction, p < 0.05, log 2 fold change compared to other regions > 1). (c) Selected GO terms for components as well as the (d) top 15 biological processes are displayed. In case of PFC-specific synaptic proteins, no biological process was significantly enriched.

Hierarchical clustering indicates that the cerebellum is the most diverging region, whereas the synaptic proteomes of the prefrontal cortex and hippocampus show the highest similarities (Figure 1b). We detected 650 proteins that were significantly enriched in the cerebellum as compared to the other brain regions, followed by the striatum with 490 region-specific proteins (Figure 2b). In case of the prefrontal cortex and the hippocampus, the number of enriched proteins was markedly lower (143 and 182, respectively). Moreover, in line with previous findings [51], cortical and hippocampal synapses share the most proteins with similar expression patterns (Figure 2b, Supplementary Figure S5).

Proteins significantly enriched in striatal and cortical synapses with more than a twofold expression difference as compared to other regions were mainly associated with mitochondria and the cytoplasm (Figure 2c). In case of striatum, our analyses additionally revealed a high enrichment of neuronal and synaptic proteins, including voltage-gated potassium channels, Ras family members as well as receptor tyrosine and MAP kinases (Figure 2c). The top 15 biological processes associated with striatal-specific synaptic proteins mainly relate to mitochondrial processes and functions (Figure 2d). However, we also found a high enrichment for proteins involved in dopamine signaling and exocytosis such as the D(1A) dopamine receptor (DRD1), the sodium-dependent dopamine transporter (SC6A3) or Vacuolar protein sorting-associated protein 11 homolog (VPS11), which is required for the fusion of endosomes and autophagosomes with lysosomes (Supplementary Figure S6). In case of the prefrontal cortex, no biological process was significantly enriched. However, among the cortex-enriched synaptic proteins, we detected, for example, the neuronal migration protein doublecortin (DCX), which is involved in the initial steps of neuronal dispersion and cortex lamination during cerebral cortex development [52].
Other proteins are associated with mitochondrial functions or display protein serine/threonine kinase activity such as the cation channel TRPM6 or the death-associated protein kinase 1 (DAPK1), which is involved in multiple cellular signaling pathways triggering cell survival, apoptosis, and autophagy.

GO enrichment analysis for hippocampal and cerebellar synaptic proteins highlighted the (glutamatergic) synapse, the cell junction and the (intracellular) organelle part as the most significant components, respectively (Figure 2c). Proteins exclusively identified in the cerebellum (Figure 2a) include, for example, the GABA(A) receptor subunit alpha-6 (GBRA6) involved in GABAergic synaptic transmission, the Purkinje cell protein 2 (PCP2) and Cerebellin-1 (CBLN1). The cerebellum-specific protein CBLN1 is involved in cerebellar granule cell differentiation and essential for cerebellar synaptic integrity and plasticity [53]. Downregulation or loss of CBLN1, a key node in the protein interaction network of ASD genes, impairs sociability and weakens glutamatergic transmission in ventral tegmental area (VTA) neurons [54]. Moreover, GO analysis of biological processes revealed an enrichment of proteins at the cerebellar synapse that are associated with mRNA processing/splicing (Figure 2c and Supplementary Figure S6). Alternative splicing is a crucial mechanism for neuronal development, maturation, as well as synaptic properties [55] and local protein synthesis is a ubiquitous feature of neuronal pre- and post-synaptic compartments [56]. Regarding the hippocampus, GO analysis of biological processes revealed that proteins involved in neurogenesis and cell differentiation are enriched at its synapse (Figure 2c and Supplementary Figure S6). Moreover, proteins involved with typical hippocampal functions include, for example, the sodium/calcium exchanger 2 (NAC2), which is essential for the control of synaptic plasticity and cognition [57], or the protein-tyrosine kinase 2-beta (FAK2), which is associated with long-term synaptic potentiation and depression.

Sex-Specific Differences in the Synaptic Proteome

One major focus of the present study was to resolve sex-specific differences in the synaptic proteome across different brain regions of adult mice (Figure 3).

Figure 3. (d) Relative protein levels of DDX3Y and DDX3X. Asterisks (***) indicate highly significant differences in protein abundances between the two sexes (p < 0.001, Student's t-test). n.s., not significant.

We observed the highest divergency between male and female mice in the hippocampus (Figure 3a,b).
In total, 71 proteins showed differences in their expression levels between the two sexes including multiple proteins known to be involved in neurological disorders (such as Parkinson's and Alzheimer's disease) (Supplementary Figure S7). Only little differences between male and female mice were observed in the striatal and the cortical synaptic proteome. Here, only seven and eight proteins differed significantly in their abundance, respectively. In the cerebellum, we detected 28 differentially expressed proteins comparing male and female animals, mainly involved in neuron projection and synaptic transmission, as well as in RNA binding and processing (Supplementary Figure S8). In a recent study, Block et al. [58] investigated sex differences in protein expression for a selected panel of about 100 proteins associated with learning/memory and synaptic plasticity in the hippocampus, cerebellum, and cortex of female and male controls and their trisomic littermates (Dp(10)1Yey mouse model of down syndrome). In line with our findings, the authors observed by far the most differences in the hippocampus between the two sexes in their control group, followed by the cerebellum. Interestingly, we observed no overlap of sex-associated synaptic proteins between the different brain regions. Only one protein displayed differential expression across all regions between male and female mice, the ATP-dependent RNA helicase DDX3Y (Figure 3b). As the Ddx3y gene is located on the chromosome Y, it is expected that the respective gene product will be only found in male individuals. Interestingly, its paralog DDX3X, is listed as strong ASD candidate (category 2) in the SFARI autism gene database and has been associated with cases of intellectual disability, hyperactivity, and aggression in females [59]. Hence, we compared the quantitative datasets of altered proteins between male and female wildtype animals with selected autism-associated target genes. In total, we selected 257 ASD risk genes (196 after filtering for duplicates and excluding those without homologues in mouse) for the comparison. Selected ASD risk candidates were compiled from three sources: i) the SFARI autism gene database, the studies from ii) Rubeis et al. [60] and iii) Doan et al. [61] (Supplementary Figure S9 and Supplementary Table S4). Regarding the SFARI gene set, we included high confidence (category 1) and strong ASD candidates (category 2) comprising 25 and 66 candidates, respectively. From the study of Rubeis et al. we incorporated the set of 107 autosomal ASD risk genes (FDR < 0.3) [60] and from Doan et al. 41 recessive genes specifically knocked out (i.e., carrying biallelic loss-of-function (LOF) mutations) in individuals diagnosed with ASD as well as 18 genes detected in their ASD cohort either with LOF or biallelic, damaging missense mutations that have been already described as pathogenic or likely pathogenic [61]. Up to 70 gene products of the described ASD risk genes were detected in our dataset (including 21 that have been described by multiple studies; Supplementary Figure S9). Out of these, four proteins were found to be differentially expressed at the synapses of male and female mice including DDX3X (Figure 3d, Supplementary Figures S10 and S11) as well as KMT2C, MYH10 and SET (Supplementary Figure S12). Discussion The major scope of the present study was to resolve sex-specific differences in the mouse synaptic proteome across different brain regions in adult mice at a postnatal age of P42. 
It has been nicely shown by Gonzalez-Lozano et al. [6] for cortical mouse synapses that levels of synaptic proteins generally increase throughout brain development and converge at an adult age, whereas other proteins, e.g., involved in protein synthesis, are more likely to decrease in abundance during maturation. At P42 typical pre-and post-synaptic proteins already display stable expression levels as compared to later timepoints thus adequately representing the adult mouse synaptic protein repertoire. In general, proteomic studies on brain samples show a great variability in sample preparation [1,6,[9][10][11]21] leading to difficulties in direct comparability. In this study, we enriched proteins from both, the pre-and postsynaptic compartment of the synapse, thereby giving an upmost comprehensive view on the proteinaceous inventory of the synaptic interface. Our unbiased proteomic approach is therefore capable to identify novel sex-specific molecular targets in male and female synapses, respectively. Despite the aforementioned difficulties in comparability, our data are in line with findings of other recent proteome studies on the nervous system. Our data, for example, strongly support the findings by Mann et al. [51] and Alvares-Castelao et al. [21] that the proteome of cerebellar neurons, is highly diverging from cortical, hippocampal and striatal neuron proteomes. In 2015, Sharma et al. [51] resolved a brain region and cell-type specific mouse brain proteome. Analyzing complete brain regions without subcellular enrichment, they report highest divergence for the cerebellum (along with the optic nerve and the brain stem). Among the 10 analyzed brain regions in their study, hippocampus, striatum, prefrontal and motor cortex showed highest similarities. This is highly comprehensible due to the different ontogenetic and phylogenetic development of the rhombencephalic cerebellum and the prosencephalic cortex, hippocampus and striatum. Proteins that showed brain region-dependent expression differences in the dataset of Sharma et al. were associated with the (post)synaptic membrane and involved in processes like transmembrane transporter activity and synaptic transmission underlining the importance to further resolve the synaptic protein composition to better understand underlying neurological and synaptic processes across different regions of the brain. On the synaptic level, only a limited amount of morphological differences between sexes have been reported, yet. In human, Alsonso-Nanclares et al. [62] found that men have a significantly higher density of synaptic contacts than women in all cortical layers of the temporal neocortex. In rodents, it was shown that the density of dendritic spines in the hippocampal CA1 region and the nucleus accumbens is higher in females [63,64]. Importantly, dendritic spine density is influenced by the estrus cycle in rodents [65]. It is well known that sex steroid hormones have an impact on synaptic function and synaptogenesis/synaptic plasticity in a sex-specific way [66][67][68][69][70]. Expression and subcellular localization of nuclear and membrane-associated steroid hormone receptors is different in male and female neurons thereby leading to different responses on hormone action. Impressively, there is no evidence for a lack of steroid hormone receptors in any brain region [71]. 
In the present study, we actually found no significant differences in the expression of the classical and putative membrane-associated steroid hormone receptors in the synapses of either brain region. It has further been shown that calcium/calmodulin kinase kinase (CaMKK) signaling differs in male and female mice [72]. This is in accordance with our finding of a sex-specific difference of synaptic calcium/calmodulin dependent protein kinase II delta (KCC2D) levels in the hippocampus. Zettergren et al. found that myristoylated alanine rich C kinase substrate (MARCKS) protein, a cellular substrate for protein kinase C is more highly expressed in neurons of the limbic system (hypothalamus/amygdala) of neonatal female mice compared to male littermates [73]. In contrast to our study, no isolation of subcellular fractions and comparison of different brain regions was performed. Moreover, we could not find a significant difference in the synaptic amount of MARCKS between male and female. This difference could be explained by our focus on the synaptic compartment or the adult age of the animals analyzed. In our synaptic proteome dataset, we could identify sex-specific molecular changes in all brain regions analyzed. Curiously, only the Y chromosome-encoded DDX3Y protein was differentially expressed in all four regions. In mice, DDX3Y is expressed in several tissues including the brain [74]. In contrast, in humans DDX3Y is an important regulator of spermatogenesis exclusively expressed in human testis [75]. Because of the y-chromosomal heritage, the absence of DDX3Y in female brain was reasonable. Interestingly, the DDX3Y paralog DDX3X shows a significant higher expression only in the female striatum compared to male. DDX3X is a multifunctional ATP-dependent RNA helicase. Although its exact physiological function in the organism is still not fully understood, it seems to be involved in multiple steps of gene expression, such as transcription, mRNA maturation and translation. DDX3X is listed as strong ASD candidate (category 2) in the SFARI autism gene database. ASD is a heterogeneous group of neurodevelopmental disorders, characterized by early-onset deficits in social interaction and communication skills, together with restricted, repetitive behavior. Defects in DDX3X function in humans is associated with brain and behavioral abnormalities, microcephaly, facial dysmorphism, hypotonia, aggression and movement disorders and/or spasticity in female and probably in male [59,[76][77][78][79][80][81][82][83][84][85]. The finding of a sexual dimorphic autism related protein specifically in the striatum is of particular interest because defects in striatal circuitry are known to cause autism-like phenotypes [86]. Interestingly, a sexually dimorphic phenotype has further been observed in a mouse model of striatal interneuron depletion [87]. Another autism related protein, the histone methyltransferase KMT2C was found to be reduced in the hippocampal synapse of male mice. In humans, a mutation of KMT2C is associated with a clinical phenotype overlapping Kleefstra syndrome [88]. Also, the murine variant of the non-muscle heavy chain II B, encoded by the Myosin Heavy Chain 10 gene (MYH10) was found to be less expressed in the synapse from the male hippocampus. In humans, mutation of MYH10 leads to a severe CNS phenotype characterized by microcephaly, cerebral and cerebellar atrophy and severe intellectual disability [89]. 
The gene encoding the protein SET, which showed increased expression level in the cortical synapse in female mice, is listed as strong ASD candidate (category 2) in the SFARI autism gene database. The multitasking protein SET is a nuclear proto-oncogene [90] and involved in apoptosis [91], transcription, nucleosome assembly and histone chaperoning [92]. SET inhibits acetylation of nucleosomes, especially histone H4, by histone acetylases (HAT) [93]. This inhibition is most likely accomplished by masking histone lysines from being acetylated, and the consequence is to silence HAT-dependent transcription. Mutations in the gene encoding SET are linked to developmental delay and intellectual disabilities as well as to autosomal dominant 58 (MRD58), a form of mental retardation, characterized by significantly below average general intellectual functioning associated with delayed development, impairments in adaptive behavior, language delay and speech impairment [94][95][96]. Interestingly, SET interacts with intracellular domains of the gonadotropin-releasing hormone (GnRH) receptor and differentially regulates receptor signaling to cAMP and calcium in gonadotrope cells [97]. Notably, a recent study showed that SET expression is regulated by the neurohormone GnRH [98], providing a potential molecular basis for sex-specific differences in expression levels. Despite our findings, several questions remain to be addressed in future studies. Efforts in recent years have been made to resolve the spatial distribution of synapse types and subtypes [20,21] as well as to decipher the protein repertoire of excitatory and inhibitory synapses [9]. Although we could identify brain region-specific synaptic proteins that are differentially expressed in the synapses of male and female mice, it remains subject to future analyses to link sex-specific expression patterns to specific synapse subtypes or sublocations to gather further insights into the sex-related physiology of neuronal function. Moreover, an age-dependent analysis would further improve our understanding of sex-specific differences during neuronal development. Conclusions Taken together, our work reveals the first sex-specific synaptic proteome in mice. First, we were able to confirm former findings of a specific repertoire of synaptic proteins in different brain areas. Second, we found a set of novel proteins differentially expressed in the synapses of males and females, respectively. Importantly, the repertoire of sex-specific expressed proteins is also brain region-specific. Our findings reveal novel insights into the sex-specific differentiation of synapses thereby leading to a better understanding of the sex-specific physiology of neuronal function and behavior and the pathophysiology of neurodevelopmental and neuropsychiatric diseases in general that often carry a so-called sex bias. 
Supplementary Materials: The following are available online at http://www.mdpi.com/2073-4409/9/2/313/s1, Figure S1: Isolation of synaptic membranes, Figure S2: Comparison of the present dataset with the SynGO protein database, Figure S3: Enrichment of the postsynaptic scaffold protein PSD-95 indicates successful preparation of synaptic proteins, Figure S4: Correlation of protein abundances for all identified synaptic proteins in different brain regions, Figure S5: Protein correlation profiling reveals distinct expression patterns of synaptic proteins across different brain regions, Figure S6: Gene ontology analysis of brain-region specific synaptic proteomes, Figure S7: Protein interaction network of differentially regulated proteins between male and female mice in the hippocampus, Figure S8: Protein interaction network of differentially regulated proteins between male and female mice in the cerebellum, Figure S9: ASD risk gene products identified in our dataset, Figure S10: Sequence coverage of DDX3X and DDX3Y, Figure S11: Overview of identified DDX3X and DDX3Y peptides, Figure S12: Expression levels of SET and MYH10 differ in male and female animals in synaptic membranes of distinct brain regions. Table S1: List of identified synaptic proteins in the prefrontal cortex, hippocampus, striatum and cerebellum of male and female mice, Table S2: List of identified peptides of synaptic proteins derived from the prefrontal cortex, hippocampus, striatum and cerebellum of male and female mice,
Exposure to Concentrated Coarse Air Pollution Particles Causes Mild Cardiopulmonary Effects in Healthy Young Adults Background There is ample epidemiologic and toxicologic evidence that exposure to fine particulate matter (PM) air pollution [aerodynamic diameter ≤ 2.5 μm (PM2.5)], which derives primarily from combustion processes, can result in increased mortality and morbidity. There is less certainty as to the contribution of coarse PM (PM2.5–10), which derives from crustal materials and from mechanical processes, to mortality and morbidity. Objective To determine whether coarse PM causes cardiopulmonary effects, we exposed 14 healthy young volunteers to coarse concentrated ambient particles (CAPs) and filtered air. Coarse PM concentration averaged 89.0 μg/m3 (range, 23.7–159.6 μg/m3). Volunteers were exposed to coarse CAPs and filtered air for 2 hr while they underwent intermittent exercise in a single-blind, crossover study. We measured pulmonary, cardiac, and hematologic end points before exposure, immediately after exposure, and again 20 hr after exposure. Results Compared with filtered air exposure, coarse CAP exposure produced a small increase in polymorphonuclear neutrophils in the bronchoalveolar lavage fluid 20 hr postexposure, indicating mild pulmonary inflammation. We observed no changes in pulmonary function. Blood tissue plasminogen activator, which is involved in fibrinolysis, was decreased 20 hr after exposure. The standard deviation of normal-to-normal intervals (SDNN), a measure of overall heart rate variability, also decreased 20 hr after exposure to CAPs. Conclusions Coarse CAP exposure produces a mild physiologic response in healthy young volunteers approximately 20 hr postexposure. These changes are similar in scope and magnitude to changes we and others have previously reported for volunteers exposed to fine CAPs, suggesting that both size fractions are comparable at inducing cardiopulmonary changes in acute exposure settings. The U.S. Environmental Protection Agency (EPA) currently regulates particulate matter (PM) on the basis of mass in two size ranges: coarse PM [2.5-10 µm in aerodynamic diameter (PM 2.5-10 )] and fine PM (PM 2.5 ). Coarse PM typically derives from soil or abrasive mechanical processes in transportation or industry, and also can contain biogenic materials such as pollen, endotoxin, and mold spores known to have deleterious effects on human health, especially in those with pulmonary diseases such as asthma. In the western United States, coarse PM typically comprises > 50% of measured PM 10 mass on an annual basis, and approaches 70% in some areas of the Southwest. Fine PM derives primarily from combustion processes and may contain both primary particles directly emitted from specific sources and secondary particles formed by atmospheric chemistry changes over time. Wilson and Suh (1997) provide a detailed description of the occurrence and composition of coarse and fine PM and conclude that they are separate classes of pollutants and should be measured separately in epidemiology and toxicology studies. In addition to difference in chemical composition, these two size fractions deposit in different locations in the lung, with coarse PM predominantly depositing in the more proximal portion of the lung (Kim and Hu 1998). Because lung deposition and chemical composition of fine and coarse PM are generally dissimilar, PM may produce biologic activity and health effects distinct in nature and severity from those effects seen with exposure to fine PM. 
Epidemiology studies using time-series analyses have demonstrated significant correlations between exposure to ambient PM air pollution and increased mortality and morbidity. Although most time-series studies have demonstrated health effects to be more strongly correlated with the PM 2.5 fraction than with the PM 2.5-10 fraction (Cifuentes et al. 2000; Klemm et al. 2000; Lipfert et al. 2000; Schwartz and Neas 2000), there is some evidence for effects of coarse PM on mortality and morbidity, especially in arid regions where coarse PM concentrations are relatively high (Castillejos et al. 2000; Mar et al. 2000; Ostro et al. 2000). Epidemiologic evidence of health effects associated with coarse PM was recently reviewed (Brunekreef and Forsberg 2005). A panel study of 19 nonsmoking older adults with cardiovascular disease residing in the Coachella Valley, California, reported an association between decreased heart rate variability (HRV) and coarse PM (Lipsett et al. 2006). A panel study of 12 adult asthmatics residing in Chapel Hill, North Carolina, reported associations between HRV, blood lipids, and circulating eosinophils and coarse, but not fine, PM (Yeatts et al. 2007). Recently, virtual impactor technology (Demokritou et al. 2002; Kim et al. 2000) has been used to expose humans to concentrated fine ambient PM. These studies have reported that fine concentrated ambient particles (CAPs) can cause mild pulmonary inflammation and increased blood fibrinogen in healthy volunteers (Ghio et al. 2000), decreased HRV in healthy elderly volunteers (Devlin et al. 2002), decreased arterial oxygenation and HRV in elderly volunteers who are healthy but have chronic obstructive pulmonary disease (Gong et al. 2004a), and increased HRV and mediators of blood coagulation in healthy and asthmatic subjects (Gong et al. 2003a). Additionally, exposure of healthy volunteers to fine CAPs plus ozone caused brachial artery vasoconstriction (Brook et al. 2002) and increased diastolic blood pressure (Urch et al. 2005), and these findings were attributed primarily to CAP constituents (Urch et al. 2004). To understand more fully the pathologic effects of the coarse fraction and to determine biological plausibility of epidemiologic studies, we examined the physiologic effects of concentrated Chapel Hill coarse CAPs on several cardiac, blood, and pulmonary end points in young, healthy volunteers. This study is part of a continuing series of studies in which humans are exposed to different size fractions of Chapel Hill PM and will also allow us to compare the relative toxicity of coarse and fine PM from the same geographic location. We presented a brief comparison of humans exposed to three size fractions of Chapel Hill CAPs in condensed form in a symposium proceedings (Samet et al. 2007).

Materials and Methods

Study population. This was a single-blind, crossover study approved by the Committee on the Protection of the Rights of Human Subjects of the University of North Carolina at Chapel Hill School of Medicine. Before inclusion, participants were informed of the study procedures and risks, signed a statement of informed consent and were approved for participation by a U.S. EPA physician. Screening procedures included medical history, physical exam, and routine hematologic and biochemical tests. Specific exclusion criteria included any chronic medical condition or chronic medication use (except birth control pills, low-dose antibiotics for acne, or dietary supplements), significant risk factors for cardiovascular disease (e.g., high cholesterol or uncontrolled blood pressure), and current smokers or those with a significant smoking history within 1 year of study participation. Subjects suffering from seasonal allergies were not studied within 6 weeks of a symptomatic episode, and no subject was studied within 4 weeks of a respiratory tract infection. Individuals unable to discontinue substances that could potentially alter their inflammatory response (e.g., antioxidants, nonsteroidal anti-inflammatory agents) for at least 6 days before exposure were not allowed to participate. A urine pregnancy test was performed on all females during the screening process and before each exposure.

Exposure of subjects to coarse CAPs. Ambient air was drawn into an inlet duct on the roof of the U.S. EPA human studies facility (~ 30 m above ground level) in Chapel Hill at 5,000 L/min and transferred downward into the building to a PM 10 inlet of the concentrator. The size selective inlet and the virtual impactors used to concentrate particles are identical to those developed by Demokritou et al. (2002). The major outward flow was 4,500 L/min for the first impaction stage and 450 L/min for the second impaction stage, giving a 50 L/min concentrated aerosol outflow. We diluted the concentrated aerosol with 150 L/min of room-temperature dilution air [relative humidity (RH) ~ 30%], which was added to provide enough airflow for the human subjects' breathing requirements.
The dilution air was passed through HEPA filters and charcoal to strip it of particles and gaseous air pollutants. We assessed PM concentrations in the chamber by gravimetric analysis of filters. We placed the filters immediately upstream of the inlet duct to the chamber, about 30 in from the subject's mouth. We measured real-time concentrations upstream of the concentrator and in the chamber using a Fisher Scientific DataRam 4 monitor (Franklin, MA) and used these concentrations to calculate the concentration factor, which ranged from 4-to 10-fold. We measured particle size distributions with a 3321 APS instrument (TSI Inc., St Paul, MN). Because coarse PM is not evenly distributed in a chamber of this size, each subject was seated with their mouths about 12 in from the chamber inlet duct. Even at that close distance, the subjects inhaled only 67% of the particles measured at the filter inlet. Table 1 shows the actual dose of particles inhaled by each subject. Exposures were conducted in a specially designed Plexiglas chamber at approximately 20°C and 40% RH. Participants were randomly exposed to filtered air and concentrated PM 2.5-10 for 120 min, with intermittent exercise, on two separate occasions separated by at least 1 month. The exercise schedule consisted of 15 min of rest followed by 15 min of exercise on a recumbent bicycle, repeated four times over the 120-min exposure session, with a target ventilation of 20 L/min/m 2 during exercise. During a training session, the workload needed for each subject to achieve the target minute ventilation was calculated. Most subjects achieved the target using a work load of 75 W; however, the tension on the exercise bike was adjusted according to each subject's calculations. End point measurements. Bronchoscopy with lavage. Subjects underwent bronchoscopy with lavage approximately 20 hr after the completion of each exposure, as described in detail in Huang et al. (2006). We quantified cell numbers by counting in a hemocytometer and assessed viability by exclusion of trypan blue dye. Viability exceeded 90% in all samples. Cells were stained using DiffQuik reagents purchased from Sigma (St. Louis, MO), and we determined differential cell counts [polymorphonuclear neutrophils (PMNs), macrophages, lymphocytes, monocytes, epithelial cells, and eosinophils] by counting at least 300 cells under an oil-emersion lens. We stored bronchial lavage (BL) and bronchoalveolar lavage (BAL) fluid at -80°C, and at the end of the study we used commercially available kits to quantify levels of interleukin (IL)-6 and IL-8 (R&D Systems, Minneapolis, MN), prostaglandin E 2 (PGE 2 ; New England Nuclear, Boston, MA), α1-antitrypsin (ALPCO Diagnostics, Windham, NH), and total protein (BioRad, Hercules, CA). Spirometry. We assessed pulmonary function with a SensorMedics Vmax system (VIASYS, Conshohocken, PA). We took measurements of forced vital capacity (FVC), forced expiratory volume in 1 sec (FEV 1 ), and carbon monoxide diffusion capacity (DLCO) before, after, and approximately 20 hr after the completion of each exposure as described earlier (Ghio et al. 2000). Cellular and soluble blood components. We collected venous blood immediately before and approximately 1 hr and 20 hr after the completion of each exposure. Basic blood chemistry, complete blood count with differential, and serum catecholamines were measured by LabCorp (Burlington, NC) on the day the blood was drawn. Plasma and serum was also frozen at -80°C for later analysis. 
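A back-of-envelope estimate of the inhaled dose implied by the exposure protocol described above (2 hr of alternating 15-min rest and exercise periods, a target exercise ventilation of 20 L/min/m2, and roughly 67% of the inlet concentration reaching the mouth) is sketched below in R. The resting ventilation and body surface area are illustrative assumptions, not study values, and this is not the calculation behind Table 1.

```r
# Rough inhaled-dose estimate (illustrative only; not the study's Table 1 calculation).
estimate_inhaled_dose <- function(chamber_conc_ugm3,           # PM at the chamber inlet filter (ug/m^3)
                                  mouth_fraction = 0.67,       # fraction reaching the mouth (from text)
                                  bsa_m2         = 1.8,        # assumed body surface area (m^2)
                                  ve_exercise    = 20 * bsa_m2, # target: 20 L/min/m^2 (from text)
                                  ve_rest        = 10,         # assumed resting ventilation (L/min)
                                  t_exercise     = 60,         # 4 x 15 min exercise (min)
                                  t_rest         = 60) {       # 4 x 15 min rest (min)
  conc_at_mouth     <- chamber_conc_ugm3 * mouth_fraction
  inhaled_volume_m3 <- (ve_exercise * t_exercise + ve_rest * t_rest) / 1000  # L -> m^3
  conc_at_mouth * inhaled_volume_m3  # inhaled dose in ug
}

# Example: mean chamber inlet concentration of ~105 ug/m^3
estimate_inhaled_dose(105)
```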
We used commercially available ELISA kits to quantify levels of C-reactive protein (ALPCO Diagnostics, Windham, NH); D-dimer and von Willebrand's factor (vWF; Diagnostica Stago, Parsippany, NY); factor VII, factor IX, prothrombin, tissue plasminogen activator (tPA), plasminogen, fibrinogen, and protein C (Enzyme Research Laboratories, South Bend, IN); and plasminogen activator inhibitor 1 (DakoCytomation, Carpenteria, CA).

Heart rate variability (HRV). We collected continuous ambulatory electrocardiograms (ECGs) for approximately 24 hr using a 3100A Zymed Holter System (Phillips, Andover, MA) and processed them by standard Zymed algorithms. We sampled the ECGs at 120 Hz and stored all data on flash cards before processing. An electrocardiographic research nurse blinded to the particle exposure randomization then edited the sequence of ECG complexes to ensure proper labeling of each QRS complex. From the edited records, we assessed time domain [standard deviation of normal-to-normal intervals (SDNN) and percentage of differences between adjacent normal-to-normal intervals that are > 50 msec (PNN 50 )] and frequency domain [total, low-frequency (LF; 0.03-0.15 Hz), and high-frequency (HF; 0.15-0.4 Hz) power] variables for the 5-min time periods immediately before exposure and approximately 1 hr and 20 hr after the completion of each exposure. We conducted measurements while subjects reclined quietly in a dark room for 30 min, with the final 10 min being used for HRV analysis. In addition to the 5-min intervals, we also calculated SDNN and average SDNN for the entire 24-hr ambulatory monitoring period.

Statistical analysis. We assessed lung function, blood, and Holter measurements immediately before, 1 hr after, and 20 hr after each exposure. We assessed BAL/BL measurements only 20 hr postexposure. For analysis, we subtracted preexposure values from 1 hr and 20 hr postexposure values and compared the air exposure differences with the CAP exposure differences for each person. Data in the figures are percent change in these differences per 10-µg/m 3 increase in PM concentration in the exposure chamber. Because we took BAL measurements only 20 hr postexposure, we compared only those values during the statistical analysis. Concentration of coarse PM in the chamber is measured on a continuous scale and varies from subject to subject depending on the outdoor PM concentration that day. Table 1 shows the inhaled and chamber concentrations during the CAP exposure. PM concentrations in the chamber during air exposures were low, but not zero. We used linear mixed effects models (R statistical software, version 2.3.1, lmer package) to test differences in response between the CAP and filtered air exposures. More specifically, we used a random intercept model to account for the subject-level variability and estimated the slope parameter that described an expected change in response as a function of PM concentration. We summarize the estimates of slope on the basis of a 10-µg/m 3 increase in PM concentration. We also report the mean and standard error associated with each end point. We used an α of 0.05 to determine statistical significance.

Results

Study population and exposure. Table 1 contains basic demographic and PM concentration information. This study enrolled six female and eight male participants with a mean age of 24.9 years. All exposures began within 30 min of 0945 hours to control for diurnal variations in physiologic response.
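The random-intercept analysis described in the Statistical analysis section above can be sketched in R roughly as follows. The lme4 package is used here as a present-day stand-in for the lmer implementation available under R 2.3.1, and the data frame layout and variable names are illustrative assumptions rather than the study's actual code.

```r
library(lme4)

# Assumed layout: one row per subject and exposure session, with
#   delta   = postexposure minus preexposure value of an end point
#   pm_conc = PM concentration in the chamber for that session (ug/m^3)
#   subject = subject identifier
# A random intercept per subject accounts for subject-level variability.
fit <- lmer(delta ~ pm_conc + (1 | subject), data = endpoint_data)

summary(fit)

# Slope expressed per 10-ug/m^3 increase in PM concentration:
10 * fixef(fit)["pm_conc"]
```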
Although we constructed the concentrator to concentrate only PM 2.5-10 , there is some concentration of PM 1-2.5 (particles between 1 and 2.5 µm). "Total PM" in Table 1 refers to the combination of all particles present in the chamber. We observed substantial variation in concentrated PM exposure reflecting the natural daily variation in PM concentration outside the facility. CAP exposures took place from June to December, and, as expected, coarse PM concentrations were higher in the warmer months (Figure 1). Coarse PM caused small but significant changes in lung neutrophils and monocytes. Total cell recoveries in the BL and BAL fluids did not differ between air-and CAP-exposed individuals. Figure 2 provides data for the cell differential end points. In the BAL fraction we observed a statistically significant 10.7% increase in percent PMNs per 10 µg/m 3 of coarse PM (p = 0.0065), indicative of mild pulmonary inflammation. BAL PMN values in air-exposed individuals were 1.0 ± 0.30%. In the BL fraction, we observed a trend toward an increase in PMNs with increasing CAP concentration, although the trend was not statistically significant. We observed a small, but statistically significant, 2.0% decrease in percent monocytes in the BL fraction (p = 0.05) per 10 µg/m 3 of coarse PM. Monocyte levels in air-exposed individuals were 7.2 ± 0.8%. We observed no significant changes in any of the other cell types in either fraction. Coarse PM caused a decrease in total protein in the BAL. We observed no differences in recovery of BL or BAL fluid between air-and CAP-exposed individuals. The group average return was within 10%, so soluble components were expressed per milliliter of lavage fluid. As shown in Figure 3, soluble markers of inflammation present in BAL and BL fluids (IL-6, IL-8, PGE 2 ) were not changed after exposure to coarse CAPs for 2 hr. However, we observed a 1.8% decrease in total protein in BAL fluid (p = 0.0191) per 10 µg/m 3 of coarse PM. Protein levels in BAL fluid of air-exposed individuals were 74.2 ± 9.0 µg/mL. In an earlier study (Ghio et al. 2000), we also reported decreased protein in lavage fluid of humans exposed to fine CAPs. Coarse PM caused no changes in pulmonary function. We took lung function measurements before, immediately after, and again 20 hr after exposure to air and coarse CAPs. Relative to measurements taken before exposure, FEV 1 and FVC did not show statistically significant changes either immediately or 20 hr after exposure to coarse CAPs (Figure 4). A recent study reported changes in pulmonary diffusing capacity (DLCO) for carbon monoxide in humans exposed to ultrafine carbon black particles (Pietropaoli et al. 2004). However, in this study we saw no change in DLCO either immediately or 20 hr after CAP exposure. Coarse CAPs cause small changes in vascular factors involved in coagulation. We measured a number of soluble factors involved in clotting and coagulation before, immediately after, and again 20 hr after exposure to air or coarse CAPs. Relative to measurements taken before exposure, only tPA showed a significant change after CAP exposure ( Figure 5). tPA concentration decreased 32.9% from the mean baseline level (p = 0.01) per 10-µg/m 3 increase in CAP concentration when measured approximately 20 hr after exposure. Decreased tPA levels could potentially result in the formation of less plasmin, a compound that plays a key role in dissolving blood clots that may have formed in blood vessels. 
D-dimer concentration decreased 11.3% per 10 µg/m 3 of coarse PM 20 hr after exposure, but the decrease did not quite achieve statistical significance (p = 0.07). tPA and D-dimer concentrations in blood drawn before exposure were 6.5 ± 1.6 ng/mL and 349 ± 61 ng/mL, respectively. Other blood biomarkers. We also examined markers of systemic inflammatory processes, catecholamines, and lipids in the blood (data not shown). Coarse CAP exposure did not cause significant changes in C-reactive protein, catecholamines, triglycerides, or cholesterol (total, very-low-density, low-density, and high-density lipoproteins). HRV. In a previous study , we reported that elderly healthy people exposed to fine CAPs had changes in both time-and frequency-domain variables as measured by Holter recording. In this study, we applied a Holter monitor to each subject before exposure, which the subject wore for 24 hr. We measured SDNN, a marker of overall HRV, using data from the entire monitoring period. We measured PNN 50 , LF, HF, and total power during three 5-min periods when the subject was lying at rest: immediately before, immediately after, and 20 hr after exposure to air or coarse CAPs. Relative to the measurements taken before exposure, SDNN decreased 14.4% from the mean baseline level per 10-µg/m 3 increase in CAP concentration during a resting period 20 hr postexposure (p = 0.05) (Figure 6). We observed no statistically significant changes in any of the other HRV end points at either postexposure measurement and no changes in the 24-hr end points measurements. Discussion The recent development of second-generation particle concentrators has made it possible to examine the effects of real-time exposure to atmospheres in which size-fractionated PM is selectively concentrated. This study shows that an acute exposure of healthy young adults to concentrated coarse CAPs resulted in significant changes in indices of pulmonary inflammation, hemostasis, and autonomic nervous system balance. The mean coarse PM concentration (105.1 µg/m 3 ) measured at the inlet to the exposure chamber in this study is not unrealistically high and can be found in many areas throughout the world, including locations in the U.S. Southwest. Furthermore, the actual PM concentration at the mouth of the subject was only 67% of that measured at the chamber inlet, resulting in average exposures to only 70.3 µg/m 3 . Table 1 shows the actual dose of PM inhaled by each person, taking ventilation into account. Coarse PM are preferentially deposited in the proximal region of the lung. They also contain the bulk of particle-bound biological material such as lipopolysaccharide, which is a known proinflammatory agent. Therefore, we expected that coarse CAP exposure would elicit an inflammatory response (PMN influx) in the lung. Indeed, we did find a small but statistically significant increase in the percentage of PMNs in the BAL fluid. In a previous study, we exposed young, healthy volunteers to fine CAPs (PM 2.5 ), divided into four concentrations quartiles of 0 (filtered air), 47.2, 107.4, and 206.7 µg/m 3 (Ghio et al. 2000). In the present study, the mean PM concentration (105.1 µg/m 3 ) was very close to the mean concentration of the third quartile in the fine CAP study (107.4 µg/m 3 ) and resulted in an identical percentage of PMNs in the BAL fluid, suggesting that coarse and fine PM may be equally potent in inducing pulmonary inflammation. 
Other studies have not reported an influx of inflammatory cells into the lung after exposure of humans to CAPs. However, those studies used induced sputum to obtain cells from the respiratory tract rather than BAL, and we have found the latter is a more sensitive and less noisy method to measure lung inflammation, particularly when the percent influx of PMNs is small. We did not observe decreases in lung function in this study, nor did we observe increases in soluble markers of pulmonary inflammation such as IL-6, IL-8, or PGE 2 . This is in agreement with other studies in which both healthy humans and those with pulmonary disease have been exposed to CAPs (Ghio et al. 2000; Gong et al. 2003a, 2003b, 2004a, 2008) and strengthens the notion that acute exposure to air pollution particles generally does not seem to result in substantial changes to the respiratory system. In contrast to CAP exposure, our previous studies in which humans were exposed to low levels of ozone have shown a marked decrease in pulmonary function and increase in inflammatory cells and cytokines (Devlin et al. 1991; Horstman et al. 1990), suggesting that particles and ozone exert their effects via dissimilar mechanisms.

In this study, we found decreased blood plasma tPA levels 20 hr after exposure to coarse CAPs. tPA is a protein that is involved in the breakdown of blood clots by catalyzing the conversion of plasminogen to plasmin, the major enzyme responsible for clot breakdown. Decreased tPA levels could inhibit the breakdown of any clots formed by particles, thus increasing the odds of a thromboembolic event. These findings add to the growing number of studies that have found associations between PM exposure and alterations in indices of hemostasis and thrombosis. We previously reported elevations in blood fibrinogen, a fibrin precursor and acute phase reactant, in the blood of healthy volunteers 24 hr after exposure to fine CAPs. Increased blood fibrinogen has also been associated with exposure to air pollution in a number of panel studies (Chuang et al. 2007; Liao et al. 2005; Pekkanen et al. 2000; Ruckerl et al. 2007). We also reported an association between PM and vWF, a glycoprotein involved in endothelial cell activation, hemostasis, and platelet adhesion, in a panel of highway patrol troopers exposed to near-road air pollution (Riediker et al. 2004). Others have also reported associations between PM exposure and increased vWF (Liao et al. 2005), as well as plasminogen activator inhibitor (Chuang et al. 2007; Mills et al. 2005; Su et al. 2006). Taken as a whole, these studies indicate that exposure to PM has the potential to alter the hemostatic balance in the blood, favoring a prothrombogenic environment and interfering with fibrinolytic pathways. Whether these changes in hemostatic factors contribute to the triggering of cardiovascular and other thrombotic events after PM exposure remains to be established.

Analysis of HRV is a noninvasive method to assess the function of the autonomic nervous system. Reduced HRV is considered a prognostic marker for adverse cardiovascular events in patients with a prior myocardial infarction. Panel studies have consistently associated fine PM exposure with decreased HRV. Gold et al. (2000) reported an association between decreased SDNN and increased PM 2.5 , as did Liao et al. (1999) and Pope et al. (1999).
Previous work from our group demonstrated that healthy elderly individuals without overt cardiovascular disease exposed to fine CAPs experienced decreased SDNN and HF HRV in 5-min resting intervals immediately and 24 hr after exposure . A similar study by Gong et al. (2004a) involving healthy elderly individuals exposed to fine CAPs reported a significant decrease in SDNN 18 hr after exposure. In this study we extend our earlier findings by showing that exposure to coarse CAPs also results in decreased SDNN 20 hr after exposure. These results are in agreement with an earlier study which also reported decreased SDNN in healthy volunteers exposed to coarse CAPs (Gong et al. 2004b). They are also in agreement with a recent panel study in which we reported an association between coarse PM and decreased SDNN in asthmatics (Yeatts et al. 2007). A note of caution must be exercised in interpreting these findings. We measured multiple end points in the lung, blood, and heart. Therefore, some of the statistically significant findings may have been due to chance alone. It will be important to see whether these findings can be replicated in our own and others' studies. There has been considerable discussion about the pathways by which PM can cause acute cardiovascular changes. One of the early thoughts was that particles could cause changes to the primary target organ, the lung, which would spill over into the vascular system and secondarily affect autonomic function. However, data from numerous human and animal toxicology studies suggest that CAPs do not induce sufficient pulmonary responses to cause these kinds of secondary effects. Indeed, if this were the case, one might expect that ozone, a powerful inducer of pulmonary inflammation, would be able to cause even more substantial cardiovascular changes than PM, which has not been shown to date. Recent studies in rodents have hypothesized that ultrafine PM (PM 0.1 ; those < 0.1 µm in diameter) may actually leave the lung and directly attack the cardiovascular system (Nemmar et al. 2002;Oberdorster et al. 2002). Although this mechanism may play a role in cardiovascular effects caused by ultrafine PM, the large size of coarse PM makes it unlikely that they can pass directly into the circulation. However, it does not exclude the possibility that soluble components of coarse PM may find their way into the circulatory system. Recent in vitro experiments in our laboratory have shown that 40% of the activity of coarse PM that affects cultured airway epithelial cells resides in the water-soluble portion of the PM (data not shown). A third possibility is that PM, regardless of size, may affect the cardiovascular system through nerve impulses transmitted from the lung to the brain. Ozone-induced decrements in lung function are thought to be mediated via interaction of the pollutant with C-fibers innervating the lung in humans and dogs (Bromberg and Koren 1995;Coleridge et al. 1993;Passannante et al. 1998). Administration of β-adrenergic receptor and muscarinic receptor antagonists effectively blocked PM-induced cardiac oxidative stress in rats (Rhoden et al. 2005). Capsazepine, a selective antagonist of the vanilloid receptor present on pulmonary C-fibers, blunt PM-induced changes in cardiac oxidative stress and edema in rats (Ghelfi et al. 2008). Tunnicliffe et al. (2001) reported HRV changes but no respiratory system changes in humans exposed to sulfur dioxide, suggesting that a cardiac autonomic effect can be triggered by upper-airway irritant receptors. 
Conclusions The results of this study showed that young, healthy people experience mild acute physiologic effects when exposed to environmentally relevant coarse air pollution PM. The results of this study are generally consistent with those of previously published studies examining the effects of both coarse and fine PM, suggesting that both particle size fractions are roughly equivalent in inducing cardiopulmonary changes in healthy humans. However, given the large number of end points typically measured in these studies, the relatively small number of positive findings makes it important for these findings to be replicated in future studies.
T2*-Mapping of Knee Cartilage in Response to Mechanical Loading in Alpine Skiing: A Feasibility Study Purpose: This study intends to establish a study protocol for the quantitative magnetic resonance imaging (qMRI) measurement of biochemical changes in knee cartilage induced by mechanical stress during alpine skiing with the implementation of new spring-loaded ski binding. Methods: The MRI-knee-scans (T2*-mapping) of four skiers using a conventional and a spring-loaded ski binding system, alternately, were acquired before and after 1 h/4 h of exposure to alpine skiing. Intrachondral T2* analysis on 60 defined regions of interest in the femorotibial knee joint (FTJ) was conducted. Intra- and interobserver variability and relative changes in the cartilage T2* signal and thickness were calculated. Results: A relevant decrease in the T2* time after 4 h of alpine skiing could be detected at the majority of measurement times. After overnight recovery, the T2* time increased above baseline. Although, the total T2* signal in the superficial cartilage layers was higher than that in the lower ones, no differences between the layers in the T2* changes could be detected. The central and posterior cartilage zones of the FTJ responded with a stronger T2* alteration than the anterior zones. Conclusions: For the first time, a quantitative MRI study setting could be established to detect early knee cartilage reaction due to alpine skiing. Relevant changes in the T2* time and thus in the intrachondral collagen microstructure and the free water content were observed. Introduction In alpine skiing, high peak loads affect the knee joint, and this is confirmed by the comparatively high rates of knee injury in this popular sport [1]. Skiing is practiced by all age groups, with the over-60 s making up about 20% of skiers [2]. The uptake rates in the bone scintigraphy (using the Tc-99 m MDP bone scans) for the knee joints during the active racing period were significantly higher than those during the inactive period. This indicates an increased risk of damage to the knee cartilage and the development of osteoarthritis (OA). Based on these figures, it can be assumed that many individuals who are already affected by OA regularly engage in alpine skiing. The two major macromolecular components of the extracellular cartilage matrix, Type-II collagen and proteoglycans (PG), are responsible for its biomechanical resilience, with collagen providing elastic properties [3] and PG providing viscoelastic properties [4]. Water occupies most of the interfibrillar extracellular matrix, approximately 70% of which is free to move when loaded by compressive forces [5]. OA begins with an impaired balance of cartilage metabolism, with changes in the microstructure of collagen fibers, a loss of PG, and an increase in the free water content. Increasing OA is accompanied by a progressive reduction in the free intrachondral water content. These changes lead to irreversible damage to the cartilage during the course of the disease [6]. Quantitative magnetic resonance imaging (qMRI) offers the possibility of detecting changes in the chondral biochemical microstructure before morphological changes have occurred and can therefore be used as a non-invasive biomarker of cartilage degeneration [7]. Numerous studies have examined the changes in cartilage under biomechanical load during running using qMRI (dGEMRIC, T1rho-, T2/T2*-mapping, etc.) [5,8,9]. 
Among these, T2*-mapping is suitable for detecting the proportion of intrachondral free water and the collagen fiber structure, thus detecting the early stages of OA [10]. The comparative advantages of T2*-mapping are the short acquisition time, the avoidance of contrast, the high spatial resolution, and the possibility of isotropic three-dimensional reconstruction [11]. Previous joint studies in alpine skiing are limited to the evaluation of elderly individuals with unilateral total knee replacement in terms of perceived pain, knee function, muscle mass, and effort [12]. Due to logistical, temporal, and technical peculiarities, scientific work investigating the relationship between alpine skiing and the microstructure of cartilage using qMRI has not yet been carried out. The aim of this preliminary feasibility study is to establish a suitable examination protocol to measure and analyze any relevant intrachondral changes in the femorotibial (FTJ) and femoropatellar joint (FPJ) by means of mechanical stress during alpine skiing using qMRI (T2*-mapping). The changes were examined on a conventional ski binding without suspension and on a new spring-mounted ski binding, which can be mounted between any alpine ski model and the ski binding. The damping plate consists of duralumin with pressed-in plain bearing sleeves and an inserted leaf spring made of austenitic stainless chromium-nickel steel ( Figure S1 in the Supplementary Files). According to the manufacturer, the damping plate absorbs up to 40% of the impact load-for example, on icy ski slopes or through transverse grooves [13]. By reducing the impact load on the joints, the damping plate is especially considered for skiers with pre-existing OA. Materials and Methods For the study carried out in the ski resort of Lech/Zürs (Austria), four experienced male alpine skiers from a South German ski club were randomly selected. The subjects were 18 (subject 1), 22 (subject 3), 32 (subject 4), and 58 (subject 5) years old and had a BMI of 22.0, 28.8, 22.6, and 24.8 kg/m 2 , respectively. The study was approved by the Ethics Committee of "blinded". Prior to enrollment, all of the subjects underwent an orthopedic-clinical basic examination and an initial morphologically oriented MRI of both knee joints. The clinical status and alignment of the lower extremities and their knee joints showed no relevant abnormalities or pathologies. This excluded arthralgia (no pressure pain, negative meniscus test mix [14]), instabilities/laxities (testing of the ligaments: negative Lachmann, (reversed) pivot shift, Lever Sign test [15]), dysfunctions (stable patellar movement), and inflammation (no swelling, warming, effusion). Morphological (PDfs: proton density fat saturated) and water-sensitive (TIRM: turbo inversion recovery magnitude) MRI sequences (Table 1) were obtained and subsequently assessed for the detection of effusion/edema and (osteo-) chondral lesions in the knee using the MRI-modified Outerbridge grading system [16]. This excluded relevant chondropathies (higher Outerbridge grade 1, [17]), meniscal lesions with surface involvement (higher grade 2, [18]), ligamentopathies, and inflammation in all subjects. The qMRI study took place on two consecutive weekends. The test subjects chose a slope profile with similar levels of difficulty on both weekends (category blue or red). Due to foehn storms (high speeds of the warm, fall wind on the leeward side of the mountains), only about 50% of the ski lifts in the ski resort were open on the first weekend. 
On the second weekend, about 90% of the lifts were open, which made the waiting times of the subjects at the lifts correspondingly shorter. On the first weekend, the mean air temperature was 6.3 °C on the day of skiing; on the second, it was −0.7 °C. There were softer snow conditions on the slopes on the first weekend (data acquisition: Central Institute for Meteorology and Geodynamics Austria).

For qMRI data acquisition, a 1.5 T MRI (Magnetom Aera, Siemens Ltd., Siemens Healthcare GmbH, 91052 Erlangen, Germany) was available at the outpatient office Lech, which was at a distance of 200 m from the ski slope. The qMRI scans with T2*-mapping (Table 1: Syngo TM MapIt FLASH T2*-GRE) using a dedicated, table-fixed, eight-channel knee coil were performed for both knee joints of the subjects one day before (baseline: I-t0), immediately after 1 h (I-t1) and 4 h (I-t2) of loading by alpine skiing, and after recovery the following morning. Subjects 1 and 2 used a conventional ski binding without suspension on the first weekend (I), and the damped ski binding system on the second weekend (II). Subjects 3 and 4 used the binding systems in reverse order. On the second weekend, the same was repeated with the changed binding systems (baseline: II-t0, II-t1, II-t2). A blinding of the subjects regarding the ski binding systems could not be realized.

For the quantitative biochemical cartilage analysis, T2* relaxation times were obtained from online reconstructed T2* maps by using a pixelwise, mono-exponential nonnegative least-squares fit analysis (Syngo TM MapIt; Siemens Ltd.) [19]. The mean latency between the end of the skiing and the start of the T2* measurements was less than 10 min, and the mean MR scan duration for both knee joints was 38 min.

The use of qMRI techniques for the structural analysis of (knee) articular cartilage requires zonal differentiation in order to detect specific cartilage degradation patterns and analyze causal relationships [20]. This is in line with our approach. The mean cartilage height (Ht), the T2* time per region of interest (ROI), and the mean ROI size, calculated by the areal dimension and the number of pixels per ROI, were measured and documented on an MR workstation (Syngo TM MapIt, Siemens Ltd.) by a trained scientist in the FTJ and FPJ. For this purpose, a total of 18 ROIs were defined on each knee for the cartilage layers (12 in the FTJ and 6 in the retropatellar cartilage layer) based on the methodology of the T2*-mapping of the TransEurope FootRace (TEFR) project [9], first described by Mamisch et al. [21]. As illustrated in Figure 1B-D, the ROIs were manually drawn on the slices in a way that covered nearly the entire cartilage areas. Care was taken to avoid the subchondral bone or joint fluid and to set the ROIs in the exact same positions at every examination. Table 2 shows the nomenclature of specific cartilage ROI areas. In total, four slices were implemented per FTJ (Figure 1A), and two slices were implemented per FPJ (=60 ROIs). At five measurement times, a total of 600 ROIs per subject and 2400 ROIs for the overall study had to be created. For each ROI, the average T2*, number of pixels, and Ht were calculated from the two parallel slices and taken for further analysis. To determine the mean T2* for each layer, zone, and cartilage segment (Table 2), the mean T2* values of the specific area were pooled and calculated with regard to the ROI sizes.
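For orientation, the sketch below shows what a generic pixelwise mono-exponential T2* fit looks like: each pixel's multi-echo signal is modeled as S(TE) = S0 · exp(−TE/T2*) and solved by log-linear least squares. This is only an illustration under assumed inputs (echo times, image array, ROI mask); it is not the scanner's Syngo MapIt implementation used in the study.

```python
# Illustrative sketch of a pixelwise mono-exponential T2* fit (ms).
# Echo times, array shapes, and the ROI mask are hypothetical inputs.
import numpy as np

def t2star_map(echoes: np.ndarray, te_ms: np.ndarray,
               min_signal: float = 1e-6) -> np.ndarray:
    """echoes: (n_te, ny, nx) magnitude images; te_ms: (n_te,) echo times.
    Returns a (ny, nx) map of T2* in milliseconds (log-linear least squares)."""
    n_te, ny, nx = echoes.shape
    y = np.log(np.clip(echoes, min_signal, None)).reshape(n_te, -1)  # (n_te, npix)
    # Design matrix for ln S = ln S0 - TE / T2*, i.e. coefficients [ln S0, 1/T2*].
    A = np.column_stack([np.ones(n_te), -te_ms])                     # (n_te, 2)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)                     # (2, npix)
    rate = np.clip(coef[1], 1e-6, None)   # enforce a non-negative decay rate
    return (1.0 / rate).reshape(ny, nx)

# Example: mean T2* within a manually drawn boolean ROI mask "roi":
# roi_mean_t2star = t2star_map(echoes, te_ms)[roi].mean()
```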
The time required for the cartilage T2* and Ht analysis was nearly 45 min for each side.

Figure 1. (A) Four sagittal slices (two central layers each in the lateral and medial FTJ); (B) fused colored T2* maps (syngo™-MapIt fusion technique) of axial FLASH T2*w GRE in the FPJ with ROIs for patellar T2* and Ht measurement between the medial, central, and lateral zones; (C,D) fused colored T2* maps of sagittal FLASH T2*w GRE in the medial FTJ (C) and lateral FTJ (D) with ROIs for T2* and Ht measurement between the anterior, central, and posterior zones of the femoral and tibial cartilage segments. The coloring reflects the relative intensity of the water signal or the water concentration, reaching from blue to green, yellow, and red (blue: weak signal/low water content; red: strong signal/high water content).

Statistics and Testing. For the determination of the intra- and interobserver variability of the ROI sizes and the mean T2* and Ht values, the data of a randomly selected subject were again evaluated by the same scientific staff member after 6 months and additionally by a specialist in radiology. They were supervised by two radiologists with a special interest in musculoskeletal imaging and 15-25 years of experience. The intra- and inter-class correlation coefficient (ICC) was calculated for each ROI (n = 140) [22], and Bland-Altman plots were created to visualize the match for the T2* time, the number of pixels, and the Ht (95% limits of agreement (LOA): mean difference ± 1.96 standard deviation (SD)) [23]. For the data documentation, statistical and descriptive analyses and graphical presentations using Office-Excel TM (release-1812, 2016, Microsoft Inc., Microsoft Corporation, Redmond WA 98052-6399, USA), SPSS TM (release-25.0, 2017, IBM TM -Statistics), and SigmaPlot TM (release-12.5., 2011, Systat Inc., Systat Software GmbH, 40699 Erkrath, Germany) were utilized. It is known from previous studies that absolute intrachondral T2/T2* and cartilage thickness values show joint-related intraindividual variation due to multiple influencing factors such as gender, age, weight, activities of daily living (ADL), joint anatomy, alignment, etc. [24][25][26]. Therefore, the calculated and graphed target parameters were the relative changes of T2* and Ht compared to the baselines I and II, respectively. Due to the small number of subjects, no statistical analyses were done.

Results

For the mean T2* time per ROI, the Intra-ICC was 0.95 (confidence interval (CI) 0.93-0.96) and the Inter-ICC was 0.92 (CI 0.89-0.94), with segmental SDs of 0.92 ms and 1.15 ms, respectively; the corresponding LOAs are shown in Figure 2.
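As a small illustration of the agreement analysis described above, the following sketch computes Bland-Altman 95% limits of agreement (mean difference ± 1.96 SD) for paired ROI readings; the input arrays are hypothetical repeated measurements, not study data.

```python
# Illustrative sketch (not the authors' SPSS workflow): Bland-Altman 95%
# limits of agreement for two repeated readings of the same ROIs.
import numpy as np

def bland_altman_loa(x: np.ndarray, y: np.ndarray):
    """Return (mean difference, lower LOA, upper LOA) for paired readings."""
    diff = x - y
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Example with made-up T2* readings (ms) for a handful of ROIs:
# rater1 = np.array([26.1, 28.4, 22.7, 30.2])
# rater2 = np.array([25.8, 29.0, 23.1, 29.7])
# bias, lo, hi = bland_altman_loa(rater1, rater2)
```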
For the mean number of pixels per ROI, the Intra-ICC was 0.96 (CI 0.95-0.97) and the Inter-ICC was 0.93 (CI 0.90-0.95); the segmental SDs were 14.4 px and 16.9 px, respectively (for the LOAs, see Figure 2). For the mean Ht, an Intra-ICC of 0.92 (CI 0.88-0.94) was determined, with a segmental SD of 0.21 mm (for the LOA, see Figure 2).

At baseline, the mean measured Ht in the FPJ was greater compared to that in the FTJ. Compared to baseline, the after-load measurements in the FPJ and FTJ showed a slightly lower Ht in the majority of the measurement series (cartilage segments, Table S1 in the Supplementary Files). The SD of the measured changes in cartilage height Ht was regularly below 0.22 mm (Table S1 in the Supplementary Files).

The segment-related relative changes in the T2* time are graphically shown in Figure 4a-c (for the corresponding mean T2* values, see Table S2 in the Supplementary Files). The two youngest subjects had lower mean T2* values than the other two subjects at baseline in both the FTJ and retropatellar. The mean T2* times were 28.6 ms (SD 2.9 ms) for the lateral femoral, 19.6 ms (SD 1.5 ms) for the lateral tibial, 26.9 ms (SD 2.1 ms) for the medial femoral, and 23.3 ms (SD 2.2 ms) for the medial tibial; for the retropatellar, it was 26.6 ms (SD 3.2 ms). Side-differences in the subjects with regard to the T2* values could not be detected (Table S2 in the Supplementary Files). Compared to the deep cartilage layers, the superficial layers across all of the knee segments and test persons showed a significantly higher T2* signal at all times of measurement (Figure 5); the mean difference at the baseline measurement in the lateral FTJ was 7.9 ms (SD 2.7 ms), in the medial FTJ it was 9.2 ms (SD 3.5 ms), and in the retropatellar it was 10.2 ms (SD 2.1 ms). The zonal T2* comparison at baseline showed no relevant differences in the mean values due to the comparatively high SD (Table S1 in the Supplementary Files).

In all of the subjects, relevant changes in T2* time could be detected under load in both knee joints compared to the baseline (Figure 4a-c). However, in comparison, increases and decreases in T2* time were measurable in the individual subjects. No relevant mean trend could be evaluated for higher or lower T2* changes with respect to layers (Figure 5), zones (Figure 6), or segments or side differences. However, when looking at Figure 6, the middle and posterior zones of the femoral and tibial segments experienced a higher T2* change than the anterior zones. The load-induced relative T2* changes also showed no trend-setting differences with respect to the knee joint compartments or regions. After 4 h of exposure, a decrease in T2* time compared to the baseline was observed in the majority of subjects in the FPJ and retropatellar. In this regard, however, no significant differences can be demonstrated between the measurements after 1 h or after 4 h of exposure. At the recovery measurement, the T2* time in all four subjects was elevated in almost all FTJ segments but not in the retropatellar (Figure 4).

Skiing comfort was rated as high by all subjects for both the conventional ski binding and the damped binding systems. Due to the different weather conditions on both weekends, with much softer slope conditions on the first weekend, the subjective comparison of the ski binding systems for the subjects was only possible with difficulty. Regarding the T2* times, no relevant difference between the two attachment systems was detectable (Figures 4-6).

Discussion

The authors present an application of qMRI for the biochemical evaluation of articular knee cartilage in alpine skiers, focusing on specific T2*-mapping for the first time. The location of the MRI nearby a ski slope provided optimal conditions, since the subjects could be examined immediately after skiing without any relevant loss of time, allowing for the minimization of recovery times. The Intra-ICC and Inter-ICC showed high agreement for the tested parameters T2*, ROI area, and Ht. What is decisive for the interpretation of the sufficient accuracy of a measuring method, however, is the SD or the LOA (= 1.96 × SD).

Cartilage Ht. Compared to the baseline, a somewhat lower Ht was measured in the majority of the measurement series in the cartilage segments after ski load (Table 3). However, these measurements are not reliable because the intraobserver inaccuracy (SD and LOA, Figure 2) of the measurement method used is at least the same as the measured changes. The cartilage Ht measurement methodology should therefore be considered unsuitable after this feasibility study, as it is considered too imprecise, and any subsequent study should use other MR-based cartilage thickness or volumetric analysis methods that are more accurate and already validated [27]. With these, a comparison could then be possible with regard to the study results on cartilage Ht change during running, which explain the short-term load-induced cartilage Ht reduction by the reorientation of the collagen fiber structure due to compression forces, especially in the transitional zone of the cartilage [5].

Cartilage T2*. In all of the subjects, relevant cartilage T2* changes due to alpine skiing could be measured. Many studies on endurance running have shown higher load-induced changes for the superficial compared to the deep cartilage layers [25]. However, we could not prove such regional differences regarding T2* changes. Some studies indicate that measured T2* values are directly dependent on the type of exercise and their specific cartilage loadings and therefore can lead to different T2/T2* maps [24]. Due to the recurrent knee flexion and/or prolonged downhill posture, alpine skiing has a different mechanical load on the knee compartments compared to running. Because of the movement pattern, skiing is more akin to squatting than running. When running, the compression force on the FTJ is about three times the body weight, but it is up to six times the body weight in squatting [28]. The compression force on the patella during running is about 5.6 times the body weight and up to 7.8 times the body weight in squatting [29]. As no T2 mapping studies have been performed so far for alpine skiing, the presented measurement results are not directly comparable.
Regarding the literature on stair climbing, the superficial layers of the retropatellar lateral cartilage should undergo bigger T2 changes [8], which we could not regularly evaluate in any subject ( Figure 6). We were unable to evaluate any tendency for higher retropatellar T2* changes compared to the FTJ segments, but the conspicuousness of the higher load-induced T2* changes in the middle and posterior femorotibial zones coincides with the T2 map observations under axial loading due to running [8]. Regarding the intra-rater values, the measured T2* changes are relevant and therefore load-induced; however, due to the small number of cases, no regional or ski-related trends or influencing factors could be evaluated. At t2, a T2* decrease was observed in the majority of subjects in the FTJ and retropatellar. An initial intrachondral T2/T2* decrease after running has been observed for the knee joint and is described as acute cartilage reaction due to running impact [30]. Since there is a de facto shorter effective load due to waiting times at the lift and lift transport times, it seems reasonable to compare the results of this study with studies that investigated shorter loads. MR studies that examined the changes in intrachondral free water content after 30 min of running also showed an immediate T2 decrease [5,8,30,31], which is caused by the compression-induced mechanical displacement of free water from the cartilage [3,5]. Another reason for this signal decay is described by the increase in the anisotropy of superficial collagen fibers [30,32]. On the other hand, we partially observed T2* increases, which may be caused by PG degradation and the release of water molecules from molecular binding [9] or by the uptake of free water from the perichondral environment (edema theory) [33]. Therefore, a specific causal interpretation of our observed T2* changes remains open. Different mechanisms of the displacement of free intrachondral water [10], as well as the different regional and zonal reactions of collagen fiber anisotropy, seem to overlap [34]. In all of the segments, the intrachondrial T2* signal increased after overnight recovery in three of the four subjects. Other study results show an increase in the T2/T2* signal after the prolonged exposure to a marathon [35]. Increasing T2* values reflect free water dissociated from its chemical bonding to PG [10] and is related to PG content [34]. The initial stages of OA are also accompanied by microstructural changes, with an increase in the water content in the cartilage [10]. The increase in the T2* signal after exercise in our study can be interpreted either as an early sign of (reversible) cartilage degradation by transient changes in chondral homeostasis (the loss of structural anisotropy in the collagen matrix and a concomitant increase in free intrachondral water with decreasing PG content) [34] or as an excessive or compensatory uptake of free water into the cartilage [36]. Limitations of the study. The small number of subjects does not allow for statistical evaluation or testing for differences or influencing factors. Therefore, between the conventional ski binding system and the damped binding system, no significant differences in T2* times could be analyzed. Although the subjects chose a similar slope profile on both weekends, an influence of the individual driving style of the experienced skier on the load of the cartilage is conceivable. 
Other possible influencing factors are the different climatic and snow conditions at both weekends, as well as the short but partly varying latency between the end of the loading and the start of the MR scan. During the unloading of the chondrocytes, a complete recovery of all structural deformation was observed after 30 min [37]. The volumetric MRI analyses of the ankle cartilage showed significant initial talar cartilage volume reduction, which was restored within 30 min [38]. In this respect, it must be assumed that the loadrelated initial decrease of the T2 signal already begins to decline with the end of the loading. The time interval of the T2 measurement from the end of the load is therefore decisive for these measurements-if one wants to detect the initial cartilage reaction. The transport of the subjects from the ski lift with a wheelchair directly into the MRI would, in a follow-up study, eliminate the unpredictable influencing factor of the very short but non-specific walking load after skiing with regard to T2 signaling. Alternatively, the implementation of a T1rho-measurement would be conceivable. It detects structural changes in the PG and is considered by some authors to be more sensitive compared to T2/T2* measurements [39]. In addition, the accuracy of the established qMRI cartilage analyses is getting better as a result of the implementation of high-resolution MR protocols, so the use of appropriate systems for future comparable studies is recommended [40]. Age, activity level, and physical fitness [41] also have an impact on cartilage composition and physiology and thus on cartilage T2/T2*. While some authors assume that age does not have a relevant influence on the initial T2 response after running [5], regional differences, especially for the superficial [25] and for the deep cartilaginous layers [26], have been described. In the discussion of these inconsistent findings, it is argued that these may be due to the OA-related structural pathology, particularly cartilage damage, which becomes more common with age even when there are no radiographic signs of knee OA [42]. Therefore, the age-related cartilage T2 observed in earlier studies may be more likely to be due to cartilage pathology than to normal cartilage aging and may not be observed if the risk factors for knee OA are rigorously eliminated. However, this is exactly what Wirth et al. [26] did in their study, in which they detected age-related differences in the composition of the deep cartilaginous layer in the knee joint using T2 mapping. There could also be a positive effect of joint loading on the chondrocyte function associated with increased PG and collagen synthesis [43]. In elite runners and untrained volunteers, exercise increases PG content [44]. Increasing hydrostatic pressure upregulates PG and Type II collagen mRNA expression [45], and the de novo synthesis of PG will be initiated. So, due to the still-not-completely-clear literature review [5,26], the fact that one of the subjects was 26 to 40 years older than the other three, and the fitness level in general, the activity profile of the subjects in the run-up to the study must also be listed as a possible influencing factor on the detected initial T2 signal behavior. Therefore, the approach to a subsequent RCT study should be based on a significantly higher number of cases in order to be able to statistically verify the results or specifically investigate new protective binding systems. 
It would be best to homogenize influencing factors (climatically stable environmental conditions; comparable snow and slope conditions, e.g., by indoor/laboratory conditions; age-homogenized test subjects with a similar level of activity; defined skiing styles and profiles) and to optimize MR protocols regarding specificity (T1rho and T2 measurements) and accuracy (high-resolution qMRI imaging and possibly 3T). As there are still no defined limit values that allow the initial intrachondral T2/T2* signal behavior after loading to be differentiated into physiological versus pathological cartilage reactions, and because of the recognition that recurrent cartilage load may also be a chondroprotective and regenerative stimulus for articular cartilage [32,35], direct observation of only the immediate cartilage response to skiing load may well be half the truth in terms of the overall effect. Therefore, such studies should include longer-term observation periods.

Conclusions

In conclusion, the additional logistical effort of the presented study setting with an MRI scanner within a large ski resort seems to be worthwhile, as reliable measurement results in T2*-mapping immediately after alpine skiing with corresponding load-induced T2* signal changes could be quantified and documented for the first time. After alpine skiing, the T2* times at most measurement points in the knee cartilage decrease immediately after exercise and increase beyond the baseline within 12-18 h. T2*-mapping can be used as a promising, non-invasive biomarker to detect early cartilage degradation in the knee joint due to biomechanical load during alpine skiing, and, with a sufficiently high number of cases under comparable environmental conditions, it may be a method for the evaluation of different knee joint cartilage loading responses in relation to external factors influencing alpine skiing (for example, different ski binding/buffering systems).

Author Contributions: All authors of this manuscript made substantial contributions to the conception, design, or acquisition of the data or to the analysis and interpretation of the data. All authors revised the manuscript critically for important intellectual content and approved the version to be published.

Informed Consent Statement: Informed consent was obtained from all individual participants included in the study.

Data Availability Statement: The data supporting the reported results can be found in the Supplemental Material. The data supporting the reported results can also be found in the hospital's PACS system.
Clinical Trials in Pregnant Women with Preeclampsia Preeclampsia (PE) is the leading cause of preterm birth by medical indication when associated with premature detachment of placenta normoinserta, and Intrauterine growth restriction (IUGR) is associated with high perinatal morbidity and mortality and longterm sequelae. The main problem of PE is threefold: the diagnostic difficulty, the complicated interrelationship of the pathophysiological processes, and the vulnerability of the maternal-fetal binomial to the therapeutic interventions. The approach for management with PE is preventing its late occurrence in pregnancy. The key to preventing PE is knowledge of the factors that trigger the pathophysiological processes that culminate in the presentation of PE. Understanding the developmental characteristics of the placenta in pregnancy at high risk for PE is essential for understanding the pathophysiology and developing strategies for prevention. When deciding that the population of study is a group of pregnant women, the first ethical criteria that need to be reviewed are those aimed at the protection of the fetus. There are no specific guidelines on how to assess fetal well-being during pregnancy routinely in the clinic, and this deficiency is shifted to clinical research with pregnant women. Introduction Preeclampsia (PE) is the leading cause of preterm birth by medical indication when associated with premature detachment of placenta normoinserta, and IUGR is associated with high perinatal morbidity and mortality and long-term sequelae. It has been described that standardization in the management of health services and the use of clinical practice guidelines is associated with a reduction in adverse outcomes, and a fundamental part of the management of severe PE includes a complete evaluation of the mother and the fetus. Despite the advances in medicine, the frequency of this syndrome has not changed, and globally its incidence ranges between 2 and 10% of pregnancies. The World Health Organization (WHO) estimates that the incidence of PE is seven times higher in developing than in developed countries (2.8-0.4%). In Mexico, it is estimated that PE is a major cause of maternal and perinatal morbidity and mortality. In Jalisco alone, maternal deaths increased to 57.14% from 2011 to 2014, placing this state in fourth place at the national level in terms of maternal deaths, during 2015 [1]. Because it is a heterogeneous associated idiopathic syndrome to endothelial damage, so far there is no effective treatment that reduces the morbidity and mortality of this pathological entity, so it is necessary to reinforce prevention. In this area, only the use of calcium supplements and acetylsalicylic acid (ASA) appears to be a recommendation, albeit with controversial results [1,2]. The main problem of PE is threefold, the diagnostic difficulty, the complicated interrelationship of the pathophysiological processes, and the vulnerability of the maternal-fetal binomial to the therapeutic interventions. Pregnant women, scientifically complexed population There are various concepts about the characteristics of vulnerable populations; however, it is generally accepted that a vulnerable group is one whose ability to protect their own interest or grant their consent is physically, psychologically, or socially compromised. 
Since the development of ethical principles in research, children, psychiatric patients, prisoners, and pregnant women have been included in this group; however, in recent years it has been intended to remove pregnant women from this group. The National Institutes of Health (NIH) through the Office of Research on Women's Health recommended as early as 2010 that pregnant women should be considered as a scientifically complex rather than vulnerable group, this being for the reason that this group has the same capacity and autonomy for decision making as its nonpregnant counterparts, including the decision of whether or not to participate in a clinical trial [3]. Scientific complexity arises from the special physiological conditions of pregnancy and from the ethical considerations of the balance between maternal well-being and fetal well-being. Pregnancy is accompanied by important physiological changes and their knowledge is an element of great value for the proper management of the obstetric patient. Practically, all the body's system of the pregnant woman is adapted to house the product, among them are changes at the ocular, musculoskeletal, skin and mucous, hepatic, hematological, renal, and gastrointestinal levels. The most relevant changes occur at the uterine level, systemic vascular resistance is reduced due to high flow and low resistance circuit in the uteroplacental circulation. In pregnancy, uterine blood flow significantly increases to allow perfusion of intervillous placental spaces and fetal development. The trophoblast invades the uterine spiral arteries; vascular smooth muscle cells are lost and replaced by the fibrinoid material, converting them into large dilated blood vessels allowing greater perfusion of the placenta [4]. These changes pose a challenge for the researcher as they make it very difficult to define not only the possible therapeutic results of an intervention, but also to adapt the intervention to these new changes that are not present in nonpregnant women. In a pathological condition such as preeclampsia, this may represent a greater challenge, because of restrictions on research in a physiological pregnancy, ignorance or doubts about the effects of the intervention on the organism, or pathological adaptations that may affect the intervention that is intended to be performed are greater. The ethical complexity is established in the possibility that the intervention applied in key phases causes a teratogenic risk or that affects the adaptation of the product to extrauterine life, and more worrying, the possibility of long-term toxicity. This is why it is necessary that preclinical teratogenicity studies have been completed prior to the intervention in pregnant women. Also, it is recommended to start the new interventions after the 12th week of gestation, when the organogenesis is finished and finally, it is recommended to follow the fetus and newborn [5]. However, these special considerations do not seem to be sufficient, as there are currently two forms of research in the group of pregnant women: the first consists of interventions unrelated to pregnancy that may benefit only the mother [3]. It seems that the previous recommendations were formulated with this type of research, since the use of thalidomide has contemplated the possibility of developing drugs that may attenuate different discomforts during pregnancy. 
The clinical investigation currently has to verify that the pharmacological interventions do not cause damage to the product and not only benefit the mother. The second type of research concerns interventions that may potentially benefit the mother and her fetus [3]. This aspect is more related to the development of pharmacological interventions for pathologies in pregnancy, specifically speaking of preeclampsia, the treatments are not indicated at the same time for the mother and for the fetus. Betamimetics used to prevent preterm birth are not intended to treat the mother and may even complicate maternal health. In contrast, depending on the severity of hypertension, the drugs could have a toxic effect on the fetus. These two aspects should be considered when deciding to experiment with a new therapeutic product or scheme [5]. Fetal well-being in the clinical trial When deciding that the study population is the group of pregnant women, the first ethical criteria that need to be reviewed are those aimed at the protection of the fetus. Generally, investigations of pregnant women involving an intervention or experimental procedure such as in PE cases, should not expose the embryo or fetus to a greater risk than the minimum, except when the use of the intervention or procedure is justified for saving the life of the mother. However, in addition to a deep and sufficient knowledge of the intervention that is proposed to apply, there is no strategy to evaluate during the course of research the side effects on the product. Although maternal-fetal medicine is currently a fact, with several diagnostic imaging and biochemical resources, with established therapeutic procedures, there is no consensus on what tests are necessary to perform and monitor the product during investigations in pregnant women. Even experts do not dare to indicate any fetal diagnostic procedure, within the clinic in the management of pathological pregnancies, but it is at the discretion of the attending physician the use of some diagnostic or therapeutic techniques [6]. There are six most generalized methods to know and evaluate fetal well-being [7]: 1. Maternal evaluation of fetal activity. It consists of the count by the mother of the number of times fetal movement occurs. Although the fetal movement count is a recommendation that is made to every pregnant woman, there is no cutoff point when abnormal movement is considered abnormal, some clinicians mark the alarm in less than 10 fetal movements perceived per day, others when no movements are perceived within 2 h. This form of assessment of fetal well-being presents a false-positive rate, since it depends on the subjectivity of the mother. 2. Test without stress. It consists of the evaluation of fetal heart rate in relationship to uterine contractions. Although it has a low false-negative rate (0.19-1%), its high rate of false positives (55%) makes it a test with minimal benefits, and its counterpart, the stress test, in which it is administered by infusion intravenous oxytocin, is contraindicated in high-risk situations. 3. Biophysical profile. It is a test composed of the evaluation of five parameters, fetal heart activity, fetal respiratory movements, fetal thick movements, muscle tone, and volume of amniotic fluid. Although its false-negative rates are very low (0.07%), its false-positive rate is only lower than that of the stress-free test, and has not shown any difference in terms of fetal death, cesarean indication, and under Apgar score. 
In addition to being an operator-dependent test, factors that may alter its outcome include hypoxemia, gestational age, steroid administration, magnesium sulfate administration, and labor; these five factors occur frequently in pregnant women with PE. 4. Modified biophysical profile. It is the combination of the stress-free test with the biophysical profile. Although it requires less time and experience to perform, which makes its result more reliable, its false-positive and false-negative rates are similar to those of the two tests performed separately. 5. Fetal Doppler ultrasound. The evaluation consists of measuring by ultrasound the velocity of blood flow in the fetal vessels, usually the umbilical artery. Of all of the above, Doppler has been evaluated in the most rigorous clinical trials, and although it does not show a benefit in terms of fetal death in high-risk pregnancies, it has become an effective test for reducing fetal morbidity and mortality in high-risk pregnancies, which is an indication for its use. In pregnant women at high risk of PE, Doppler combined with serum biomarkers can serve as a predictive tool; this strategy is still being validated but is promising. 6. Evaluation of fetal lung maturity. It consists of evaluating the presence of surfactant in the amniotic fluid. It is a useful evaluation when it is necessary to determine the best time to interrupt the pregnancy, when the risk of continuing it is greater. Because the treatment of PE consists of the interruption of pregnancy, prolonging the pregnancy until fetal maturity is reached becomes one of the most difficult aspects of management to avoid fetal morbidity and mortality, which is why making sure that the fetus has the pulmonary maturity to withstand extrauterine life has become essential. Although these tests and diagnostic interventions are the most used in the clinic, the number of imaging tests, serum markers, and procedures available in maternal-fetal medicine is larger; however, many of these tests have not yet shown their value. They could be useful and applicable, which is why they need to be studied, especially those that allow prediction of complications or diseases such as PE. Among the currently available tests are the evaluation of both fetal DNA and the cells that make up the placenta; in an experimental setting, it is even possible to attenuate or increase gene expression through miRNAs, not only for diagnostic purposes but also for possible therapeutic applications in the future. The American Congress of Obstetricians and Gynecologists states that the evaluation of fetal well-being may be appropriate for pregnancies with an increased risk of fetal involvement; however, there are no comprehensive trials demonstrating the benefit of all tests and their potential indications. On the other hand, experts recommend carrying out tests of fetal well-being in cases of diabetes, uterine growth restriction, and hypertension [7]. As we can see, there are no specific guidelines on how to routinely evaluate fetal well-being during pregnancy in the clinic, and this deficiency carries over into clinical research with pregnant women. As mentioned before, although one of the principles of research in pregnancy is to maintain the integrity of the product, there are neither guidelines nor recommendations on which tests to apply, and when, to ensure the safety of the fetus.
From the above, we can infer that most clinical trials involving pregnant women have not been able to guarantee or know with certainty the fetal well-being. So how is it possible to monitor fetal well-being in a clinical trial? How can we evaluate adverse effects on the product? And if there is no strategy to assess at least fetal well-being, is it ethical to allow the participation of pregnant women in clinical trials? It is up to the researcher to decide the degree of safety with which he plans to conduct his research, and in the absence of additional tests to ensure fetal well-being, using those available is the most reasonable. However, we should not be satisfied with the analysis of the structural function to guarantee the innocuousness of an intervention, it is necessary to find strategies that in fact allow to evaluate not only the welfare, structural integrity, and fetal vitality, but also to value the whole range of possible adverse effects, both acute and chronic, that may be occurring as a result of new pharmacological interventions or procedures. Clinical research in women pregnant with PE Pregnancy is a physiological condition inherent in almost all species and life; however, it is one of the lesser known states and a field of research that just begins to grow, because at the beginning of research with pregnant women, a series of events occurred that negatively marked research in this population. Research is now making its way into the subject of pregnancy and its pathologies in order to have a better understanding of physiological processes and to reduce maternal-fetal morbidity and mortality. However, despite the intentions and efforts of researchers, little is known. In the context of PE, it has been possible to trace its origin to the inadequate invasion of the trophoblastic villi on the vascular bed of the uterine spiral arteries, little is known about the cause of this inadequate adaptation of the uteroplacental vascular system [8]. Moreover, we are in complete disbelief about why some women develop PE and others do not. There is no effective diagnostic test to predict who will have PE, the best biomarkers have poor predictive power, the best chance to achieve prevention so far arises from the combination of Doppler ultrasound with some of the serum markers, which have been implemented, nevertheless, only demonstrate efficacy once the first evident changes of PE are presented, when it is no longer possible to avoid the development of the disease [9]. A real opportunity for prevention of PE would arise from a marker that would allow us to know with great certainty, which women are at risk of having PE, even before the pregnancy is carried out. The best predictive tool we have are the risk factors that have been determined by both prospective and retrospective studies, but are only able to predict 30% of women who develop PE [9], there is even a larger group of the population that develops PE with no previous risk factors. On the other hand, from the group of women who develop PE, one part shows severe PE and another group develops eclampsia, and again it is not possible for the treating doctors to determine who and how they evolve to more serious stages. In women with severe PE, who present it before fetal viability, maternal stabilization is recommended before interruption of pregnancy. Once treatment is established, close monitoring is required to identify the presence of serious complications of PE. 
Despite efforts to treat PE, treatment is symptom-based and focused on controlling blood pressure. In regard to the time of delivery, gestational age should transfer to the maximum possible. However, in severe PE, in addition to antihypertensive treatment, termination of pregnancy is recommended if it is greater than 34 weeks. If the pregnancy is less than 34 weeks and the mother and product are stable, the pregnancy should be continued with administration of corticosteroids. Currently, there are multiple criteria for better management of PE, but the only cure for PE is termination of pregnancy. This results in a difficult decision for the physician and the mother because of the psychological burden, and the social and economic morbidity [8]. The results of medical interventions have failed to significantly decrease the morbidity and mortality of PE. The main reason for this failure could be the multifactorial origin of pathogenic processes that lead to the development of PE. Therefore, the approach for management of patients with PE is preventing its late occurrence in pregnancy. The key to prevention of PE is knowledge of the factors that trigger pathophysiological processes that culminate in the presentation of the PE. However, efforts to understand the origin of these processes are still poorly or incompletely understood. There is a lack of knowledge because the approach to study this population may be unethical compared with diseases of nonpregnant women [10]. The multifactorial origin of PE and difficulty of carrying out an investigation in the early stages of pregnancy, because it can endanger the mother and fetus, have made research difficult. Understanding the developmental characteristics of the placenta in pregnancy at high risk for PE is essential for understanding the pathophysiology and for developing strategies of prevention [8]. Current state of research about PE There are currently 236,008 clinical trials registered in clinicalTrials.gov, from which only 3% are focused on pregnancy, and among them 6.4% are about PE. Of all clinical trials dedicated to PE, 47.9% focus on strategies to improve treatment, 22.2% of the clinical trials aim to improve the diagnosis or its establishment in the early stages, and 16.7% aim to establish the utility of new biomarkers, for both diagnostic and monitoring. Finally, only 10.7% of the clinical trials registered until February 1, 2017 are focused on the prevention of PE (Figure 1). Another aspect that should be taken under consideration is that more than half of the clinical trials directed to PE are carried out in regions classified as first world such as Europe and North America, whereas research in the rest of the world only constitutes 40%, despite the fact that developing countries are the ones that bear the greatest burden of morbidity and mortality caused by this disease (Figure 2). In our times, PE has a worldwide relevance and it has been increasing over the years. Clinical trials with the objective of reducing the morbidity and mortality of this pathology have also increased over time. The previous chart denotes some of the terminated trials registered in clinicaltrials.gov, many of which have certain limitations that we were able to observe ( Table 1). In the study titled, "l-arginine and antioxidant vitamins during pregnancy to reduce preeclampsia", there is little coherence between the objective and the design of the study. 
Although it is known that the production of nitric oxide, with l-arginine as the main substrate of nitric oxide synthase, is involved in the pathophysiology of PE, the study design is directed at the effect that l-arginine has on the development of PE; however, levels of l-arginine are not evaluated at any moment, nor are its nitrates or nitrites, which is why this design cannot test the hypothesis. In addition, the main inclusion criterion appeared to be having a high-risk profile for developing PE; however, high-risk factors such as diabetes, autoimmune diseases, hypertension in pregnancy, and kidney diseases were not considered as inclusion criteria, and these factors, combined with a history of PE in previous pregnancies, increase the risk of developing PE up to nine times. Another mistake in the design is noted when analyzing the main conclusion and the way the intervention was carried out: the conclusion states that supplementation with l-arginine and vitamins reduces the incidence of PE; nevertheless, in the results it can be appreciated that the group that received only the food bar containing the vitamins did not have a significant reduction in the risk of developing PE. This means that the major contributing factor to the reduction of PE was indeed l-arginine and not the combination of l-arginine and vitamins, and this would have been more evident if a supplementation group taking only l-arginine had been added [11]. In the study titled "Usefulness of Extracorporeal Removal of sFLT-1 in Women with Very Early Severe Preeclampsia (ADENA)", at first we are led to believe that the primary outcome of the study concerns early severe PE; however, later we see that the intention is to improve perinatal death as the primary outcome. The first comment worth mentioning is that using words such as "improving" in an investigational study may be too imprecise; it is better to use terms such as "reducing" in this instance. Moreover, the levels of sFLT-1 are not per se an inclusion criterion for deciding whether or not to perform apheresis, even though they are quantified before and after the intervention; those women with high levels of sFLT-1 could perhaps obtain a greater benefit, which is why stratifying levels at the outset could help the obstetrician make a better clinical decision. Finally, although it is justifiable not to use a control group, this type of before-and-after design, lacking a reference group, leads to lower internal validity [11]. In the study titled "Oral Progesterone and Low Dose Aspirin in the Prevention of Preeclampsia", the main inclusion criterion is having a history of preeclampsia. Nevertheless, other high-risk factors were not taken into account. Even though the study proposes that a deficiency of progesterone could lead to PE and, in consequence, that supplementation with progesterone could reduce its incidence, serum values as an indicator to identify patients who could benefit from progesterone supplementation were not taken into account. The comparison between before and after, instead of against a placebo group, is also a limitation [11]. In the study titled "Oral Progesterone and Low-Dose Aspirin in Preeclampsia Prevention," the main inclusion criterion is the antecedent of PE in previous pregnancies; however, as in the previous study, other factors that increase the risk are not taken into account.
The study assumes that a deficiency of progesterone could be the cause of PE, and this argument seems to be the rationale for reducing the incidence of PE using supplementation with progesterone; however, serum values were not taken into consideration as a marker to indicate which patients could benefit from supplementation. This study, like the previous one, also lacks a comparison against a placebo group, creating the same limitations [11]. The study entitled "Safety and Efficacy of RLX030 in Pregnant Women with Pre-Eclampsia", proposed by the company NOVARTIS, did not have sufficient information to perform an analysis because of premature termination of the study [11]. In the study entitled "CPAP in Preeclampsia", the main objective is the evaluation of fetal well-being using nasal continuous positive airway pressure (CPAP) as a means to increase fetal oxygenation; however, monitoring fetal movements is a limited strategy to evaluate fetal well-being, and it could be enhanced with the advances in fetal medicine that allow us to get closer to knowing the well-being of the fetus. The study did not make a distinction regarding the severity of PE, and if a clinical benefit of using CPAP is demonstrated, a distinction by severity might be useful for clinical decisions. Therefore, the rationale for using CPAP is not clear [11]. In several studies, narrowing the gestational age used as an inclusion criterion perhaps increases internal validity; however, the results cannot then be extrapolated to other groups [11]. It is also worth noting that the protocols registered in clinical trial registries undergo variations during the study, which go unnoticed. Transference of scientific knowledge to clinical practice continues to lag One of the most important advantages of basic research is the possibility of transferring knowledge to improve clinical practice. However, in the case of PE, new information regarding new biomarkers and new opportunities for intervention emerges every year, but it is not implemented by the treating physicians. Moreover, clinical practice guidelines are lagging too, and many years pass before a new intervention reaches the level of recommendation within them. On the one hand, this occurs because the information that is generated seems to be isolated and fragmented; there is no body, work team, or expert committee that focuses its efforts on trying to solve the problem or on generating a line of research on the subject. Another part of the knowledge-transfer problem is that, for it to be carried out, the information obtained must be applicable to different populations at different times and with different characteristics. This is very complex to achieve, in the first place because, as mentioned previously, there is no group focused on this, and separate efforts generate bias in the study population. Another bias that impedes transfer is that the risk factors presented by each population differ between developed countries and the developing world, so information generated in one setting is not necessarily applicable in other parts of the world. Because the origin of PE is not well understood, the approaches with which the different studies are developed differ: while some may determine that the cause is oxidative stress, others may argue that the cause is genetic.
The truth is that so far it is considered multifactorial and because of this the international guidelines are more discreet about which recommendations to accept, in the sense of being able to verify which actions will have an evident weight in clinical practice. Finally, one of the most worrying aspects that delay the transfer of knowledge is the lack of medical update on the subject. A very obvious example is that in practice, physicians do not intentionally seek pregnancies with a high risk of PE, and when a patient is classified with high risk, the first action of the doctor is an expectant management, without any intervention, although in the guidelines of clinical practice the administration of acetylsalicylic acid, calcium, and l-arginine is recommended, this happens because evidence of acetylsalicylic acid's efficacy in reducing the risk is contradictory, while calcium intake is reserved for those women with low risk and low calcium intake, and l-arginine, although it is part of the Canadian guide, no dosage or time is specified. Other evidence in the lack of management of the subject in some specialists is the lack of communication they generate with patients who are at high risk. Patients are not informed of their situation and expectant management "poor surveillance" continues even after the patient develops PE, which is when the symptomatic management begins, and it seems that the physicians are waiting for a complication to occur, to make the decision of taking a more active management. It is true that during the 1st weeks of the PE, there are not many recommendations, and that most focus on the final stages in which fetal viability can be achieved, but this same reason should be what drives medical doctors to have a closer monitoring in research opportunities and new information to improve the outcome of the pregnancy, remembering that once PE is presented, there is no curative treatment, beyond the interruption of pregnancy. Efforts should be directed at preventing the occurrence of PE or, failing at that, occurring late in pregnancy.
v3-fos-license
2023-09-24T15:56:13.749Z
2023-09-19T00:00:00.000
262167263
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2075-5309/13/9/2382/pdf?version=1695123450", "pdf_hash": "b94066a0afe9bfe5446316bbc24c566fd3aa6305", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2764", "s2fieldsofstudy": [ "Engineering" ], "sha1": "a0c5686b06ee38fbad7a832e16047ba9f3155b60", "year": 2023 }
pes2o/s2orc
A Study on Mechanical Performance of an Innovative Modular Steel Building Connection with Cross-Shaped Plug-In Connector: Modular steel buildings show high assembly degree and fast installation speed. The inter-module connection (IMC) is one of the key technologies that restrict the robustness of modular steel buildings. An innovative IMC with a cross-shaped plug-in connector is proposed, and the connection consists of end plates of columns, the cross-shaped plug-in connector, bolts, cover plates, and one-side bolts. The proposed IMC is easily constructed, and the cross-shaped plug-in connector can improve the shear resistance of the core area. The mechanical model of the proposed IMC is presented, and the panel zone volume modified factor and initial rotational stiffness modified factor are proposed for calculating the shear capacity of the panel zone and the initial rotational stiffness. Numerical simulation was conducted considering the influences of axial compression ratios, sections of beams and columns, and the thickness of the tenon plate of the connector. The bearing capacity of the proposed IMC was analyzed, the values of the two factors mentioned above were calculated, and their regression formulas are presented. The results show that the sections of beams and columns and the axial compression ratios show great influences on the bearing capacity of the proposed IMC, while the thickness of the tenon of the cross-shaped plug-in connector shows almost no effect. In addition, the sections of beams and columns show great influences on the shear capacity of the panel zone, as well as the initial rotational stiffness of the proposed IMC, while the thickness of the tenon of the cross-shaped plug-in connector and the axial compression ratios show little effect and almost no effect, respectively. Furthermore, the bending moment limit of the beam end of the proposed IMC is suggested to be 0.6 times the resistance bending moment, and the proposed IMC is considered to be a rigid connection or inclined to a rigid connection. The proposed IMC has good mechanical performance, and design recommendations are presented.

Introduction In March 2022, the Ministry of Housing and Urban-Rural Development of the People's Republic of China issued the 14th five-year plan for building energy conservation and the green building development plan, which clarified the development goal that prefabricated buildings will account for more than 30% of new buildings by 2025. A modular building is composed of modular units, as shown in Figure 1a. It is a highly integrated prefabricated building in the advanced stage of building industrialization, and its prefabrication rate can often reach more than 85%. The modular steel building (MSB) mainly adopts cold-formed thin-walled steel structure members, and its predecessor is the container house construction system that quickly builds temporary buildings by transforming abandoned containers.

The inter-module connection (IMC), as shown in Figure 1b, is one of the key technologies that restrict the development of MSBs in multi-story and high-rise building systems. The reliability of the connections between modules directly affects the overall performance of the structure. Lacey et al. [1][2][3] sorted out 14 existing IMCs and 12 types of bolted IMCs and expounded that the force-displacement behavior of IMCs in MSBs is established by a combination of theoretical, experimental, and numerical analyses. The simplified connection behavior for the analysis and design of the overall structure is discussed, and the existing experimental research methods for IMCs are deeply summarized. Chen et al. [4] comprehensively summarized the key technical issues related to the structural stability and robust performance of multi-story, prefabricated, volumetric, modular steel construction and put forward the technical challenges facing the development of modular buildings to high-rise buildings. An updated summary of 41 existing IMC details, classified based on the key component (reinforcing rod, connection blocks, bolts, self-centering rubber slider device, viscoelastic rubbers, and SMA bolts), was presented. The research status of IMCs from different aspects and levels is also systematically reviewed [5][6][7][8][9].

There are some studies on innovative IMCs at present. Chen et al. [10,11] proposed an innovative IMC configuration with an intermediate plug-in device and a beam-to-beam bolt system. Experimental tests and numerical analyses were conducted on T-shaped and cross-shaped specimens, and the static performance, hysteretic performance, skeleton curves, ductile performance, energy dissipation capacity, and stiffness degradation patterns of the IMC were obtained. The results showed that the gap between the upper and bottom columns can influence the deformation patterns and distribution of bending loads; the weld quality at the connection is critical to ensure overall safety; and stiffeners and the stiffness of the beam can also influence the performance of the connection. Deng et al. [12,13] proposed a bolted IMC with a welded cover plate. Tests on T-shaped and cross-shaped specimens were conducted under a monotonic and cyclic load. The seismic performance, including the initial rotational stiffness, moment resistance, ductility, and energy dissipation, was carefully evaluated. The proposed connection was classified as a semi-rigid connection, and its mechanical model and design recommendations were presented. Zhang et al. [14] proposed a beam-to-beam IMC with a cross-shaped plug-in connector inside the adjacent columns, combined with hinged diagonal self-centering haunch braces. The results showed that the self-centering haunch braces significantly improved the seismic performance of the IMC and achieved functional recoverability after earthquakes. Lacey et al. [15] proposed a novel interlocking IMC, which combines structural bolts with interlocking elements. Experimental study was conducted to investigate the shear force-slip behavior, and the effects of the interlocking elements, bolt preload, hole tolerance, and fabrication and assembly tolerance on the shear behavior were evaluated and discussed. Sendanayake et al. [16,17] proposed a column-to-column IMC and conducted experimental tests on the column-to-column splicing joint under a monotonic and cyclic load. The experimental study revealed that the IMCs display superior dynamic behavior with respect to response parameters, such as moment-carrying capacity, energy dissipation, and ductility. Kashan et al. [18,19] proposed a new type of bolted IMC with a tenon-gusset plate as the horizontal connection and long beam bolts as the vertical connection. Numerical study results revealed that the length of the column tenon has an obvious effect, while the gap between modules showed a marginal effect on the load-carrying capacity and structural behavior. A simplification of the detailed IMC was performed to analyze the seismic performance of MSBs.

There are some deficiencies in the existing IMCs. From the perspective of mechanical performance: (1) The stiffness and bearing capacity of the unreinforced connection core area are insufficient. (2) The column ends of the beam-to-beam connection are not directly connected, which easily causes discontinuous load transfer. From the perspective of installation difficulty and cost: (1) On-site welding is more time-consuming and laborious than bolting. (2) The use of through bolts and threaded rods limits the form of beam and column sections and increases the difficulty of construction. From the perspective of research levels: (1) Theoretical research and design recommendations are inadequate.
(2) Differences occur in experimental tests. For example, a single tensile or shear test is not enough to accurately reflect the real performance of the connection in the overall structure. In view of the above deficiencies, this study proposes an innovative IMC using bolted end plates of columns with a cross-shaped plug-in connector and provides its mechanical model and design recommendations in combination with finite element analysis for mechanical performance, including the flexural bearing capacity, shear capacity of the panel zone, and initial rotation stiffness. In addition, the panel zone volume modified factor and initial rotational stiffness modified factor are proposed for calculating the shear capacity of the panel zone and rotational stiffness. This study can provide the research basis and guidance for the design practice for IMCs in MSBs.

Proposed Configuration The details of the proposed IMC and its assembling process are shown in Figure 2. IMCs can be classified as external, internal, and corner connections, as shown in Figure 1b. The proposed connection is a typical internal connection with the obvious features of eight columns and sixteen beams. The installation process is as follows: modules are prefabricated in the factory, with the beam and end plate welded to the column; subsequently, the specially manufactured connector is welded from several steel plates and beveled at the end of the formed tenons; then, the modules are assembled on-site, using the connector to locate and connect, and bolted vertically with high-strength bolts; finally, adjacent columns are connected horizontally with cover plates and one-side bolts.

Configuration Comparison Compared with the existing IMCs proposed by Chen et al. [10,11] and Deng et al. [12,13], the proposed IMC in this study has some progress. The former is a type of beam-to-beam connection with a high-tensile-strength bolting system and has a cast plug-in device with square tube heads to strengthen the core area, as shown in Figure 3a. The proposed IMC in this study is a type of column-to-column connection, and it better ensures the continuity of vertical load transfer compared to the beam-to-beam connection. The proposed IMC can replace more types of beam sections than connections using a high-tensile-strength bolting system. In addition, the cross-shaped plug-in connector provides a more stiffener-like effect than the plug-in device with square tube heads. The latter is an external connection with a welded cover plate, as shown in Figure 3b. The form of the external welded cover plate makes this connection unsuitable for internal connections, and it is not easy to realize the function of module disassembly and reuse. The proposed IMC in this study avoids the above inadaptability very well, and the form of the full bolt connection reduces the construction burden and achieves a higher assembly rate.

The proposed IMC has the following highlights: (1) The configuration is simple, while the horizontal load and vertical load are clearly transferred through the connector. (2) The cross-shaped plug-in connector plays a role in reinforcing the core area of the connection, similar to the stiffening effect of the inner diaphragm, and fully resists the shear force around the core area. (3) The IMC is a type of column-to-column connection, and it better ensures the continuity of vertical load transfer compared to the beam-to-beam connection. (4) The cover plates and one-side bolts make the bundled columns combine to bear the loads, similar to batten plates of the lattice column, improving the transfer efficiency of the horizontal load.
Internal Force Analysis Under lateral force, such as seismic loads, the typical beam-to-column joint of a multi-story steel frame structure bears axial force, bending moment, and shear force concomitantly. The inflection points of the columns and beams occur at the mid-span of the member length. Figure 4 shows the bending moment distribution of the MSB frame and the internal forces around the panel zone of the proposed IMC. The core area of the connection bears bending moment M b1 , M b2 and shear force V b1 , V b2 from the beam end, and bending moment M c1 , M c2 ; shear force V c1 , V c2 ; and axial force N c1 , N c2 from the column end. V j is the shear force of the panel zone. h b and h c are, respectively, the distance from the upper flange of the floor beam to the lower flange of the ceiling beam and the distance from the left flange of the left column to the right flange of the right column. V j can be calculated by Equation (1).

Flexural Bearing Capacity The flexural bearing capacity of the beam-to-column joint is reflected in the beam-to-column connection, and it is also reflected in the beam's flexural bearing capacity under the principle of "the connection is stronger than the member". For the proposed IMC, assuming that the floor beam and ceiling beam bend around their respective neutral axes without combination, the flexural bearing capacity of the IMC is the sum of the two beams, calculated by Equation (2), where M u,fb , M u,cb are, respectively, the flexural bearing capacity of the floor beam and ceiling beam, calculated by Equation (3). In Equation (3), W n , γ are the section modulus of the beam and the plastic adoptive factor of the section, respectively, calculated according to the Chinese code for the design of steel structures (GB50017-2017) [20], and f y is the yield strength of steel.

Shear Capacity of the Panel Zone When the beam and column are rigidly connected, the column web has the possibility of yielding or local buckling under the action of the bending moment and shear force around the panel zone. Therefore, the shear strength and stability of the panel zone should be checked. GB50017-2017 introduced the normalized width-thickness ratio of the panel zone, ignoring the shear force and axial force from the column end, and the shear capacity of the panel zone is checked by Equation (4), where M b1 , M b2 are, respectively, the bending moment design values of beam ends on both sides of the panel zone, V p is the volume of the panel zone, and f ps is the shear strength of the panel zone, calculated according to the normalized width-thickness ratio λ n,s . f ps varies between (4/3) f v and 0.7 f v , corresponding to λ n,s varying between 0.6 and 1.2 ( f v is the design value of the shear strength of steel).
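Because Equations (2)-(4) themselves are not reproduced in the extracted text, the short Python sketch below reconstructs their likely forms from the variable definitions above: the beam flexural capacity as the product of the plastic adoptive factor, section modulus, and yield strength; the connection capacity as the sum of the floor-beam and ceiling-beam capacities; and the panel-zone check as the beam-end moments divided by the panel-zone volume compared against the shear strength. These expressions and all numeric inputs are assumptions for illustration, not the published equations.

```python
# Hedged sketch of Equations (2)-(4) as described in the text; the exact
# published forms are not reproduced here, so the expressions below are
# reconstructed from the surrounding variable definitions.

def flexural_capacity(gamma: float, w_n: float, f_y: float) -> float:
    """Assumed Equation (3): M_u = gamma * W_n * f_y, in N*mm."""
    return gamma * w_n * f_y

def connection_flexural_capacity(m_u_fb: float, m_u_cb: float) -> float:
    """Assumed Equation (2): sum of floor-beam and ceiling-beam capacities."""
    return m_u_fb + m_u_cb

def panel_zone_check(m_b1: float, m_b2: float, v_p: float, f_ps: float) -> bool:
    """Assumed Equation (4): (M_b1 + M_b2) / V_p <= f_ps."""
    return (m_b1 + m_b2) / v_p <= f_ps

# Illustrative numbers only (hypothetical H200x150x4.5x6 beam, Q355 steel).
W_n = 1.95e5    # mm^3, assumed net section modulus
gamma = 1.05    # assumed plastic adoptive factor per GB50017-2017
f_y = 355.0     # MPa
M_u_beam = flexural_capacity(gamma, W_n, f_y)
M_u_conn = connection_flexural_capacity(M_u_beam, M_u_beam)
print(f"M_u per beam = {M_u_beam / 1e6:.1f} kN*m, connection = {M_u_conn / 1e6:.1f} kN*m")
```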
It is difficult to determine λ n,s and f ps of the proposed IMC due to the fact that the configuration is different from the conventional steel frame beam-to-column joint, and there is no corresponding design formula in the code. Cai [21] expounded the concept of the shape coefficient of the panel zone and proposed several calculation methods for the volume of several irregular panel zones. For box section columns and tubular section columns, shape coefficients of 1.8 and π/2, respectively, are also presented in GB50017-2017. Based on the concept of the shape coefficient, a panel zone volume modified factor α is proposed in this study for the proposed IMC, calculated by Equation (5), where V pe is the effective shear resisting volume, which is calculated by Equation (6) when simulating the critical state of yielding of the panel zone by using the numerical analysis method, and V ps is the volume of the panel zone specified in GB50017-2017, which is calculated according to the formula for box section columns. Since the proposed IMC contains eight columns, it is eight times the volume of a single column. In Equation (6), M b1 , M b2 are, respectively, the bending moment values of the beam ends on both sides at the yielding critical state of the panel zone, and f yv is the yield shear strength of steel.

Initial Rotational Stiffness The proposed IMC is in an elastic stage when it is initially stressed, and the connection deformation can be decomposed into two parts by applying the superposition principle. The first part is the deformation caused by the beam and column bending, assuming that the beam and column are rigidly connected, as shown in Figure 5a. Since the shear deformation of the beam and column is very small and can be ignored, the expression of the first part of the deformation, i.e., Equation (8), can be derived from Equation (7) using the unit-load method in structural mechanics. In these equations, M, M P are the bending moment of the structure under the unit load and under the external load, respectively; E, I are, respectively, Young's modulus and the bending moment of inertia of the beam or column member; θ cb is the rotation corresponding to the first part of the deformation; F is the lateral force; l b , l c are, respectively, the length of the beam and the column; E fb , E cb are, respectively, Young's modulus of the floor beam and the ceiling beam; and I fb , I cb are, respectively, the bending moments of inertia of the floor beam and the ceiling beam. The second part is the deformation caused by the panel zone shearing, as shown in Figure 5b. Based on the assumption of the pure shearing mechanism of the panel zone [22], the shear stiffness K s , i.e., Equation (9), can be used to calculate the expression of the second part of the deformation, i.e., Equation (11), where G is the shear modulus of the column web, calculated by Equation (10); t is the thickness of the column web; E is Young's modulus of steel; µ is Poisson's ratio of steel; and θ s is the rotation corresponding to the second part of the deformation. After calculating θ cb and θ s , the initial rotational stiffness K 0 of the proposed IMC can be calculated by Equation (12). However, since Equation (9) is based on the pure shearing mechanism, ignoring the bending deformation of the panel zone, and is derived for beam-to-column joints using H-shaped steel, the calculation accuracy is not satisfactory. In addition, the proposed IMC in this study exhibits some semi-rigid characteristics due to the special configuration, which also affects the initial rotational stiffness. Due to the above factors, an initial rotational stiffness modified factor β is proposed in this study, calculated by Equation (13), where K e is the initial rotational stiffness of the proposed IMC obtained by numerical simulation.
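As a companion to the stiffness derivation above, the sketch below shows the pieces that follow directly from the text: the isotropic shear-modulus relation cited for Equation (10) and the ratio β = K e / K 0 of Equation (13). Treating K 0 as the applied moment divided by the sum of the two rotation components is an assumption, since Equations (8), (9), (11), and (12) are not reproduced in the extracted text; all numeric inputs are illustrative.

```python
# Hedged sketch of the initial-rotational-stiffness chain described above.
# Only G = E / (2 * (1 + mu)) and beta = K_e / K_0 follow directly from the text;
# K_0 = M / (theta_cb + theta_s) is an assumed reading of Equation (12).

E_STEEL = 2.06e5   # MPa, Young's modulus given in the text
MU_STEEL = 0.3     # Poisson's ratio given in the text

def shear_modulus(e: float = E_STEEL, mu: float = MU_STEEL) -> float:
    """Equation (10): isotropic relation G = E / (2 * (1 + mu))."""
    return e / (2.0 * (1.0 + mu))

def initial_stiffness(moment: float, theta_cb: float, theta_s: float) -> float:
    """Assumed form of Equation (12): K_0 = M / (theta_cb + theta_s)."""
    return moment / (theta_cb + theta_s)

def stiffness_modified_factor(k_e: float, k_0: float) -> float:
    """Equation (13): beta = K_e / K_0, with K_e taken from the FE simulation."""
    return k_e / k_0

# Illustrative numbers only (hypothetical rotations and FE stiffness).
G = shear_modulus()                                                    # ~79,231 MPa
K0 = initial_stiffness(moment=100e6, theta_cb=0.002, theta_s=0.001)    # N*mm/rad
beta = stiffness_modified_factor(k_e=0.55 * K0, k_0=K0)                # ~0.55
print(f"G = {G:.0f} MPa, K0 = {K0:.3e} N*mm/rad, beta = {beta:.2f}")
```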
Finite Element Model Information To study the mechanical performance of the proposed IMC and calculate the panel zone volume modified factor α and the initial rotational stiffness modified factor β, a numerical simulation study was conducted using the general finite element analysis software ANSYS 2022 R2. A four-story MSB office is used as the structural prototype for design verification, and the most stressed IMC is extracted as the standard model. The column section is 200 × 200 × 6 (box section), and the beam section is H200 × 150 × 4.5 × 6. The length of the beam and column are, respectively, 1.5 m and 1.28 m, and the thickness of the end plate, cover plate, flange plate of the connector, and tenon plate of the connector is 10 mm. The bolts and one-side bolts are Grade 10.9 M20 high-strength bolts with standard round holes. The specific dimensions of the standard model are shown in Figure 6. To analyze the factors affecting the mechanical performance and the two proposed factors mentioned above, another eight models with different parameters were established, involving axial compression ratios, the thickness of the tenon plate, the column section, and the beam section. The specific parameters of the models are shown in Table 1. The models are numbered in the form of "C-a-b(t)-c-d", where a represents the thickness of the column web, b represents the thickness of the beam web, t represents the beam section type, c represents the thickness of the tenon plate, and d represents the axial compression ratio. For example, C-6-4.5(H)-10-0.2 represents the standard model. The strength grade of steel is Q355B, and the material density is 7850 kg/m3. The Young's modulus E and Poisson's ratio µ of steel are, respectively, 2.06 × 10^5 MPa and 0.3. The trilinear kinematic hardening constitutive model is adopted for both high-strength bolts and other members, as shown in Figure 7. The yield stress σ y , yield strain ε y , ultimate stress σ u , and ultimate strain ε u refer to the experimental data of reference [23], and the values are shown in Table 2.
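The model numbering scheme "C-a-b(t)-c-d" described above can be split mechanically into its parameter fields; the small helper below is only a convenience sketch, with field names paraphrased from the text rather than taken from the paper.

```python
# Small helper illustrating the "C-a-b(t)-c-d" model numbering scheme described
# in the text; parsing logic and field names are a convenience sketch only.
import re

def parse_model_id(model_id: str) -> dict:
    """Split e.g. 'C-6-4.5(H)-10-0.2' into its parameter fields."""
    m = re.fullmatch(r"C-(\d+)-([\d.]+)\((.+?)\)-(\d+)-([\d.]+)", model_id)
    if m is None:
        raise ValueError(f"unrecognized model id: {model_id}")
    return {
        "column_web_thickness_mm": float(m.group(1)),
        "beam_web_thickness_mm": float(m.group(2)),
        "beam_section_type": m.group(3),            # e.g. 'H' or the box symbol
        "tenon_plate_thickness_mm": float(m.group(4)),
        "axial_compression_ratio": float(m.group(5)),
    }

print(parse_model_id("C-6-4.5(H)-10-0.2"))   # the standard model
```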
To obtain good solution accuracy, all parts of the finite element model use the solid element "Solid186" with mesh refinement near the panel zone. The beam, column, and end plate of a single module are combined into a whole, in the way of node-sharing topology, and all the remaining contacts are frictional contacts with a friction coefficient of 0.3 [18,19]. The boundary conditions are set in accordance with the constrained state of the connection in the structure, with hinged support at the bottom of columns and sliding hinged support at the end of beams, and all out-of-plane displacements are constrained. The column top loading method in the Chinese specification for the seismic test of buildings (JGJ/T 101-2015) [24] is adopted to better consider the P-Delta effect in the actual structure. The loading is divided into three steps: the first step is to apply the bolt preload of 155 kN for each bolt; the second step is to apply a constant axial force on the top of columns; the third step is to apply a horizontal displacement at the top of columns, monotonically loaded to 150 mm. To ensure the convergence of the calculation results, automatic time sub-steps are set for each load step, with an initial 30 steps, a minimum of 10 steps, and a maximum of 100 steps. The finite element model is shown in Figure 8.

Finite Element Model Validation To verify the validity of the finite element model, a comparative analysis between the finite element model and the test specimen was conducted. The specimen selected for analysis is MJ5 in the research of the IMC proposed by Cao et al. [25]. The specimen MJ5 is a cross-shaped bolted-cover plate IMC, and its size and material properties are similar to those of the IMC proposed in this study. The loading method and protocol are the same as those in this study. The comparative analysis between the experimental test and the numerical simulation can preferably reflect the validity of the finite element model. The finite element model of specimen MJ5 was established using the same modeling method, contact settings, boundary conditions, mesh size, and division method as in 4.1, as shown in Figure 9a. In the preliminary analysis, the appropriate mesh size was determined as 10 mm for the core area and 100 mm for the non-core area, with the face mapping method for bolts and the multizone method for the whole model, to ensure the accuracy of the calculation results. The moment-rotation curves of specimen MJ5 were compared with the result from the finite element analysis, as shown in Figure 9b. It was observed that the ultimate capacity, stiffness, and ductility of the test specimen were well predicted by the finite element model. The maximum error of the bearing capacity is only 6%. Some minor fluctuations and inconsistencies in the stiffness or capacity between the test and the numerical result were observed, which might be due to simplifications during the finite element analysis, such as the hexagonal heads of bolts and nuts being modeled together as circular, the threads on nuts and bolt shanks not being modeled, and the space between the bolt and the hole not being considered. The failure patterns of the specimen in the finite element result and test result are shown in Figure 9c,d.

Load-Displacement Curves The load-displacement curves of each model and the corresponding bending moment-inter-story drift ratio curves are shown in Figure 10, where the bending moment is the product of the load and the length of the upper columns, and the inter-story drift ratio is the ratio of the horizontal displacement to the total length of the columns. Figure 10a-d represent the curves of the models corresponding to different axial compression ratios, thicknesses of the column web, thicknesses of the beam web, and thicknesses of the tenon plate, respectively, where the black lines with block legends in all figures indicate the results of the standard model. It is worth mentioning that overall instability occurred in model C-4-4.5(H)-10-0.2 when loading to 119 mm drift, and the load could not be further applied because the thickness of the column web is too thin, while other models can be loaded to 150 mm. Using the equivalent elastoplastic energy method, the yield load F y ; peak load F p ; ultimate load F u (where F u = 0.85F p ); corresponding displacements D y , D p , D u ; and ductility factor µ of each model can be calculated from the curves, as shown in Table 3. According to the curves, the greatest effect on the bearing capacity of the proposed IMC is the thickness of the column web, which decreases from 8 to 6 to 4, with the peak loads decreasing by 38.3% and 72.0%, respectively; the second major effect is the thickness of the beam web, which decreases from 4.0(□) to 4.5(H) to 3.2(H), with the peak loads decreasing by 22.7% and 59.8%, respectively; the third major effect is the axial compression ratio, which rises from 0.2 to 0.25 to 0.3, with the peak loads decreasing by 12.4% and 23.4%, respectively; and the minimum effect is the thickness of the tenon plate, which decreases from 10 to 8 to 6, with the peak loads decreasing by 2.3% and 6.0%, respectively.
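The paragraph above extracts F p, F u = 0.85F p, the corresponding displacements, and a ductility factor from each monotonic curve. A minimal sketch of that post-processing follows; the definition µ = D u / D y is the common convention assumed here, with D y supplied externally by the equivalent elastoplastic energy construction mentioned in the text.

```python
# Hedged sketch: extracting F_p, F_u = 0.85*F_p, D_u, and a ductility factor
# from a monotonic load-displacement curve. mu = D_u / D_y is an assumed
# convention; D_y would come from the equivalent elastoplastic energy method.
from typing import Sequence

def peak_and_ultimate(loads: Sequence[float]) -> tuple[int, float, float]:
    """Return the peak index, peak load F_p, and ultimate load F_u = 0.85 * F_p."""
    i_p = max(range(len(loads)), key=lambda i: loads[i])
    f_p = loads[i_p]
    return i_p, f_p, 0.85 * f_p

def ultimate_displacement(disps: Sequence[float], loads: Sequence[float]) -> float:
    """First post-peak displacement where the load drops to F_u (linear interpolation)."""
    i_p, _, f_u = peak_and_ultimate(loads)
    for i in range(i_p + 1, len(loads)):
        if loads[i] <= f_u:
            d0, d1, p0, p1 = disps[i - 1], disps[i], loads[i - 1], loads[i]
            return d0 + (f_u - p0) * (d1 - d0) / (p1 - p0)
    return disps[-1]   # the curve never degrades to 0.85*F_p within the loaded range

def ductility(disps: Sequence[float], loads: Sequence[float], d_y: float) -> float:
    """Assumed ductility factor mu = D_u / D_y."""
    return ultimate_displacement(disps, loads) / d_y
```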
The column top loading method causes a significant P-Delta effect. In the study of simplified structural behaviors of IMCs proposed by Lacey et al. [26,27], the combined effect of the bending moment and axial force at the column top was considered, i.e., the P-Delta effect was considered. For the proposed IMC in this study, the second-order bending moment generated by the axial force cannot be ignored, and the relationship between the actual bending moment and the rotation of the column end output by numerical simulation is analyzed for stiffness. Figure 11 shows the moment-rotation curves of the column end considering the P-Delta effect of each model, where (a)-(d) represent the curves of the models corresponding to different axial compression ratios, thicknesses of the column web, thicknesses of the beam web, and thicknesses of the tenon plate, respectively. The initial rotational stiffness of each model calculated by the curves will be used for the initial rotational stiffness modification below.
Stress Analysis

The von Mises yield condition is used as the yield criterion for the steel, and the flow rule and the kinematic hardening criterion are applied after the material yields. The stress development process is similar for all models, so the standard model C-6-4.5(H)-10-0.2 is used as an example for the stress analysis. When the horizontal displacement reaches 35 mm, a stress concentration appears at the column flange near the beam flange, where the stress first reaches the yield stress, as shown in Figure 12a; the corresponding load is 50.56 kN. When the horizontal displacement reaches 80 mm, the surface of the panel zone reaches the yield stress, as shown in Figure 12b; the corresponding load is 56.11 kN. When the horizontal displacement reaches 130 mm, the tenon of the cross-shaped plug-in connector reaches the yield stress, as shown in Figure 12c; the corresponding load is 50.23 kN. The stress distribution of the high-strength bolts at the maximum displacement of 150 mm is shown in Figure 12d; none of the bolts reaches the yield stress, indicating that the high-strength bolts remain elastic throughout the loading process. The stress diagrams show that the damage development of the proposed IMC during loading can be divided into approximately three stages: (1) the stress at the beam-to-column connection increases rapidly, and the column flange near the connection yields earlier than the beam; (2) the plate of the panel zone begins to yield, and the stress increases rapidly at the four corners; (3) the stress at the end of the tenon in extruded contact with the column and at the connection of the tenon plate and the flange plate of the cross-shaped plug-in connector reaches the yield stress. The stress distributions of the whole model, the front of the panel zone, the side of the panel zone, and the cross-shaped plug-in connector corresponding to the yield load and the peak load of the load-displacement curve are shown in Figure 13. The bending moment values of the beam ends on both sides at the yielding critical state of the panel zone can be output by the numerical simulation, and the values of α can be calculated by Equations (5) and (6), as shown in Table 4. According to the results for α, the greatest effect is the thickness of the beam web, varying between 3.2(H) and 4.0(□), which is positively correlated with α varying between 0.25 and 0.41; the second largest effect is the thickness of the column web, varying between 4 and 8, which is positively correlated with α varying between 0.28 and 0.40; the smallest effect is the thickness of the tenon plate, varying between 6 and 10, which is positively correlated with α varying between 0.33 and 0.35; the effect of the axial compression ratio can be ignored, and α remains 0.35.
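The von Mises check used above can be illustrated with a few lines of code. This is a generic sketch of the equivalent-stress calculation, not the finite element post-processing of the paper; the sample stress tensor and the nominal yield stress are assumptions for demonstration.

```python
import numpy as np

def von_mises_stress(sig):
    """Equivalent von Mises stress from a 3x3 Cauchy stress tensor (MPa in, MPa out)."""
    sig = np.asarray(sig, dtype=float)
    dev = sig - np.trace(sig) / 3.0 * np.eye(3)      # deviatoric part of the tensor
    return np.sqrt(1.5 * np.tensordot(dev, dev))     # sqrt(1.5 * s:s)

# Illustrative check against a nominal yield stress (values assumed, not from the paper).
yield_stress = 235.0                                  # MPa, e.g. a mild-steel grade
sample = np.array([[180.0,  40.0,   0.0],
                   [ 40.0, -60.0,   0.0],
                   [  0.0,   0.0,  20.0]])
sv = von_mises_stress(sample)
print(f"von Mises = {sv:.1f} MPa, yielded: {sv >= yield_stress}")
```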
Calculation of the Initial Rotational Stiffness Modified Factor β The initial rotational stiffness K e can be calculated from the moment-rotation curves in Figure 11, and the β values can be calculated by Equations ( 12) and ( 13), as shown in Table 5.According to the results of β, the greatest effect is the thickness of the column web, varying between 4 and 8, which is positively correlated with β varying between 0.41 and 0.67; the second major effect is the thickness of the beam web, and due to the particularity of the box section beam, only the web thickness of the H-section beam is negatively correlated with β, and it can be concluded that the direct factor affecting β is the bending moments of inertia of the beam, that is, the bending moment of inertia I b is negatively correlated with β,where I b varies between 2091.84 cm 4 and 3886.68 cm 4 , and β varies between 0.57 and 0.55; the effect of the thickness of the tenon plate is approximately the same as that of the thickness of the column web, where the thickness of the tenon varies between 6 and 10, which is positively correlated with β varying between 0.51 and 0.55; the effect of the axial compression ratio can be ignored, and β is 0.55. Evaluation of Rotational Stiffness According to the classification of steel connections in EuroCode3 [28], connections are divided into three types: rigid, semi-rigid, and hinged.For a non-sway frame, when the ratio of the connection stiffness to the beam bending stiffness is less than 0.5, between 0.5 and 8.0, and greater than 8.0, the connection is considered to be hinged, semi-rigid, and rigid, respectively.The stiffness ratios of the nine models are calculated in Table 6.E and I are, respectively, Young's modulus and the bending moments of inertia of the beam; l is the length of the beam span; and K cb+s is the initial rotational stiffness of the models.It can be seen that the stiffness ratios of the nine models are between 1.81 and 3.34.In fact, the initial rotational stiffness defined in this study considered the bending deformation of the beam and column to better reflect the inter-story drift ratio of the structure.After deducting the contribution of the bending deformation of the beam and column, the initial rotational stiffness K s become larger than K cb+s , and the stiffness ratios are greater than 8.0, except model C-4-4.5(H)-10-0.2 with a weak column section.In summary, the proposed IMC is considered to be a rigid connection or inclined to rigid connection, which also means a higher bearing capacity under the same displacement constraints.Compared with IMC simplified as a hinged connection in some research, the mechanical performance of the proposed IMC in this study is better. Design Recommendations Through theoretical research and numerical simulation, the recommended design method is presented for the proposed IMC in this study, which can be divided into the following four parts. 
Calculation of Flexural Bearing Capacity

The flexural bearing capacity of the proposed IMC can be calculated according to Equation (3). The proposed IMC mainly uses cold-formed thin-walled steel with a relatively large width-to-thickness ratio of the plates, so the plastic adoptive factor γ is taken as 1.0. Therefore, the ultimate flexural bearing capacity M_u equals the yield flexural bearing capacity M_y. The numerical simulation results show that the yield stress is first reached at the beam-to-column connection, while the bending moment at the beam end has not yet reached M_y because of the particularity of the configuration, and the column flange reaches the yield stress one step ahead of the beam flange. To control the failure of the column, the bending moment of the beam end at the instant the yield stress is reached at the beam-to-column connection is taken as the design limit; from the numerical simulation results, this limit is 0.6M_y.

Calculation of Shear Capacity of the Panel Zone

The shear capacity of the panel zone of the proposed IMC can be calculated according to Equation (4), where V_p = αV_ps and α is obtained from Equation (14) by the multiple linear regression method:

α = −0.05998 + 0.03(x_1/a) + (2.16 × 10⁻⁵)(x_2/b) + 0.0069(x_3/c),  (14)

where x_1 is the thickness of the column web; x_2 is the bending moment of inertia of the beam; x_3 is the thickness of the tenon plate; and a, b, and c are 1 mm, 1 cm⁴, and 1 mm, respectively.

Calculation of Initial Rotational Stiffness

The initial rotational stiffness of the proposed IMC can be calculated according to Equation (12). Then, K_0 is multiplied by the modified factor β, which is obtained from Equation (15) by the multiple linear regression method:

β = 0.13094 + 0.065(x_1/a) − (9.60 × 10⁻⁶)(x_2/b) + 0.01055(x_3/c),  (15)

where x_1 is the thickness of the column web; x_2 is the bending moment of inertia of the beam; x_3 is the thickness of the tenon plate; and a, b, and c are 1 mm, 1 cm⁴, and 1 mm, respectively.

Calculation of High-Strength Bolts

The high-strength bolts connecting the upper and lower column end plates with the flange plate of the cross-shaped plug-in connector are subjected to axial force, bending moment, and shear force simultaneously, which can be checked according to Equations (16) and (17), where N and M are the axial force and the bending moment of the bolt group, respectively; n is the number of bolts; y_i is the distance from each bolt to the centroid of the bolt group, and y_1 is the maximum value of y_i; P is the preload of a high-strength bolt; N_v and N_t are the shear force and the tensile force of a single bolt, respectively; and N_v^b and N_t^b are the design values of the shear and tensile bearing capacities of a single bolt, respectively.
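The two regression factors can be evaluated directly from the coefficients quoted above. The sketch below assumes that the trailing 0.0069 and 0.01055 coefficients multiply the tenon-plate thickness term x_3/c, which is how the variable definitions accompanying Equations (14) and (15) read; the example inputs are illustrative.

```python
def panel_zone_alpha(t_col_web_mm, I_beam_cm4, t_tenon_mm):
    """Panel-zone shear modification factor, following Equation (14) in the text.
    Inputs are the column-web thickness (mm), beam inertia (cm^4), tenon thickness (mm)."""
    return (-0.05998
            + 0.03 * t_col_web_mm
            + 2.16e-5 * I_beam_cm4
            + 0.0069 * t_tenon_mm)   # tenon term assumed from the variable definitions

def stiffness_beta(t_col_web_mm, I_beam_cm4, t_tenon_mm):
    """Initial-rotational-stiffness modification factor, following Equation (15)."""
    return (0.13094
            + 0.065 * t_col_web_mm
            - 9.60e-6 * I_beam_cm4
            + 0.01055 * t_tenon_mm)  # tenon term assumed from the variable definitions

# Illustrative inputs: 6 mm column web and 10 mm tenon plate (the standard model's values)
# with one of the two beam inertias quoted in the text (3886.68 cm^4).
print(round(panel_zone_alpha(6.0, 3886.68, 10.0), 3))   # about 0.273
print(round(stiffness_beta(6.0, 3886.68, 10.0), 3))     # about 0.589
```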
Conclusions

This study proposed an innovative IMC with a cross-shaped plug-in connector. The mechanical model of the proposed IMC and the design formulas for the flexural bearing capacity, the shear capacity of the panel zone, and the initial rotational stiffness were presented, and two modification factors were proposed to modify the shear-resisting volume of the panel zone and the initial rotational stiffness. Considering the influences of the axial compression ratio, the sections of the beams and columns, and the thickness of the tenon plate of the connector, nine finite element models were established. The mechanical performance of the proposed IMC was analyzed by numerical simulation, and the values of the two factors under different parameters were calculated. Finally, combining the results of the theoretical research and the numerical simulation, design recommendations were presented. Based on the existing research work, the following conclusions are drawn: (1) For the bearing capacity of the proposed IMC, the column section has the greatest effect, followed by the beam section and then the axial compression ratio, while the thickness of the tenon plate has little effect. Because of the P-Delta effect, the bearing capacity of all models shows a clear downward trend in the later loading stage. (2) The yielding mechanism and failure mode of the proposed IMC are as follows: the column web at the beam-to-column connection is damaged first, followed by the panel zone, and finally the cross-shaped plug-in connector, which is attributed to the thin-walled structure of the components. It is therefore suggested that the bending-moment limit of the beam end be taken as 0.6 times the resistance bending moment at which the corresponding column web begins to fail. (3) For the shear capacity of the panel zone, the sections of the beams and columns have a large effect. The cross-shaped plug-in connector contributes substantially to the shear bearing capacity, but the thickness of the tenon plate has little effect, and the axial compression ratio has almost no effect. The proposed formulas provide the recommended calculation method for the shear capacity of the panel zone. (4) For the initial rotational stiffness, the column section has a greater effect, followed by the beam section and the thickness of the tenon plate, while the axial compression ratio has almost no effect. The proposed formulas provide the recommended calculation method for the initial rotational stiffness. In addition, the proposed IMC can be regarded as a rigid connection or close to a rigid connection.

Figure 1. The connection consists of the following seven parts: (1) the columns (gray part in the figure) are made of cold-formed thin-walled square steel tubes; (2) the beams (white part in the figure), including the floor beam and the ceiling beam, as shown in Figure 1b, are made of thin-walled H-shaped steel or rectangular steel tubes; (3) the end plate of the column (blue part in the figure) has an opened cross-shaped hole to match the plug-in connector; (4) the cross-shaped plug-in connector (red part in the figure) is composed of a flange plate and eight tenons with an end-corner cutting for easy installation; (5) the bolts (yellow part in the figure) are high-strength bolts to connect the upper and lower columns; (6) the cover plate (green part in the figure) is used to connect the left and right columns; (7) the one-side bolts (cyan part in the figure) are used to fix the cover plates.
Figure 2. Details of the proposed IMC and its assembling process.
Figure 4. Mechanical model: (a) bending moment distribution of the MSB frame; (b) internal forces around the panel zone of the proposed IMC. The core area of the connection bears the bending moment M_b1; the shear forces V_c1 and V_c2; and the axial forces N_c1 and N_c2 from the column ends; V_j is the shear force of the panel zone; h_b and h_c are, respectively, the distance from the upper flange of the floor beam to the lower flange of the ceiling beam and the distance from the left flange of the left column to the right flange of the right column.
Figure 5. Deformation of the proposed IMC: (a) bending deformation of the beam and column; (b) shearing deformation of the panel zone.
Figure 8. Details of the finite element model.
Figure 9. Validation of the finite element model: (a) finite element model of specimen MJ5; (b) moment-rotation curves of the test compared with the finite element analysis result; (c) failure patterns shown by the finite element model; (d) failure patterns shown by the test [25].
Figure 10. Load-displacement curves: (a) different axial compression ratios; (b) different thicknesses of the column web; (c) different thicknesses of the beam web; (d) different thicknesses of the tenon plate.
Figure 11. Moment-rotation curves: (a) different axial compression ratios; (b) different thicknesses of the column web; (c) different thicknesses of the beam web; (d) different thicknesses of the tenon plate.
Figure 12. Stress development process: (a-d) as described in the text above (von Mises stress, MPa).
Figure 13. Stress distribution of the whole model, the front of the panel zone, the side of the panel zone, and the cross-shaped plug-in connector corresponding to the yield load (left column of the figure) and the peak load (right column of the figure).
Table 1. Parameters of the models. Note: the first row is the standard model, and □ represents a box section.
Table 3. Primary performance indicators of the models (F_y, D_y, F_p, D_p, F_u, D_u, and μ). Note: □ represents a box section.
Table 4. Calculation of the panel zone volume modification factor α. Note: □ represents a box section.
Table 5. Calculation of the initial rotational stiffness modification factor β.
Table 6. Calculation of the ratio of the connection stiffness to the beam bending stiffness.
v3-fos-license
2024-03-17T16:05:05.211Z
2024-03-12T00:00:00.000
268446508
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/itees/2024/9196747.pdf", "pdf_hash": "942d7532e232c235167d8c30309049e202a3f774", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2767", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "sha1": "6de553937d84e82a9cff029cea08e7d51bfe9466", "year": 2024 }
pes2o/s2orc
A Novel Hybrid MPPT Controller for PEMFC Fed High Step-Up Single Switch DC-DC Converter

At present, there are different types of Renewable Energy Resources (RESs) available in nature, which are wind, tidal, fuel cell, and solar. The wind, tidal, and solar power systems give a discontinuous power supply, which is not suitable for present automotive systems. Here, the Proton Exchange Membrane Fuel Stack (PEMFS) is used for supplying power to electric vehicle systems. The features of fuel stack networks are a very quick static response plus low atmospheric pollution. Also, this type of power supply system offers high flexibility and more reliability. However, the drawback of the fuel stack is its nonlinear power supply nature. As a result, the functioning point of the fuel stack varies from one position to another on the V-I curve of the fuel stack. Here, the first objective of the work is the development of the Grey Wolf Optimization Technique (GWOT) involving a Fuzzy Logic Controller (FLC) for finding the Maximum Power Point (MPP) of the fuel stack. This hybrid GWOT-FLC controller stabilizes the source power under various operating temperature conditions of the fuel stack. However, the fuel stack supplies a very low output voltage, which is improved by introducing the Single Switch Universal Supply Voltage Boost Converter (SSUSVBC) in the second objective. The features of this proposed DC-DC converter are fewer voltage distortions of the fuel stack output voltage, a high voltage conversion ratio, and low-level voltage stress on the switches. The fuel stack integrated SSUSVBC is analyzed by selecting the MATLAB/Simulink window. Also, the proposed DC-DC converter is tested by utilizing a programmable DC source.

Introduction

From the present literature survey, the availability of Nonrenewable Energy Resources (NESs) is decreasing extensively because of their disadvantages, such as the high catchment area required for installation, more environmental effects, a high effect on the ozone layer, a direct effect on human life, and a high power generation price [1]. In addition, this type of power system is not suitable for rural areas. So, current research is focusing on RESs for supplying energy to all local as well as urban people. The classification of RESs is wind, geothermal energy, ocean energy, hydropower, solar energy, and bioenergy. In article [2], the authors discussed wind power supply networks for generating electricity in coastal regions. In this wind network, modern turbines with very low specific ratings plus high hub heights increase the wind energy potential. As a result, the overall installation cost of the wind power system is reduced [3]. Here, the wind turbines capture the wind velocity for running the dual-fed induction machine. The major problem of wind power systems is the noise created by the wind turbines, which may not be accepted by human beings [4]. Especially, birds and bats are seriously affected by the wind power network. Also, the main challenge of the wind system is the minimization of the levelized production cost. The levelized production cost of the wind system is decided based on the energy production cost with respect to the economic lifetime of the utilized system [5]. Finally, wind systems are installed in limited places because of their potential impact on the environment.
All of the wind systems' drawbacks are limited by utilizing the geothermal power supply networks.In these geothermal systems, the fuids are collected from the underground reservoirs and it is used for the conversion of water into steam.Te generated steam is sent to the turbines to run the electrical machine [6].In the literature, there are various types of geothermal power networks available such as fash steam, dry steam, and binary cycle.Here, the type of power conversion depends on the power plant design which is mainly focused on the fuid surface and its operating temperature.Te dry steam power plant takes hydrothermal fuids which are closely available in the form of steam [7].In this system, the available steam is directly sent to the rotating turbine which is directly coupled with the functioning generator for supplying the peak power to the emergency applications like shopping malls and hospitals.Te dry steam power plants are a very old type of power plant that was frst referred to by Lardarello in the nineteenth century.Similarly, the fash type of power network is a commonly used power network that collects the fuids with high functioning temperatures [8].Finally, the binary cycle geothermal power network is used for supplying power to the local consumers at low as well as high fuid operating temperature conditions [9].Te merits of geothermal systems are low-level environmental pollution, moderate sustainability, massive potential, and more reliability.However, the disadvantages of geothermal systems are minor environmental pollution, suitable for a specifc location, mostly preferable for urban areas, and high initial cost [10]. Te demerits of geothermal systems are limited by using ocean energy systems.In this ocean energy, the wave energy is captured by using the turbines and is transferred into the electrical power supply by using the bidirectional electrical machines.Te features of ocean energy systems are naturefree, unlimited availability, high potential, good reliability, and zero environmental pollution.Te demerits of this system are high installation cost and need for high scalability.In addition, the urban areas will beneft from the help of the ocean energy system.Te limitations of ocean energy systems are overcome by using the hydropower networks [11].From the literature review, there are diferent types of hydraulic systems available in nature and all of these systems utilize the kinetic energy of water fow from upstream to downstream.Here, high-pressurized storage water is utilized to achieve the kinetic energy of the water.Te features of hydrosystems are useful for peak load demand, nonpolluting sources of energy, more resilience, and low distribution power cost [12].However, the drawbacks of hydrosystems are limited by utilizing solar power systems.From the literature review, sunlight energy is available in nature excessively free of cost.Here, the sunlight insolation comes to the earth with diferent incident angles.Te solar photovoltaic panels are installed on the earth in such a way that the incident angle of sunlight irradiation is exactly perpendicular to the PV panel [13].Once the PV panel receives the sunlight insolation energy, the free electrons in the P-N type materials of the PV absorb the sun energy and start functioning to generate the electrical power supply [14]. 
Te working behavior of the PV is exactly similar to the normal P-N diode operation.Solar cells are developed by utilizing the various categories of advanced manufacturing technologies which are named thin flm, polycrystalline, and monocrystalline.Te thin flm-based solar cells have various types of advantages when compared to the 1 st generation silicon solar cells in terms of lighter weight, thin construction, and more fexibility [15].Due to these merits, the thin flm model solar systems are utilized in integrated residential buildings and water heating systems.In [16], the authors utilized the polycrystalline model solar cells for the large-scale solar power network installation.Here, there are multiple crystalline solar cells involved in each polycrystalline model solar panel.As a result, polycrystalline solar cells work at very low functioning temperature conditions of the sunlight system.Tese types of solar cells are used in large-scale commercial buildings for supplying electricity to consumers [17].So, most human beings are independent of the central grid for power consumption.However, thin flm and polycrystalline solar cells have drawbacks such as low sunlight energy conversion efciency, being less suitable for domestic application, and being moderately expensive [18].So, monocrystalline silicon solar panels are utilized in most places because their merits are more efciency, crystal structure, and more reliability.However, these solar modules require high implementation costs and less power production under high operating temperature conditions of the solar systems. Each solar cell voltage production is 0.95 V to 1 V which is not at all useful for any local consumers.So, the solar modules are series connected to enhance the voltage capability of the overall system.Sometimes, the peak loads require a high number of currents; then, the solar modules are placed in a parallel fashion [19].So, multiple types of solar cells are connected in parallel plus series for supplying the power to the electric vehicle applications.Te major issue of solar systems is discontinuous power supply and less useful for industrial applications.So, in this article, the fuel stack technology is utilized for the four-wheeler system for the continuous functioning of the electric vehicle network.Here, the fuel modules are diferentiated based on the usage of electrolytes in the system.Te fuel modules are classifed as Alkaline Fuel Module (AFM), Molten Carbonate Fuel Module (MCFM), Regenerative Fuel Module (RFM), Proton Exchange Membrane Fuel Module (PEMFM), Phosphoric Acid Fuel Module (PAFM), and Solid Oxide Fuel Module (SOFM).Te alkaline fuel network is utilized in the article [20] for supplying the rated voltage to the microgrid network.Now, the microgrid is gaining a lot of attention.Te microgrid involves the battery charging station, fuel stack, and various renewable energy systems [21].All the power supply networks are interfaced to one common busbar for maintaining the constant load voltage.Here, the alkaline model fuel network supplies heat combined with water and electrical power supply by utilizing the inputs oxygen plus hydrogen chemical decomposition.In this fuel stack, the electrode is placed in between the anode material and cathode material and it is manufactured by selecting the alkaline membrane [22]. 
Te merits of an alkaline fuel stack network are moderate efciency, very simple heat management, moderate startup time, high chemical activity, and less expensive anode and cathode materials.In addition, the internal combustion of an 2 International Transactions on Electrical Energy Systems alkaline network is easy when equated with the battery system [23].As a result, the fuel stack takes less money for power supplying to the global industries.Especially, the alkaline fuel stack networks are more useful for saving labor time and less installation space and are more suitable for peak load application.Te main applications of alkaline fuel stacks are backup power supply, highly useful for commercial and residential building applications [24].However, the alkaline fuel stacks are very intolerant to carbon dioxide because CO 2 consumes more alkaline chemical decomposition.As a result, the overall system chemical reaction and operating system efciency are reduced extensively.Te demerits of alkaline fuel cell modules are limited by applying the reversible fuel stacks.In these reversible fuel stacks, the input hydrogen fuel is obtained from pure water [25].Here, the water is split into hydrogen and oxygen ions by using other renewable energy systems like wind and solar systems.On the other hand, the reversible fuel stack gives heated steam which is again fed back to the power supply system by converting into water.Tis type of fuel stack network is more popular for emergency power supply applications.Te disadvantages of reversible fuel stacks are compensated by applying the molten carbonate fuel module.In this MCFM, the alkali metal carbonate electrolyte is used and its maximum functioning temperature capability is 650 °C.Due to this high operating temperature, the selected input fuel for the MCFM is directly fed to the electrolytic channel for generating the electrical power supply with high operating efciency [26].Te selected input chemicals to the MSFM are carbon dioxide, hydrogen, and oxygen, and the heated water is released from the output of the fuel stack.Te MCFM advantages are high operating pressure, less reversible, and good output power conversion efciency.However, this fuel stack is not suitable for portable applications and needs sealing [27].So, the proton exchange membrane fuel module is used in this work for supplying power to the electrical vehicle systems.Tis fuel stack can work at very low as well as high operating temperatures.Te major features of this PEMFM are compact design, highenergy density, and maximum values of specifc power per unit and volume.In addition, its starting speed is very high when equated with the other fuel cells.Te present demand for fuel stacks is illustrated in Figure 1, and the types of fuel stacks are represented in Table 1. 
All the fuel stack disadvantages are continuous variations of power due to the continuous variation of the functioning point of the fuel stack.So, there are diferent categories of Maximum Power Point Tracking (MPPT) methodologies used in the literature to stabilize the functioning point of the fuel stack which are machine learning algorithms, optimization technologies, soft computing, nature-inspired algorithms, and conventional controllers.In [29], the researchers applied the Perturb & Observe (P&O) conventional controller to identify the working point of the fuel stack interfaced bidirectional three-phase power converter.Here, initially, the working point of the fuel stack is identifed on the V-I curve of the system.Suppose the identifed functioning point of the fuel stack is on the left side of the V-I curve and then the equivalent resistance of the source system is modifed by enhancing the operating duty cycle of the converter.Otherwise, the equivalent resistance of the overall system is reduced to move the working point of the fuel stack to the actual MPP position [30].Te general merits of this controller are simple in design, less implementation cost, easy to install, and less manpower requirement.However, this controller gives more steady-state fuctuations.As a result, the overall system gets vibrated.All the renewable energy-based power converters create fuctuated voltage and power.In the Kalman flter concept, the available voltage ripples and power ripples are fed to the Kalman flter block for suppressing the fuctuations of fuel stack output power.Tis controller identifes the functioning point of the renewable energy system by utilizing the ripples of the voltage and power [31].So, it does not require any additional flters.Due to this condition, the overall system installation and manufacturing costs are limited.However, this controller is useful for only constant operating temperature conditions of the fuel stack. 
For fuctuated temperature conditions of the fuel stack, the researchers are referring to the nature-inspired power point tracking controllers.In this work, the grey wolf optimization technique involves an adaptive fuzzy logic controller developed for enhancing the power conversion efciency of the fuel stack [32].Te merits of this proposed nature-inspired hybrid MPPT controller are less level of dependence on fuel stack modeling, very good dynamic response, more suitable for all types of operating temperature conditions of the fuel stack, high tracking speed, fast convergence ratio, and useful for continuous peak load conditions.However, another issue with the fuel stack is the very high supply current [33][34][35][36].If the fuel stack is directly interfaced with the battery, then the overall network power supply conduction losses are increased.Due to that, the entire system's functioning efciency is reduced.From the literature survey, the power converters are applied to the electric vehicle systems to optimize the fuel stack output supply current.Te power converters are diferentiated based on the utilization of transformer and rectifer circuits.Te transformers including power converters are feedforward, push-pull fyback, and bridge-type converter.Here, isolation means the separation of the source with a converter device to protect the overall network from overvoltage [37].Te merits of isolated power converters are strong antiinterference ability, easy-to-achieve multiple outputs, easy conversion of buck and boost operation, more security, and fewer possibilities of load damage.Also, these converters are useful for wide input voltage operation [38].However, the isolated power converter networks have many disadvantages which lead to very low power transformation efciency, relatively very large volumes, more expensive, and very high design complexity.So, the current industry is focusing on the transformerless power converters which are buck-boost, Cuk, and Luo converters [39].However, these fundamental power transformation circuits have the disadvantage of less power transmission efciency and are moderately suitable for peak load conditions [40,41].So, in this article, a Wide Supply Voltage Power Converter Circuit (WSVPCC) is International Transactions on Electrical Energy Systems proposed to reduce the fuel stack output current and improve the supply voltage profle of the overall system.Te proposed power converter fed power point identifer is shown in Figure 2. From Figure 2, the power converter gives high voltage gain, low output voltage fuctuations, and very little current and voltage stress on power switches.In addition to that, this converter takes very little space for installation and its manufacturing cost is also reduced. 
Literature Review on Past Published Works From the literature review, the past isolated power converter circuits needed more implementation cost and needed more reliability.Also, these converter topologies require high installation space.So, transformerless power converter circuits are developed in [42] for battery charging systems with the help of solid oxide cells.Te basic buck converter circuit is utilized in the PV/fuel stack microgrid system for balancing the power in the all-distribution loads.Tese converters required only one capacitor, plus one switch for balancing the supply voltage.Similarly, a simple boost converter circuit is applied in a telecommunication network for solar battery charging, plus discharging.Te general boost converter circuit required less manufacturing cost, was easy to handle, and needed very low space for installation.However, these types of power converter circuits give less efciency for high switching frequency applications.In [43], the authors worked out the hybrid electric vehicle technology to reduce the dependency on fuel engines.Te combustion engine is interlinked with the electric drive network for regulating the power of the vehicle at various working temperature conditions of the fuel engine.Te hybrid EV network's overall efciency depends on the electric vehicle powertrain. In EVs, the permanent magnet machine is utilized along with the battery for running the EV at constant speed.In [44], the solid oxide fuel stack network is merged with the quasisource DC-DC converter topology and it is applied to the fourwheeler electric vehicle system to improve the efciency of the system.In SOFS, the electricity supply process happens by utilizing the ceramic electrolyte [45].Here, the negative oxygen ions fow from the cathode layer to the anode layer via a ceramic electrolyte.Te analysis and the specifcations of various categories of fuel stacks are explained in Table 1.In the SOFS, there is a high level of chemical decomposition happening at high operating temperature conditions of the fuel stack.As a result, the performance efciency of the overall system is improved.Also, it consists of high fuel fexibility, less carbon dioxide emissions, and a relatively very low cost of implementation when compared to the phosphoric acid fuel cell [46].Te interleaved single-phase power electronic circuit topology is applied for the high voltage rating battery-based fuel stack system for running the battery in a dual power fow direction at peak load conditions.Te lithium-ion battery state of charge and the state of discharge parameters are supplied to the incremental resistance MPPT block for optimizing the discharge state of the battery.In this MPPTmethodology, the variation of fuel stack voltage and current variables is utilized for fnding the duty ratio of the bidirectional power converter [47].Here, the incremental resistive value is positive when the controller reaches the required MPP place.Otherwise, it may go to the left side of the MPP position of the fuel stack V-I curve. 
Te modifed slider methodology is utilized in the hybrid diesel engine, battery, and fuel stack system for identifying the peak power point of the fuel stack, plus enhancing the dynamic response of the overall network [48].In this power production network, the interleaved multiphase power converter is utilized for equal power distribution to all local loads.In this slider controller, the fuel stack oxygen ions, resistive load voltage, and hydrogen decomposition constants are utilized as the state input variables, and the output variable is the switching signal to the fuel stack-fed rectifer circuit [49].All the rectifers generate fuctuated currents and voltages which are given to the Kalman flter block for mitigating the losses of the hybrid PV/fuel stack network.Te merits of this slider MPPT method are very easy, plus good static response.Also, it helps to optimize the fuel stack system power conduction losses.However, this slider controller may not give efcient converter output power.Te demerits of this slider controller are overcome by using Artifcial Neural Networks (ANNs) [50].International Transactions on Electrical Energy Systems International Transactions on Electrical Energy Systems Te ANN controller is developed from the human brain's behavior and the human brain consists of multiple nodes that are interlinked with each other [51].All the nodes are exchanging their information to identify the required objective.In [52], the PEMFS/battery-fed bidirectional power conversion circuit duty signal is monitored with the help of a neural network-based MPPT controller.Te merits of these neural networks are very low implementation cost, more useful for all nonlinear issues, ease of use for highly complex problems, and the capability to alter any unknown conditions [52].However, these neural networks need high convergence time, plus more complexity in the ANN structure.Te feedforward neural network is utilized in the diesel/battery/PEMFS system for controlling the converter duty signal at various atmospheric conditions of the fuel stack [53].A detailed analysis of diferent types of MPPT controllers is given in Table 2. 
Design and Performance Study of Fuel Cell

As we know, nonrenewable power system utilization is reducing drastically in electric vehicle systems because of its demerits, such as the large space required for installation plus the high power generation cost [59]. So, renewable power systems are playing the predominant role in present electric vehicle systems for optimizing environmental pollution and reducing grid dependence. From the literature study, there are different types of renewable power systems available in society. However, most of the renewable power supply networks give a discontinuous power supply. So, in this work, fuel stack technology is preferred for continuous power supply to automotive systems. In [60], the researchers studied the phosphoric acid fuel stack-based microgrid network for giving energy to local consumers. In this microgrid, the PV/wind/fuel stacks are involved in storing the energy in batteries, and the stored energy is utilized for emergency applications. In the PAFSM, the phosphoric liquid is utilized as the electrolyte, and it is highly tolerant to carbon monoxide and carbon dioxide [61]. In addition, it is pollution-free and eco-friendly. The PAFSM is much less sensitive to CO2, and it has the capability of regenerating heat along with electricity. Finally, the phosphoric acid fuel network has very low volatility. In this fuel stack, the anode accelerates the hydrogen oxidation reaction rate in phosphoric acid, and the anode must be stable under the high operating temperature conditions of the phosphoric acid. Sometimes, hydrogen starvation happens in the PAFSM, and then the anode gets affected by reverse polarization. However, this fuel stack is inherently much less powerful when compared to the other fuel cells [62]. The demerits of the phosphoric acid fuel stack are limited by using the solid oxide fuel cell. In the SOFSM, the natural gas flows through the steam reforming process for generating electricity [63]. Here, the chemical recombination of methane and oxygen generates carbon monoxide, carbon dioxide, water, and hydrogen. The merits of solid oxide cells are high functioning efficiency, long-term stability, fuel flexibility, lower emissions, relatively low cost, and low environmental pollution. The biggest disadvantage of the SOFSM is that it takes more time to start functioning. However, the demerits of the SOFSM are limited by using the polymer membrane fuel stack. In the PEMFSM, the proton ions are transferred from the anode chamber to the cathode chamber via a polymer electrolyte. Here, the membrane structure is in the form of a thin plastic film, and it is permeable to protons when the membrane is saturated with water. However, it does not conduct electrons. The working block diagram of the polymer membrane fuel stack and its corresponding functioning circuit are illustrated in Figures 3(a) and 3(b). From Figure 3(a), in the cathode chamber, the protons react with the electrons and oxygen, generating heat and water as the byproducts. In this fuel stack, the single cell voltage is defined as V_FC, and "N" cells are utilized in the stack for meeting the peak load demand voltage (V_0L). From Figure 3(b), the variables R_0L and R_AL are the ohmic power loss of the electrode and the active region power loss of the electrode, respectively. Finally, the term R_CL is identified as the concentration power loss of the fuel stack.
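To make the loss terms of Figure 3(b) concrete, the sketch below evaluates a generic static polarization model in which the cell voltage equals an open-circuit value minus activation, ohmic, and concentration drops, and the stack voltage V_0L is N times the cell voltage. All numerical coefficients are illustrative assumptions, not parameters of the stack studied here.

```python
import math

def cell_voltage(i_a, e_oc=1.0, a_tafel=0.05, i0=0.1,
                 r_ohm=0.0025, i_lim=120.0, b_conc=0.05):
    """Static polarization model of a single PEM cell (all coefficients illustrative).

    V_cell = E_oc - activation loss - ohmic loss - concentration loss
    i_a : cell current in A
    """
    i_a = max(i_a, 1e-6)                                        # avoid log(0)
    v_act = a_tafel * math.log(i_a / i0)                        # activation (Tafel-like) loss
    v_ohm = r_ohm * i_a                                         # ohmic loss
    v_conc = -b_conc * math.log(max(1.0 - i_a / i_lim, 1e-6))   # concentration loss near i_lim
    return e_oc - v_act - v_ohm - v_conc

def stack_voltage(i_a, n_cells):
    """Stack terminal voltage V_0L for N identical series cells, as described in the text."""
    return n_cells * cell_voltage(i_a)

# Example: a 65-cell stack at 50 A (placeholder numbers, roughly 35 V with these coefficients).
print(round(stack_voltage(50.0, 65), 1))
```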
Design and Performance Study of Various MPPT Controllers

From the literature study, the power supply of all renewable energy systems is nonlinear. So, a direct power supply from the renewable energy systems to the local consumers is not possible. So, power electronic devices are used in the renewable power supply systems for maintaining a constant load supply to the electric vehicle systems [64]. However, the functioning point of all renewable power systems is not constant. So, there are various categories of MPPT methodologies that are applied to the renewable power supply network to identify the operating point of the fuel stack. From the literature review, the MPPT controllers are differentiated as artificial intelligence, machine learning, nature-inspired, soft computing, and fuzzy logic controllers. In [65], the authors discussed the fundamental power point tracking methods that are suitable for constant functioning temperature conditions of the solid oxide fuel stacks. In [66], the authors proposed multilayer neural networks for operating the polymer membrane fuel stack network under various environmental operating temperature conditions. Here, all the neural controllers were developed based on the functioning of the human brain. The structure utilized by the multilayer neural network MPPT controller is shown in Figure 5. Based on the multilayer structure, the selected input neurons in the first layer are two, which are the fuel stack supply current (M_1^(1) = I_FC) and the fuel stack supply voltage (M_2^(1) = V_FC). The middle layer collects the signals from the source layer, and the number of middle layer neurons is 629. Due to this large number of neurons and their corresponding layers, the multilayer neural network takes more data training time, and it requires a high convergence time [67]. Here, the weights of the neurons are adjusted by utilizing the backpropagation algorithm, which is given in Equations (10) and (11). Here, the terms f(n), b, k, and L are the activation function, the total number of hidden layer nodes, the hidden layer output, and the output node. Finally, the variable "v" defines the overall neurons in between the input and output layers. The weight update is

w_vn^(2) = w_vn^(2) + Δw_vn. (13)

After the weight updating of all the neurons, there is an error existing in the output layer, which is given in Equation (16). In Equation (16), the terms V_required and V are the required peak voltage and the available fuel stack voltage.
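The multilayer-perceptron MPPT idea above maps the measured stack current and voltage to a duty command and tunes its weights with an update of the form w <- w + Δw (Equation (13)). The sketch below is a deliberately tiny illustration of that mechanism with one hidden layer and made-up training pairs; the layer sizes, learning rate, and data are assumptions, not the 629-neuron network of the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training set: (I_FC, V_FC) scaled to [0, 1] -> duty-cycle target (made-up values).
X = np.array([[0.2, 0.9], [0.5, 0.7], [0.8, 0.5], [0.9, 0.3]])
d_target = np.array([[0.15], [0.30], [0.45], [0.60]])

W1 = rng.normal(scale=0.5, size=(2, 6))   # input -> hidden weights (no bias terms, for brevity)
W2 = rng.normal(scale=0.5, size=(6, 1))   # hidden -> output weights

eta = 0.5                                  # learning rate
for _ in range(5000):
    H = sigmoid(X @ W1)                    # hidden-layer output, "k" in the text
    D = sigmoid(H @ W2)                    # predicted duty signal
    err = d_target - D                     # output-layer error, cf. Equation (16)

    # Backpropagated gradients; each weight is updated as w <- w + delta_w (Equation (13)).
    delta_out = err * D * (1 - D)
    delta_hid = (delta_out @ W2.T) * H * (1 - H)
    W2 += eta * H.T @ delta_out
    W1 += eta * X.T @ delta_hid

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 3))   # learned duty estimates for the toy set
```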
Genetic Controller Optimized Artifcial Neural Network Controller.Te genetic optimization methodology is used in [68] to identify the functioning point of the solar and fuel stack system.Tis algorithm does not require any derivate information.Te merits of this algorithm are more exploration of search space, good fexibility, more adaptability, good parallel processing information, and global optimization.However, this algorithm has many disadvantages which are more computational complexity, high difculty in tuning parameters, more dependence on randomness, risk of premature convergence, and limited understanding of results.So, the genetic algorithm is combined with the proportional and integral block to improve the steady-state response of the system and maintain the transient stability of the overall system.In [69], the genetic controller is combined with the artifcial neural network for tracking the MPP of the hybrid wind/PV/FS power generation system with high efciency.Here, initially, the neural network collects the signals from the fuel stack and solar networks which are sunlight intensity, fuel stack power, solar power, and functioning temperature of all the sources for moving the functioning point of the overall system from the initial stage to the required MPP location.Once, the functioning point of the hybrid network stabilizes with the actual MPP point, then the hill climb starts working to generate the highly accurate nonlinear power characteristics of the system [70]. Te overall training samples considered in this neural network are 689.Te operation of a genetic algorithmdependent neural controller is given in Figure 6.From Figure 6, the generated error signal from the neural controller is monitored by applying the proportional controller. Here, the continuous changes in fuel stack temperature (T FC ), water membrane (T M ), fuel stack current (I F ), and voltage (V F ) are selected for generating the duty signal to the DC-DC converter.Te time constant of the integral controller is T ic , and fnally, the constraints of the proportional and integral controllers are S P and S i . From ( 19) and ( 20), the parameters "n", S, W, P, b, a, h, Q, μ, and ψ are the number of neurons, number of iterations, weight of neurons, middle layer constants, output layer constants, time constant of the proportional controller, and weight updating constants.Also, the parameters T ic , U, and D are input variable, hidden layer, and duty signal of the DC-DC converter. 
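Since the section above pairs a genetic search with a proportional-integral stage (the constants S_P and S_i), the sketch below shows the bare mechanics of such a search: a population of (Kp, Ki) pairs is evolved against a squared-error cost on a toy first-order plant. The plant, bounds, and GA settings are illustrative assumptions and not the tuning procedure of the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)

def ise_cost(kp, ki, dt=0.01, t_end=2.0):
    """Integral of squared error for a PI loop around a toy first-order plant
    y' = (-y + u) / tau, tracking a unit step (all values illustrative)."""
    tau, y, integ, cost = 0.2, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u) / tau
        cost += e * e * dt
    return cost

pop = rng.uniform([0.1, 0.1], [10.0, 10.0], size=(20, 2))   # initial (Kp, Ki) population
for _ in range(30):
    fitness = np.array([ise_cost(kp, ki) for kp, ki in pop])
    parents = pop[np.argsort(fitness)[:10]]                  # selection: keep the best half
    children = parents[rng.integers(0, 10, 10)] + rng.normal(scale=0.3, size=(10, 2))
    children = np.clip(children, 0.1, 10.0)                  # mutation plus bound handling
    pop = np.vstack([parents, children])

best = pop[np.argmin([ise_cost(kp, ki) for kp, ki in pop])]
print("Kp, Ki =", np.round(best, 2))
```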
Adjustable Step Change of Fuzzy Controller with Incremental Conductance

Conventional neural networks take more time to achieve the optimal nonlinear solution of the fuel stack network because they need a long convergence time, high training data are required when the neural network involves multiple layers in its structure, and they are less suitable for continuous changes in the environmental conditions of the fuel stack network [71]. So, the limitations of the artificial neural network are overcome by using the fuzzy logic controller. Fuzzy logic is a type of approach that is used to process a variable towards its true value. Most fuzzy controllers are used for solving highly complex nonlinear problems. The features of fuzzy systems are that they are easy to implement, highly robust, more flexible, offer good interpretability, and are easy to understand. However, the fuzzy logic MPPT controller may not be accurate at the MPP position. In addition, it cannot recognize neural network and machine learning patterns. Also, it requires a highly knowledgeable person to implement the fuzzy logic controller, it is very difficult to tune, it is less accurate in MPP tracking, and it has a high computational complexity [72]. So, the fuzzy logic is combined with the incremental conductance method for reducing the tracking time of the fuel stack MPP. Here, initially, the fuzzy controller methodology is used for adjusting the step value of the IC controller to optimize the oscillations across the fuel stack MPP position. The functioning diagram of this MPPT controller is given in Figure 7. From Figure 7, the continuous variations of the fuel stack current (I_FC) and voltage (V_FC) are collected, and the resultant error signal is given to the incremental conductance controller. H_vold and H_vnew are the previously stored fuel stack V-I curve slope and the presently available fuel stack slope. The terms D, ΔV, and ΔP are the converter duty signal, the change of the fuel stack voltage, and the change of power.
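A compact way to see the adjustable-step idea described above: a plain incremental-conductance test decides the perturbation direction, while the step applied to the duty cycle D is scaled by how far the operating point appears to be from the MPP. The scaling rule below only loosely stands in for the fuzzy step-size block of the text, and the gains, limits, and sign convention are assumptions.

```python
def inc_cond_step(v_fc, i_fc, v_prev, i_prev, duty, k_step=0.02,
                  d_min=0.05, d_max=0.85):
    """One adjustable-step incremental-conductance update of the converter duty D.

    At the MPP, dI/dV = -I/V. The sign of the mismatch chooses the direction and its
    magnitude scales the step (a rough stand-in for the fuzzy step adjustment).
    Sign convention: a boost-type stage is assumed, where raising D pulls the source
    voltage down; flip the direction if the hardware behaves the other way.
    """
    dv, di = v_fc - v_prev, i_fc - i_prev
    if abs(dv) < 1e-6:                                  # voltage unchanged: react to current only
        err = di
    else:
        err = di / dv + i_fc / max(v_fc, 1e-6)          # > 0 when operating voltage is below V_mpp
    step = k_step * max(min(abs(err), 1.0), 0.05)       # adaptive step, clamped
    duty = duty - step if err > 0 else duty + step      # move the operating point toward the MPP
    return min(max(duty, d_min), d_max)
```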
Fuzzy Logic Controller-Dependent Hill Climb MPPT Controller

There are various conventional controllers available in the literature, which are the Perturb & Observe and incremental conductance controllers. However, these controllers are not useful for rapid changes in the functioning temperature conditions of the fuel stack. So, the researchers referred to the hill climb methodology in [73] for the hybrid wind/fuel stack power supply system to meet the peak load demand of the local consumers. However, the implementation cost of the hill climb controller is very high when equated to the other controllers. So, the fuzzy logic is combined with the hill climb controller to extract the peak power from the renewable energy system. Here, the fuzzy concept is applied to fix the step size value of the hill climb controller [74].

Optimization of Fuzzy Logic Controller by Using Modified Grey Wolf Optimizer

Most of the neural network-based power point tracking controllers require extensive training data of the polymer membrane fuel stack system [75]. Also, they require a well-experienced person to select the number of layers in the neural network structure. The drawbacks of the neural controllers are overcome by utilizing fuzzy systems. In the fuzzy system, the selection of a membership function is a very difficult task, and its functioning efficiency depends on the accuracy of the membership function selection. In the literature, there are various optimization technologies that are applied to optimize the membership function values. Here, in this article, the modified grey wolf methodology is used to improve the functioning efficiency of the fuzzy controller. The pseudocode of the modified grey wolf controller is shown in Figure 8. From Figure 8, the selected fuel stack variables for finding the duty signal value of the DC-DC converter are the fuel stack current, fuel stack power, and fuel stack voltage. In this grey wolf method, the collected data from the fuel stack are assigned to the various wolves. Here, all the wolves start searching for the optimized duty value by interchanging their information towards the identification of the required objective. In the first iteration, the wolves move in different directions with different velocities. After a certain number of iterations, the wolves move in one direction to find the optimal solution for the nonlinear problem of the fuel stack network. Finally, the grey wolf controller tries to make the system stabilize at one of the local MPP positions of the fuel stack network. After that, the grey wolf controller gives the information to the fuzzy block, as shown in Figure 9.
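The grey-wolf step described above can be illustrated with the standard GWO position update, here searching a single scalar (the converter duty cycle) that maximizes the measured stack power. The power model used as the fitness function is a made-up placeholder for the real measurement loop, and the population size, iteration count, and elitism step are arbitrary choices rather than the paper's exact modification.

```python
import numpy as np

rng = np.random.default_rng(2)

def stack_power(duty):
    """Placeholder fitness: a smooth single-peak power curve over duty in [0.05, 0.85].
    In the real controller this value would come from measured V_FC * I_FC."""
    return -250.0 * (duty - 0.42) ** 2 + 110.0

def gwo_duty(n_wolves=8, n_iter=40, lo=0.05, hi=0.85):
    wolves = rng.uniform(lo, hi, n_wolves)                         # candidate duty cycles
    for t in range(n_iter):
        power = np.array([stack_power(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(power)[::-1][:3]]   # three best wolves
        a = 2.0 * (1.0 - t / n_iter)                               # exploration factor shrinks to 0
        for i in range(n_wolves):
            x_new = 0.0
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(), rng.random()
                A, C = 2 * a * r1 - a, 2 * r2
                x_new += leader - A * abs(C * leader - wolves[i])
            wolves[i] = np.clip(x_new / 3.0, lo, hi)               # average pull toward the leaders
        wolves[rng.integers(n_wolves)] = alpha                     # keep the best solution in the pack
    return alpha

print(round(gwo_duty(), 3))    # settles close to the peak of the placeholder curve (near 0.42)
```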
From Figure 9, the fuzzy logic system consists of three major blocks, which are the fuzzification, inference, and defuzzification networks. In the fuzzification system, the input supply variables are transferred into fuzzy sets. The inference network collects the fuzzy sets for generating the required output of the fuel stack. Finally, the defuzzification methodology is used for transferring the fuzzy outputs into crisp solutions. The fuzzy logic then starts identifying the global functioning point of the fuel stack. Here, Equation (29) is used to move the functioning point of the fuel stack from the origin position of the V-I curve to the global MPP place. Sometimes, the working point of the fuel stack is on the right-hand side of the V-I curve of the fuel stack, and then Equation (30) is used to move the functioning point of the fuel stack towards the actual MPP place. In Equations (29) and (30), the variables ψ, Power_present, and Power_previous are the error stabilizing factor and the fuel stack powers.

Development of Single Switch Universal Supply Voltage Boost Converter

From the literature study, isolated power DC-DC converters are not applied for fuel stack-run electric vehicle applications because of their disadvantages, such as higher implementation cost, a high space requirement for installation, lower efficiency for electric vehicle systems, and difficulty in developing the switching circuit [76].

Working Stage of Converter (DCCM and CCM): I

In this stage, the converter works in both stages of operation, which are the Discontinuous Conduction Mode (DCCM) and the Continuous Conduction Mode (CCM). These modes of operation purely depend on the selected value of the input inductor. If the selected inductor L_q value is large enough, the converter works in the continuous power supply stage; otherwise, the utilized power converter works in the discontinuous power supply mode. Here, the source is a polymer membrane fuel module for automotive applications, so the selected input supply inductor value should be very high so that the converter works in the continuous power supply mode of operation. In this stage, the switch (S) starts working in the forward bias condition, and the inductors start absorbing the source currents and voltages, which are identified as I_Lq-chrg, I_Lw-chrg, V_Lq-chrg, and V_Lw-chrg. After a certain time duration, the inductors start delivering their currents and voltages, denoted with the corresponding "-dicg" subscripts. Similarly, the currents and voltages stored by the capacitors (C_l, C_k, C_j, C_h, and C_g) carry the "-chrg" subscript (for example, V_Cg-chrg), and the capacitors' discharging parameters carry the "-dicg" subscript (for example, V_Cg-dicg). The converter inductor charging voltages are given in Figures 11(a) and 11(b). From Figure 11(a), the capacitor voltages and inductor currents are derived in Equations (31) and (32).

Working Stage of Converter (DCCM and CCM): II and III

In the second mode of converter operation, the available voltage across the MOSFET is reduced, and the switch starts moving from the amplifying stage to the blocking stage.
Here, the capacitors C_l and C_w give the energy to the load side capacitors C_j, C_h, and C_g. From Figure 11(b), the switch-off voltage in the converter flows towards the diodes to make the diodes run in the active region condition. From Figure 12(c), all the switches and diodes go into the discontinuous functioning stage. As a result, the polymer membrane fuel stack network supplies fluctuating power, which is not desirable for the four-wheeler electric vehicle network. Here, the switching voltages and currents are completely in a zero-level state. Under steady-state operation of the single switch, high conversion ratio DC-DC converter, the available voltage at the load side is derived in Equation (38). The converter operating duty is defined as D, and the time duration of the converter voltage is represented as T_S. Under the discontinuous functioning stage of the converter, the time duration of the converter current is T_x. The selected load parameter in the converter is the resistor (R_n), and its corresponding voltage and current are identified as V_n and I_n. From the literature study, power converters have mostly been studied in terms of their working efficiency. Here, in this article, the investigation of various categories of power electronic converters has been done in terms of the available voltage gain, the total number of semiconductor devices applied for designing the converter, the voltage appearing across the switch, the type of current flow in the converter circuit, and the necessity of a common ground for the power converter. In [54], the authors utilized the general nonisolated converter structure for the microgrid power supply system to improve the voltage stability of the fuel stack network. The advantages of this converter are a simple structure, good reliability, high robustness, and more adaptability. However, this converter needs a high operating duty cycle for high voltage-rating electric vehicle applications. As a result, the conduction losses of the entire fuel stack power supply network are increased extensively. So, the wide output voltage gain, universal supply voltage power converter circuit is utilized in this work for continuous power supply to the automotive system. The voltage gain and current stress of the proposed converter are given in Table 5.
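The CCM/DCCM distinction above hinges on the input inductor being large enough. The sketch below uses the generic boost-input-stage relations as a stand-in for the converter's own Equations (31)-(38), which give the exact expressions for the proposed topology; the switching frequency, duty cycle, load power, and efficiency are assumed values, and only the 260 μH inductance reappears later in the simulation settings.

```python
def inductor_ripple(v_in, duty, f_sw, L):
    """Peak-to-peak current ripple of an input inductor charged by V_in for D*T_s per cycle."""
    return v_in * duty / (f_sw * L)

def conduction_mode(v_in, duty, f_sw, L, p_out, efficiency=0.95):
    """Rough CCM/DCM check for a boost-type input stage: the stage stays in CCM while
    the average input current exceeds half the ripple. This is a generic criterion,
    not the proposed converter's exact boundary condition."""
    i_in_avg = p_out / (efficiency * v_in)          # average inductor (input) current
    ripple = inductor_ripple(v_in, duty, f_sw, L)
    return ("CCM" if i_in_avg > ripple / 2.0 else "DCM"), ripple

# Illustrative numbers: 60 V input, D = 0.6, 50 kHz switching, L_q = 260 uH, 200 W load.
print(conduction_mode(60.0, 0.6, 50e3, 260e-6, 200.0))   # -> ('CCM', ~2.77 A ripple)
```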
Analysis of Simulation Results. The proposed system involves the polymer membrane fuel stack and the power point tracking controller along with the high voltage gain DC-DC converter. At present, polymer membrane-based fuel stack modules are used for electric vehicle systems because of features such as high temperature withstanding ability, less atmospheric pollution, easy functioning, less required maintenance, longer lifetime, and easy installation. However, the polymer membrane fuel system generates nonlinear power characteristics. So, the identification of the functioning point of the fuel stack is quite difficult work. In addition, the available source voltage is very low, which is not acceptable for industrial as well as local consumer applications. So, the hybrid power point tracking controller is introduced in this work for catching the exact maximum power point position of the polymer membrane fuel stack network. The merits of this MPPT controller are its ability to identify the MPP location quickly, the fewer iterations required for identifying the local MPP position, the better dynamic response of the system, and its suitability for rapid changes in the operating temperature conditions of the fuel stack system. Here, the fuel stack current flow is optimized by utilizing the single-switch power DC-DC converter. Analysis of Overall Proposed System at Static Temperature (325 K). The selected supply-side capacitors (C_l, C_j, C_k, C_h, and C_g) for the design of the power converter are 32.5 μF, 37.99 μF, 58.55 μF, 58.55 μF, and 45.37 μF, respectively. Similarly, the utilized inductor (L_q and L_w) values are 260 μH and 280 μH, respectively. The source-side inductor L_q tries to suppress the distortions of the fuel stack voltage and power. The capacitor C_l helps stabilize the fuel stack voltage and suppresses sudden variations of the source voltage to protect the switch "S." Here, the proposed system is studied at a uniform working temperature condition of the fuel stack, which is selected as 325 K. The converter modeling has been done by utilizing the MOSFET switch, and the selected load resistor value is equal to 85 Ω.
The utilized parameters for tracking the fuel stack network MPP are the fuel stack power, current, and voltage. These variables help linearize the overall system and optimize the duty cycle of the power DC-DC converter. The available fuel stack supply current and fuel stack voltages are given in Figures 12(a) and 12(b). The converter's functioning duty signal and its related current, voltage, and powers are shown in Figures 12(c)-12(f). From Figures 12(a) and 12(b), the available current and voltage parameters of the fuel stack under static temperature conditions by applying the MLNNC, GCOANC, ASCFC, CSVHCFC, and GWAFC are 112.7 A, 39.84 V; 112.82 A, 40.53 V; 112.44 A, 40.76 V; 111.81 A, 42.13 V; and 110.32 A, 43.48 V, respectively. These high available currents and low voltages of the fuel stack are not useful for local as well as industrial applications. So, the proposed wide input supply voltage, low voltage stress DC-DC converter is integrated with the source network and the MPPT controller block for improving the load power, voltage, and current ratings. The achieved load current, voltage across the converter output, and load power obtained by utilizing the MLNNC, GCOANC, ASCFC, CSVHCFC, and GWAFC MPPT controllers are 8.213 A, 527.32 V, 4330.87 W; 8.38 A, 528.72 V, 4430.67 W; 8.412 A, 530.33 V, 4461.13 W; 8.672 A, 534.8 V, 4637.78 W; and 8.79 A, 535.99 V, 4711.35 W, respectively. At static functioning temperature conditions of the fuel stack, the GWAFC-fed fuel stack supply voltage stabilizing time is 0.015 s, which is very small when equated to the other power point tracking controllers. Also, the design and implementation cost is very moderate when compared to the CSVHCFC. Experimental Validation of the Proposed Converter. In this section, the proposed power converter is investigated for improving the load power rating of the overall system. Here, the selected power converter network is analyzed by considering the programming-based DC source device, as shown in Figure 14. From Figure 14, the 0-12 V transformer is used for reducing the source voltage from a higher level to a lower level to activate the TLP-250 MOSFET driver circuit. Here, the IRF-640 MOSFET device is selected for running the proposed converter under the continuous power supply mode of operation. The MOSFET features are high source impedance, low power absorption losses, high thermal stability, and high temperature withstanding ability. From Figure 14, the supply and load voltage and current parameters are determined by means of the differential voltage and current meters. The selected switching device is protected from quick changes in supply voltages by using the TLP-250 driver. The switching conditions of the MOSFET are optimized by interfacing with the Analog Discovery device. The MOSFET device receives the switching signals from the Analog Discovery device with a duty cycle equal to 10%. The supplied voltage across the gate terminal of the MOSFET is 4.617 V, and the current passing through the drain terminal is 1.8942 A, as shown in Figure 15. From Figure 15, the drain voltage of the MOSFET is high when the current passing through the device is low, and it attains a high value when the switching voltage is low. The utilized input voltage for this converter is 59.51 V, and it is enhanced to 128.46 V with a 10% duty cycle, as shown in Figure 16. The overall setup design parameters are given in Table 7.
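A quick sanity check of the reported experimental conversion ratio can be made directly from the quoted numbers; the ideal-boost gain 1/(1-D) is included only for comparison, to illustrate the wide-gain claim, since the proposed topology is not a plain boost converter.

```python
# Measured conversion ratio from the experimental test reported in the text,
# compared with the gain an ideal boost cell would give at the same duty cycle.

V_IN = 59.51      # V, programmable DC source setting reported in the article
V_OUT = 128.46    # V, measured converter output at D = 0.1
DUTY = 0.10

measured_gain = V_OUT / V_IN
ideal_boost_gain = 1.0 / (1.0 - DUTY)

print(f"measured gain          : {measured_gain:.2f}")
print(f"ideal boost at D = 0.1 : {ideal_boost_gain:.2f}")
print(f"ratio (proposed / ideal boost): {measured_gain / ideal_boost_gain:.2f}")
```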
Conclusion. The overall GWAFC-interfaced polymer membrane fuel stack system is developed by using the MATLAB software. Here, the polymer membrane fuel cell is selected because of its attractive features such as fast response, easy handling, the ability to work in low- as well as high-temperature conditions, higher energy density, and simple construction. However, the fuel stack source voltage is very low, which is not suitable for electric vehicle applications. So, the new single-switch power DC-DC converter is developed to enhance the system voltage to meet the required peak load demand. The advantages of this converter are a good voltage conversion ratio, fewer power components required for implementation, easy handling, wide output voltage operation, and suitability for all renewable energy system applications. Here, the duty signal of the converter is obtained by using the grey wolf optimization-based fuzzy logic power point tracking controller. In this MPPT controller, the fuzzy membership functions are selected by applying the grey wolf controller. The merits of this proposed MPPT controller are high flexibility, good dynamic response, easy handling, high robustness, and high reliability.
Figure 2: Grey wolf optimized adaptive fuzzy MPPT controller fed WSVPCC for fuel stack application. Figure 8: Working pseudocode of the proposed power point tracking controller. Figure 9: Overall working structure of the fuzzy membership functions optimized grey wolf MPPT controller. Figure 11: (a) Working of the converter under continuous power supply mode and (b) fluctuated power supply mode. Figure 13: (a) Source current, (b) source voltage, (c) duty cycle, (d) load O/P current, (e) load O/P voltage, and (f) load O/P power at dynamic temperatures. Figure 14: Testing of the proposed single switch power converter by using a programming DC source. Figure 16: Supplied converter voltage and current parameters at a duty cycle of 0.1.
Table 2: Detailed investigation of various power point identifiers for the PEMFS-fed DC-DC converter. The related voltages of the fuel stack are represented as V_0L, V_AL, and V_CL. The generated power and voltage curves are illustrated in Figures 4(a) and 4(b). The term T_F is the functioning temperature of the fuel stack. The partial oxygen pressure and hydrogen pressure are identified as P_O2 and P_H2. The anode humidity vapor pressure and cathode humidity vapor pressure are identified as RH_Ap and RH_Cp, and their related internal pressures are P_Ap and P_Cp. The water pressure and the current flowing through the electrode are represented as P^sat_H2O and I_S. The utilized electrode area and empirical coefficients are defined as A, z_1, z_2, z_3, and z_4. The design constraints of the utilized fuel stack are shown in Table 3. Table 3: Design variables of the selected fuel stack network.
The metal-oxide-semiconductor field-effect transistor (MOSFET) is used for enhancing the fuel stack supply voltage from one level to another. The features of the MOSFET are high voltage withstanding ability, high switching speed, less power consumption, very little power dissipation, high input impedance, high power control capability, low driver-circuit implementation complexity, and high temperature withstanding ability. The utilized diodes in this proposed converter circuit are D_q, D_w, D_e, and D_t. Similarly, the selected capacitors in the wide output voltage range DC-DC converter are C_l, C_j, C_k, C_h, and C_g. The available inductors and resistor in the converter are L_q, L_w, and R_n. When the converter starts working, the currents flowing through the capacitors and inductors are I_Cl, I_Cj, I_Ck, I_Ch, I_Cg, I_Lq, and I_Lw, and the voltages appearing across those elements are V_Cl, V_Cj, V_Ck, V_Ch, V_Cg, V_Lq, and V_Lw. Finally, the voltages appearing across the switches and diodes are V_S, V_Dq, V_Dw, V_De, and V_Dt. The detailed working structure of the converter is shown in Figures 10(a), 10(b), and 10(c). The switching states of the converter are given in Table 4.
Table 4: Functioning stages of the wide input voltage proposed DC-DC converter. Table 5: Analysis of various categories of power electronics converters for renewable energy systems. Table 6: Detailed analysis of various categories of power point tracking controllers at different operating temperature conditions of the fuel stack.
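The controller summarized in the conclusion above selects its fuzzy membership functions with a grey wolf optimizer (GWAFC). A minimal, generic grey wolf optimization loop is sketched below; the objective is a placeholder sphere function, since the article's actual MPPT fitness measure is not reproduced here, and the population size, iteration count, and bounds are illustrative assumptions.

```python
# Generic grey wolf optimizer (GWO) sketch of the kind used to tune the fuzzy
# membership-function parameters. Not the authors' implementation.

import numpy as np

def gwo(objective, dim, bounds, n_wolves=12, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))

    def best_three(pop):
        order = np.argsort([objective(w) for w in pop])
        return pop[order[0]].copy(), pop[order[1]].copy(), pop[order[2]].copy()

    alpha, beta, delta = best_three(wolves)
    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                 # decreases linearly from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0  # average of the three pulls
            wolves[i] = np.clip(new_pos, lo, hi)
        alpha, beta, delta = best_three(wolves)
    return alpha, objective(alpha)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))       # placeholder fitness
    best, fbest = gwo(sphere, dim=4, bounds=(-5.0, 5.0))
    print("best parameters:", np.round(best, 4), " fitness:", fbest)
```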
v3-fos-license
2020-01-23T16:07:18.104Z
2020-01-23T00:00:00.000
210862556
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-020-57967-y.pdf", "pdf_hash": "718681e18732b8f42117a21b5eab6d8f809bc290", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2769", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "718681e18732b8f42117a21b5eab6d8f809bc290", "year": 2020 }
pes2o/s2orc
Variation in chemokines plasma concentrations in primary care depressed patients associated with Internet-based cognitive-behavioral therapy How the presence of inflammation has repercussions for brain function is a topic of active research into depression. Signals released from immune system-related cells, including chemokines, might be indicative of active depression and can, hypothetically, serve as biomarkers of response to interventions, both pharmacological and psychological. The objective of this study is to analyze the peripheral plasma concentrations of CXCL12, CCL11, CX3CL1 and CCL2 in a cohort of depressed primary-care patients, as well as their evolution after an internet-based cognitive-behavioral intervention. The concentrations of those chemokines were measured in 66 primary-care patients with mild and moderate depression, before and after the intervention, as well as 60 controls, using multiplex immunoassays. Concentrations of CXCL12 and CCL2 were significantly higher in the clinical sample in comparison with controls. A stable multivariate discriminative model between both groups was found. Concentrations of all chemokines decreased after the internet-based psychological intervention. These findings support the implication of chemokines in depression, even in a sample of patients with mild and moderate severity. Furthermore, they demonstrate the need for further multidisciplinary research that confirms how biomarkers such as plasma chemokines can serve as a marker for depression and are sensitive to non-pharmacological interventions. many patients remain resistant to treatment 5,6 . In response, current advances in neuroscience have identified additional mechanisms of the disorder's underlying pathophysiology, leading to alternative or additional pathogenic hypotheses and therapeutic interventions 5,7 . One of these new actors in the neurobiology of depression has emerged from studies on neuroinflammation, with the relationship between immunological-inflammatory responses and depressive symptoms becoming one of the most studied areas in current research into depression 8,9 . In this hypothesis, the activation of certain inflammatory pathways and/or immunomodulatory signaling networks is associated with the pathophysiology of depression, at least in a subset of patients 8 . Among all the proposed underlying mechanisms that associate depression with inflammation, the dysregulation of cytokines is one of the more extensively studied [10][11][12] . For instance, cytokines have been suggested as predictors of the antidepressant effect of exercise 13 , and meta-analysis has shown that certain types of antidepressants reduce pro-inflammatory factors such as C-reactive protein, tumor necrosis factor-α and interleukin −1β, showing some interaction between antidepressant medication, depression and inflammation 14 . Another review showed that the effects of antidepressant drugs have been consistently linked to decreased inflammation 15 . However, despite the extensive research into cytokines in this area, a related family of immune system-derived signaling proteins, the chemokines, has been relatively neglected 7 . Chemokines have been traditionally involved in the chemotactic function, attracting and modulating the function of mononuclear phagocytic cells to inflammatory focus, including the central nervous system (CNS), where they can attract monocytes to cortical areas related to psychiatric disorders, including depression and bipolar disorders [16][17][18] . 
Current research has revealed a wider function of chemokines beyond the classical chemoattractant role. Chemokines in the brain may modulate microglial cells, the CNS-resident macrophages cells. These phagocytic cells colonize the brain early in development, playing an important modulatory role in synaptic plasticity processes 19 . Chemokines are important modulators of microglial function, resulting in modulation of important plasticity events that include synaptic pruning and remodeling. In addition, chemokines participate in neurotransmission, neurogenesis and neurodevelopment. These actions might have a clear influence on the pathogenesis and clinical evolution of neurological and psychiatric disorders. According to some studies, chemokines are implicated in the pathophysiology of depression through neurotransmitter-like and neuromodulatory effects, or the regulation of axon sprouting and neurogenesis 7,20 . As an example, the chemokine CXCL12 has been demonstrated to modulate neuronal control of serotonin dorsal raphe neurons involved in depression 21 . Recent research into the involvement of some chemokines in depression has reported alterations in circulating chemokines in clinical human samples 22 . A number of cross-sectional studies have found links between depression and cytokines such as CCL2, IL8 and CCL11 23 . However, a recent review 20 points out that "chemokines with great mechanistic relevance including CXCL12 and CX3CL1 have been rarely reported in the existing human literature and should be included in future clinical studies. " Also, the results are relatively mixed and prospective studies are scarce 20 . Studies with a prospective methodology, in comparison with cross-sectional designs, provide a stricter control of potentially confounding variables, given that each subject acts as its own control. Aims of The Study In previous studies, our research group has found that some of these chemokines -specifically, CCL2, CCL11, CX3CL1, and CXCL12 -are related with depressive symptoms in patients with cocaine [24][25][26] and alcohol 27 use disorders. Following those findings, this study has two main aims: Firstly, the relationship between depression and circulating plasma concentrations of CCL2, CCL11, CX3CL1, and CXCL12 will be tested. For that purpose, the differences in the plasma concentration of those chemokines between two samples -one of primary-care patients diagnosed with depression, and another of healthy non-depressed controls -will be tested. The differential influence of sex and antidepressant medication will also be tested. Secondly, the variations of the concentrations in plasma of these molecules in those patients before and after an internet-based cognitive-behavioral therapy (iCBT) intervention will be tested. There are two reasons for studying the concentrations of chemokines in this particular type of patient: Firstly, in general, primary-care patients often suffer less severe depression and have fewer comorbidities than those who attend specialized units. Also, it is easier to find relatively naïve patients in terms of antidepressant prescription 28 -although, in the present study, we will recruit both antidepressant-naïve and ISSR-treated patients. These features are particularly interesting for directly relating the concentrations of chemokines to depressive symptoms, without the interference of potentially confounding variables. 
Secondly, and related to the previous argument, the fact that the patients follow a psychological intervention would improve the insight into the mechanisms of influence of chemokines in depression, independently of antidepressant medication. We hypothesized that the plasma concentrations of CCL2, CCL11, CX3CL1, and CXCL12 would be higher in the sample of depressed patients in comparison with the sample of controls. We also hypothesized that the plasma concentrations of CCL2, CCL11, CX3CL1, and CXCL12 in the sample of depressed patients would be reduced after the iCBT intervention. Material and Methods Participants and recruitment. The clinical sample (n = 66) comprised patients with a low mood-related complaint. General practitioners in primary-care settings conducted the recruitment. The patients were asked to participate in two parallel studies. The first evaluated the efficacy of a 3-month iCBT program for depressed primary-care patients, and the second studied potential biomarkers of depression. Patients that agreed to participate in both studies were included in this one. The inclusion criteria were: (a) between 18-65 years old, (b) depressive symptoms lasted at least two months, (c) major depressive disorder diagnosis, and (d) mild or moderate depression severity scores (mild: 14-19; moderate: 20-28). The exclusion criteria were the following: (a) severe mental disorder or a substance-use disorder diagnoses, (b) currently pregnant or breastfeeding, or (c) chronic infectious www.nature.com/scientificreports www.nature.com/scientificreports/ or inflammatory diseases were present. The patients were evaluated at baseline and post-treatment using a battery of questionnaires, and they were diagnosed using a structured interview (see below) by a trained clinical psychologist. Patients with prescribed antidepressant medication were allowed to take part in the studies, but this medication had to have been prescribed at least four weeks before the beginning of the studies and had to remain stable during that period. If the antidepressant medication was changed or the dosage increased, patients were excluded from both studies. Nineteen patients were on prescribed antidepressants (five citalopram, five sertraline, four fluoxetine, two trazodone, two paroxetine, and one duloxetine). The control group was formed by 60 volunteers recruited by the researchers from the hospital staff. The participants were interviewed using a screening tool to rule out the presence of psychopathological symptoms, addictions, as well as any type of medication in the last month. They were informed about the characteristics of the study and were asked to take part on a voluntary basis if they fulfilled the criteria. See Table 1 for sociodemographic and clinical data at baseline. Ethics statement. All patients and participants of the control group signed written informed consent. The current study and its recruitment protocols were approved by the Regional Research and Ethics Committee of the Hospital Regional University of Málaga. Therefore, this study was conducted in accordance with the "Ethical Principles for Medical Research Involving Human Subjects" adopted in the Declaration of Helsinki by the World Medical Association. Intervention. The psychological intervention used in this study was the internet-delivered self-help program "Smiling is fun. 
" This program has been developed 29 for the treatment of depressed primary-care patients among the Spanish population, and its efficacy 30 , cost-effectiveness and cost-utility 31 have been established. "Smiling is fun" consists of 10 web-delivered sequential modules with different CBT-based techniques for coping with mild and moderate depression. The modules are as follows: (1) medication management, (2) sleep hygiene, (3) motivation for change, (4) understanding emotional problems, (5) learning to move on, (6) learning to be flexible, (7) learning to enjoy, (8) learning to live, (9) living and learning, and (10) from now on, what else? The duration of the intervention was 3 months, and the participants were assessed at baseline and post-treatment (see Fig. 1). The content of the program can be found elsewhere 32 . Measures. Beck Depression Inventory-II (BDI-II) . This questionnaire is formed by 21 items that assess the severity of depression symptoms in a multiple-choice format 33 . Different studies have shown that the BDI-II has excellent internal consistency, validity, and test-retest reliability 34,35 . Structured Clinical Interview for DSM-IV Axis I Disorders-Clinician Version (SCID-CV). The SCID is a semi-structured interview that assesses Axis I disorders from the fourth version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) 36,37 . This interview is the most frequently used instrument for the Data analytics plan. The Shapiro-Wilk test was used to determine the normality of the scores in all the variables. The main variables that were not distributed normally were converted using a base-10 logarithmic transformation. For data expressed by means and standard deviations (SD), either Student's t-tests or Mann-Whitney U tests were used to test differences between groups, depending on the normality of the scores. For data expressed in percentages, chi-square tests were used. Despite multiple comparisons being performed (differences in four chemokines), we decided not to use Bonferroni correction, since the study was exploratory. Receiver operating characteristic (ROC) analyses calculating the area under the curve (AUC) were used to identify predictors for differentiating between groups and to evaluate the predictive power of the logistic models. Binary logistic regression models were created that included the selected chemokines (predictors), and the goodness of fit of the models was tested with the Hosmer-Lemeshow test. A backward stepwise approach was used to restrict the model to the most predictive predictors. The pre-post analyses were carried out twice, one with the whole sample and an additional one with those patients who improved (sensitivity analysis). The criterion used for improvement was that the patient stepped down from moderate to mild or no depression, or from mild to no depression. P-values lower than 0.05 were considered statistically significant. Statistical analyses were conducted using IBM SPSS Statistics 22.0 (IBM, Armonk, NY, USA) and GraphPad Prism 6.01 (GraphPad Software, San Diego, CA, USA). Results Comparison between depressed patients and controls. Chemokine concentrations were compared using t-tests after a base-10 logarithmic transformation (Log 10 ), given that their distributions were not normal. Statistically significant differences in CXCL12 and CCL2 concentrations were found between depressed patients and controls ( Table 2). 
Given that there were age and sex differences between samples, one ANCOVA for each molecule was carried out using those plus body mass index (BMI) as covariates. Differences in CXCL12 and CCL2 remained, and the concentration of CX3CL1 was significantly higher in controls. In order to study the influence of sex, independent t-test analyses were carried out, splitting both samples by sex. CXCL12 was higher in depressed men and women in comparison with controls, and CCL2 was higher only in depressed men in comparison with controls (Tables S1 and S2). Finally, a comparison between patients medicated with antidepressants, non-medicated patients and controls was carried out using an ANOVA test, showing differences in the concentrations of CXCL12 (Table 3) between non-medicated patients and controls. www.nature.com/scientificreports www.nature.com/scientificreports/ Multivariate discriminative model between depressive and control groups. A model for the discrimination between depressed patients and controls was tested using binomial logistic regression analysis. The concentrations of all chemokines were used as predictors. Age, sex, and BMI were included in the first step. After five iterations, the model showed good calibration with a Hosmer-Lemeshow test (χ 2 = 9.212; p = 0.325). A ROC curve was drawn, and it showed a statistically significant AUC (AUC = 0.820; p < 0.001; Fig. 2a), with a cut-off score of 0.462 (sensitivity = 78.95%; specificity = 78.95%). The means of the predictive probabilities of the model between depressed patients and controls (Fig. 2b) were, respectively, 0.655 (SD = 0.030) and 0.346 (SD = 0.031). The differences between groups were statistically significant (t = 7.138; p < 0.001). The variation in chemokine concentrations was tested using a repeated-measures t-test analysis for each molecule. Statistically significant reductions between baseline and post-intervention measures were found in all molecules (Table 4). Sensitivity analyses with patients who improved (n = 30) showed the same results (Table 4). Statistically significant reductions in all chemokines were found in patients with and without antidepressant medication. Finally, splitting the sample by sex, the same reductions were found in women. In men, it was only in CX3CL1 that the decrease was not statistically significant (p = 0.057). Discussion The aim of this study was to test the relationship between depression and plasma concentrations of CCL2, CCL11, CX3CL1 and CXCL12 in mildly and moderately depressed primary-care patients, as well as the potential influence of an effective iCBT intervention on those molecules. To our knowledge, this is the first study that checks the variation in chemokine concentrations after an effective psychological intervention. Two remarkable aspects can be highlighted from the present study with regard to the biological significance of chemokines in depression. First is the demonstration that CBT-based interventions can normalize or even reduce the elevation of chemokines found in depressive patients. Because systemic inflammation can affect ascending monoamine transmitters involved in emotions and emotional learning (i.e., reward associated), CBT intervention, by reducing inflammation (including chemokines), has a biological way to improve depression-associated symptomatology 38 . In this regard, both inflammation and chemokines have been demonstrated to affect the activity and release of dopamine and serotonin in humans and experimental animals 21 . 
The second remarkable aspect is that the identification of the biological significance for each of the different chemokines identified might offer the possibility of helping to understand specific aspects of the biology of depression and co-morbid disorders. Further investigation is clearly needed to achieve this goal, although initial results clearly support this research line. For instance, CCL11 has been recently identified as a chemokine linking cocaine use disorders with major depression 39 . The concentrations of CCL2 and CXCL12 were significantly higher in depressed patients, compared to controls. Neither CCL11 nor CX3CL1 concentrations were altered at baseline. These findings on CCL2 supported two recent meta-analyses describing the association of depression with alterations in plasma cytokine/ chemokine profiling 10,23 . Despite the heterogeneity of the results included in those meta-analyses, the elevated concentration of CCL2 in depressed patients seems to be quite established. The results for CXCL12 also replicate a previous study that evaluates the plasma concentrations of CXCL12 of depressed patients and controls 40 . This chemokine has a tight relationship with serotonin transmission. As well as CXCL12 regulating serotonergic activity of dorsal raphe nuclei serotonergic neurons 21 , its action on peripheral T-lymphocytes is modulated by serotonin 41 . With respect to CX3CL1, contrary to what was hypothesized, its concentration was significantly higher in controls after controlling age, sex and BMI. However, this result is not entirely surprising, given that studies linking this molecule with depression are scarce 20,42 . In fact, CX3CL1 differs from other chemokines in a number of ways. For instance, in contrast to other chemokines, CX3CL1 has a membrane-bound form 7 , which is essential in neuron-glia physical interactions. In addition, CX3CL1 is present in neurons and is involved in multiple actions in the CNS, primarily in the microglial regulation state, adjusting synaptic transmission 7 . The results of splitting the clinical sample into medicated and non-medicated patients showed differences in CXCL12 between non-medicated patients and controls, and a trend between medicated patients and controls in CCL2. In both cases, the splitting of the clinical samples produced a loss in statistical power, which might have led to non-significant results. Further studies with larger samples could clarify whether there are differences in chemokine concentrations between medicated and non-medicated patients. Interestingly, despite not all chemokine concentrations being statistically different between depressed patients and controls, the logistic regression analyses using all chemokines resulted in a model with good discrimination capacity. We are still far from the use of any molecule as a reliable biomarker, but these results support the suggestion of a distinctive pattern of chemokine concentrations between depressed patients and controls. It is important to highlight that the differences found in our study are not between controls and patients with severe depression, but between patients with mild and moderate depression, and most of them without antidepressant medication and without chronic infectious or inflammatory diseases. This is a crucial issue, in our opinion, given that previous studies included mostly patients suffering severe depression and taking antidepressants 8,10 . 
The fact that we www.nature.com/scientificreports www.nature.com/scientificreports/ found differences between controls and mostly non-medicated patients with less severe depression supports the potential mechanistic role of chemokines in depression. As expected, depression scores decreased significantly after the iCBT intervention, similar to the findings in the randomized clinical trial conducted with this intervention 30 . In addition, chemokine concentrations significantly decreased after the iCBT intervention, even showing large effect sizes. These results were also significant and showed large effect sizes with the patients who improved after the intervention, in patients with or without antidepressant medications and in women. In men, CX3CL1 showed a trend, which could be attributed to a decrease in statistical power due to the reduction of the sample size (n = 11). These results support the suggestion that changes in depressive symptoms are associated with changes in chemokine concentrations, but also that those changes are not necessarily related to a pharmacological intervention. This is not the first study that finds neuroinflammatory changes after a psychological intervention, for instance, in cytokines 43,44 . These results also align with research suggesting that certain biomarkers might help to stratify patients that supposedly fall into the same diagnostic categories 45 . Certain initiatives, such as research domain criteria 46 , support the idea that biological markers might be used to identify treatments that could be particularly helpful for patients with specific characteristics. This study showed initial evidence supporting the existence of different chemokine concentrations in depressed patients depending on variables such as sex, or if they are taking antidepressant medication. It is well known that antidepressant medication changes inflammatory markers 15 , as does exercise 47 , psychotherapy 43 and CBT, in comparison with other psychological interventions 44 . Further research should address if these changes are found in all empirically based psychotherapies, in patients with different degrees of severity or in other disorders. This should encourage multidisciplinary research to improve our knowledge of depression and the treatments that we can offer to those patients. These results are in line with those found when comparing depressed patients and controls for CXCL12 and CCL2, but not for CCL11 and CX3CL1. The concentration of CCL11 was non-significantly higher in depressed patients in comparison with controls. However, CX3CL1 was non-significantly lower in depressed patients in comparison with controls, and reached statistical significance after controlling for age, sex and BMI. This means that the discrepancy in CCL11 could merely be a matter of statistical power, but the results for CX3CL1 seem to be in the opposite direction. This difference could be due to the particularities of CX3CL1 listed above and leads to very relevant questions that need further exploration with larger samples and, maybe, a different methodology (see below). Nonetheless, these discrepancies might be attributed to differences between the samples. As mentioned above, more prospective studies are needed 20 , because they provide stricter control of potentially confounding variables, and consequently provide sounder results. The main limitation of this study is the absence of a second measure of chemokine concentrations in the control group. 
However, given that the recruitment criteria assured that the referred patients had consulted for a mood complaint, that antidepressant medication was stable during the study, and that patients with somatic illnesses were excluded, we believe that the results are strong and offer compelling evidence supporting the association between chemokines and depression. Another limitation relates to the chemokines selected, which constitute a small selection of the large class of chemokine signals. Further studies are needed to demonstrate that other chemokines, such as CCL4, CXCl4, CXCL7 and CXCL8, proposed as being altered in depression are also sensitive to CBT interventions. Another limitation is that the clinical sample is too small for stratification, so potentially important secondary analyses must be replicated in further, larger studies. For instance, the differences in the concentration of CXCL12 between non-medicated depressed patients and controls should be replicated using larger samples. In addition, other potentially relevant analysis could be carried out with bigger samples, analyzing the influence of different antidepressants, depression severity or the predominant types of symptoms (somatic, cognitive or behavioral). Another limitation might be the features of the clinical sample -that is, primary-care patients with mild or moderate depression. The results cannot be directly extended to other populations and should be replicated in samples with other characteristics, as with any other scientific finding. In conclusion, we believe that to find biological correlates of depressive symptoms after a psychological intervention in a sample with non-severely depressed patients and controlling the influence of medication should encourage further, larger studies, which, hopefully, might confirm these findings and improve our knowledge of depression and the way it is treated.
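The data analytics plan described in the Methods of this record combines normality testing, a base-10 logarithmic transform, between-group tests, binary logistic regression with ROC analysis, and repeated-measures comparisons. The sketch below illustrates that workflow with SciPy and scikit-learn on simulated data; it is not the authors' code, the chemokine concentrations and ages are synthetic, and the backward stepwise selection and Hosmer-Lemeshow calibration test are omitted.

```python
# Minimal illustration of the reported analysis pipeline on simulated data.

import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated plasma concentrations (arbitrary units) for 60 controls, 66 patients.
controls = rng.lognormal(mean=7.0, sigma=0.4, size=60)
patients = rng.lognormal(mean=7.3, sigma=0.4, size=66)

# 1) Normality check, then base-10 log transform as in the paper.
w_stat, p_shapiro = stats.shapiro(controls)
log_c, log_p = np.log10(controls), np.log10(patients)

# 2) Between-group comparison on the transformed values.
t_stat, p_val = stats.ttest_ind(log_p, log_c)
print(f"Shapiro p = {p_shapiro:.3f};  t = {t_stat:.2f}, p = {p_val:.4f}")

# 3) Logistic model (chemokine + age as a stand-in covariate) and ROC AUC.
age_c = rng.normal(40, 10, size=60)
age_p = rng.normal(45, 10, size=66)
X = np.column_stack([np.r_[log_c, log_p], np.r_[age_c, age_p]])
y = np.r_[np.zeros(60), np.ones(66)]
model = LogisticRegression().fit(X, y)
print(f"AUC = {roc_auc_score(y, model.predict_proba(X)[:, 1]):.3f}")

# 4) Pre/post comparison within patients (repeated-measures t-test).
post = log_p - rng.normal(0.05, 0.05, size=66)   # simulated post-iCBT values
print(stats.ttest_rel(log_p, post))
```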
v3-fos-license
2019-11-05T06:48:44.657Z
2019-11-05T00:00:00.000
207888011
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://chiromt.biomedcentral.com/track/pdf/10.1186/s12998-019-0279-2", "pdf_hash": "aa8df37f3fb8f2f34ef9700ed27d0ce3289d9bf7", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2770", "s2fieldsofstudy": [ "Medicine" ], "sha1": "aa8df37f3fb8f2f34ef9700ed27d0ce3289d9bf7", "year": 2019 }
pes2o/s2orc
Outcomes and outcomes measurements used in intervention studies of pelvic girdle pain and lumbopelvic pain: a systematic review Background Pelvic girdle pain is a common problem during pregnancy and postpartum with significant personal and societal impact and costs. Studies examining the effectiveness of interventions for pelvic girdle pain measure different outcomes, making it difficult to pool data in meta-analysis in a meaningful and interpretable way to increase the certainty of effect measures. A consensus-based core outcome set for pelvic girdle pain can address this issue. As a first step in developing a core outcome set, it is essential to systematically examine the outcomes measured in existing studies. Objective The objective of this systematic review was to identify, examine and compare what outcomes are measured and reported, and how outcomes are measured, in intervention studies and systematic reviews of interventions for pelvic girdle pain and for lumbopelvic pain (which includes pelvic girdle pain). Methods We searched PubMed, Cochrane Library, PEDro and Embase from inception to the 11th May 2018. Two reviewers independently selected studies by title/abstract and by full text screening. Disagreement was resolved through discussion. Outcomes reported and their outcome measurement instruments were extracted and recorded by two reviewers independently. We assessed the quality of reporting with two independent reviewers. The outcomes were grouped into core domains using the OMERACT filter 2.0 framework. Results A total of 107 studies were included, including 33 studies on pelvic girdle pain and 74 studies on lumbopelvic pain. Forty-six outcomes were reported across all studies, with the highest amount (26/46) in the ‘life impact’ domain. ‘Pain’ was the most commonly reported outcome in both pelvic girdle pain and lumbopelvic pain studies. Studies used different instruments to measure the same outcomes, particularly for the outcomes pain, function, disability and quality of life. Conclusions A wide variety of outcomes and outcome measurements are used in studies on pelvic girdle pain and lumbopelvic pain. The findings of this review will be included in a Delphi survey to reach consensus on a pelvic girdle pain - core outcome set. This core outcome set will allow for more effective comparison between future studies on pelvic girdle pain, allowing for more effective translation of findings to clinical practice. Supplementary information Supplementary information accompanies this paper at 10.1186/s12998-019-0279-2. Background Pelvic Girdle Pain (PGP) has been defined as "pain between the posterior iliac crest and the gluteal fold, particularly in the vicinity of the sacroiliac joints, and pain may radiate to the posterior thigh and can also occur in conjunction with/ or separately in the symphysis" [1] (pp797). In the past, it has sometimes been considered a subgroup of low back pain (LBP); however, PGP includes also pain at the pubic symphysis and is therefore considered a different entity. The term lumbopelvic pain (LPP) is a broader term that has been used to describe LBP and/or PGP without differentiation between the two groups [2]. Pelvic Girdle Pain is a common complaint during pregnancy, affecting 23 to 65% of women depending on how it is measured and defined [3,4]. Although many women recover after the birth, 17% have continuing symptoms 3 months postpartum [2] and 8.5% have not recovered 2 years postpartum [5]. 
In Sweden, in a cohort of 371 women with PGP, 10% of women still had symptoms 11 years after the birth [6]. In another Swedish cohort, 40.3% had long term pain in the low back or pelvic girdle area 12 years postpartum [7]. Additionally, PGP is one of the leading causes of sick leave during pregnancy [8][9][10], resulting in large economic costs to families and society. Studies examining the effectiveness of interventions for PGP measure different outcomes, making it difficult and sometimes impossible to pool data in meta-analysis to increase the certainty of effect measures [11,12]. To address this issue, an international consensus-based Core Outcome Set (COS) for PGP is being developed (registration: http://www.comet-initiative.org/studies/details/958) [13]. The systematic review presented here forms the first key part of the PGP-COS (Pelvic Girdle Pain - Core Outcome Set) study and provides a structured overview of the outcomes and outcome measurements that are used across PGP as well as LPP (since this includes PGP) intervention studies and systematic reviews. It will feed into the larger PGP-COS study by providing a preliminary list of outcomes that will be included into an online Delphi survey and face-to-face consensus meeting to identify a final COS for PGP. The objective of this systematic review was to identify and examine what outcomes are measured and reported, and how outcomes are measured, in intervention studies and systematic reviews of interventions for PGP. Methods The protocol for this systematic review was published as part of the PGP-COS study protocol [13]. Criteria for considering papers for inclusion in the systematic review are outlined in Table 1. A second objective (to compare outcomes measured in intervention studies and systematic reviews on PGP to outcomes measured in studies on LPP) was added post hoc, since many studies that we identified in preliminary searches did not differentiate between LBP or PGP, and it was considered important to compare outcomes measured in these studies since LPP includes PGP. We analysed and have presented the results by the subgroups PGP and LPP.
Table 1 Inclusion criteria
Population: Women with PGP during or after pregnancy. PGP is defined as pain between the posterior iliac crest and the inferior gluteal fold, particularly in the vicinity of the sacroiliac joints, that may radiate in the posterior thigh and can occur in conjunction with or separately in the symphysis pubis [1]. Studies that examined a population of women with PGP resulting from specific pathologies (e.g. infection, spondyloarthropathies and trauma) were excluded.
Intervention: Any intervention (pharmacological or non-pharmacological) aimed to treat/prevent PGP.
Comparator: Any comparator intervention or control.
Outcome: Any outcome measured to assess/monitor PGP.
Study design: Intervention studies (randomised or non-randomised), systematic reviews of interventions.
Search methods & study selection The following databases were searched on the 11th May 2018 (from inception): PubMed, the Cochrane Library, PEDro and Embase. Details of search terms used for each database can be found in Additional file 1. No language or time filters were applied. We screened reference lists of included studies for further relevant studies. Citations were exported to Endnote and duplicates were removed.
Two review authors (FW, MO) reviewed each citation independently against the inclusion criteria in two stages: (a) title and abstract screening and (b) full text screening, using Covidence software [14]. Disagreement was resolved through discussion. Data collection and synthesis All outcomes (and their verbatim definitions) examined in the included studies were extracted by two reviewers (FW, MO) independently and their corresponding outcome measurement instruments/methods, where reported, were also recorded. The quality of outcome reporting was assessed using the six questions proposed by Harmen et al. [15] and this was conducted by two independent reviewers (FW, MO). The outcomes were then grouped into core outcome domains using the OMERACT (Outcome Measures in Rheumatology) filter 2.0 framework: (a) life impact; (b) resource use/economic impact; (c) pathophysiological manifestations and (d) death [16]. This framework aims to provide a structure for measuring outcomes and developing core outcome sets. Within the OMERACT framework 'adverse events' should also be flagged alongside the core domains. We therefore grouped adverse events into a separate domain [16]. The findings are synthesised and reported by these core domains, for PGP and LPP separately, for comparison. We have reported this systematic review according to the PRISMA guideline [17]. Outcomes and outcome measurements A total of 46 outcomes were identified and categorised into core domains using the OMERACT filter 2.0 framework: 'life impact', 'resource use/economic impact', 'pathophysiological manifestations' and 'death'. No outcomes were identified in the core domain 'death', but 'adverse events' outcomes were identified. Outcomes and their corresponding outcome measurements are presented separately for studies that focused on PGP or focused on LPP in Tables 4, 5, 6 and 7. Of the 46 outcomes identified, 26 were in the life impact core domain (Table 4), five were in the resource-use/economic impact domain (Table 5), 11 were in the pathophysiological domain (Table 6), and four outcomes were classified in the adverse events domain (Table 7). The differences in the number of outcomes reported in studies on PGP and studies on LPP by core domain are outlined in Table 8. Notably, psychological outcomes and economic outcomes were more commonly measured in LPP studies compared to PGP studies. A further comparison of the different outcomes reported in each domain between PGP and LPP studies is outlined in Additional file 4.
Table 7 Outcomes and outcome measurements identified in the 'Adverse events' domain for PGP studies (n = 33) and LPP studies (n = 74) respectively. Reported adverse events outcomes and their measurement methods include: adverse events (not specified) - patient questionnaire [115], case reports by physio [86], identified by trialist [12], questionnaire [85,104], not specified [56,59,70,71,93,95]; post-op complications - not specified [117]; fetal outcome - Apgar score, birth weight, perinatal loss [84]; safety of women and children - not specified [114].
Discussion A large number of primary intervention studies (n = 76) and systematic reviews (n = 31) were identified. A total of 46 outcomes were measured across all studies. The majority of outcomes related to the 'life impact' core domain of the OMERACT framework. This would be expected considering the nature and main symptoms of PGP and LPP. Within the life impact core domain, pain intensity was the most commonly reported outcome in both PGP and LPP studies, followed by the outcomes function and disability.
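The data synthesis step described above maps each extracted outcome onto an OMERACT filter 2.0 core domain and then counts outcomes per domain. A schematic version of that bookkeeping is sketched below; the outcome list is a small illustrative subset and the domain assignments are examples only, not the review's full 46-outcome dataset.

```python
# Schematic grouping of extracted outcomes into OMERACT filter 2.0 domains.
from collections import Counter

outcome_to_domain = {
    "pain intensity": "life impact",
    "function": "life impact",
    "disability": "life impact",
    "quality of life": "life impact",
    "sick leave": "resource use / economic impact",
    "healthcare use": "resource use / economic impact",        # illustrative
    "muscle strength": "pathophysiological manifestations",    # illustrative
    "pelvic joint mobility": "pathophysiological manifestations",  # illustrative
    "adverse events (not specified)": "adverse events",
    "fetal outcome": "adverse events",
}

domain_counts = Counter(outcome_to_domain.values())
for domain, n in domain_counts.most_common():
    print(f"{domain}: {n} outcome(s)")
```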
Fifteen (20%) studies on LPP included psychological outcomes versus only three (9%) PGP studies. This is likely because LPP includes LBP, which has had a strong psychosocial focus within the literature the past few decades, including on aspects such as fear avoidance and catastrophising. It might be that PGP is often perceived as a transient condition related to pregnancy and researchers therefore assess fewer psychosocial factors that are involved in developing chronicity. However, not all women recover and PGP can persist postpartum [2,[5][6][7]118]. Moreover, PGP has been associated with psychological factors including emotional distress [119], depression [118,120] and anxiety [118]. Only 14 studies/reviews (13%) examined any adverse events. This is contrary to current recommendations to always assess adverse events for any intervention study or systematic review [121,122]. A range of outcome measurements were used across studies to measure certain outcomes. For example, pain intensity alone was measured using 10 different outcome measurement instruments, and function was examined using 13 different tools across the studies. This emphasises not only the need for a COS but also for consensus on how to measure the identified COS. This systematic review will contribute to a list of initial outcomes to be included in a multistakeholder, international Delphi survey to reach consensus on a PGP-COS. Subsequently, the next part of the PGP-COS study will determine 'how' best to measure the developed COS [13]. This systematic review also showed that the included intervention studies/reviews often use different terminology to describe the same outcomes. For example, when examining the measurement tools for the outcomes 'function' and 'disability', the same tools are frequently used. While some studies use the term 'function' and others 'disability', most studies do not provide a clear definition of the terms. Another example of where there is clearly inconsistency in terminology and a lack of definitions in original manuscripts is for the outcomes 'quality of life' and 'health status'. Again, the same measurement instruments tend to be used and terms seem to be used interchangeably across different studies. This observed inconsistency strengthens the rationale for the development of an agreed PGP-COS. Chiarotto et al. [123] published a COS for non-specific LBP in 2015 and, while there was some overlap in findings, the list of outcomes they identified from the LBP literature differed significantly from our findings of the outcomes measured in the PGP/LPP literature. They identified the following outcomes in LBP studies that were not identified in our review of PGP/LPP studies: death, cognitive functioning, social functioning, sexual functioning, satisfaction with social role and activities, pain quality, independence (Life impact); informal care, societal services, legal services (Resource-use/ economic impact); muscle tone, proprioception, spinal control, and physical endurance (Pathophysiological manifestations). Outcomes that we identified in this review of PGP/LPP studies but that were not identified in the review of the outcomes measured in the LBP literature [123] were: Self-efficacy, confidence, patient expectations of treatment (Life impact); and anthropometric measures (weight/height), pregnancy and maternal outcomes, surgical outcomes (Physiological manifestations). Some of the observed differences could be put down to differences between PGP and LBP. 
However, differences in outcomes seem largely arbitrary instead of relating to the distinguishing features of PGP and LBP. Similarly, when comparing studies examining PGP only with studies examining LPP in this systematic review, the reason for the observed discrepancies in the outcomes chosen by studies' authors are mostly unclear. This supports using the outcomes identified in this review only as an initial list for the consensus process to develop a PGP COS, allowing for other outcomes to be added by all stakeholders including patients, clinicians, researchers, service providers and policy makers. Conclusions Studies and systematic reviews examining the effectiveness of interventions for PGP and LPP assess a range of outcomes, predominantly pain intensity and disability/ function, and use a large variety of outcome measurement instruments. Few studies examine adverse events and economic outcomes. Not only do different studies often measure different outcomes, authors also rarely define outcomes and terminology for outcomes varies, making comparison of study findings very difficult. Additional file 1. Search strategy. A detailed outline of the search strategy of this systematic review including the databases searched and exact search terms used. Additional file 2. Charateristics of included studies. A detailed description of the studies/systematic reviews that were included in this systematic review. Additional file 3. Quality of reporting. The results of the assessment of the quality of reporting in the individual studies included in this systematic review. Additional file 4. Comparison of outcomes identified in PGP and LPP studies for each core domain. The outcomes that were identified in studies examining PGP only are compared to the outcomes identified in studies including patients with LPP. This comparison has been presented by core domain. Acknowledgements We would like to thank the steering committee of the PGP-COS study for their input in the protocol of this review. Authors' contributions FW designed the review protocol with input from the PGP-COS study steering committee (acknowledged below). FW conducted the literature search. FW and MO conducted the study selection, quality assessment and data extraction. MO conducted the analysis under supervision of FW. All authors drafted, read and approved the final manuscript. Availability of data and materials Not applicable. Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
v3-fos-license
2018-07-19T14:11:54.000Z
2018-07-19T00:00:00.000
52952420
{ "extfieldsofstudy": [ "Mathematics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://advancesindifferenceequations.springeropen.com/track/pdf/10.1186/s13662-018-1847-9", "pdf_hash": "cfd0574529e17937f205cc7cbd1fdcc9cd69f135", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2771", "s2fieldsofstudy": [ "Mathematics" ], "sha1": "4d42c5b4d348de5593b8cba5a2baecaa5cfa0c98", "year": 2018 }
pes2o/s2orc
On parametric Gevrey asymptotics for initial value problems with infinite order irregular singularity and linear fractional transforms This paper is a continuation of a previous work of the authors where parametric Gevrey asymptotics for singularly perturbed nonlinear PDEs has been studied. Here, the partial differential operators are combined with particular Moebius transforms in the time variable. As a result, the leading term of the main problem needs to be regularized by means of a singularly perturbed infinite order formal irregular operator that allows us to construct a set of genuine solutions in the form of a Laplace transform in time and inverse Fourier transform in space. Furthermore, we obtain Gevrey asymptotic expansions for these solutions of some order $K>1$ in the perturbation parameter. Introduction Within this paper, we focus on a family of nonlinear singularly perturbed equations which combines linear fractional transforms, partial derivatives and differential operators of infinite order of the form
(1) Q(∂_z)u(t, z, ε) = exp(α ε^k t^{k+1} ∂_t) R(∂_z) u(t, z, ε) + P(t, ε, {m_{κ,t,ε}}_{κ∈I}, ∂_t, ∂_z) u(t, z, ε) + Q_1(∂_z)u(t, z, ε) Q_2(∂_z)u(t, z, ε) + f(t, z, ε),
where α, k > 0 are real numbers, Q(X), R(X), Q_1(X), Q_2(X) stand for polynomials with complex coefficients and P(t, ε, {U_κ}_{κ∈I}, V_1, V_2) represents a polynomial in t, V_1, V_2, linear in U_κ, with coefficients that are holomorphic w.r.t ε near the origin in C, where the symbol m_{κ,t,ε} denotes a Moebius operator acting on the time variable through m_{κ,t,ε} u(t, z, ε) = u(t/(1 + κεt), z, ε) for κ belonging to some finite subset I of the positive real numbers R^*_+. The forcing term f(t, z, ε) embodies an analytic function in the vicinity of the origin relatively to (t, ε) and holomorphic w.r.t z on a horizontal strip in C of the form H_β = {z ∈ C / |Im(z)| < β} for some β > 0. This work is a continuation of our previous study [14] where we aimed attention at the next problem
(2) Q(∂_z)∂_t y(t, z, ε) = H(t, ε, ∂_t, ∂_z) y(t, z, ε) + Q_1(∂_z)y(t, z, ε) Q_2(∂_z)y(t, z, ε) + f(t, z, ε)
for given vanishing initial data y(0, z, ε) ≡ 0, where Q_1, Q_2, H stand for polynomials and f(t, z, ε) is built up as above. Under suitable constraints on the components of (2), by means of Laplace and inverse Fourier transforms, we constructed a set of genuine bounded holomorphic solutions y_p(t, z, ε), 0 ≤ p ≤ ς − 1, for some integer ς ≥ 2, defined on domains T × H_β × E_p, for some well selected bounded sector T with vertex at 0 and E = {E_p}_{0≤p≤ς−1} a set of bounded sectors whose union contains a full neighborhood of 0 in C^*. On the sectors E_p, the solutions y_p are shown to share w.r.t ε a common asymptotic expansion ŷ(t, z, ε) = Σ_{n≥0} y_n(t, z) ε^n that represents a formal power series with bounded holomorphic coefficients y_n(t, z) on T × H_β. Furthermore, this asymptotic expansion turns out to be (at most) of Gevrey order 1/k, for some integer k ≥ 1 (see Definition 7 for an explanation of this terminology) that comes out in the highest order term of the differential operator H, which is of irregular type in the sense of [20] and displayed in (3), for some integer δ_D ≥ 2 and a polynomial R_D(X). In the case when the aperture of E_p can be taken slightly larger than π/k, the function ε ↦ y_p(t, z, ε) represents the k-sum of ŷ on E_p as described in Definition 7.
Through the present contribution, our purpose is to carry out a comparable statement namely the existence of sectorial holomorphic solutions and associated asymptotic expansions as tends to 0 with controlled Gevrey bounds. However, the appearance of the nonlocal Moebius operator m κ,t, changes drastically the whole picture in comparison with our previous investigation [14]. Namely, according to our approach, a leading term of finite order δ D ≥ 2 in time as above (3) is insufficient to ensure the construction of actual holomorphic solutions to our initial problem (1). We need to supplant it by an exponential formal differential operator of infinite order w.r.t t, where (t k+1 ∂ t ) (p) represents the p−th iterate of the irregular differential operator t k+1 ∂t. As a result, (1) becomes singularly perturbed of irregular type but of infinite order in time. The reason for the choice of such a new leading term will be put into light later on in the introduction. A similar regularization procedure has been introduced in a different context in the paper [4] in order to obtain entire solutions in space for hydrodynamical PDEs such as the 3D Navier Stokes equations ∂ t v(t, x) + v(t, x) · ∇v(t, x) = −∇p(t, x) − µ∆v(t, x) , ∇ · v(t, x) = 0 for given 2π−periodic initial data v(0, x) = v 0 (x 1 , x 2 , x 3 ) on R 3 , where the usual Laplacian ∆ = 3 j=1 ∂ 2 x j is asked to be replaced by a (pseudo differential) operator exp(λA 1/2 ), where λ > 0 and A stands for the differential operator −∇ 2 , whose Fourier symbol is exp(λ|k|) for k ∈ Z 3 \ {0}. The resulting problem is shown to possess a solution v(t, x) that is analytic w.r.t x in C 3 for all t > 0 whereas the solutions of the initial problem are expected to develop singularities in space. Under appropriate restrictions on the shape of (1) listed in the statement of Theorem 1, we can select 1. a set E of bounded sectors as mentioned above, which forms a so-called good covering in C * (see Definition 5), 2. a bounded sector T with bisecting direction d = 0 3. and a set of directions d p ∈ (− π 2 , π 2 ), 0 ≤ p ≤ ς − 1 organized in a way that the halflines L dp = R + exp( √ −1d p ) avoid the infinite set of zeros of the map τ → Q(im) − exp(αkτ k )R(im) for all m ∈ R, for which we can exhibit a family of bounded holomorphic solutions u p (t, z, ) on the products T × H β × E p . Each solution u p can be expressed as a Laplace transform of some order k and Fourier inverse transform (4) u p (t, z, ) = k (2π) 1 where w dp (u, m, ) stands for a function with (at most) exponential growth of order k on a sector containing L dp w.r.t u, owning exponential decay w.r.t m on R and relying analytically on near 0 (Theorem 1). Moreover, we show that the functions → u p (t, z, ) admit a common asymptotic expansionû(t, z, ) = m≥0 h m (t, z) m on E p that defines a formal power series with bounded holomorphic coefficients on T × H β . Besides, it turns out that this asymptotic expansion is (at most) of Gevrey order 1/k and leads to k−summability on E p 0 provided that one sector E p 0 has opening larger than π/k (Theorem 2). Another substantial contrast between the problems (1) and (2) lies in the fact that the real number k is asked to be less than 1. The situation k = 1 is not covered by the technics developped in this work and is postponed for future inspection. 
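To fix notation for the two objects just described, we record their schematic form; the displayed expressions below are reconstructions, with $\epsilon$ denoting the perturbation parameter, and the kernel and normalization of the integral representation are assumptions consistent with the order-$k$ Laplace transform recalled in Section 2 and with Theorem 1 below. The regularized leading term acts as the formal series of iterates
\[
\exp\!\big(\alpha\epsilon^{k}t^{k+1}\partial_{t}\big)\,u(t,z,\epsilon)\;=\;\sum_{p\geq 0}\frac{(\alpha\epsilon^{k})^{p}}{p!}\,\big(t^{k+1}\partial_{t}\big)^{(p)}u(t,z,\epsilon),
\]
and the solutions announced in (4) are of the Laplace–Fourier form
\[
u_{p}(t,z,\epsilon)\;=\;\frac{k}{(2\pi)^{1/2}}\int_{L_{\gamma_{p}}}\int_{-\infty}^{+\infty}w^{d_{p}}(u,m,\epsilon)\,\exp\!\Big(-\Big(\frac{u}{\epsilon t}\Big)^{k}\Big)\,e^{izm}\,\frac{du}{u}\,dm ,
\]
with $L_{\gamma_{p}}=\mathbb{R}_{+}e^{\sqrt{-1}\gamma_{p}}$ a halfline avoiding the zeros of $\tau\mapsto Q(im)-\exp(\alpha k\tau^{k})R(im)$. Both expressions are used below for a real order $k<1$.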
However, the special case k = 1 has already been explored by the authors for some families of Cauchy problems and gives rise to double scale structures involving 1 and so-called 1 + Gevrey estimates, see [16], [19]. Observe that if one performs the change of variable t = 1/s through the change of function u(t, z, ) = X(s, z, ) then the equation (1) is mapped into a singularly perturbed PDE combined with small shifts T κ, X(s, z, ) = X(s + κ , z, ), for κ ∈ I. This restriction concerning the Gevrey order of formal expansions of the analytic solutions is rather natural in the context of difference equations as observed by B. Braaksma and B. Faber in [5]. Namely, if A(x) stands for an invertible matrix of dimension n ≥ 1 with meromorphic coefficients at ∞ and G(x, y) represents a holomorphic function in 1/x and y near (∞, 0), under suitable assumptions on the formal fundamental matrix Y (x) of the linear equation y(x + 1) = A(x)y(x), any formal solutionŷ(x) ∈ C n [[1/x]] of the nonlinear difference equation can be decomposed as a sum of formal seriesŷ(x) = q h=1ŷ h (x) where eachŷ h (x) turns out to be k h −summable on suitable sectors for some real numbers 0 < k h ≤ 1, for 1 ≤ h ≤ q. In order to construct the family of solutions {u p } 0≤p≤ς−1 mentioned above, we follow an approach that has been successfully applied by B. Faber and M. van der Put, see [7], in the study of formal aspects of differential-difference operators such as the construction of Newton polygons, factorizations and the extraction of formal solutions and consists in considering the translation x → x + κ as a formal differential operator of infinite order through the Taylor expansion at x, see (25). In our framework, the action of the Moebius transform T → T 1+κT is seen as an irregular operator of infinite order that can be formally written in the exponential form If one seeks for genuine solutions in the form (4), then the so-called Borel map w dp (τ, m, ) is asked to solve a related convolution equation (31) that involves infinite order operators exp(−κC k (τ )) where C k (τ ) denotes the convolution map given by (28). It turns out that this operator exp(−κC k (τ )) acts on spaces of analytic functions f (τ ) with (at most) exponential growth of order k, i.e bounded by C exp(ν|τ | k ) for some C, ν > 0 but increases strictly the type ν by a quantity depending on κ, k and ν as shown in Proposition 2, (48). It is worthwhile mentioning that the use of precise bounds for the so-called Wiman special function E α,β (z) = n≥0 z n /Γ(β +αn) for α, β > 0 at infinity is crucial in the proof that the order k is preserved under the action of exp(−κC k (τ )). Notice that this function also played a central role in proving multisummability properties of formal solutions in a perturbation parameter to certain families of nonlinear PDEs as described in our previous work [15]. As a result, the presence of an exponential type term exp(αkτ k ) in front of the equation (31) and therefore the infinite order operator exp(α k t k+1 ∂ t ) as leading term of (1) seems unavoidable to compensate such an exponential growth. We mention that a similar strategy has been carried out by S.Ōuchi in [17] who considered functional equations where p ≥ 1 is an integer, a j ∈ C * and ϕ j (z),f (z) stand for holomorphic functions near z = 0. 
He established the existence of formal power series solutionsû(z) ∈ C[[z]] that are proved to be p−summable in suitable directions by solving an associated convolution equation of infinite order for the Borel transform of order p in analytic functions spaces with (at most) exponential growth of order p on convenient unbounded sectors. More recently, in a work in progress [9], S. Hirose, H. Yamazawa and H. Tahara are extending the above statement to more general functional PDEs such as for analytic coefficients a 1 , a 2 , f near 0 ∈ C 2 for which formal series solutionŝ can be built up that are shown to be multisummable in appropriate multidirections in the sense defined in [2]. In a wider framework, there exists a gigantic literature dealing with infinite order PDEs/ODEs both in mathematics and in theoretical physics. We just quote some recent references somehow related to our research interests. In the paper [1], the authors study formal solutions and their Borel transform of singularly perturbed differential equations of infinite order j≥0 j P j (x, ∂ x )ψ(x, ) = 0 where P j (x, ξ) = k≥0 a j,k (x)ξ k represent entire functions with appropriate growth features. For a nice introduction of the point of view introduced by M. Sato called algebraic microlocal analysis, we refer to [11]. Other important contributions on infinite order ODEs in this context of algebraic microlocal analysis can be singled out such as [12], [13]. The paper is arranged as follows. In Section 2, we remind the reader the definition of Laplace transform for an order k chosen among the positive real numbers and basic formulas for the Fourier inverse transform acting on exponentially flat functions. In Section 3, we display our main problem (11) and describe the strategy used to solve it. In a first step, we restrain our inquiry for the sets of solutions to time rescaled function spaces, see (12). Then, we elect as potential candidates for solutions Laplace transforms of order k and Fourier inverse transforms of Borel maps w with exponential growth on unbounded sectors and exponential decay on the real line. In the last step, we write down the convolution problem (31) which is asked to be solved by the map w. In Section 4, we analyze bounds for linear/nonlinear convolution operators of finite/infinite orders acting on different spaces of analytic functions on sectors. In Section 5, we solve the principal convolution problem (31) within the Banach spaces described in Sections 3 and 4 by means of a fixed point argument. In Section 6, we provide a set of genuine holomorphic solutions (104) to our initial equation (11) by executing backwards the lines of argument described in Section 3. Furthermore, we show that the difference of any two neighboring solutions tends to 0, for in the vicinity of the origin, faster than a function with exponential decay of order k. Finally, in Section 7, we prove the existence of a common asymptotic expansion of Gevrey order 1/k > 1 for the solutions mentioned above leaning on the flatness estimates reached in Section 6, by means of a theorem by Ramis and Sibuya. Laplace, Borel transforms of order k and Fourier inverse maps We recall the definition of Laplace transform of order k as introduced in [14] but here the order k is assumed to be a real number less than 1 and larger than 1/2. If z ∈ C * denotes a non vanishing complex number, we set z k = exp(k log(z)) where log(z) stands for the principal value of the complex logarithm defined as log(z) = log |z| + iarg(z) with −π < arg(z) < π. 
Consider a holomorphic function w : S d,δ → C that withstands the bounds : there exist C > 0 and K > 0 such that for all τ ∈ S d,δ . We define the Laplace transform of w of order k in the direction d as the integral transform where γ depends on T and is chosen in such a way that cos(k(γ − arg(T ))) ≥ δ 1 > 0, for some fixed δ 1 . The function L d k (w)(T ) is well defined, holomorphic and bounded on any sector S d,θ,R 1/k = {T ∈ C * : |T | < R 1/k , |d − arg(T )| < θ/2}, where π k < θ < π k + 2δ and 0 < R < δ 1 /K. We restate the definition of some family of Banach spaces introduced in [14]. Finally, we remind the reader the definition of the inverse Fourier transform acting on the latter Banach spaces and some of its handy formulas relative to derivation and convolution product as stated in [14]. Definition 3 Let f ∈ E (β,µ) with β > 0, µ > 1. The inverse Fourier transform of f is given by for all x ∈ R. The function F −1 (f ) extends to an analytic bounded function on the strips for all given 0 < β < β. a) Define the function m → φ(m) = imf (m) which belongs to the space E (β,µ−1) . Then, the next identity occurs. b) Take g ∈ E (β,µ) and set as the convolution product of f and g. Then, ψ belongs to E (β,µ) and moreover, Outline of the main initial value problem and related auxiliary problems We set k ∈ ( 1 2 , 1) as a real number. Let D ≥ 2 be an integer, α D > 0 be a positive real number and c 12 , c f be complex numbers in C * . For 1 ≤ l ≤ D − 1, we consider complex numbers c l ∈ C * and non negative integers d l , δ l , ∆ l , together with positive real numbers κ l > 0 submitted to the next constraints. We assume that for all 1 ≤ l ≤ D − 2. We also take for granted that We consider a sequence of functions m → F n (m, ), for n ≥ 1 that belong to the Banach space E (β,µ) for some β > 0 and µ > max(deg(Q 1 )+1, deg(Q 2 )+1) and that depend analytically on ∈ D(0, 0 ), where D(0, 0 ) denotes the open disc centered at 0 in C with radius 0 > 0. We assume that there exist constants K 0 , T 0 > 0 such that which represents a convergent series on D(0, T 0 /2) with holomorphic and bounded coefficients on H β for any given width 0 < β < β. For all 1 ≤ l ≤ D − 1, we set the polynomials A l (T, ) = n∈I l A l,n ( )T n where I l are finite subsets of N and A l,n ( ) represent bounded holomorphic functions on the disc D(0, 0 ). We put for all 1 ≤ l ≤ D − 1. By construction, f (t, z, ) (resp. a l (t, )) defines a bounded holomorphic function on D(0, r) × H β × D(0, 0 ) (resp. D(0, r) × D(0, 0 )) for any given 0 < β < β and radii r, 0 > 0 with r 0 ≤ T 0 /2. Let us introduce the next differential operator of infinite order formally defined as where (t k+1 ∂ t ) (p) stands for the p−th iterate of the differential operator t k+1 ∂ t . We consider a family of nonlinear singularly perturbed initial value problems which involves this latter operator of infinite order as leading term and linear fractional transforms for vanishing initial data u(0, z, ) = 0. Within this work, we search for time rescaled solutions of (11) of the form (12) u(t, z, ) = U ( t, z, ) Then, through the change of variable T = t, the expression U (T, z, ) is subjected to solve the next nonlinear singular problem involving fractional transforms for given initial data U (0, z, ) = 0. According to the assumption (7), there exists a real number Besides, with the help of the formula (8.7) from [21] p. 
3630, we can expand the next differential operators Hence, according to (14) together with (15), we can write down the next equation for U (T, z, ), namely We now provide the definition of a modified version of some Banach spaces introduced in the papers [14], [15] that takes into account a ramified variable τ k for k given as above. Definition 4 Let S d be an unbounded sector centered at 0 with bisecting direction d ∈ R. Let ν, β, µ > 0 and ρ > 0 be positive real numbers. Let k ∈ ( 1 2 , 1) defined as above. We set F d 2) The norm Lemma 1 For β, µ given in (9), there exists ν > 0 such that the series define a function that belongs to the space F d (ν,β,µ,k,ρ) for all ∈ D(0, 0 ), for any radius ρ > 0, any sector S d for d ∈ R. Proof By Definition of the norm ||.|| (ν,β,µ,k,ρ) , we get the next upper bounds Due to the classical estimates for any real numbers m 1 ≥ 0, m 2 > 0, together with the Stirling formula (see [3], Appendix B.3) as n tends to +∞, we get two constants A 1 > 0 depending on k, ν and A 2 > 0 depending on k such that for all n ≥ 1. Therefore, if ν 1/k > A 2 /T 0 then we obtain the bounds By construction, according to the very definition of the Gamma function, the function F (T, z, ) can be represented as a Laplace transform of order k in direction d and Fourier inverse transform where the integration path L γ = R + e √ −1γ stands for a halfline with direction γ ∈ R which belongs to the set S d ∪ {0}, whenever T belongs to a sector S d,θ, with bisecting direction d, aperture π k < θ < π k + Ap(S d ) and radius with Ap(S d ) the aperture of S d for some > 0 and z appertains to a strip H β for any 0 < β < β together with ∈ D(0, 0 ). In the next step, we seek for solutions U (T, z, ) of (16) on the same domains as above that can be expressed similarly to F (T, z, ) as integral representations through Laplace transforms of order k and Fourier inverse transform Our goal is the statement of a related problem fulfilled by the expression w(τ, m, ) that is forecast to be solved in the next section among the Banach spaces introduced above in Definition 4. Overall this section, let us assume that the function w(τ, m, ) belongs to the Banach space F d (ν,β,µ,k,ρ) . We first display some formulas related to the action of the differential operators of irregular type and multiplication by monomials. A similar statement has been given in Section 3 of [14] for formal series expansions. Lemma 2 1) The action of the differential operator T k+1 ∂ T on U γ is given by 2) Let m > 0 be a real number. The action of the multiplication by T m on U γ is described through 3) The action of the differential operators Q(∂ z ) and multiplication with the resulting functions Q(∂ z )U γ maps U γ into a Laplace and Fourier transform, Proof Here we present direct analytic proofs which avoids the use of summability arguments through the Watson's lemma. The first point 1) is obtained by a mere derivation under the symbol. We turn to the second point 2). By application of the Fubini theorem we get that On the other hand, by successive path defor- By the very definition of the Gamma function and a path deformation yields As a result, according to the path deformation s = u k , we finally get which implies the identity (23). We aim our attention to the point 3). Again the Fubini theorem yields Therefore, we obtain As a result, and according to the paths deformations s = u k and v = u k , we get at last from which the identity (24) follows. 
2 At the next level, we describe the action of the Moebius transform T → T 1+κ l T on U γ . It needs some preliminaries. We depart as in the work of B. Faber and M. Van der Put [7] which describes the translation x → x + κ l as a differential operator of infinite order through the Taylor expansion. Namely, for any holomorphic function f : U → C defined on an open convex set U ⊂ C containing x and x + κ l , the next Taylor formula holds If one performs the change of variable x = 1/T through the change of function f (x) = U (1/x), one obtains a corresponding formula for U (T ), where (T 2 ∂ T ) (p) represents the p−th iterate of the irregular operator T 2 ∂ T . According to our hypothesis k ∈ (1/2, 1), we can rely on Lemma 2 1)2) for the next expansions As a result, if one denotes C k the operator defined as then the expression U γ ( T 1+κ l T , z, ) can be written as Laplace transform of order k in direction d and Fourier inverse transform where the integrant is formally presented as a series of operators and C (p) k stands for the k−th order iterate of the operator C k described above. By virtue of the identities (22), (23) and (24) presented in Lemma 2 and according to the integral representation for the Moebius map acting on U γ as described above in (29), we are now in position to state the main equation that shall fulfill the expression w(τ, m, ) provided that U γ (T, z, ) solves the equation in prepared form (16), namely Action of convolution operators on analytic and continuous function spaces The principal goal of this section is to present bounds for convolution maps acting on function spaces that are analytic on sectors in C and continuous on R. As in Definition 4, S d denotes an unbounded sector centered at 0 with bisecting direction d in R and D(0, ρ) \ L − stands for a cut disc centered at 0 where L − = (−ρ, 0]. Proposition 1 Let k ∈ ( 1 2 , 1) be a real number. We set γ 2 , γ 3 as real numbers submitted to the next assumption for all τ ∈ S d , all m ∈ R. Assume moreover that for all m ∈ R, the map τ → f (τ, m) extends analytically on the cut disc D(0, ρ) \ L − and for which one can choose a constant C 1 > 0 such that whenever τ ∈ D(0, ρ) \ L − and m ∈ R. We set Then, 1) The map (τ, m) → C k,γ 2 ,γ 3 (f )(τ, m) is a continuous function on S d × R, holomorphic w.r.t τ on S d for which one can sort a constant K 1 > 0 (depending on γ 2 ,σ) such that 2) For all m ∈ R, the function τ → C k,γ 2 ,γ 3 (f )(τ, m) extends analytically on D(0, ρ) \ L − . Furthermore, the inequality Proof We first investigate the global behaviour of the convolution operator C k,γ 2 ,γ 3 w.r.t τ on the unbounded sector S d , namely the point 1). Owing to the assumed bounds (33), we get In the next part of the proof, we need to focus on sharp upper bounds for the function We move onward as in Proposition 1 of [15] but we need to keep track on the constants appearing in the bounds in order to provide accurate estimates regarding the dependence with respect to the constants γ 3 and N . In accordance with the uniform expansion e σh = n≥0 (σh) n /n! on every compact interval [0, x], x ≥ 0, we can write down the expansion According to the Beta integral formula (see Appendix B from [3]), we recall that holds for any real numbers x ≥ 0 and α > 0, β > 0. Therefore, since N + γ 3 ≥ 1 and γ 2 > −1, we can rewrite for all x ≥ 0. 
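The Beta integral identity invoked just above (Appendix B of [3]) is the classical one: for $x\geq 0$ and $\alpha,\beta>0$,
\[
\int_{0}^{x}(x-h)^{\alpha-1}\,h^{\beta-1}\,dh\;=\;\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}\;x^{\alpha+\beta-1}.
\]
It is applied term by term to the series obtained from the expansion of $e^{\sigma h}$, which is what produces the Gamma factors estimated in the next step.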
On the other hand, as a consequence of the Stirling formula Γ(x) ∼ (2π) 1/2 x x e −x x −1/2 as x → +∞, for any given a > 0, there exist two constants K 1.1 , K 1.2 > 0 (depending on a) such that for all x ≥ 1. As a result, we get a constants K 1.2 > 0 (depending on γ 2 ) for which for all n ≥ 0. Hence, we get a constant K 1.3 > 0 (depending on γ 2 ) for all x ≥ 0. A second application of (40), shows the existence of a constant K 1.1 > 0 (depending in γ 2 ) for which 1 (n + 1) γ 2 +1 ≤ Γ(n + 1) holds for all n ≥ 0. Subsequently, we obtain a constant K 1.4 > 0 (depending on γ 2 ) such that for all x ≥ 0. Owing to the asymptotic property at infinity of the Wiman function E α,β (z) = n≥0 z n /Γ(β+ αn), for any α, β > 0 stated in [6] p. 210 we get a constant K 1.5 > 0 (depending on γ 2 ,σ) with for all x ≥ 0. In accordance with this last inequality, by going back to our departing inequality (38), we obtain the expected bounds stated in the inequality (36), namely In a second part of the proof, we study local properties near the origin w.r.t τ . First, we can rewrite C k,γ 2 ,γ 3 by using the parametrization s = τ k u for 0 ≤ u ≤ 1. Namely, holds for all τ ∈ D(0, ρ)\L − whenever m ∈ R. Under the fourth assumption of (32) and from the construction of f (τ, m), the representation (43) induces that for all m ∈ R, τ → C k,γ 2 ,γ 3 (f )(τ, m) extends analytically on D(0, ρ) \ L − . Furthermore, granting to (34), one can deduce the bounds With the help of (39), we deduce that Proposition 2 Let k ∈ ( 1 2 , 1) be a real number. Let (τ, m) → f (τ, m) be a continuous function on S d × R, holomorphic w.r.t τ on S d for which there exist constants C 2 > 0, ν > 0 and µ > 1, β > 0 fulfilling for all τ ∈ S d , all m ∈ R. Take for granted that for all m ∈ R, the map τ → f (τ, m) extends analytically on the cut disc D(0, ρ)\L − suffering the next bounds : there exists a constant C 2 > 0 with Let κ l > 0 be a real number. We consider the operator k denotes the iterate of order p ≥ 0 of the operator C k defined as with the convention that C ). Then, 1) The map (τ, m) → (exp(−κ l C k )f )(τ, m) represents a continuous function on S d × R, holomorphic w.r.t τ on S d for which there exists a constant K 1 > 0 (depending on k,ν) such that 2) For all m ∈ R, the function τ → (exp(−κ l C k )f )(τ, m) extends analytically on D(0, ρ) \ L − . Furthermore, for all τ ∈ D(0, ρ) \ L − , all m ∈ R. Proof We proceed as in the proof of Proposition 3 of [14]. Namely, according to the norm's definition 4, we can rewrite According to the triangular inequality |m| ≤ |m − m 1 | + |m 1 | and bearing in mind the definition of the norms of f and g, we deduce Now, we get bounds from above that can be broken up in two parts In the last step of the proof, we show that C 3.2 and C 3.3 have finite values. By construction, three positive constants Q 1 , Q 2 and R can be found such that for all m, m 1 ∈ R. Hence, that is finite owing to µ > max(deg(Q 1 ) + 1, deg(Q 2 ) + 1) submitted to the constraints (52) as shown in Lemma 4 from [18]. On the other hand, which is also finite. 2 Manufacturing of solutions to an auxiliary integral equation relying on a complex parameter The main objective of this section is the construction of a unique solution of the equation (31) for vanishing initial data within the Banach spaces given in Definition 4. The first disclose further analytic assumptions on the leading polynomials Q(X) and R D (X) in order to be able to transform our problem (31) into a fixed point equation as stated below, see (101). 
Namely, we take for granted that there exists a bounded sectorial annulus , small aperture η Q,R D > 0 for some radii r Q,R D ,2 > r Q,R D ,1 > 1 such that for all m ∈ R. For any integer l ∈ Z, we set See Figure 1 for a configuration of the points a l (m), l ∈ Z, and the set S Q,R D related to their definition. By construction, we see that Indeed, by construction of τ k = exp(k log(τ )), this equation is equivalent to write for some h ∈ Z. According to the hypothesis r Q,R D ,1 > 1, we know that |arg(a l (m))| < π/2 and hence (65) | arg(a l (m)) k | < π 2k < π since we assume that 1 2 < k < 1. Owing to the fact that arg(τ ) belongs to (−π, π), it forces h = 0 and hence arg(τ ) = arg(a l (m))/k. We consider the set of so-called forbidden directions. We choose the aperture η Q,R D > 0 small enough in a way that for all directions d ∈ (−π/2, π/2) \ Θ Q,R D , we can find some unbounded sector S d centered at 0 with small aperture δ S d > 0 and bisecting direction d such that τ l / ∈ S d ∪ D(0, ρ) for some fixed ρ > 0 small enough and for all l ∈ Z. For all τ ∈ C \ R − , all m ∈ R, we consider the function Let d ∈ (−π/2, π/2) \ Θ Q,R D and take a sector S d and a disc D(0, ρ) as above. 1) Our first goal is to provide lower bounds for the function |H(τ, m)| when τ ∈ S d and m ∈ R. Let τ ∈ S d . Then, we can write for some well chosen l ∈ Z, where r ≥ 0 and where θ belong to some small interval I S d which is close to 0 but such that 0 / ∈ I S d . In particular, we choose I S d in a way that arg(τ l ) + θ belongs to (−π, π) for all θ ∈ I S d . Hence, owing to the fact that τ l solves (62), we can rewrite In particular, if the radius r Q,R D ,2 > r Q,R D ,1 is chosen close enough to r Q,R D ,1 , we get a constant η 1,l > 0 (depending on l) for which In a second step, we aim attention at lower bounds for large values of |τ | on S d . We first carry out some preliminary computations, namely we need to expand We assume that the segment I S d is close enough to 0 in a way that we can find a constant ∆ 1 > 0 submitted to the next inequality for all m ∈ R, all θ ∈ I S d . Besides, according to the inclusion (59), we notice that holds for all m ∈ R. As a result, collecting (71), (72) and (73) yields the lower bounds Departing from the factorization (69) we get the next estimates from below We select a real number r 1 > 0 large enough such that for all r ≥ r 1 . Under this last constraint (76), we deduce from (74) and (75) that for all r ≥ r 1 , all θ ∈ I S d , all m ∈ R. Now, in view of the decomposition (67), we get in particular that |τ | = r|τ l |. Consequently, we see that for all τ ∈ S d with |τ | ≥ r 1 |τ l |. As a result, gathering (70) and (77), together with the shape of a l (m) and τ l given in (60), (63), we obtain two constants A H,d , B H,d > 0 depending on k, S Q,R D , S d for which for all τ ∈ S d , all m ∈ R. Proposition 4 We make the next additional assumptions for all 1 ≤ l ≤ D − 1, where K 1 is a constant depending on k, ν defined in Proposition 2 1) and B H,d is selected in (78). 
Under the condition that the moduli |c 12 |,|c f | and |c l | for 1 ≤ l ≤ D − 1 are chosen small enough, we can find a constant > 0 for which the equation (31) has a unique solution w d (τ, m, ) in the space F d (ν,β,µ,k,ρ) controlled in norm in a way that ||w d (τ, m, )|| (ν,β,µ,k,ρ) ≤ for all ∈ D(0, 0 ), where β, µ > 0 are chosen as in (9), ν > 1 is taken as in Lemma 1, the sector S d and the disc D(0, ρ) are suitably selected in a way that τ l / ∈ S d ∪ D(0, ρ) for all l ∈ Z where τ l is displayed by (63) as described above. Proof We initiate the proof with a lemma that introduces a map related to (31) and describes some of its properties that will allow us to apply a fixed point theorem for it. Lemma 3 One can sort the moduli |c 12 |,|c f | and |c l | for 1 ≤ l ≤ D − 1 tiny in size for which a constant > 0 can be picked up in a way that the map H defined as fulfills the next features: 1) The next inclusion holds whereB(0, ) represents the closed ball of radius > 0 centered at 0 in F d (ν,β,µ,k,ρ) for all ∈ D(0, 0 ). Proof Foremost, we focus on the first property (83). Let w(τ, m) belonging to F d (ν,β,µ,k,ρ) . We take ∈ D(0, 0 ) and set > 0 such that ||w(τ, m)|| (ν,β,µ,k,ρ) ≤ . In particular, we notice that the next estimates As a consequence of Proposition 2, we get that (τ, m) → exp(−κ l C k )(w)(τ, m) defines a continuous function on S d × R, holomorphic w.r.t τ on S d and a constant K 1 > 0 (depending on k, ν) can be found such that for all τ ∈ S d , all m ∈ R. Furthermore, the application of Proposition 1 for γ 2 = n+d l,k k − 1, γ 3 = δ l − 1 with n ∈ I l grants a constant C 4 > 0 (depending on I l ,k,κ l ,d l ,δ l ,ν) with Looking back to the lower bounds (78) and having a glance at the constraints (81) allows us to reach the estimates On the other hand, Proposition 2 guarantees that for all m ∈ R, the function τ → (exp(−κ l C k )w)(τ, m) extends analytically on D(0, ρ) \ L − with the bounds for all τ ∈ D(0, ρ)\L − , all m ∈ R. As a consequence, Proposition 1 specialized for γ 2 = n+d l,k k −1, γ 3 = δ l − 1 with n ∈ I l gives raise to a constant C 4 > 0 (depending on I l ,k,κ l ,d l ,δ l ,ν,ρ) for which Keeping in mind the lower bounds (80) we notice that By clustering (87) and (89), we conclude that there exists a constant C 5 > 0 (depending on I l , k, κ l , d l , δ l , ν, ρ, S Q,R D , S d , R l , Q) with Keeping in view the bounds (86), an application of Proposition 1 for where n ∈ I l , with 1 ≤ p ≤ δ l − 1 yields a constant C 6 > 0 (depending on I l ,k,κ l ,d l ,δ l ,ν) with for all τ ∈ S d , m ∈ R and 1 ≤ p ≤ δ l − 1. Owing to the lower bounds (78) under the restriction (81), we deduce that Using the bounds (88) and calling up Proposition 1 for the values where n ∈ I l , with 1 ≤ p ≤ δ l − 1, we obtain a constant C 6 > 0 (depending on I l ,k,κ l ,d l ,δ l ,ν,ρ) for which With the help of the lower bounds (80), we deduce provided that τ ∈ D(0, ρ) \ L − and m ∈ R. By grouping (91) and (92), we deduce the existence of a constant C 7 > 0 (depending on I l , k, κ l , d l , δ l , ν, ρ, S Q,R D , S d , R l , Q) with On the other hand, taking into account the assumption (8) and the lower bounds (70) together with (80), the application of Proposition 3 induces a constant C 3 > 0 (depending on Q 1 , Q 2 , Q, µ, k, ν) and a constant η 2 > 0 (equal to η 2,l from (70)) for which Furthermore, owing to Lemma 1 and in view of the lower estimates (70), (80), we obtain a constant K f > 0 (depending on k, ν and K 0 , T 0 from (9)) and η 2 > 0 such that for all ∈ D(0, 0 ). 
Now, we select |c 12 |, |c f | with |c l |, 1 ≤ l ≤ D − 1 small enough in a way that one can find a constant > 0 with Finally, if one collects the norms estimates (90), (93) in a row with (94) and (95) under the restriction (96), on gets the inclusion (83). In the next part of the proof, we turn to the second feature (84). Namely, let w 1 (τ, m), w 2 (τ, m) belonging toB(0, ) inside F d (ν,β,µ,k,ρ) . From the very definition, we get in particular that the next bounds Following exactly the same steps as the sequence of inequalities (85), (86), (87), (88), (89) and (90), we observe that for the constant C 5 > 0 appearing in (90). Similarly, tracking the progression (85), (86), (88), (91), (92) and (93) yields the next bounds for all 1 ≤ p ≤ δ l − 1, where the constant C 7 > 0 shows up in (93). In order to handle the nonlinear term, we need to present the next difference in prepared form Then, in view of the assumption (8) and the lower bounds (70), (80), Proposition 3 gives raise to constants C 3 > 0 and η 2 > 0 appearing in (94) for which Now, we restrict the constants |c 12 | and |c l |, 1 ≤ l ≤ D − 1 in a way that one arrives at the next bounds At last, by assembling the estimates (97), (98) with (99) submitted to the constraints (100), one achieves the forcast shrinking property (84). Ultimately, we select |c 12 |, |c f | and |c l |, 1 ≤ l ≤ D − 1 small enough in a way that (96) and (100) are simultaneously fulfilled. Lemma 3 follows. 2 We turn back again to the proof of Proposition 4. For > 0 chosen as in Lemma 3, we set the closed ballB(0, ) ⊂ F d (ν,β,µ,k,ρ) which represents a complete metric space for the distance d(x, y) = ||x − y|| (ν,β,µ,k,ρ) . Owing to the lemma above, we observe that H induces a contractive application from (B(0, ), d) into itself. Then, according to the classical contractive mapping theorem, the map H possesses a unique fixed point that we set as w d (τ, m, ), meaning that Analytic solutions on sectors to the main initial value problem We turn back to the formal constructions realized in Section 3 by taking into consideration the solution of the related problem (31) built up in Section 5 within the Banach spaces described in Definition 4. At the onset, we remind the reader the definition of a good covering in C * and we disclose a modified version of so-called associated sets of sectors as proposed in our previous work [14]. Definition 5 Let ς ≥ 2 be an integer. For all 0 ≤ p ≤ ς −1, we set E p as an open sector centered at 0, with radius 0 > 0 such that E p ∩ E p+1 = ∅ for all 0 ≤ p ≤ ς − 1 (with the convention that E ς = E 0 ). Furthermore, we take for granted that the intersection of any three different elements of {E p } 0≤p≤ς−1 is empty and that ∪ ς−1 p=0 E p = U \ {0}, where U stands for some neighborhood of 0 in C. A set of sector {E p } 0≤p≤ς−1 with the above properties is called a good covering in C * . Definition 6 We consider a good covering E = {E p } 0≤p≤ς−1 in C * . We fix a real number ρ > 0 and an open sector T centered at 0 with bisecting direction d = 0 and radius r T > 0 and we set up a family of open sectors with aperture θ > π/k and d p ∈ [−π, π), 0 ≤ p ≤ ς − 1 represent their bisecting directions. 
We say that the data {{S dp,θ, 0 r T } 0≤p≤ς−1 , T , ρ} are associated to E if the next two constraints hold: 1) There exists a set of unbounded sectors S dp , 0 ≤ p ≤ ς − 1 centered at 0 with suitably chosen bisecting direction d p ∈ (−π/2, π/2) and small aperture satisfying the property that τ l / ∈ S dp ∪ D(0, ρ) for some fixed radius ρ > 0 and all l ∈ Z where τ l stand for the complex numbers defined through (63). 2) For all ∈ E p , all t ∈ T , (102) t ∈ S dp,θ, 0 r T Figure 3: Good covering in C for all 0 ≤ p ≤ ς − 1. Figure 3 shows a configuration of a good covering of three sectors, one of them of opening larger than π/k for some k close to 1. We illustrate in Figure 4 a configuration of associated sectors. In the following first principal result of the work, we build up a set of actual holomorphic solutions to the main initial value problem (11) defined on the sectors E p w.r.t . We also provide an upper control for the difference between any two neighboring solutions on E p ∩ E p+1 that turn out to be at most exponentially flat of order k. Theorem 1 Let us assume that the constraints (6), (7), (8), (9) and (59) hold. We consider a good covering E = {E p } 0≤p≤ς−1 for which a set of data {{S dp,θ, 0 r T } 0≤p≤ς−1 , T , ρ} associated to E can be singled out. We take for granted that the constants α D and κ l , 1 ≤ l ≤ D − 1 appearing in the problem (11) are submitted to the next inequalities for all 0 ≤ p ≤ ς − 1, where B H,dp is framed in the construction (78) and depends on k,S Q,R D , S dp and K 1 > 0 is a constant relying on k, ν defined in Proposition 2 1). Then, whenever the moduli |c 12 |,|c f | and |c l |, 1 ≤ l ≤ D − 1 are taken sufficiently small, a family {u p (t, z, )} 0≤p≤ς−1 of genuine solutions of (11) can be established. More precisely, each function u p (t, z, ) defines a bounded holomorphic function on the product (T ∩D(0, σ))×H β ×E p for any given 0 < β < β and suitably tiny σ > 0 (where β comes out in (9)) and can be expressed as a Laplace transform of order k and Fourier inverse transform along a halfline L γp = R + e √ −1γp ⊂ S dp ∪ {0} and where w dp (τ, m, ) stands for a function that belongs to the Banach space F dp (ν,β,µ,k,ρ) for all ∈ D(0, 0 ). Furthermore, one can choose Figure 4: A configuration associated to the good covering in Figure 3 constants K p , M p > 0 and 0 < σ < σ (independent of ) with for all ∈ E p+1 ∩ E p , all 0 ≤ p ≤ ς − 1 (owing to the convention that u ς = u 0 ). As a consequence, the Laplace transform of order k and Fourier inverse transform along a halfline L γp ⊂ S dp ∪ {0} represents 1) A holomorphic bounded function w.r.t T on a sector S dp,θ, with bisecting direction d p , aperture π k < θ < π k + Ap(S dp ), radius , where Ap(S dp ) stands for the aperture of S dp , for some real number > 0. As a result, the function u p (t, z, ) = U γp ( t, z, ) defines a bounded holomorphic function w.r.t t on T ∩ D(0, σ) for some σ > 0 small enough, ∈ E p , z ∈ H β for any given 0 < β < β, owing to the fact that the sectors E p and T from the associated data fulfill the crucial feature (102). Moreover, u p (t, z, ) solves the main initial value problem (11) on the domain described above (T ∩ D(0, σ)) × H β × E p , for all 0 ≤ p ≤ ς − 1. In the final part of the proof, we are concerned with the bounds (105). The steps of verification are comparable to the arguments displayed in Theorem 1 of [14] but we still decide to present the details for the benefit of clarity. 
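For reference, the estimate (105) being established asserts exponential flatness of order $k$ of consecutive differences: in the notation of Theorem 1, and with the suprema taken over the common domain of the solutions,
\[
\sup_{t\in\mathcal{T}\cap D(0,\sigma'),\ z\in H_{\beta'}}\big|\,u_{p+1}(t,z,\epsilon)-u_{p}(t,z,\epsilon)\,\big|\;\leq\;K_{p}\,\exp\!\Big(-\frac{M_{p}}{|\epsilon|^{k}}\Big)
\qquad\text{for all }\epsilon\in E_{p+1}\cap E_{p},\ 0\leq p\leq\varsigma-1 ,
\]
with the convention $u_{\varsigma}=u_{0}$. This is precisely the cocycle estimate required by the Ramis–Sibuya theorem invoked in the next section, and it is the form in which (105) is used there.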
If the conditions above are fulfilled, the vector valued Laplace transform of order k of B k (â)(τ ) in the direction d is defined by where γ depends on and is chosen in such a way that cos(k(γ − arg( ))) ≥ δ 1 > 0, for some fixed δ 1 , for all in a sector S d,θ,R 1/k = { ∈ C * : | | < R 1/k , |d − arg( )| < θ/2}, where the angle θ and radius R suffer the next restrictions, π k < θ < π k + 2δ and 0 < R < δ 1 /K. Notice that this Laplace transform of order k differs slightly from the one introduced in Definition 1 which turns out to be more suitable for the problems under study in this work. The function L d k (B k (â))( ) is called the k−sum of the formal seriesâ( ) in the direction d. It represents a bounded and holomorphic function on the sector S d,θ,R 1/k and is the unique such function that possesses the formal seriesâ( ) as Gevrey asymptotic expansion of order 1/k with respect to on S d,θ,R 1/k which means that for all π k < θ 1 < θ, there exist C, M > 0 such that ||L d k (B k (â))( ) − n−1 p=0 a p p || F ≤ CM n Γ(1 + n k )| | n for all n ≥ 1, all ∈ S d,θ 1 ,R 1/k . In the sequel, we present a cohomological criterion for the existence of Gevrey asymptotics of order 1/k for suitable families of sectorial holomorphic functions and k−summability of formal series with coefficients in Banach spaces (see [3], p. 121 or [10], Lemma XI-2-6) which is known as the Ramis-Sibuya theorem in the literature. This result is an essential tool in the proof of our second main statement (Theorem 2). Theorem (RS) Let (F, ||.|| F ) be a Banach space over C and {E p } 0≤p≤ς−1 be a good covering in C * . For all 0 ≤ p ≤ ς − 1, let G p be a holomorphic function from E p into the Banach space (F, ||.|| F ) and let the cocycle Θ p ( ) = G p+1 ( ) − G p ( ) be a holomorphic function from the sector Z p = E p+1 ∩ E p into E (with the convention that E ς = E 0 and G ς = G 0 ). We make the following assumptions. 1) The functions G p ( ) are bounded as ∈ E p tends to the origin in C, for all 0 ≤ p ≤ ς − 1. 2) The functions Θ p ( ) are exponentially flat of order k on Z p , for all 0 ≤ p ≤ ς − 1. This means that there exist constants C p , A p > 0 such that ||Θ p ( )|| F ≤ C p e −Ap/| | k for all ∈ Z p , all 0 ≤ p ≤ ς − 1. Then, for all 0 ≤ p ≤ ς − 1, the functions G p ( ) have a common formal power serieŝ G( ) ∈ F[[ ]] as Gevrey asymptotic expansion of order 1/k on E p . Moreover, if the aperture of one sector E p 0 is slightly larger than π/k, then G p 0 ( ) represents the k−sum ofĜ( ) on E p 0 . 7.2 Gevrey asymptotic expansion in the complex parameter for the analytic solutions to the initial value problem Within this subsection, we disclose the second central result of our work, namely we establish the existence of a formal power series in the parameter whose coefficients are bounded holomorphic functions on the product of a sector T with small radius centered at 0 and a strip H β in C 2 , which represent the common Gevrey asymptotic expansion of order 1/k of the actual solutions u p (t, z, ) of (11) constructed in Theorem 1. The second main result of this work can be stated as follows. as Gevrey asymptotic expansion of order 1/k. Strictly speaking, for all 0 ≤ p ≤ ς − 1, we can pick up two constants C p , M p > 0 with sup t∈T ∩D(0,σ ),z∈H β |u p (t, z, ) − n−1 m=0 h m (t, z) m | ≤ C p M n p Γ(1 + n k )| | n for all n ≥ 1, whenever ∈ E p . Furthermore, if the aperture of one sector E p 0 can be taken slightly larger than π/k, then the map → u p 0 (t, z, ) is promoted as the k−sum ofû(t, z, ) on E p 0 . 
Proof. We focus on the family of functions $u_p(t,z,\epsilon)$, $0 \le p \le \varsigma-1$, constructed in Theorem 1. For all $0 \le p \le \varsigma-1$, we define $G_p(\epsilon) := (t,z) \mapsto u_p(t,z,\epsilon)$, which by construction represents a holomorphic and bounded function from $E_p$ into the Banach space $F$ of bounded holomorphic functions on $(\mathcal{T} \cap D(0,\sigma')) \times H_{\beta'}$ equipped with the supremum norm, where $\mathcal{T}$ is the bounded sector selected in Theorem 1, the radius $\sigma' > 0$ is taken small enough, and $H_{\beta'}$ is a horizontal strip of width $0 < \beta' < \beta$. In accordance with the bounds (105), we deduce that the cocycle $\Theta_p(\epsilon) = G_{p+1}(\epsilon) - G_p(\epsilon)$ is exponentially flat of order $k$ on $Z_p = E_p \cap E_{p+1}$, for any $0 \le p \le \varsigma-1$. Owing to Theorem (RS) stated above, we obtain a formal power series $\hat{G}(\epsilon) \in F[[\epsilon]]$ which represents the Gevrey asymptotic expansion of order $1/k$ of each $G_p(\epsilon)$ on $E_p$, for $0 \le p \le \varsigma-1$. Besides, when the aperture of one sector $E_{p_0}$ is slightly larger than $\pi/k$, the function $G_{p_0}(\epsilon)$ defines the $k$-sum of $\hat{G}(\epsilon)$ on $E_{p_0}$, as described in Definition 7. $\square$
v3-fos-license
2022-02-17T16:20:16.141Z
2022-02-15T00:00:00.000
246882717
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://eudl.eu/pdf/10.4108/eai.15-2-2022.173453", "pdf_hash": "19da2904b4570a764755881aaccf9b27fa8b5908", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2773", "s2fieldsofstudy": [ "Medicine", "Education", "Computer Science" ], "sha1": "99f9b3d22fb44ecfe764d5d1cc5c6c8dfc5808a3", "year": 2022 }
pes2o/s2orc
Increasing motivation for cancer treatment adherence in children through a mobile educational game: a pilot study INTRODUCTION: It is crucial to educate childhood cancer patients (CCPs) about their illness and motivate them for cancer treatment and treatment side-effects management. OBJECTIVES: This paper describes the design, development and pilot evaluation of the proposed serious game intervention with CCPs in Malaysia. METHODS: A single-centre, single-arm intervention was conducted with CCPs (n=8). Surveys were done pre-test and post-test. RESULTS: The Protection Motivation Theory was used to measure the participants' motivation. Self-reported surveys with CCPs and caregiver dyads showed a significant increase in participants’ intention to use cancer treatment. Although the increase in the intention to use daily self-care and cancer knowledge survey scores was not substantial, the post-test caregivers' feedback revealed that the game was beneficial for their children. CONCLUSION: Early results of the study have shown the intervention’s potential to boost the knowledge and motivations of CCPs. Background Childhood cancer, the number one cause of death via illness in children, has a survival rate of 80% in high-income countries with treatment [1]. However, cancer treatment is a long and challenging experience for young patients [2], resulting in many taxing treatment side effects [3][4][5]. Due to these factors, childhood cancer patients and their families may not adhere fully to treatment and may even resort to treatment abandonment [6]. Many of these treatment side- Purpose of the Study This pilot study aims to assess and explore the effectiveness of a serious game intervention to motivate children with cancer to adhere to their treatment, encourage daily self-care, and educate them about cancer and cancer treatment. The serious game intervention is developed using theory-based strategies that promote healthy behaviours. This paper reports the development and pilot evaluation of Pets vs Onco, a virtual pet mobile game for young cancer patients. Protection Motivation Theory As an important aspect of the intervention is to motivate childhood cancer patients to go through cancer treatment, the Protection Motivation Theory (PMT) [17,18] was employed to guide the intervention's development. This theory is a health belief model used to understand how and why various individuals respond to potential health threats to themselves and have been used in many health interventions to motivate health behaviours [19][20][21]. A person's intention to take up a health behaviour is affected by seven factors divided into two categories. The first category, threat appraisal, consists of perceived severity, perceived vulnerability, intrinsic and extrinsic rewards, and fear arousal. The second category is the coping appraisal, and this consists of perceived response-efficacy, perceived selfefficacy and response cost. As both childhood cancer and cancer treatment sideeffects are strong health threats, this intervention focuses on the coping appraisal aspect of the PMT. It should boost the patients' response efficacy and self-efficacy for 1) adhering to their cancer treatment to fight cancer and 2) practising daily self-care to counter cancer treatment side-effects. Social Learning Theory Another theory that was used to shape the design of the intervention is the Social Learning Theory [22]. 
This theory hypothesizes that people can learn from one another in a social context through imitation, observation, and modelling. The Social Learning Theory has been used to teach sun protection to young children through observational learning [23], with clown characters acting as role models to convey the desired health behaviour to pre-school children. Another example of observational learning is the mobile application "Veggie Maths Masters" [24], which encourages the acceptance of vegetables in young children by depicting characters enjoying vegetables. The game intervention should therefore utilize role models to convey good health behaviours to children. Game Design and Development Before developing the intervention, the researchers reviewed the aims and related game mechanics of digital health interventions for children with cancer [25] to inform the design of the serious game. The findings support the use of the following game mechanics: "fighting enemies" to empower young cancer patients to fight their illness, "role-modelling" to demonstrate good health behaviours, and mechanics that encourage continuous gameplay such as "custom avatars", "randomly generated levels", "virtual rewards", "unlockables", and "quests and challenges". The review also suggested that the intervention be a mobile game for its portability. The conceptual design of this intervention [26] established that a virtual pet is suitable for the game, as it can demonstrate the importance of daily self-care. A virtual pet emulates a real pet that humans can interact with and can take the form of an electronic toy, robot, or game [27]. By taking care of their virtual pet, players can gain confidence and become more aware of taking care of their own health [28]. Virtual pets have been used as health interventions for children to promote the management of conditions such as asthma [28] and to encourage good health habits that prevent obesity, such as increased physical exercise and healthy eating [29,30]. The Pets vs Onco game required several resources for its design and development. The primary development tool is a game engine for creating a 2D mobile game; other required resources include artwork and sounds. Unity 5 was chosen as the game engine because it supports 2D mobile game development in the C# language and is well documented, with many development tutorials available. Adobe Photoshop was used to create sprites and user interface items for the required artwork. Due to resource constraints, some free-to-use online image resources were also included. Royalty-free music, sound effects, and sound clips were obtained from online resources, and the Audacity software was used to edit the game's audio whenever required. As mentioned previously, the development of the current game prototype [31] was guided by two theories: the Protection Motivation Theory (PMT) and the Social Learning Theory (SLT).
The key game modules of Pets vs Onco are summarized in Table 1 Figure 1a) Related Theories: PMT and SLT • Highlights the importance of taking daily care of one's health • The pet has a few health statuses that the player needs to care for, such as hunger, thirst, rest, and cleanliness • A cancer treatment status is also included, which the player can maintain through the mini-games module Mini-Games Module (See Figure 1b) Related Theories: PMT and SLT • Modelled after cancer treatments to allow the player to virtually fight cancer • The ability to fight cancer in a game can empower players to fight the disease and also boost treatment adherence [32] • The three mini-games are: 1) Onco Blastmodelled after chemotherapy, is a horizontal space shooter game where the pet shoots medicine out of a rocket ship at bad cancer cells 2) Radio Beam Attackmodelled after radiotherapy, is a whack-a-mole style game where players can tap on the bad cancer cells to blast them with a radio beam 3) Onco Slashmodelled after surgery, in this mini-game, players can draw on the screen to "slash" at a group of bad cancer cells while avoiding the good body cells • To encourage the players to reflect on their situation • Players can write new diary entries and view their past entries • This module asks the players to rate their emotional and physical health statuses • It provides a checklist of common health-related problems experienced by childhood cancer patients such as feeling hungry, having bowel movement problems, and experiencing pain • Should any of these problems be checked, the pet will give basic advice to help the players and encourage them to seek help from their family and health care providers • The ability to self-reflect and express themselves [15], and share their thoughts and experiences [33], were found to help young cancer patients to cope positively Language Module • The game is available in three languages (English, Malay and Simplified Chinese) to support the understanding of young children who are not fluent in English Pet Appearance Module • Allows the customization of the pet and pet home appearance Shop Module • Players can use the star coin virtual currency earned from mini-games to purchase more appearance options for their pet Tutorial and Help Module • A tutorial is available upon the player's first entry to the game • The pet demonstrates all the basic features to guide the player from one game module to another • A help button is available on all parts of the game to briefly explain the current screen Incentive Module • Added to encourage the player to continue coming back to the game to play it • Includes daily login bonuses, daily quests, daily gifts and game achievements Figure 1 shows the screenshot of (a) the Home Screen where the player can access various game modules and take care of their pet, and also (b) the screenshot of the mini-game selection screen where players can choose a mini-game to play. Methods The pilot study for the Pets vs Onco serious game uses a single-arm pretest-posttest design to test the effectiveness of the intervention. Eight caregiver/child dyads were recruited from a boarding home for children with cancer who have agreed to collaborate on this research. The counsellor approached parents of potential participants and asked if they would like to participate in the study. Contacts of parents who would like to take part were obtained. 
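To illustrate how the pet care and mini-game modules fit together mechanically, the sketch below shows one possible way to structure the pet's statuses as a Unity C# component, the engine and language named earlier. It is a hypothetical sketch only: the class name, field names, decay rates and update rule are illustrative assumptions and are not taken from the actual Pets vs Onco implementation.

```csharp
using UnityEngine;

// Hypothetical sketch of a virtual-pet status component (not the actual game code).
public class PetStatus : MonoBehaviour
{
    // Care statuses described for the pet: 0 = empty, 100 = full.
    [Range(0f, 100f)] public float hunger = 100f;
    [Range(0f, 100f)] public float thirst = 100f;
    [Range(0f, 100f)] public float rest = 100f;
    [Range(0f, 100f)] public float cleanliness = 100f;

    // Cancer treatment status, maintained by completing the mini-games.
    [Range(0f, 100f)] public float treatment = 100f;

    // Assumed decay rates, in points per second of play time.
    public float careDecayPerSecond = 0.05f;
    public float treatmentDecayPerSecond = 0.02f;

    void Update()
    {
        float dt = Time.deltaTime;

        // Statuses drain slowly, so the player must keep caring for the pet.
        hunger      = Mathf.Max(0f, hunger      - careDecayPerSecond * dt);
        thirst      = Mathf.Max(0f, thirst      - careDecayPerSecond * dt);
        rest        = Mathf.Max(0f, rest        - careDecayPerSecond * dt);
        cleanliness = Mathf.Max(0f, cleanliness - careDecayPerSecond * dt);
        treatment   = Mathf.Max(0f, treatment   - treatmentDecayPerSecond * dt);
    }

    // Called by care actions in the pet care module (feeding, washing, resting, ...).
    public void Feed(float amount)
    {
        hunger = Mathf.Clamp(hunger + amount, 0f, 100f);
    }

    // Called when the player finishes one of the treatment-themed mini-games.
    public void ApplyTreatment(float amount)
    {
        treatment = Mathf.Clamp(treatment + amount, 0f, 100f);
    }
}
```

In such a design, the three mini-games would simply call ApplyTreatment with a reward amount on completion, tying the "fighting cancer" gameplay back to the pet's treatment status in the way the modules above describe.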
The selection criteria for participants are: • Being 6-17 years old • Having at least one month of cancer treatment experience before the intervention begins • Caregiver/child dyad can communicate, understand and read basic English/Malay/Simplified Chinese • Having an Android mobile device with an internet connection which the game can be installed and played on • Can play the game during the one month intervention period Patients who are in end-of-life care are excluded from this study. Dropout criteria for participants include choosing not to continue with the intervention for any reason before completion of post-intervention surveys, if the participant has passed away, or if their circumstances now restrict them from continuing the study. Due to the pandemic, the pilot evaluation was conducted digitally to maintain social distancing. During the first contact with the caregivers, they were asked to fill in the demographic survey to collect basic information about the participant, such as age, gender and how often they play mobile games. This survey was also used to determine if the potential participant met the selection criteria. Upon confirming the participant's eligibility, the researcher explained the entire study clearly to the caregivers and provided them with the information sheet. Informed parental consent was obtained for each participant. Should caregivers and participants be available, the digital pre-intervention surveys were linked to the caregivers and participants. The caregivers were then guided through the installation of the game to the participant's mobile device. Additional assistance was provided for caregivers experiencing installation difficulties by SCCS. Upon successful installation, the in-game tutorial guided the caregivers and participants in playing the game. The participants were then left to play the game at their own pace for one month. Caregivers of participants who were still with the study after one month were contacted for the post-intervention surveys. Once again, the digital surveys are linked. Upon completion, caregivers and participants were asked to provide their feedback via the end of intervention surveys. Caregivers who were willing were also asked additional follow-up questions to elaborate on their perspective of the game's impact on their children. Study Instruments and Outcomes The Protection Motivation Theory (PMT) used to inform the intervention's design was also used to create two evaluation surveys. There were two versions of the PMT surveys. One version to measure the participants' intentions, which was designed to be child friendly, and another to view the caregivers' perspective of participants' intentions. Caregivers of participants aged 12 and below were allowed to guide their children in filling the surveys. Three pre-intervention and post-intervention (pretestposttest) surveys were used in this study. [34,35], which were written with children's reading levels in mind • To measure the level of participants' knowledge of basic cancer and cancer treatment facts After the post-intervention surveys, the participants and their caregivers' feedback was obtained via the End of Intervention Questionnaire. Questions for participants asked for their rating of the game and their likes and dislikes about it. Caregivers are also invited to discuss what they like and dislike and the game's usefulness for children with cancer. 
Caregivers who volunteered were asked additional follow-up questions to explore their views on how the game impacted their children. Responses of the follow-up were transcribed into text for qualitative data analysis. Plan of Analysis Descriptive statistics were used to summarize the results obtained from the pre-test and post-test surveys: PMT Survey 1 (cancer treatment), PMT Survey 2 (daily self-care) and the Cancer Knowledge Survey. Individual t-tests were used to determine if there are any significant differences between the responses of the caregiver and child participants for the PMT Surveys. Paired t-tests were used for hypothesis testing to compare the results between pre-and post-intervention for PMT Survey 1, PMT Survey 2 and Cancer Knowledge Survey. For the End of Intervention Surveys and Follow up Questions for gathering feedback, the responses for yes or no questions, and the participants' ratings of how much they liked the game, were described with descriptive statistics. Qualitative feedback from both child and caregiver surveys and the follow-up done with caregivers were reviewed and coded for thematic analysis. Main themes were identified from the coding done, and the final codes were arranged as sub-themes within these main themes. Demographics As of November 2021, 11 participants have been approached for the study from a single centre. Of these participants, ten had consented to participate; however, two had dropped out due to varied circumstances. Results from the 8 participants who have completed the intervention are presented in this paper. Figure 2. Flow Diagram for Design of Study There are four male participants and four female participants. The participants' ages range from 8 to 16 years old (M = 12.38, SD = 2.87). Caregivers reported that five participants played mobile games daily, two played several times a week but not daily, and one participant played around once a month. Table 2 shows the participant overview reported by the caregivers. To monitor if the participants play the game during the intervention, Pets vs Onco can keep track of the activities performed by the participants during gameplay. These activities were compiled into an action log file which was sent to a secured server that only the researchers could access. From the action logs received, it was confirmed that all the participants played the game regularly during the intervention period. Results for Protection Motivation Theory (PMT) Surveys Difference between Caregiver and Child Survey Scores PMT Survey 1 results comparing the scores of children and caregivers' responses indicated no significant difference between the answers for 'the intention of using cancer treatment to fight cancer'. There was also no significant difference in PMT Survey 2 for 'the intention of using daily self-care to manage treatment side effects'. Hypothesis Testing Based on the individual t-tests conducted earlier, no significant differences were found between the intention values between the children and caregivers. Therefore, the one-tailed paired sample t-test was performed to compare the pre-and post-intervention results of both Protection Motivation Theory (PMT) Surveys used the average values of the responses from each caregiver and child dyad. 
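Concretely, writing $x_i^{\mathrm{pre}}$ and $x_i^{\mathrm{post}}$ for the dyad-averaged intention score of participant $i$ (the mean of the child's and the caregiver's responses), the test applied here is the standard paired one: with differences $d_i = x_i^{\mathrm{post}} - x_i^{\mathrm{pre}}$ over the $n = 8$ dyads,
\[
t \;=\; \frac{\bar{d}}{s_{d}/\sqrt{n}}, \qquad df = n-1 = 7,
\]
where $\bar{d}$ and $s_{d}$ are the mean and standard deviation of the differences, and the one-tailed p-value is read from the $t$ distribution with 7 degrees of freedom. This is a restatement of the generic formula for orientation; the dyad-averaging step is the one described in the preceding sentence.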
For PMT Survey 1, which determined the participants' intention of using cancer treatment to fight cancer, there was a significant increase in the scores from the pre-intervention results (M = 4.25, SD = 1.06) to the post-intervention results (M = 4.81, SD = 0.40); p = 0.0327. It indicates that the evidence is strong enough (p < 0.05) to suggest that the game intervention has affected the participants' motivation to keep up with their cancer treatment. On the other hand, for PMT Survey 2, which determined the participants' intention of using daily self-care to manage treatment side effects, there was no increase in the scores between the pre-intervention results (M = 4.06, SD = 0.93) and the post-intervention results (M = 4.06, SD = 0.68); p = 0.5. It indicates insufficient evidence (p > 0.05) to suggest that the game intervention has affected the participants' motivation to use daily self-care to manage their cancer treatment side effects.
Results for Cancer Knowledge Survey
For the Cancer Knowledge Survey, the results of each participant for the pre- and post-intervention surveys are depicted in Table 3 below. The observations indicate a minor improvement in overall scores and the level of understanding for participants during the post-test. Another observation was a stark decrease for Participant 4 (pre-test score = 85%, post-test score = 55%). The participant's understanding of the questions may have been affected by the language chosen (pre-test taken in Bahasa Melayu, post-test taken in English), resulting in the decrease. A one-tailed paired sample t-test was performed to compare the participants' pre- and post-intervention results of the Cancer Knowledge Surveys. There was no significant increase in the scores between the pre-intervention results (M = 60.63, SD = 20.95) and the post-intervention results (M = 64.38, SD = 17.81); p = 0.3172. The surveys were done online with multi-language options. Participants were free to change the language of the survey form as they liked. Due to this, Participant 4 was allowed to take the survey in a different language during the post-test. Therefore, the one-tailed paired sample t-test was repeated to compare the participants' results for the Cancer Knowledge Surveys, excluding Participant 4. The improvement in scores was more evident, as seen from the pre-intervention results (M = 57.14, SD = 19.97) and the post-intervention results (M = 65.71, SD = 18.80); however, the p-value (p = 0.1240) was still above 0.05. The evidence is therefore not strong enough (p > 0.05) to make a conclusive recommendation that the game intervention has impacted the participants' knowledge of cancer and treatment.
Descriptive Statistics
All caregivers (n=8) who answered the feedback survey agreed that Pets vs Onco was useful and helpful for children with cancer. In addition, all caregivers (n=8) decided that the game was useful and practical for their child. All caregivers who agreed to the follow-up (n=6) agreed that they had noticed positive differences in their children after playing the game. All caregivers (n=6) agreed that the game encouraged their child to keep up with treatment and perform daily self-care. All caregivers (n=6) also decided that the game had helped improve their children's knowledge about cancer and encouraged conversation and discussion about this topic between them and their child. Overall, most participants liked the game (Mean = 3.75, SD = 1.299).
Out of the 8 participants, 3 (37.5%) strongly liked the game (rating of 5), 2 (25%) liked the game, 2 (25%) felt neutral about the game, while only 1 (12.5%) strongly disliked the game. When the participants were divided into two groups based on age, it can be seen that younger participants (M = 4.75, SD = 0.433) enjoyed the game much more than the older participants (M = 2.75, SD = 1.090). It may indicate that the game is better suited for a younger target audience and will require additional content to appeal to the older participants. In this sample, gender does not strongly influence the game's rating, with a mean of 3.75 for both gender groups. Thematic Analysis The responses obtained from intervention surveys and follow-up questions were reviewed and analyzed for codes. Codes identified were categorized into sub-themes and used to identify main themes. The main themes identified are: (i) Impacts on Cancer Treatment (ii) Impacts on Daily Self-care (iii) Other Benefits of the Game (iv) Likes about the Game (v) Dislikes about the Game (vi) Feedback/Suggestions for Improvement (vii) Others The main themes, sub-themes, frequency in which they were mentioned, and examples found in the thematic analysis are depicted in Table 4 below. Discussions This article describes the development and pilot evaluation of Pets vs Onco, a mobile game for educating young cancer patients about their illness and motivating them to keep up with their cancer treatment and daily self-care. Quantitative survey results showed a significant increase in the participants' intention to use cancer treatment to fight cancer. However, there was no substantial change for the participants' preference to use daily self-care to manage treatment side effects and the Cancer Knowledge Survey scores. The quantitative survey did not depict any improvement in the participants' intentions to use daily self-care; however, the caregivers' feedback showed that the game positively impacted the participants' use of daily self-care. The participants' quantitative pre-test and post-test surveys were self-reported motivation or belief in themselves to perform daily self-care, which has the tendency to be affected by their physical or emotional state at the time of the responses. On the other hand, the caregivers' feedback was based on their continuous observation of their children's abilities and attitudes towards daily self-care. The game had an average rating of M = 3.75 out of 5 stars among the 8 participants. Grouping the participants by age highlighted that young participants (M = 4.75) enjoyed the game more than older participants (M = 2.75). There was no influence on the scores when participants were divided by gender (M = 3.75) for both gender groups. The thematic analysis results show that the virtual pet game was able to help in terms of cancer treatment. Most participants and caregivers highlighted that the game had educated about cancer and cancer treatment and helped motivate participants to go through their treatment. The game also impacted the participants' daily self-care; caregivers pointed out that children became more motivated to perform daily self-care and were more independent and punctual. The game also benefited the participants in various ways, such as helping them pass the time during treatment, opening conversation about cancer and treatment, cheering the participant up, and easing their stress. 
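As a minimal illustration of how the sub-theme frequencies summarised in Table 4 can be tallied once the qualitative responses have been coded, the Python sketch below counts occurrences of invented placeholder codes; the codes and counts are hypothetical and do not reproduce the study data.

from collections import Counter

coded_responses = [
    "motivated to continue treatment", "learned about cancer and treatment",
    "motivated to continue treatment", "more independent in daily self-care",
    "game gets boring after a while", "attractive graphics",
]
sub_theme_frequencies = Counter(coded_responses)
for sub_theme, count in sub_theme_frequencies.most_common():
    print(f"{sub_theme}: {count}")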
Participants and caregivers liked the game for various reasons; the main reasons were the attractive graphics and its benefits for children with cancer. However, it was also pointed out that the game can get boring after a while and that there are words that may be difficult for younger children to understand. Suggested improvements include more mini-games attractive to young players and lessening the difficulty of the terms used to boost understanding.
Limitations
Our study has some limitations. For instance, as the same group of children answered the same questions (Cancer Knowledge Survey) twice, a learning effect might have impacted the survey score. Also, there might be particular order effects in answering the survey that could play a role in determining the learning effects in our experiment. We believe that, due to the time difference of one month between the pre- and post-test, it was unlikely that participants could remember their earlier responses to the survey questions, though we cannot completely ignore the learning effects. Further investigation is needed to generalise the findings of our pilot study.
Conclusions and Future Work
In conclusion, the survey results suggest that the game Pets vs Onco can significantly increase the motivation of children with cancer to use cancer treatment to fight cancer. According to the end-of-intervention surveys and the follow-up questions, we can see that the game positively impacted the participants in terms of using cancer treatment and performing daily self-care. The game was also liked by caregivers and enjoyed by the participants, especially the younger ones. Pets vs Onco will need to be extended in terms of gameplay content to address the feedback for improvement, engage the participants better and allow the game to be played for a longer period. The pilot study outcomes support future work. Because the current work recruited participants only from Sarawak, the impact of the game for children with cancer may not generalize. Further research is needed to evaluate the game on a larger scale. The recruitment for future studies may include a waitlist control group and should also be extended to multiple locations to boost diversity and the number of samples.
Table 4 (remaining rows): The game can be improved (frequency 3): "Just include the latest games the kids are interested in."; "Reduce the use of difficult to understand words." Others: The child is curious about cancer and treatment-related information (frequency 2): "…he's the curious sort who likes to ask questions."; "…sometimes she asks me something I don't know and don't understand. I say search it on Google... [laugh]." The child had problems putting the game down (frequency 1): "Sometimes when she is told to eat, she is still busy playing this game, though."
Management of Saline-Sodic Soil through Press Mud and Sulfur Application for Wheat-Pearl Millet Cropping System
Press mud is a nutrient-rich organic residue and elemental sulfur is a reclamation agent; in combination or alone, they can be used for the rehabilitation of salt-affected soils under wheat-pearl millet cropping. The results of the present study revealed that press mud and sulfur hold excellent potential to reclaim saline-sodic soil and alleviate salinity stress in wheat and pearl millet crops. However, integrated use of sulfur (S) and press mud (PM) demonstrated the most positive effects on soil health and crop resilience. Application of S @ 50% gypsum requirement (GR) with PM @ 10 t ha-1 showed better results than all other treatments and increased the plant height, number of tillers, spike length, 1000-grain weight, straw yield and grain yield of wheat by 11.16%, 9.87%, 27.93%, 15.65%, 33.54% and 50.26%, respectively. The same trend was observed in pearl millet, and the plant height, number of tillers, panicle length, grains panicle-1, 1000-grain weight, and grain yield were increased by 16.66%, 22.85%, 13.11%, 9.74%, 13.64%, and 19.37%, respectively, over the control. Integrated use of sulfur and press mud also ameliorated the soil properties and reduced the soil pH (4.57%), EC (15.26%), SAR (56.26%), and BD (10.11%) and increased HC (32.5%). Therefore, integrated sulfur application @ 50% GR and press mud @ 10 t ha-1 are recommended as an effective reclamation strategy to manage saline-sodic soil for better productivity of wheat and pearl millet crops.
Introduction
Worldwide, increasing population pressure and the alarming pace of urbanization and industrialization, along with climate change, have intensified the food security crisis. It is estimated that agricultural production needs to be raised by 100% by 2050 to feed such a huge population (Pineda et al., 2021). This situation is forcing the farming community to exploit salt-affected soils. Globally, 1,125 million ha of cultivable land have been degraded by salt stress (Hossain, 2019). Transformation of such salt-affected soil into productive agricultural landscapes is a main challenge for scientists and needs top priority worldwide to assure global food security (Abensperg et al., 2004). Reclamation of sodicity-stressed soils is accomplished through application of organic (farm manure, press mud, poultry manure) or inorganic (gypsum, sulfur, calcium chloride) amendments (Sheoran et al., 2021c). Press mud is a nutrient-rich organic residue of the sugar industry with a production of 1.28 million tons annually in Pakistan (Khan et al., 2012). Press mud, with a pH of 5.0, can reclaim sodic soil (Avishek, D., et al., 2018). Due to its favorable effects on soil properties, it is primarily used both as a soil reclamation agent and as a soil conditioner (Shankaraiah and Murthy, 2005). Being a rich source of nitrogen (N), phosphorus (P), potash (K), copper (Cu) and zinc (Zn), it is expected to increase the soil fertility (Avishek, D., et al., 2018). Thus, press mud is a nutrient-rich and easily metabolizable soil reclaimant for the rehabilitation of salinity-induced degraded land. Integrated use of press mud with some inorganic reclaimant seems to be a rational solution for arresting the sodicity problem and sustaining agricultural productivity (Basak et al., 2021). After decomposition, press mud releases organic acids and electrolytes which mobilize the native CaCO3 and produce soluble Ca2+, which replaces the Na+ on exchange sites (Sheoran et al., 2021a).
Conjunctive use of press mud (PM) with gypsum alleviates the drastic effects of sodicity by improving carbon (C) sequestration, decreasing the soil pH and sustaining the productivity of rice and wheat crops (Sheoran et al., 2021b). Press mud @ 10 Mg ha-1 may increase wheat yield by 16.7% and rice yield by 18.9% in moderately sodic soils (Sheoran et al., 2021c). They recommended press mud as an efficient, easily available and affordable ameliorant that tackles the sodicity problem by reducing the soil exchangeable sodium percentage (10.4-20.1%) and soil pH (1.6-3.6%), and its potential use in agriculture needs to be scaled up for soil and crop resilience. Muhammad and Khattak (2009) studied the ameliorating properties of PM @ 0, 5, 10 and 20 t ha-1 in salt-affected soil. They reported that plant height and biomass yield of maize increased with each increment of press mud. Due to its favorable effects on soil properties, they suggested press mud as an effective reclamation agent for saline-sodic soils. Negim (2015) evaluated the reclamation efficiency of press mud and gypsum alone or in combination. Results revealed that addition of PM and gypsum in combination was more effective in reducing the soil electrical conductivity (EC) and exchangeable sodium percentage (ESP) than their individual application. Sulfur is also a well-known reclamation agent for the rehabilitation of degraded sodic or saline-sodic soils (Jaggi et al., 2005). Sulfur can be used as an alternative amendment to gypsum (Ahmed et al., 2016). Sulfur @ 100% GR significantly increased the grain yield of rice and wheat and improved soil health by decreasing soil pH, electrical conductivity (EC) and sodium adsorption ratio (SAR) (Ahmed et al., 2017). According to Wei et al. (2006), sulfur application is recommended for soil with pH over 6.6 to increase the availability of phosphorus and micronutrients. In calcareous soil, added S is microbially oxidized into sulfuric acid which mobilizes the CaCO3 to form CaSO4 (El-Hady and Shaaban, 2010), which provides the soluble Ca2+ that replaces Na+ on the exchange sites and hence reduces soil sodicity (Abdelhamid et al., 2013). According to Stamford et al. (2002), sulfur acts as a soil conditioner and improved the yield of bean and cowpea by reducing the soil EC from 15.3 to 1.7 dS m-1. Kubenkulov et al. (2013) investigated the ameliorating properties of S and recommended it as a comprehensible soil ameliorant. Favorable effects of sulfur on the properties of salt-affected soils and increased salinity tolerance have been reported in canola (Al-Solimani et al., 2010) and in rice and wheat (Ahmed et al., 2016). Therefore, the current field trial aimed to investigate the comparative reclamation efficiency of PM and S in combination or alone, and to determine the optimum dose of these amendments for better yield of wheat and pearl millet crops under saline-sodic conditions. During the 2nd week of November, the wheat variety "Faisalabad 2008" was sown in lines with a rabi drill. Fertilizers @ 120-110-70 NPK kg ha-1 were applied as urea, single super phosphate (SSP) and sulphate of potash (SOP). All the phosphorus and potassium were applied at sowing, while N was applied in three splits. Adequate agronomic and management practices (irrigation, and insect, disease and weed control) were carried out uniformly as per recommendations in all the treatments.
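The sulfur treatments in this trial are expressed as a percentage of the soil gypsum requirement (GR). A common way to translate a GR figure into an elemental-sulfur dose is to match the molar amount of S supplied by the gypsum; the short Python sketch below applies that conversion using standard molar masses. The GR value in the example is hypothetical, and the trial may have used a slightly different conversion convention.

M_GYPSUM = 172.17   # g/mol, CaSO4.2H2O
M_SULFUR = 32.06    # g/mol, elemental S

def sulfur_dose_t_per_ha(gr_t_per_ha: float, fraction_of_gr: float) -> float:
    """Elemental S (t/ha) supplying the same molar amount of S as the chosen
    fraction of the gypsum requirement."""
    return gr_t_per_ha * fraction_of_gr * (M_SULFUR / M_GYPSUM)

# e.g. a hypothetical GR of 8 t/ha: S @ 50% GR corresponds to roughly 0.74 t/ha
print(round(sulfur_dose_t_per_ha(8.0, 0.50), 2))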
The crop was harvested in the last week of March and data regarding plant height, number of tillers, spike length, 1000-grain weight, straw and grain yield were recorded. In the same layout, during the 1st week of July, the pearl millet variety "Pioneer" was sown in lines with a rabi drill. Fertilizers @ 80-60-60 NPK kg ha-1 were applied. All phosphorus and potassium were applied basally, while N was applied in three splits. The crop was harvested in the 1st week of November and data regarding plant height, number of tillers, panicle length, grains panicle-1, 1000-grain weight and grain yield were recorded. Composite soil samples were collected at the end of the study and analyzed for pH of the saturation extract (pHs), electrical conductivity of the soil extract (ECe), sodium adsorption ratio (SAR), hydraulic conductivity (HC) and bulk density (BD) (Richards, 1954). Soil pH was measured by using a pH meter (Microcomputer pH-vision Cole Parmer model 05669-20). Electrical conductivity was measured with the help of a conductivity meter (WTW Conduktometer LF 191). The Na+ contents were determined by flame photometer (Digiflame code DV 710) while Ca2+ and Mg2+ were determined titrimetrically. SAR was calculated as follows, where the ionic concentrations of the saturation extract are given in mmole L-1: SAR = Na+ / [(Ca2+ + Mg2+)/2]^(1/2). Soil BD was measured by the core method (Blake and Hartge, 1986). Hydraulic conductivity was measured by using a falling-head hydraulic conductivity apparatus (Richards, 1954).
Statistical Analysis: The collected data were subjected to analysis of variance (ANOVA) and treatment means were compared through the least significant difference (LSD) test at p ≤ 0.05 (Steel et al., 1997) using the STATISTIX 8.1 software package.
Results
Effect of press mud and sulfur on wheat crop: Application of amendments, either alone or in combination, significantly affected the growth attributes of the wheat crop (Table 1). Pooled data of three seasons showed that PM or S had a positive effect on the plant height of the wheat crop, and a significant increase in plant height was observed when both amendments were used in combination. The tallest plants (70.40 cm) were observed where PM @ 10 t ha-1 was applied with S @ 50% GR, followed by PM @ 20 t ha-1, whereas the control treatment (without amendment) showed the minimum plant height of 63.33 cm. Similarly, the maximum number of tillers (144.67) was recorded with integrated use of PM and S (T5) and the minimum number of tillers for wheat (131.67) was recorded in T1. Data on spike length showed that the highest value (9.16 cm) was recorded for the combined application of PM + S in T5; however, it was non-significant compared with PM @ 20 t ha-1. At the same time, the minimum spike length (7.16 cm) was observed where no amendment was applied. Data regarding the 1000-grain weight in Table 2 show that the maximum grain weight (29.10 g) was recorded with PM @ 10 t ha-1 + S @ 50% GR, but it was statistically non-significant compared with PM @ 20 t ha-1. PM and S significantly improved the grain and straw yield of the wheat crop and the effect was more remarkable where both amendments were applied together. Maximum straw (2.11 t ha-1) and grain (2.81 t ha-1) yields were produced where S @ 50% GR and PM @ 10 t ha-1 were applied in combination; however, these values were significantly different only from the control, with no significant difference from the treatments where these amendments were applied individually.
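The SAR formula given in the Methods above can be computed directly from the measured ion concentrations; the minimal Python sketch below implements it. The ion concentrations used are hypothetical values in mmole L-1, not measurements from this trial.

from math import sqrt

def sar(na: float, ca: float, mg: float) -> float:
    """Sodium adsorption ratio, with Na+, Ca2+ and Mg2+ in mmole L-1."""
    return na / sqrt((ca + mg) / 2.0)

# e.g. a made-up saline-sodic extract: Na+ = 45, Ca2+ = 6, Mg2+ = 4 mmole L-1
print(round(sar(45.0, 6.0, 4.0), 1))   # ≈ 20.1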
Effect of press mud and sulfur on pearl millet crop: Data for the succeeding pearl millet crop revealed that growth and yield parameters responded positively to added sulfur and press mud, either alone or in combination (Table 3). The highest value for plant height (219.67 cm) was noted with the addition of PM @ 20 t ha-1, which was not significantly different from PM @ 10 t ha-1 + S @ 50% GR and PM @ 15 t ha-1 + S @ 25% GR. On the contrary, the minimum plant height of 178 cm was documented in the control (no amendment). Similarly, the maximum number of tillers (43.33) was produced with application of PM @ 20 t ha-1, statistically at par with PM @ 10 t ha-1 + S @ 50% GR, whereas the minimum number of tillers (35) was recorded in the control. Combined application of sulfur (S @ 50% GR) and press mud (PM @ 10 t ha-1) significantly increased panicle length (28.73 cm) and grains panicle-1 (1890) compared with all other treatments, while the minimum values for panicle length (25.40 cm) and grains panicle-1 (1606.7) were recorded in the treatment where no amendment was applied, i.e., the control. Results (Table 4) also showed that the maximum 1000-grain weight (11.73 g) and grain yield (1.65 t ha-1) were produced with integrated use of PM @ 10 t ha-1 and S @ 50% GR, statistically non-significant compared with PM @ 20 t ha-1. At the same time, the minimum 1000-grain weight (9.38 g) and grain yield (1.29 t ha-1) were noted in T1 (without any amendment).
Effect of press mud and sulfur on soil properties: Sulfur and press mud application, either alone or in combination, significantly improved the soil properties of the saline-sodic field, as depicted in Table 5. With respect to soil pH, the maximum decrease of 5.23% over its initial value was documented with PM @ 20 t ha-1, while combined use of S and PM decreased the pH by 4.57% as compared to the initial value at the start of the study. The minimum reduction (1.78%) in pH was observed in the control. Similarly, maximum reductions of 15.26% and 56.26% were observed for EC and SAR, respectively, with combined use of S @ 50% GR and PM @ 10 t ha-1. Soil physical properties, in terms of bulk density (BD) and hydraulic conductivity (HC), were substantially improved with the addition of sulfur and press mud. Co-application of PM @ 10 t ha-1 and S @ 50% GR decreased the BD by 10.11% over its initial value, whereas the minimum decrement of 1.78% was observed in T1 (Fig. 1). HC also increased remarkably with application of the amendments, and the maximum increase (32.5%) was recorded in the treatments that received S @ 50% GR and PM @ 10 t ha-1 (T5) and S @ 100% GR (T3), while the minimum increase (5%) was observed in the control.
Table 2: Effect of press mud and sulfur on growth of wheat (average of three seasons).
Table 3: Effect of press mud and sulfur on growth of pearl millet (average of three seasons).
Table 4: Effect of press mud and sulfur on growth of pearl millet (average of three seasons).
Fig. 2: Effect of press mud and sulfur on hydraulic conductivity (cm hr-1) of soil at the end of the study.
Discussion
Removal of Na+ out of the root zone through organic or inorganic amendments is the most familiar method for reclamation of sodic or saline-sodic soils (Feizi et al., 2010). In the current study, results revealed that varying levels of PM and S significantly increased the growth and yield of wheat and pearl millet compared with the non-amended soil; however, effects were more pronounced with integrated use of PM and S than with their individual application.
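The treatment comparisons above follow the procedure described in the statistical-analysis section: one-way ANOVA followed by an LSD test at p ≤ 0.05. The sketch below reproduces that procedure on invented plot-level grain yields (three replicates per treatment); it assumes a completely randomized layout for the pooled error term, which may differ from the actual field design.

import numpy as np
from scipy import stats

groups = {                      # hypothetical grain yields (t/ha), 3 replicates each
    "control":            [1.80, 1.90, 1.92],
    "S @ 50% GR":         [2.35, 2.40, 2.30],
    "PM 10 t/ha + S 50%": [2.78, 2.85, 2.80],
}
data = list(groups.values())
f_stat, p_value = stats.f_oneway(*data)

# Least significant difference from the pooled (within-group) error mean square
n_rep  = len(data[0])
df_err = sum(len(g) - 1 for g in data)
mse    = sum(np.var(g, ddof=1) * (len(g) - 1) for g in data) / df_err
lsd    = stats.t.ppf(0.975, df_err) * np.sqrt(2 * mse / n_rep)
print(f"ANOVA p = {p_value:.4f}, LSD(0.05) = {lsd:.3f} t/ha")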
Pooled data of three seasons showed that S @ 50% GR + PM @ 10 t ha-1 proved superior in increasing the plant height, number of tillers, 1000-grain weight and grain yield of the wheat and pearl millet crops. This improved growth and yield performance of wheat and pearl millet in the treatment receiving combined application of PM and S can be explained by the ameliorative and nutritional properties of these amendments that counteract the detrimental effects of salinity and sodicity. Press mud, being a rich source of calcium and phosphate (Thai et al., 2015), organic matter and total nitrogen (Said et al., 2010), and zinc and copper (Avishek, D., et al., 2018), is anticipated to increase soil fertility. Due to its favorable effects on soil health and microbial activity, it is also considered a good soil conditioner (Shankaraiah and Murthy, 2005). The addition of press mud increased the organic matter and nutrient availability and reduced the uptake of toxic ions, resulting in improved growth and yield of crops (Azhar et al., 2019). Application of PM @ 15 t ha-1 with the recommended dose of chemical fertilizer increased the yield of sugarcane by up to 21% (Shankaraiah and Murthy, 2005). Similarly, Imran et al. (2021) observed an increase of 77% in wheat grain yield in salt-affected soil with application of PM @ 15 g kg-1 soil. Furthermore, press mud is a metabolizable amendment and generates CO2 and organic acids on decomposition, which dissolve the native CaCO3 and produce soluble Ca2+ that replaces Na+ on exchange sites and ultimately reduces soil sodification (Sheoran et al., 2021). Press mud, due to its chelation ability, adsorbs toxic metals (Mahmood, 2010) and reduces the uptake of toxic Na, consequently improving crop growth and productivity (Saleem et al., 2015). Our results are in line with Muhammad and Khattak (2011), who reported an increase of 27 to 36% in wheat grain yield with application of press mud, gypsum and gypsum + press mud in saline-sodic soil. Sulfur is one of the essential macronutrients required for plant growth in the same amount as P (Ali et al., 2008). For optimum crop yield it is very important to supply balanced fertilization of S along with other essential nutrients (Jez, 2008), as it is involved in the synthesis of chlorophyll and vitamins (Kacar and Katkat, 2007; Abdallah et al., 2010). S application in salt-affected soils improves the quality and quantity of produce by increasing the uptake of NPK, Ca and Zn and inhibiting the uptake of toxic Na and Cl (Badr uz Zaman et al., 2002; Mahmood et al., 2009). In calcareous soil, S is microbially oxidized into sulfuric acid which mobilizes the CaCO3 to form CaSO4 (El-Hady and Shaaban, 2010); this provides the soluble Ca2+ to replace Na+ on the exchange sites, consequently reducing soil sodicity (Abdelhamid et al., 2013), improving soil health and producing favorable conditions conducive to crop growth. The ameliorative role of sulfur in saline-sodic soils has been reported in sunflower (Badr uz Zaman et al., 2002), wheat (Ali et al., 2012), maize (Manesh et al., 2013), and rice and wheat (Ahmed et al., 2016, 2017). Integrated use of S and PM substantially improved the soil properties, and a sharp decline in salinity indices (EC, pH and SAR) was observed. Soil pH is an indicator of the plant growth medium, depicting the phase and fate of nutrients and the salinity/sodicity status; thus, any change in soil pH is very important.
Results of the current study revealed that all the amendments lowered the final value of soil pH; however, integrated use of S and PM proved better in lowering the pH, and the maximum reduction of 4.57% was observed in the treatment receiving S @ 50% GR and PM @ 10 t ha-1. Elemental S is believed to be a cost-effective ameliorant for reducing the pH value of the growth medium (Roig et al., 2004; Tarek et al., 2013), and this lower value of pH may be explained by the generation of H2SO4 in calcareous soil due to the oxidation of added S (Singh et al., 2006). In addition, the release of Ca from CaSO4, which replaces Na+ on exchange sites, is among the major reasons for the substantial decrease of SAR and EC (Kubenkulov et al., 2013; Abdelhamid et al., 2013). Furthermore, press mud releases organic acids and discharges H+ ions that play a key role in neutralizing soil alkalinity (Sheoran et al., 2021c) and reduce soil pH, EC and SAR. Our results are strengthened by the previous findings of Ahmed et al. (2016, 2017) that sulfur application is a very effective reclamation strategy for improving the grain yield of rice-wheat crops and the soil health of saline-sodic soil. The reduction in sodicity and salinity was also reflected in soil physical properties, e.g., bulk density and hydraulic conductivity. Usually, saline-sodic soils are dispersed and compact (high BD) due to dominant Na+. The experimental soil had a BD of 1.68 Mg m-3, indicating its compactness, which was substantially improved with application of the amendments, and the maximum reduction of 10.11% was observed with combined application of S @ 50% GR and PM @ 10 t ha-1. Similarly, hydraulic conductivity also increased manifold at the end of the study, and the most effective treatment was S @ 50% GR and PM @ 10 t ha-1 with a 32.5% increase over its initial value. The increased value of HC and the reduction in BD with S and PM may be associated with the supplementation of additional organic matter through press mud (Basak et al., 2021), leading to improved soil physical properties. Press mud increases aggregate stability and soil porosity (Marinari et al., 2000; Clark et al., 2007), and the gypsum synthesized by added sulfur improves flocculation of dispersed soil (Qadir et al., 2002) and enhances removal of Na (Yadav et al., 2009), which in turn reduces soil compactness (low BD) and increases the hydraulic conductivity. Gypsum produced higher SO4-S contents in soil than elemental S, which might be due to its binding nature that has long-lasting effects on soil. Well-drained, light-textured soils in high-rainfall areas have low SO4-S content and require S fertilization for optimum crop production (Yunas et al., 2010). Combined application of inorganic and organic sources induced swift reclamation and healthier plant growth and positively influenced soil health (Shaaban et al., 2013; Khalil et al., 2015; Anwar Zaka et al., 2018).
Conclusion
Scaling up the use of organic and inorganic resources seems to be a rational solution for the restoration of sodicity-induced degraded lands. Integrated use of sulfur and press mud ameliorated the properties of the saline-sodic soil and reduced the soil pH, EC, SAR and BD and increased HC, which may improve the growth and yield of wheat and pearl millet. Addition of S to soil is imperative for obtaining higher yields, particularly for high-S-demanding oilseed crops. However, integrated application of S @ 50% GR and PM @ 10 t ha-1 demonstrated more positive effects on soil health and crop resilience. Impacts of elemental S applied under various
Ligand Binding to the FA3-FA4 Cleft Inhibits the Esterase-Like Activity of Human Serum Albumin
The hydrolysis of 4-nitrophenyl esters of hexanoate (NphOHe) and decanoate (NphODe) by human serum albumin (HSA) at Tyr411, located at the FA3-FA4 site, has been investigated between pH 5.8 and 9.5, at 22.0°C. Values of Ks, k+2, and k+2/Ks obtained at [HSA] ≥ 5×[NphOXx] and [NphOXx] ≥ 5×[HSA] (Xx is NphOHe or NphODe) match very well each other; moreover, the deacylation step turns out to be the rate-limiting step in catalysis (i.e., k+3 << k+2). The pH dependence of the kinetic parameters for the hydrolysis of NphOHe and NphODe can be described by the acidic pKa shift of a single amino acid residue, which varies from 8.9 in the free HSA to 7.6 and 7.0 in the HSA:NphOHe and HSA:NphODe complex, respectively; the pKa shift appears to be correlated to the length of the fatty acid tail of the substrate. The inhibition of the HSA-Tyr411-catalyzed hydrolysis of NphOHe, NphODe, and 4-nitrophenyl myristate (NphOMy) by five inhibitors (i.e., diazepam, diflunisal, ibuprofen, 3-indoxyl-sulfate, and propofol) has been investigated at pH 7.5 and 22.0°C, resulting competitive. The affinity of diazepam, diflunisal, ibuprofen, 3-indoxyl-sulfate, and propofol for HSA reflects the selectivity of the FA3-FA4 cleft. Under conditions where Tyr411 is not acylated, the molar fraction of diazepam, diflunisal, ibuprofen, and 3-indoxyl-sulfate bound to HSA is higher than 0.9, whereas the molar fraction of propofol bound to HSA is ca. 0.5. At Lys199, the substrate (e.g., acetylsalicylic acid, trinitrobenzene-sulfonates, and penicillin) is cleaved into two products; while one product is released, the other one binds covalently to the Lys199 residue [1,4,7]. Although the molecular mechanism underlying the Lys199 acetylation is unknown, it seems that its ability to attack the substrate is due to the proximity of the Lys195 residue, these two residues playing a combined and comparable chemical role. In fact, the basic form of Lys199 is likely connected to the acid form of Lys195 through a network of H-bonding water molecules with a donor-acceptor character. The presence of these water bridges is relevant for stabilizing the configuration of the FA7 site and/or promoting a potential Lys195-Lys199 proton-transfer process [6]. Since Lys199 is placed at the entrance of the FA7 site (i.e., Sudlow's site I), ligand binding inhibits the Lys199-dependent esterase activity [3,8]. The catalytic mechanism involving the Tyr411 residue appears to be substrate-dependent. Of note, the hydrolysis of the most suitable substrate 4-nitrophenyl propionate leads to the release of both 4-nitrophenol and propionate [9]. This mechanism also applies to the hydrolysis of N-trans-cinnamoylimidazoles [10] and 4-nitrophenyl esters of amino acids [11]. However, the Tyr411-assisted hydrolysis of NphOAc and NphOMy leads to the release of 4-nitrophenol and to Tyr411 acetylation and myristoylation, respectively [12,13]. The strong nucleophilic nature of the phenolic oxygen of the Tyr411 residue is due to the close proximity of the Arg410 guanidine moiety that electrostatically stabilizes the reactive anionic form of the Tyr411 residue [5,14]. Since both the Arg410 and Tyr411 residues are placed in the FA3-FA4 site (i.e., Sudlow's site II), ligand binding inhibits the HSA esterase activity [3,9,12,13]. Remarkably, the esterase activity of HSA could play a role in the inactivation of several toxins including organophosphorus compounds [3].
The present study largely extends previous investigations concerning the hydrolysis of 4-nitrophenyl esters by HSA [9,12-14]. In particular, kinetics of the HSA pseudo-enzymatic hydrolysis of 4-nitrophenyl hexanoate (NphOHe) and 4-nitrophenyl decanoate (NphODe) have been investigated between pH 5.8 and 9.5. The rationale behind this selection is to investigate how the FA tail length affects the pKa values of the ionizing group that modulates the catalysis. Furthermore, diazepam, diflunisal, ibuprofen, 3-indoxyl-sulfate, and propofol have been reported to inhibit competitively the HSA-Tyr411-catalyzed hydrolysis of NphOHe, NphODe, and 4-nitrophenyl myristate (NphOMy) (see [12] and present study). Remarkably, the molar fraction of diazepam, diflunisal, ibuprofen, and 3-indoxyl-sulfate bound to non-acylated HSA is higher than 0.9, whereas the molar fraction of propofol bound to HSA is ca. 0.5. the "dead-time" of the rapid-mixing stopped-flow apparatus). Moreover, the rate of NphOH release from NphOHe and NphODe catalyzed by HSA-Tyr411 is unaffected by the addition of NphOH (up to 1.0×10-4 M) to the reaction mixtures (data not shown), indicating that the acylation step is essentially irreversible. If NphOH had affected the HSA-Tyr411-catalyzed hydrolysis of NphOHe and NphODe, the classical product (i.e., NphOH) inhibition behavior would have been observed. Under conditions where [HSA] ≥ 5×[NphOXx], values of Ks and k+2 for the HSA-Tyr411-catalyzed hydrolysis of NphOHe and NphODe (see Table 1) were obtained from the hyperbolic plots of kapp as a function of the HSA concentration (Fig. 2, panels C and D) according to equation (2) [9,12,13]. When [NphOXx] ≥ 5×[HSA], the reaction of NphOHe and NphODe with HSA displays a mono-exponential time course (Fig. 2, panels E and F). Values of the pseudo-first-order rate constant for the HSA-Tyr411-catalyzed hydrolysis of NphOHe and NphODe (i.e., of NphOH release; kobs) were obtained according to equation (3) [9,12,13]. When k+2 ≥ 5×k+3, the differential equations arising from Fig. 1 may be solved [12,13,16,17] to describe the time course of NphOH release at the early stages of the reaction; the resulting expression is given in eqs (4)-(6) [12,13,16,17].
Fig. 1. The minimum three-step mechanism for the HSA-Tyr411-catalyzed hydrolysis of NphOHe, NphODe, and NphOMy: HSA is the substrate-free protein, NphOXx is the substrate, HSA:NphOXx is the reversible protein-substrate complex, HSA-OXx is considered to be an ester formed between the acyl moiety of the substrate and the O atom of the Tyr411 phenoxyl group [14], XxOH is hexanoate or decanoate or myristate, Ks is the pre-equilibrium constant for the formation of the HSA:NphOXx complex, k+2 is the first-order acylation rate constant, and k+3 is the first-order deacylation rate constant. Xx indicates Ac or He or De or My.
As predicted from eqs (4)-(6) (see Table 1), the HSA:NphOXx:NphOH stoichiometry is 1:1:1. Moreover, the time course of the "burst" phase of NphOH release is a first-order process for more than 95% of its course. Values of Ks and k+2 (Table 1) were also determined from hyperbolic plots of kobs versus [NphOXx] (Fig. 2, panels G and H) according to equation (6) [16,17]. Under all the experimental conditions, the y-intercept of the hyperbola described by equation (6) was < 2×10-6 s-1, thus indicating that the value of k+3 is at least 100-fold smaller than that of kobs obtained at the lowest NphOHe and NphODe concentration (i.e., k+3 < 2×10-6 s-1).
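As a rough numerical illustration of how Ks and k+2 can be recovered from the hyperbolic dependence of kapp on [HSA] described above, the Python sketch below fits invented data points to the rectangular-hyperbola form kapp = k+2·[HSA]/(Ks + [HSA]). This functional form is assumed from the pre-equilibrium mechanism of Fig. 1 rather than copied from equation (2), and the concentrations and rates are purely illustrative.

import numpy as np
from scipy.optimize import curve_fit

def k_app(hsa, k2, Ks):
    # assumed hyperbolic dependence of the apparent rate constant on [HSA]
    return k2 * hsa / (Ks + hsa)

hsa_conc = np.array([2e-6, 5e-6, 1e-5, 2e-5, 5e-5, 1e-4])   # M, hypothetical
k_obs    = np.array([0.7, 1.7, 2.8, 4.2, 5.9, 6.7])         # s-1, hypothetical

(k2, Ks), _ = curve_fit(k_app, hsa_conc, k_obs, p0=(8.0, 2e-5))
print(f"k+2 ≈ {k2:.2f} s-1, Ks ≈ {Ks:.2e} M")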
According to linked functions [12,13,16-18], the pH dependence of Ks reflects the acidic pKa shift of a single amino acid residue from free HSA (i.e., pKunl) to the HSA:NphOHe and HSA:NphODe complexes (i.e., pKlig). Moreover, the pH dependence of k+2 and k+2/Ks reflects the acid-base equilibrium of one apparent ionizing group in the HSA:NphOHe and HSA:NphODe complexes (i.e., pKlig) and in free HSA (i.e., pKunl), respectively. As expected [12,13,16-18], the pKa value of free HSA (i.e., pKunl) is independent of the substrate, whereas the pKa values of the HSA:NphOHe and HSA:NphODe complexes (i.e., pKlig) depend on the substrate (Table 2).
Discussion
The hydrolysis of NphOAc, NphOHe, NphODe, and NphOMy by HSA-Tyr411 (see [12-15] and present study) is reminiscent of that observed for acylating agents with proteases [27]. In fact, NphOAc [13], NphOHe (present study), NphODe (present study), and NphOMy [12] act as suicide substrates of HSA-Tyr411, values of the deacylation rate constant for all four substrates (i.e., k+3) being lower by several orders of magnitude than those of the acylation rate constant (i.e., k+2). Remarkably, HSA acylation appears to modulate ligand binding. In fact, HSA acylation by aspirin [28] increases the affinity of phenylbutazone and inhibits bilirubin and prostaglandin binding, thus accelerating the clearance of prostaglandins, which represents an additional mechanism of the aspirin anti-inflammatory action [29].
Fig. 6. The PDB ID codes of the HSA:diazepam, :diflunisal, :ibuprofen, :3-indoxyl-sulfate, and :propofol complexes are 2BXE, 2BXF, 2BXG, 2BXH, and 1E7A, respectively [8,30]. The pictures were drawn with the UCSF-Chimera package [46]. For details, see text.
Kinetics for the hydrolysis of NphOHe and NphODe by HSA are pH dependent, reflecting the acidic pKa shift of an apparently single ionizing group of HSA upon substrate binding. This could reflect the reduced solvent accessibility of the Tyr411 residue, representing the primary esterase site of HSA (see [9,12-14]), although long-range effects could not be ruled out. The Tyr411 catalytic residue is located in the FA3-FA4 cleft, which is made up of an apolar region forming the FA3 site and a polar patch contributing to the FA4 site. The polar patch is centered on the Tyr411 side chain and includes the Arg410, Lys414, and Ser489 residues [8,30,31]. The inspection of the three-dimensional structure of ligand-free HSA [32] and of the molecular model of the HSA:4-nitrophenyl propionate complex [9] suggests that the observed pH effects (Fig. 3) could reflect the acidic pKa shift of the Tyr411 residue upon substrate binding. This would render more stable the negative charge on the phenoxyl O atom of Tyr411, which appears to be hydrogen bonded to the carbonyl O atom of 4-nitrophenyl propionate [9], potentiating its nucleophilic role as an electron donor in the pseudo-esterase activity of HSA. Of note, the acidic shift of the pKa value of the ionizing group affecting catalysis, from 8.9±0.1 in ligand-free HSA to 8.1±0.2, 7.6±0.2, 7.0±0.2, and 6.8±0.2 in the HSA:NphOAc, HSA:NphOHe, HSA:NphODe, and HSA:NphOMy complexes (see Table 2), respectively, depends on the length of the fatty acid tail.
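A hedged sketch of how such an apparent pKa can be extracted from the pH profile is given below, assuming that the limiting value of k+2 is reached once the single ionizing group is deprotonated, i.e. k(pH) = k_lim / (1 + 10**(pKa - pH)). The pH values and rate constants are invented for illustration and are not the data of Table 2.

import numpy as np
from scipy.optimize import curve_fit

def single_pka(ph, k_lim, pka):
    # single-ionization titration curve for a basic-form-dependent rate
    return k_lim / (1.0 + 10.0 ** (pka - ph))

ph_values = np.array([5.8, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5])   # hypothetical
k2_values = np.array([0.2, 1.0, 2.2, 4.2, 6.3, 7.5, 8.1, 8.4])   # hypothetical, s-1

(k_lim, pka), _ = curve_fit(single_pka, ph_values, k2_values, p0=(8.5, 7.5))
print(f"limiting k+2 ≈ {k_lim:.1f} s-1, apparent pKa ≈ {pka:.2f}")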
Therefore, it appears that the increase of the FA tail length brings about a progressive reduction of the water solvent accessibility, thus enhancing the hydrophobicity of the catalytic site and leading to a decreased pKa of the ionizing group modulating the catalysis. Of note, the pH dependence of the Tyr411-associated esterase activity parallels the pH-dependent neutral-to-basic allosteric transition of HSA [3]. Diazepam, diflunisal, ibuprofen, 3-indoxyl-sulfate, and/or propofol inhibit competitively the hydrolysis of NphOAc [13], NphOHe (present study), NphODe (present study), and NphOMy (see [12] and present study) (Fig. 5) by impairing the accessibility of 4-nitrophenyl esters to the Tyr411 catalytic center. In particular, diazepam, diflunisal, ibuprofen, and 3-indoxyl-sulfate bind to the center of the FA3-FA4 cleft, with one O atom being hydrogen bonded to the Tyr411 OH group (Fig. 6). On the other hand, propofol binds to the apolar region of the FA3-FA4 cleft, with the phenolic OH group making a hydrogen bond with the carbonyl O atom of Leu430. Moreover, the aromatic ring of propofol is sandwiched between the Asn391 and Leu453 side chains. Furthermore, one of the two isopropyl groups of propofol makes several apolar contacts at one end of the pocket, whereas the other is solvent exposed at the cleft entrance, making close contacts with Asn391, Leu407, Arg410, and Tyr411 (Fig. 6) [8,30,31]. The different KI values for diazepam, diflunisal, ibuprofen, 3-indoxyl-sulfate, and propofol binding to HSA (Fig. 5) agree with the selectivity of the FA3-FA4 cleft of HSA, which can be ascribed to the presence of a basic polar patch located at one end of the apolar FA3-FA4 cleft. Remarkably, diazepam, diflunisal, ibuprofen, and 3-indoxyl-sulfate are oriented with at least one O atom in the vicinity of the polar patch. On the other hand, the single polar hydroxyl group in the center of propofol does not interact with the polar patch of the FA3-FA4 cleft. Moreover, the FA3-FA4 cleft appears to adopt different ligand-dependent shapes, thus paying different free energy contributions for structural rearrangements [8]. Since the plasma levels of diflunisal, ibuprofen, and 3-indoxyl-sulfate (see above) are higher than the values of KI for ligand binding to HSA by about 100-fold (see Fig. 4 and Fig. 5), the molar fraction of diflunisal, ibuprofen, and 3-indoxyl-sulfate bound to HSA is higher than 0.9, according to equation (11). Although the commonly reported diazepam and propofol plasma levels (see above) are lower than the corresponding values of KI for drug binding to HSA (see Fig. 4 and Fig. 5) by about 5- and 100-fold, respectively, the plasma HSA concentration (see above) is higher than KI by about 70- and 2-fold, respectively. According to equation (11), the molar fraction of diazepam and propofol bound to HSA in plasma is higher than 0.9 and 0.5, respectively. As a whole, the data reported here highlight the role of the drugs diazepam, diflunisal, ibuprofen, and propofol, as well as of the uremic toxin 3-indoxyl-sulfate, in competitively inhibiting the pseudo-esterase activity of HSA, Tyr411 representing the nucleophile. This aspect is appropriate since HSA acylation appears to modulate ligand binding [28,29] and the detoxification of several compounds [2,3]. Last, HSA not only acts as a carrier and as a detoxifier but also displays transient drug- and toxin-based properties, representing a case for "chronosteric effects" [45].
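The bound molar fractions quoted above depend only on the drug concentration, the HSA concentration and the dissociation constant; the sketch below computes them with the standard exact 1:1 binding solution, used here as a stand-in for equation (11), which may differ in detail. The concentrations are hypothetical plasma-like values, not those of Fig. 4 and Fig. 5.

from math import sqrt

def fraction_bound(drug_tot: float, hsa_tot: float, K: float) -> float:
    """Molar fraction of total drug bound to HSA for a 1:1 complex with
    dissociation constant K (all concentrations in the same molar units)."""
    b = drug_tot + hsa_tot + K
    bound = (b - sqrt(b * b - 4.0 * drug_tot * hsa_tot)) / 2.0
    return bound / drug_tot

# e.g. drug 5 uM, HSA 600 uM, K = 40 uM  ->  bound fraction ≈ 0.94
print(round(fraction_bound(5e-6, 6e-4, 4e-5), 2))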
This opens the scenario toward the possibility of a drug- and toxin-dependent multiplicity of roles for HSA.
A Novel Hybrid Cryptosystem for Secure Streaming of High Efficiency H.265 Compressed Videos in IoT Multimedia Applications
In this modern age of innovative technologies like big data processing, cloud computing, and the Internet of Things, the utilization of multimedia information is growing daily. In contrast to other forms of multimedia, videos are extensively utilized and streamed over the Internet and communication networks in numerous Internet of Multimedia Things (IoMT) applications. Consequently, there is an immense necessity to achieve secure video transmission over modern communication networks due to the third-party exploitation and falsification of transmitted and stored digital multimedia data. The present methods for secure communication of multimedia content between clouds and mobile devices have constraints in terms of processing load, memory support, data size, and battery power. These methods are not the optimum solutions for large-sized multimedia content and are not appropriate for the restricted resources of mobile devices and clouds. High-Efficiency Video Coding (HEVC) is the latest and most modern video codec standard, introduced for efficiently storing and streaming high-resolution videos with suitable size and higher quality. In this paper, a novel hybrid cryptosystem combining DNA (Deoxyribonucleic Acid) sequences, the Arnold chaotic map, and Mandelbrot sets is suggested for secure streaming of compressed HEVC streams. Firstly, the high-resolution videos are encoded using the H.265/HEVC codec to achieve efficient compression performance. Subsequently, the suggested Arnold chaotic map ciphering process is employed individually on the three channels (Y, U, and V) of the compressed HEVC frame. Then, the DNA encoding sequences are applied to the primary encrypted frames resulting from the previous chaotic ciphering process. After that, a modified Mandelbrot set-based conditional shift process is presented to effectively introduce confusion features on the Y, U, and V channels of the resulting ciphered frames. Massive simulation results and security analysis are performed to substantiate that the suggested HEVC cryptosystem reveals astonishing robustness and security accomplishment in contrast to the literature cryptosystems.
I. INTRODUCTION
Internet of Things (IoT) systems have enormous computation and processing costs, and deliver massive amounts of multimedia data, specifically upon storage utilizing cloud systems [1]. Therefore, the new expansion in the processing resources of smart devices has developed intelligent IoT services, supporting the connection of distributed nodes to analyze, perceive and collect essential multimedia data from the surrounding environment [2]. Wireless multimedia networks are part of these IoT-supported services, which comprise visual sensors (cameras) that monitor certain actions from various intersecting observations by continuously acquiring video frames, thus creating a huge amount of multimedia data with considerable redundancy. It is commonly accepted in the research community of multimedia communication applications and services that the collected multimedia data should be pre-processed to obtain the important and informative content before multimedia streaming [3]. So, it is not preferable to transmit the visual data through the communication channels without processing (e.g., compression); this is unrealistic due to energy and bandwidth limitations.
Therefore, there is a mandatory need for an efficient compression process for multimedia data before their streaming over bandwidth-limited communication channels. The HEVC standard is the most modern video codec [4], which is utilized for compressing videos, especially high-resolution videos. Thus, it can offer sufficient characteristics customized to various IoT multimedia services and applications. In contrast with its antecedent, the H.264/AVC (Advanced Video Coding) video codec, the HEVC codec accomplishes a 50% compression ratio with great bit rate reduction by exploiting its improved prediction features of temporal and spatial estimation processes [5]. The rapid improvement of communication networks and Internet technologies produces further digital multimedia content. Thus, the privacy and security of the multimedia data are of utmost prominence with the growth in veracity, volume, and velocity of the developed multimedia services and applications. The cryptography process conventionally plays a vital and essential role in protecting multimedia data [6]-[10]. In the cryptography process, multimedia data are ciphered to be converted from an intelligible form to an unintelligible one. Therefore, after the ciphering process, multimedia data become worthless to adversaries and intruders, and consequently, they are maintained and protected [11], [12]. In preceding years, numerous cryptography schemes have been suggested; however, most of them have some restrictions. Some schemes are extremely robust but have high processing and computational cost, and some schemes are energy efficient and extraordinarily uncomplicated but do not deliver adequate security performance [13]-[23]. Digital images and video frames have a high relationship and correlation amongst neighboring pixels. Therefore, the previously introduced traditional cryptography schemes, like AES (Advanced Encryption Standard), DES (Data Encryption Standard), and RSA (Rivest-Shamir-Adleman) [6], [9], are not appropriate for achieving multimedia ciphering with great security efficiency and robustness performance. Lately, various categories of multimedia ciphering schemes based on cellular automata, optical transforms, the Fourier transform, chaotic systems, the magic cube, the wavelet transform, etc. have been examined and researched [7], [8], [10], [17], [19], [22], [23]. These cryptosystem categories are divided into two classes of cryptography algorithms based on the methodology employed to model the ciphering and deciphering procedures. A chaotic cryptosystem is the application of the arithmetic and statistical principles of chaos maps to generate the chaotic sequences that are employed and exploited in cryptography schemes. Chaos-based multimedia ciphering principally comprises two phases: the confusion (permutation) phase and the diffusion phase [9]. In the permutation phase, the pixel locations are arbitrarily substituted without modifying the authentic values of the pixels. So, this phase creates an undetectable video frame for intruders. However, the video frame is not sufficiently secure after performing only this permutation phase, since it may be retrieved by intruders and adversaries if they repeatedly attempt. So, to enhance privacy, the diffusion phase is urgently required. It principally aims to exchange the pixel values in the whole video frame with other values; this can be carried out through certain functions on the pixels of the video frame that sequentially modify their values using random values of the chaotic sequences generated from the utilized chaotic maps. For further achieving trustworthy security and privacy, the confusion and diffusion phases are iterated a specific number of times. The randomness feature of chaotic maps makes them proper and recommended for the services and applications of multimedia cryptosystems [6], [8], [10]; a minimal sketch of the two phases is given below.
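The sketch below illustrates the two classical phases on one square N×N frame channel: an Arnold cat-map permutation for confusion and a logistic-map keystream XOR for diffusion. It is a generic, simplified construction with illustrative parameters, not the exact Arnold-map/DNA/Mandelbrot design proposed in this paper.

import numpy as np

def arnold_confuse(channel: np.ndarray, iterations: int = 5) -> np.ndarray:
    # classical Arnold cat map (x, y) -> (x + y, x + 2y) mod N, applied repeatedly
    n = channel.shape[0]                      # assumes a square channel
    out = channel.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def logistic_diffuse(channel: np.ndarray, x0: float = 0.61, r: float = 3.99) -> np.ndarray:
    # XOR every pixel with a byte drawn from a logistic-map chaotic sequence
    flat = channel.flatten().astype(np.uint8)
    keystream = np.empty_like(flat)
    x = x0
    for i in range(flat.size):
        x = r * x * (1.0 - x)
        keystream[i] = int(x * 256) % 256
    return (flat ^ keystream).reshape(channel.shape)

y_channel = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # dummy luma block
ciphered = logistic_diffuse(arnold_confuse(y_channel))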
The utilization of chaos principles in the cryptography process was first researched by Robert Matthews in 1989; chaos-based ciphers have since gained much attention, but long-standing concerns about their execution speed and security continue to restrict their implementation. Several chaos-based video and image ciphering techniques have been suggested by numerous researchers all over the world [13], [15], [16], [21]-[30]. Hamidouche et al. [24] suggested a real-time selective HEVC cryptography approach based on the chaos map. In the proposed approach, two distinct chaotic maps are employed, named the STM (Skew Tent Map) and the PWLCM (Piecewise Linear Chaotic Map). The presented approach scrambles a group of sensitive parameters in HEVC frames with lower complexity and delay overheads. Also, it accomplishes both format-conforming video ciphering needs and a constant bitrate. Valli and Ganesan [25] introduced a chaos-based video cryptography scheme utilizing the Ikeda time-delay system and chaotic maps. The proposed cryptography scheme comprises two chaos-based video ciphering methods: the first one is a higher-dimensional 12D chaos map and the second one is a chaos-based Ikeda DDE (Delay Differential Equation), which is appropriate for constructing a real-time, reliable and secure symmetric video ciphering process. So, recently, chaos-based cryptography algorithms have gained extra attention among researchers. They are effective in accomplishing increased speed and highly secure multimedia ciphering because of their wonderful characteristics, such as ergodicity, mixing, randomness, and high sensitivity to control factors and initial conditions. The DNA-based cryptosystem is an additional promising area of cryptography, based on the analysis of DNA encoding rules for creating secure image and video ciphering systems with low processing cost and a long encryption key. Maniyath and Kaiselvan [26] presented a DNA-based cryptosystem for multimedia communication over insecure channels. In the presented cryptosystem, successive XOR operations with DNA calculations are employed to attain more robustness and security. Zhang et al. [27] suggested a coupled map lattices and DNA-based cryptography scheme based on spatio-temporal chaos cryptographic features and characteristics. The DNA computing policy and a one-time pad ciphering strategy are exploited to improve the sensitivity performance against differential, plaintext, statistical, and brute-force attacks. Wang et al. [28] suggested a Lorenz map and DNA permutations-based image ciphering algorithm. In the presented algorithm, the chaotic pseudo-random sequences generated from the 3D Lorenz map are utilized for the ciphering process with long and more secret keys. Also, DNA subtraction/addition and permutation operations are introduced to completely break the pixel correlations and bit planes of the original image to achieve higher sensitivity and resistance against brute-force, differential, and statistical attacks; a toy illustration of DNA-rule encoding and a DNA XOR operation is given below.
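The following sketch shows how a byte can be mapped to a DNA string under one of the eight standard complementary encoding rules (rule 1: 00→A, 01→C, 10→G, 11→T) and how a DNA XOR between a pixel and a key can be defined on the underlying 2-bit codes. Real DNA/chaos ciphers typically switch rules dynamically under chaotic control; this fixed-rule version is only illustrative.

RULE1 = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
INV1  = {base: bits for bits, base in RULE1.items()}

def byte_to_dna(b: int) -> str:
    # split the byte into four 2-bit groups, most significant first
    return "".join(RULE1[(b >> shift) & 0b11] for shift in (6, 4, 2, 0))

def dna_xor(a: str, b: str) -> str:
    # XOR the underlying 2-bit codes and re-encode with the same rule
    return "".join(RULE1[INV1[x] ^ INV1[y]] for x, y in zip(a, b))

pixel_byte, key_byte = 0xC5, 0x3A
print(byte_to_dna(pixel_byte), byte_to_dna(key_byte),
      dna_xor(byte_to_dna(pixel_byte), byte_to_dna(key_byte)))   # TACC ATGG TTTT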
Consequently, DNA encoding-based cryptography procedure has many attractive characteristics for multimedia cryptosystems like massive storage, vast parallelism, and extreme-minimal power consumption. Lots of scientists and researchers have merged the merits of DNA encoding and chaos algorithms to extremely improve the privacy and security of multimedia communication and streaming [30]. In this paper, a novel cost-effective HEVC cryptosystem combining DNA encoding sequences, Arnold chaotic map, and modified Mandelbrot sets is introduced. The key achievement of this paper is to build a DNA based chaos HEVC cryptosystem, which can resist the whole categories of conventional kinds of multimedia attacks. Also, the suggested cryptosystem enhances the whole of the assessment security parameters so that the compressed HEVC frames can be streamed efficiently having no possibility of being exposed/deciphered by the intruders and adversaries. Furthermore, the suggested cryptosystem has a large keyspace, so it is robust against brute-force attacks. Moreover, one of the main features of the introduced cryptosystem that it can encrypt any size of HEVC frames. The rest of the article is coordinated as follows. Section II explains a variety of related works and its vulnerabilities. The preliminary works related to the suggested cryptosystem are discussed in section III. Section IV presents the suggested HEVC cryptosystem. Comparison analysis and experimental security results are investigated in section V. Section VI depicts the conclusions and future suggestions work. II. REVIEW OF RELATED WORKS Cryptography algorithms such as IDEA (International Data Encryption Algorithm), DES, AES, and RSA are not appropriate for multimedia ciphering due to two main considerations: (1) superfluous pixel values and (2) high relationship and correlation amongst pixels in images and video frames [31]. Therefore, numerous cryptography algorithms utilizing DNA encoding and chaotic maps aiming to encrypt images and videos securely and robustly are introduced by several academicians and researchers. A summary of the most recent image and video ciphering techniques is provided in this section. In [32], the authors suggested an enhanced hybrid data hiding and ciphering approach for information protection of HEVC streams. The proposed hybrid approach exploits the syntax elements of the sign of motion vector difference (MVD), the sign of quantized transform coefficient (QTC), and the magnitude of MVD of the compressed HEVC streams for data hiding and ciphering processes. The main advantage of the suggested approach is that it saves the format compliance of the transmitted bitstreams and keeping the video bit rate unaffected. Also, it introduces higher embedding capacity with efficient extraction of embedded and encrypted information. In [33], an efficient HEVC ciphering scheme for scalable video transmission is introduced. The introduced scheme encrypts the content-adaptive binary arithmetic coding (CABAC) parameters of the coded block flag, macro-block types, transform coefficient (TCs), delta quantization parameters (dQPs) and MVD of the compressed HEVC streams. A simple Exclusive OR process based on pseudo-random number generator is employed for encryption purposes. The suggested ciphering scheme has the merit of reducing the bandwidth and ciphering latency. Tew et al. 
[34] suggested region-of-interest-based three different types of ciphering schemes for the significant values of the binary bin symbols, suffixes in chosen coding tree unit (CTU), and skip transforms signals of the encoded slices of the HEVC streams. The suggested schemes are employed without introducing parsing overhead throughout the encryption and decryption procedures. Yang et al. [35] introduced improved format compliance ciphering technique to encrypt the compressed bitstreams of the HEVC sequences. The suggested technique chooses the highly important syntax elements (cu_qp_de/ta_abs, mvd_sign_flag, the suffix of abs_mvd_minus2, and the coeff_sign_flag) of the HEVC bitstreams to be encrypted utilizing the advanced encryption standard (AES) algorithm. The suggested technique presented acceptable security parameters with relatively low complexity. Ma et al. [36] introduced a security-maintaining motion estimation scheme for HEVC streams. Both compression and ciphering processes are employed to save the format compliance of the transmitted HEVC data, where the ciphered data have an identical bit rate as the original encoded HEVC data. The major properties of the suggested encoding-ciphering technique are achieving higher encoding ratio desirable and lower processing complexity. Thiyagarajan et al. [37] presented a low overhead and energy-concerned ciphering for HEVC communication in IoMT to secure and scramble the structural video syntax elements of intra-prediction modes, the texture video syntax elements of transform coefficients and the motion video syntax elements of the motion associated codewords. The presented IoMT-based HEVC ciphering algorithm modifies and adapts the choice of the aforesaid video syntaxes to be encoded corresponding to the motion energy, texture, and structure present in every HEVC frame. So, the suggested algorithm adapts between two cases of high and low energy levels of the frames in HEVC sequences based on an adaptive and estimated threshold. In the case of a high-energy video frame, the proposed ciphering algorithm encrypts the completely HEVC syntax components. In the case of a low-energy video frame, the proposed ciphering algorithm encrypts alternative HEVC syntax components for accomplishing minimal ciphering overhead. The extensive simulation results proved that the suggested IoMT-based ciphering algorithm powerfully decreases the ciphering overhead with an acceptable security degree. In [38], a real-time end-to-end region-of-interest (ROI)-based HEVC ciphering algorithm is introduced. The suggested ciphering algorithm divides the input HEVC frame into discrete rectangular regions to obtain ROI areas from the background of the video frame, and only these extracted ROI regions are ciphered. The selective HEVC ciphering algorithm encrypts a set of syntax video components that maintains the format compliant of the HEVC codec. Consequently, the ciphered video bit-streams can be deciphered with a typical HEVC decoder with only the knowledge of a secret key to decrypt the ROI regions. The obtained results demonstrated that the presented ROI-based HEVC ciphering algorithm can be performed for real-time security applications with achieving miniature complexity expenses and transmission bitrate. In [39], a lightweight IoMT-based selective encryption algorithm for H.264 video communication is proposed. This algorithm encrypts the chosen syntax video components with the exclusive OR (XOR) based on an extended permutation process. 
The results confirmed that the suggested H.264 selective ciphering algorithm delivered considerable privacy with a little complexity and an insignificant bitrate overhead of the ciphering process, which validated that this presented security algorithm is an appropriate option for energy-constrained mobile devices in an IoMT ecosystem. In [40], a lightweight ciphering-based safeguard information sharing and storage utilizing public clouds and HEVC is introduced. The presented ciphering procedure is based on the AES algorithm and it is suggested for the information communication between the media clouds and mobile users. The suggested algorithm encrypts the intra-unsliced-encoded bit streams of the input HEVC videos to sustain powerconserving limitation and real-time computational processing. The simulation outcomes indicated that the suggested algorithm provided minimal processing time and transmission bitrate to be readily employed for real-time video transmission in cloud services. More chaos and DNA-based image cryptography algorithms are introduced in the literature works. These algorithms can be exploited and adapted for HEVC-based cryptosystems. Chai et al. [6] suggested a new hybrid image ciphering technique based on DNA encoding, row-by-row diffusion process, and wave-based permutation process. This technique achieves great privacy and confidentiality results and can withstand several multimedia attacks like chosen-plaintext attacks and more. In [11], a novel image cryptography algorithm of DNA encoding, pixel permutation, and two-dimensional Henon-sine map has been presented and implemented to achieve an efficient diffusion process on the values of image pixels. The ciphering technique introduced in [41] employed DNA functions and a hamming distance approach in scrambling digital images to increase the cryptosystem capability to survive chosen and known-plaintext attacks. Although these aforementioned image cryptosystems further enhanced assessment security factors, their keyspace is relatively inadequate. Also, these ciphering algorithms presented in [6], [11], [41] are implemented only for gray digital images. Therefore, before employing the cryptography process, there is an additional encumbrance of transforming color digital images and other kinds of information to the consistent form of gray digital images. The authors in [42] introduced a hybrid cryptography algorithm of DNA sequence procedure and cellular automata to encipher multiple digital images. This algorithm has an improvement in enhancing performance computational time, however, it can be employed only for gray digital images and no considerable variations in the obtained values of assessment security factors have been detected while competing with the previous cryptography algorithms. Numerous chaosbased procedures in digital image cryptography [43]- [47] are inadequate to withstand the conventional known-plaintext and chosen-plaintext attacks [31]. It is noticed that the preliminary conditions employed in chaotic maps play a crucial task in determining the chaotic performance. In [48], the authors implemented the DNA encoding and Message-Digest hash algorithm (MD5) to generate primary conditions of the employed chaotic maps. In [49], the authors suggested a joint cryptography algorithm based on cellular neural network and DNA encoding to generate chaotic sequences. These sequences are exploited to break the extreme correlations amongst neighboring pixels of a digital image. 
For robust multimedia cryptosystems, it is very significantly necessary to have any designed cryptography process not only based on the secret keys but as well on the input original video frame or image. In [12], a robust image cryptography algorithm is suggested in which the employed key streams for the ciphering process are produced from the input plain image and a secret key creating the cryptosystem to work in a different way for every input digital image. It is proved that this cryptography algorithm withstands chosen and known-plaintext multimedia attacks, although the obtained entropy values are comparatively low when contrasted to other cryptography algorithms. The suggested work in [8] is developed for the gray image ciphering process based on a hybrid of DNA operations, cellular automata, and hyper-chaotic schemes. This presented cryptosystem seems computationally complex, however, it can avoid the known and chosen-plaintext multimedia attacks. The operations of DNA XOR, subtractions, and additions are employed in the majority of the cryptosystems explaining image and video ciphering utilizing DNA sequences [7]. In several cryptosystems such as in [7], [50], the Hash functions based on secure Hash algorithm-256 are even utilized to control the preliminary conditions employed for producing secret key streams. The principal and major vulnerabilities recognized in the related multimedia cryptosystems are as follows: • The related chaos-based cryptosystems have not indicated the followed criteria for the choice of the employed chaotic map. • Most of the related cryptosystems merely are only based on secret key streams. • No meaningful improvement is discovered in the estimated Shannon information entropy (the most important security property in any cryptography science) even in modern related cryptosystems. • Nearly all related research papers assess their presented work based on an upper limit of five to six test videos or images for investigation and evaluation purposes. • Almost related cryptosystems are not investigating the effect of different kinds of noises on the analysis of the security performance of the designed system. • The running speed and computational complexity of almost related cryptosystems have not been considered and examined. • All security metrics and extensive privacy analyses are not discussed and investigated in detail in almost related cryptosystems. • Almost related cryptosystems have minimal sensitivity concerning the modification in plaintext (avalanche effect property) and secret key (key sensitivity property). • Almost related cryptosystems are not achieved both diffusion and confusion properties. Therefore, most of the related cryptosystems have critical shortcomings, in terms of surplus memory and energy consumption, higher delay and computational cost, and not delivering an adequate degree of confidentiality and privacy, due to their simplicity. Motivated by the preceding debates, to tackle such drawbacks, this paper introduces a novel hybrid HEVC cryptosystem amalgamating Arnold chaotic map, DNA functions, and modified Mandelbrot set. The first step in the proposed cryptosystem is the generation of key streams using the Arnold chaotic map. Then, these generated key streams are encoded with the help of DNA sequences. After that, the calculation of Hamming distance amongst the generated key streams and the Y-U-V compressed video components is performed, then the DNA sequences are used to encode the result of hamming distance step. 
Finally, a mechanism that combines both confusion and diffusion procedures based on DNA encoding is employed: an XOR operation performs the diffusion step, while a newly proposed conditional shift scheme performs the confusion of the pixel values. The Mandelbrot set is exploited in the proposed cryptosystem to generate the input of the conditional shift scheme, and a final diffusion pass is carried out to obtain the encrypted HEVC frames.

III. PRELIMINARIES

This section describes the basic concepts of the DNA encoding procedure and the Mandelbrot set that are exploited in the proposed HEVC cryptosystem.

A. DNA ENCODING PROCEDURE

DNA encoding maps a binary sequence onto the DNA bases thymine (T), adenine (A), cytosine (C), and guanine (G), from which genetic code blocks are constructed. The assignment of T, A, C, and G follows a DNA encoding rule [14], encoding two binary digits at a time. There are 24 possible rules for encoding the pairs 00, 01, 10, and 11, but only eight of them satisfy the Watson-Crick complementary rule [10], as listed in Table 1. In this paper, rule 01 is employed, so a video frame is encoded by replacing the binary values of its pixels with the corresponding DNA sequences. Let DNAEncode(·) denote the encoding function; for example, a pixel value of 120, whose 8-bit form is (01111000), is encoded as "CTGA". Let DNADecode(·) denote the decoding function; for example, the DNA sequence "TGAC" decodes under rule 01 to the binary string (11100001), which equals 225 in decimal. A DNA-based XOR function is also used to combine encoded sequences. Since eight DNA rules satisfy the Watson-Crick complementary rule [10], there are eight corresponding forms of the DNA-based XOR function; the form associated with rule 01 is given in Table 2. For instance, the DNA-based XOR of the two sequences TGAC and CTGA is GCGC.
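Because Tables 1 and 2 are not reproduced in the text, the following Python sketch infers the rule-01 mapping (00→A, 01→C, 10→G, 11→T) from the two worked examples above; that mapping is therefore an assumption that is merely consistent with those examples, not the paper's authoritative table.

import numpy as np

# Inferred rule-01 mapping (consistent with 120 = 01111000 -> "CTGA"
# and "TGAC" -> 11100001 = 225); the paper's Table 1 is not reproduced here.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

def dna_encode(byte_val) -> str:
    """Encode one 8-bit value as four DNA bases (two bits per base)."""
    bits = format(int(byte_val), "08b")
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, 8, 2))

def dna_decode(seq: str) -> int:
    """Decode four DNA bases back to an 8-bit value."""
    return int("".join(BASE_TO_BITS[b] for b in seq), 2)

def dna_xor(seq_a: str, seq_b: str) -> str:
    """DNA XOR: XOR the underlying 2-bit codes, base by base."""
    out = []
    for a, b in zip(seq_a, seq_b):
        bits = int(BASE_TO_BITS[a], 2) ^ int(BASE_TO_BITS[b], 2)
        out.append(BITS_TO_BASE[format(bits, "02b")])
    return "".join(out)

print(dna_encode(120))            # CTGA
print(dna_decode("TGAC"))         # 225
print(dna_xor("TGAC", "CTGA"))    # GCGC

Running the script reproduces the three examples given above (CTGA, 225, and GCGC).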
B. MANDELBROT SET

The Mandelbrot set (M set) is a collection of points in the complex plane. Each point is represented by a complex number c ∈ C of the form c = x + jy, where x, y ∈ R. Figure 1(a) shows an example of the M set structure rendered as a grey-scale frame. Because of its advantages [51], the values generated from this set are used in the shifting process of the proposed cryptosystem: the M set exhibits convoluted structures that emerge from a very simple characterization, and a minor change of the control parameter alters the whole structure. The typical definition of the M set is the iteration given in Eq. (1) [51]:

Z_{n+1} = (Z_n)^2 + C,   (1)

where Z_0 = 0 and C is a constant that is set to 10^14 in the proposed cryptosystem. To remove the block of zero-valued (black) pixels in Figure 1(a), a simple modified M set generation procedure, described in Algorithm 1, is suggested; it yields the modified version of the Mandelbrot set shown in Figure 1(b), whose boundary forms a fractal.

Algorithm 1: Modified M set generation process
  input: the image of the M set structure (Figure 1(a))
  for every pixel location (m, n):
      if p(m, n) = 0 then        // p(m, n) is the pixel value at location (m, n) in Figure 1(a)
          p(m, n) = [(m × n) + C] mod 256
  output: the image of the modified M set structure (Figure 1(b))

Figure 2 illustrates the structure diagram of the suggested cryptographic procedure; the deciphering process is obtained by reversing the encryption steps. The suggested HEVC cryptosystem consists of three main phases: (1) chaotic-map-based keystream generation, (2) DNA sequence encoding, and (3) a diffusion-confusion process. The hybrid cryptosystem is designed to produce a strongly ciphered form of the plain compressed HEVC frame that cannot be exploited by intruders during HEVC streaming in IoMT applications, and the ciphering process can be applied to HEVC frames of any size and content characteristics.

IV. PROPOSED HYBRID HEVC CRYPTOSYSTEM

The suggested HEVC cryptographic procedure is performed in the three phases shown in Figure 2, which are described in detail below.

A. PHASE (1): CHAOTIC SECRET KEY GENERATION

Choosing a proper map is one of the essential steps of the ciphering process. The chaotic nature of the secret sequences generated by the chosen map improves privacy and prevents the ciphered video frames from being divulged by attackers; the choice of the chaotic map therefore controls the ciphering quality and how well the original information pattern of the video frame is hidden. In the suggested procedure, the Arnold chaotic map is selected. Its ciphering quality was tested against other chaotic maps: the Logistic, Henon, Duffing (Holmes), Baker, Gauss-iterated, Lorenz, Tinkerbell, and Tent maps. The map selection algorithm introduced in [13] is used to find the best chaotic map based on the estimated entropy value, the most important security property in cryptography. Entropy is chosen as the benchmark metric because an efficient ciphering technique should push the information entropy toward a value of 8 [30], so that the pattern of the video frame information is obscured more safely; a map that yields a higher information entropy is therefore preferable. After extensive tests, the Arnold chaotic map proved to be the best of the tested maps, achieving the highest average information entropy of 7.96, and the suggested HEVC cryptosystem therefore uses it to generate the secret key streams. This phase of chaotic secret key generation comprises the following two steps, (1) and (2), detailed after the brief sketch below.
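The entropy-based map-selection procedure of [13] is not reproduced in this paper, so the following sketch only illustrates the idea: each candidate map drives a simple XOR keystream cipher on a sample frame, and the map whose ciphertext entropy is closest to 8 is kept. The two candidate generators (logistic and tent maps) and all parameter values are illustrative assumptions, not the paper's actual test set.

import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    """Shannon entropy of an 8-bit image (see Eq. (26) later in the paper)."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def logistic_stream(n, x0=0.61, r=3.99):
    """Keystream bytes from the logistic map x_{k+1} = r*x_k*(1 - x_k)."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return np.floor(out * 256).astype(np.uint8)

def tent_stream(n, x0=0.37, mu=1.9999):
    """Keystream bytes from the tent map."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = mu * min(x, 1.0 - x)
        out[i] = x
    return np.floor(out * 256).astype(np.uint8)

def select_best_map(frame, candidates):
    """Keep the candidate whose XOR-ciphered frame has entropy closest to 8."""
    best, best_gap = None, float("inf")
    for name, gen in candidates.items():
        cipher = np.bitwise_xor(frame, gen(frame.size).reshape(frame.shape))
        gap = abs(8.0 - shannon_entropy(cipher))
        if gap < best_gap:
            best, best_gap = name, gap
    return best

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in frame
print(select_best_map(frame, {"logistic": logistic_stream, "tent": tent_stream}))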
Step (1): Generate the three secret key streams (K1, K2, and K3) from the three chaotic sequences (S1, S2, and S3) created by the employed Arnold chaotic map (the map is iterated t times and K_m is generated through Eq. (2)).

Step (2): Apply the DNA sequence encoding process to the obtained K1, K2, and K3 using the function DNAEncode(·) to obtain the DNA base sequences (E1, E2, and E3), each with the same size as the input HEVC frame, as shown in Eq. (3); every K_i is transformed into its binary format before the DNA encoding (DNAEncode(·)) is applied.

B. PHASE (2): DNA SEQUENCES ENCODING

This phase consists of the following four steps (3 to 6).

Step (3): Separate the three main Y, U, and V components of the input compressed HEVC frame.

Step (4): Estimate the Hamming distance between the generated key streams (K_i) and the three decomposed Y, U, and V matrices, as given in Eqs. (4) to (6), where HM(·) is the Hamming distance function that returns the number of bits that differ at identical locations in its two inputs. The purpose of computing the Hamming distance between the key streams and the Y, U, and V matrices is to avoid a drawback of the Arnold map: its state may become periodic after a number of iterations. The suggested procedure survives this shortcoming because the generated secret key streams are not applied to the video frames directly; instead, the Hamming distance between the key streams and the video frame components is computed and then DNA-encoded, which removes the effect of any periodicity in the key streams produced by the Arnold map.

Step (5): Apply the DNA encoding strategy to H_Y, H_U, and H_V to produce the DNA sequence matrices EH_Y, EH_U, and EH_V, as given in Eqs. (7) to (9).

Step (6): Perform the XOR function between the DNA-encoded key streams of Eq. (3) and the encoded DNA sequence matrices of Eqs. (7) to (9).

C. PHASE (3): DIFFUSION-CONFUSION PROCESS

The actual ciphering begins in this phase of the diffusion and confusion mechanisms. Confusion reorders the pixels without changing their values, whereas diffusion changes the pixel values themselves. These two mechanisms are the key steps of every ciphering process, since they suppress the main information of the plain video frame, and they can be implemented in any way provided they remain reversible, so that the plain frame can be regained by the deciphering process. In the suggested algorithm, a conditional shift mechanism described in Algorithm (2) meets the requirement of the confusion process, and a bit-XOR process takes the responsibility of the diffusion process. A compact sketch of Steps (1) to (6) is given below; the confusion and diffusion operations of this phase are then described in Step (7).
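Eqs. (2) to (9) and the exact Arnold-map formulation are not reproduced in the paper, so the sketch below is only an illustrative stand-in for Steps (1)-(6): it uses a generalized Arnold cat map as the keystream generator, a per-pixel bit-count as the Hamming distance, and the dna_encode/dna_xor helpers from the earlier DNA sketch, and it assumes equal-sized Y, U, and V matrices for simplicity.

import numpy as np

def arnold_keystream(shape, x0, y0, a=1, b=1, t=10):
    """Illustrative keystream from a generalized Arnold cat map iterated t times
    per output byte (the paper's Eq. (2) is not reproduced here)."""
    n = shape[0] * shape[1]
    x, y = x0, y0
    out = np.empty(n)
    for i in range(n):
        for _ in range(t):
            x, y = (x + a * y) % 1.0, (b * x + (a * b + 1.0) * y) % 1.0
        out[i] = x
    return np.floor(out * 256).astype(np.uint8).reshape(shape)

def hamming_matrix(a, b):
    """Per-pixel Hamming distance between two byte matrices (stand-in for HM(.))."""
    x = np.bitwise_xor(a, b)
    return np.array([bin(int(v)).count("1") for v in x.ravel()],
                    dtype=np.uint8).reshape(a.shape)

def phase1_phase2(Y, U, V, seeds):
    encode = np.vectorize(dna_encode)   # helpers from the earlier DNA sketch
    xor = np.vectorize(dna_xor)
    # Steps (1)-(2): key streams K1..K3 and their DNA encodings E1..E3.
    K = [arnold_keystream(Y.shape, x0, y0) for (x0, y0) in seeds]
    E = [encode(k) for k in K]
    # Steps (3)-(4): Hamming distance between key streams and the Y, U, V planes.
    H = [hamming_matrix(k, c) for k, c in zip(K, (Y, U, V))]
    # Step (5): DNA-encode the Hamming matrices -> EH_Y, EH_U, EH_V.
    EH = [encode(h) for h in H]
    # Step (6): DNA XOR between encoded key streams and encoded Hamming matrices.
    return [xor(e, eh) for e, eh in zip(E, EH)]

The three matrices returned by phase1_phase2 correspond to the encoded material that the confusion-diffusion phase of Step (7) consumes.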
Step (7): Apply the confusion-diffusion process based on the following sub-steps to produce the video frame components C_Y, C_U, and C_V, which are concatenated to generate the final ciphered compressed HEVC frame. The main confusion-diffusion sub-steps are:

1. Obtain the encoded DNA sequences of the Y, U, and V components of the input compressed HEVC frame, as given in Eqs. (13) to (15).
2. Receive the encoded DNA sequences of the secret key streams, E1, E2, and E3.
3. Apply the XOR-based DNA process between the encoded DNA sequences of the key streams and the encoded DNA sequences of the Y, U, and V components, as given in Eqs. (16) to (18).
4. Apply the proposed conditional shift mechanism, described in Algorithm (2), to the obtained XE_Y, XE_U, and XE_V to produce the keys S_Y, S_U, and S_V.
5. Apply the DNA decoding process to the outcome delivered by Phase (2) of the DNA sequence encoding, as given in Eqs. (19) to (21).
6. Execute the diffusion process based on the bit-XOR operation to obtain the ciphered video frame components C_Y, C_U, and C_V, which are merged into the final ciphered compressed HEVC frame, as given in Eqs. (22) to (24).

V. SECURITY ANALYSIS AND EXPERIMENTAL RESULTS

A. VISUAL ANALYSIS

Comprehensive evaluations have been carried out for the security analysis of the suggested cryptosystem. Visual inspection is the first evaluation used to assess the ciphering/deciphering performance. Figure 3 shows the encryption-decryption results for the tested compressed HEVC frames: the suggested cryptosystem hides the main details of the tested video frames, while it efficiently decrypts and recovers the frames at the receiver side.

B. HISTOGRAM ANALYSIS

The histogram shows the distribution of the pixel intensity values of a video frame and also conveys some of its statistical properties. A secure video cryptosystem should give the ciphered video frame a histogram with a uniform distribution in order to withstand statistical channel attacks [16]. Figure 4 shows the histograms of the tested original, ciphered, and deciphered video frames. The distribution of the original video frame differs appreciably from that of the ciphered frame: the suggested HEVC cryptosystem produces an almost uniform pixel distribution in the ciphered frame and obscures the actual pattern of the tested frames, and no observable patterns remain in the ciphered frames. Moreover, the histograms of the decrypted frames are similar to those of the original frames, so the cryptosystem recovers the video frames effectively. These histogram results corroborate the soundness of the suggested HEVC cryptosystem.

C. CORRELATION ANALYSIS

In every video frame, a certain degree of correlation exists between each pair of neighboring pixels. A good cryptography technique is expected to remove or conceal such relationships between pixels in order to protect the video content from various channel attacks [20].
To investigate the relationships among pairs of video frame pixels, adjacent pixels of the input video frame are selected along the vertical (V), horizontal (H), and diagonal (D) directions. The correlation of the pixel pairs is determined as in Eq. (25):

r_xy = cov(x, y) / (σ_x · σ_y),   (25)

with cov(x, y) = E((x − E_x)(y − E_y)) and E_x = (1/N) Σ_{i=1}^{N} x_i, where (x, y) denotes the two sequences of neighboring pixels in the vertical, horizontal, or diagonal direction, σ_x and σ_y are their standard deviations, and N signifies the video frame size. The obtained correlation coefficients are listed in Table 3. It is apparent from Table 3 that the correlation values between adjacent pixel pairs of all ciphered video frames are extremely low in all three V, H, and D directions. Therefore, all pattern structures in the ciphered video frames have been hidden, making them unusable to intruders and attackers.

D. ENTROPY ANALYSIS

Entropy is a measure of the randomness of a video frame or image; it reflects the amount of information concealed in the frame by a given technique. The Shannon entropy quantifies the degree of unpredictability of a video frame [23], and for an 8-bit video frame it is determined as in Eq. (26):

H(x) = − Σ_{i=0}^{255} P(x_i) log2 P(x_i),   (26)

where x_i is the i-th grey value of the video frame and P(x_i) is its probability. A good cryptography technique should yield a Shannon entropy close to 8. Table 4 presents the information entropies of the original, ciphered, and deciphered frames of the tested compressed HEVC sequences. The suggested HEVC cryptosystem provides Shannon entropy values that are near the ideal value for video frames with a variety of features, which indicates that information leakage from the ciphering is negligible; consequently, the suggested cryptosystem is robust against entropy attacks.

E. SSIM, FSIM, AND PSNR ANALYSIS

The SSIM (structural similarity) index, FSIM (feature similarity) index, and PSNR (peak signal-to-noise ratio) are used to assess the quality of the ciphering and deciphering processes. In the simulation tests, the SSIM, FSIM, and PSNR values between the original and ciphered video frames should be low for an efficient ciphering process, whereas the values between the original and decrypted video frames should be high for an efficient decrypting process. The SSIM measures the relationship between two video frames: pixels have strong inter-dependencies, particularly when they are spatially close, and these inter-dependencies, captured by the concept of structural information [12], convey valuable information about the structure of the objects in the frame. The SSIM index takes values between −1 and 1 and is estimated as in Eq. (27):

SSIM(x, y) = ((2 µ_x µ_y + C1)(2 σ_xy + C2)) / ((µ_x² + µ_y² + C1)(σ_x² + σ_y² + C2)),   (27)

where µ_x and µ_y are the means of x and y, σ_x² and σ_y² are their variances, and σ_xy is their covariance. C1 = (K1 L)² and C2 = (K2 L)² are two constants used to stabilize the division when the denominator is small, where L is the dynamic range of the pixel values; K1 and K2 are ordinarily chosen as 0.01 and 0.03, respectively.
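As a computational illustration of these metrics, the following sketch estimates the adjacent-pixel correlation of Eq. (25), the Shannon entropy of Eq. (26), and PSNR/SSIM scores (the latter two via scikit-image) on stand-in frames; the random arrays here are placeholders for the actual original and ciphered HEVC frames.

import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def adjacent_correlation(img, direction="H", n_pairs=3000, seed=0):
    """Correlation coefficient of randomly sampled adjacent pixel pairs (Eq. (25))."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, img.shape[0] - 1, n_pairs)
    cols = rng.integers(0, img.shape[1] - 1, n_pairs)
    x = img[rows, cols].astype(float)
    if direction == "H":
        y = img[rows, cols + 1].astype(float)       # horizontal neighbour
    elif direction == "V":
        y = img[rows + 1, cols].astype(float)       # vertical neighbour
    else:
        y = img[rows + 1, cols + 1].astype(float)   # diagonal neighbour
    return float(np.corrcoef(x, y)[0, 1])

def shannon_entropy(img):
    """Shannon entropy of an 8-bit frame (Eq. (26))."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

plain = np.random.randint(0, 256, (128, 128), dtype=np.uint8)    # stand-in frames
cipher = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
for d in ("H", "V", "D"):
    print(d, adjacent_correlation(cipher, d))                    # near 0 for a good cipher
print("entropy:", shannon_entropy(cipher))                       # near 8 for a good cipher
print("PSNR:", peak_signal_noise_ratio(plain, cipher, data_range=255))
print("SSIM:", structural_similarity(plain, cipher, data_range=255))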
Table 5 lists the SSIM results between the original and ciphered frames of the tested videos, which should be low for a good ciphering process, while Table 6 lists the SSIM results between the original and deciphered frames, which should be high for a good deciphering process. Tables 5 and 6 show that the HEVC cryptosystem provides SSIM results close to the target optimum values.

The FSIM index examines the ciphering-deciphering proficiency of the suggested HEVC cryptosystem by estimating the local similarity between two video frames; it was evaluated both between the original and ciphered frames and between the original and deciphered frames. The FSIM index takes values between −1 and 1 and is estimated as in Eq. (28):

FSIM = Σ_{x∈Ω} S_L(x) · PC_m(x) / Σ_{x∈Ω} PC_m(x),   (28)

where Ω is the spatial domain of the video frame, S_L(x) denotes the overall similarity between the two video frames at location x, and PC_m(x) is the phase congruency value. As with the SSIM, the FSIM values between the original and ciphered frames (Table 5) should be low and those between the original and deciphered frames (Table 6) should be high; Tables 5 and 6 show that the cryptosystem provides FSIM results near the target optimum values.

The PSNR is another important metric for analyzing the ciphering-deciphering performance. It is the ratio between the maximum possible signal power and the power of the corrupting noise, so it should be high for an efficient deciphering process (original versus deciphered frames) and low for an efficient ciphering process (original versus ciphered frames) [17]. For a grey-scale video frame, the PSNR is calculated as in Eq. (29); because many signals have a very wide dynamic range, it is usually expressed on the logarithmic decibel (dB) scale:

PSNR = 10 log10(255² / MSE),   (29)

where the MSE is the mean square error defined in Eq. (30):

MSE = (1 / (m · n)) Σ_{i=1}^{m} Σ_{j=1}^{n} [V1(i, j) − V2(i, j)]²,   (30)

in which V1(i, j) is the original video frame and V2(i, j) is the corresponding ciphered or deciphered video frame. The PSNR results between the original and ciphered frames (Table 5) and between the original and deciphered frames (Table 6) are likewise close to the target optimum values.
F. DIFFERENTIAL ATTACK ANALYSIS

An adversary may introduce a small change in the original video frame used for ciphering and examine how the ciphering outcome changes (that is, compare the cipher frame of the plain frame with the cipher frame of the slightly modified plain frame); in this way the adversary traces the relationship between the plain video frame and the two ciphered frames [44]. Such differential cryptanalysis can ease the deciphering of a video frame, so the HEVC cryptosystem must be anti-differential: it must be difficult for attackers to recognize how the original video frame is related to the ciphered frame. The NPCR (number of changing pixel rate) and UACI (unified averaged changed intensity) are the two main indicators used for this purpose and are defined in Eqs. (31) and (32), where C1(i, j) and C2(i, j) are the two encrypted frames corresponding to the original frame before and after a small adjustment, respectively, and m and n refer to the frame width and height. The accepted reference values of the UACI and NPCR metrics are approximately 0.33 and 0.996, respectively [9]; obtaining values close to these indicates that the ciphering process is highly sensitive to the input video frame, and hence that the suggested cryptosystem withstands differential channel attacks to a significant degree. Table 7 reports the UACI and NPCR outcomes for the tested video frames; all obtained values are exceedingly close to the theoretical ideal values.

G. CIPHERING QUALITY ANALYSIS

1) HISTOGRAM DEVIATION (D_H)

The histogram deviation measures the maximum deviation between the histograms of the original and ciphered video frames [53] and is used to appraise the ciphering quality of the suggested HEVC cryptosystem. It is estimated as in Eq. (33), where d_i is the amplitude of the absolute histogram difference at grey level i and m and n refer to the frame width and height. As shown in Table 8, the obtained D_H values are low; consequently, the original and ciphered video frames are uncorrelated, which confirms the high quality of the suggested cryptosystem.

2) IRREGULAR DEVIATION (D_I)

The irregular deviation measures the amount of irregular deviation caused in the ciphered video frame by applying the ciphering procedure to the plain frame [53] and is calculated as in Eq. (34), where H is the histogram of the difference frame, m and n refer to the frame width and height, and M_H is the mean value of that histogram. The D_I results in Table 8 are also low, so the original and ciphered video frames are uncorrelated, again confirming the high quality of the suggested HEVC cryptosystem.
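Eqs. (31) and (32) are not reproduced in the text, but NPCR and UACI have standard definitions consistent with the reference values quoted above (about 0.996 and 0.33); the sketch below computes them for two cipher frames, which here are random stand-ins for the frames obtained before and after a one-pixel change of the plain frame.

import numpy as np

def npcr(c1, c2):
    """Number of Changing Pixel Rate (Eq. (31)): fraction of differing pixels."""
    return float(np.mean(c1 != c2))

def uaci(c1, c2):
    """Unified Averaged Changed Intensity (Eq. (32)): mean absolute change / 255."""
    diff = np.abs(c1.astype(np.int16) - c2.astype(np.int16))
    return float(np.mean(diff) / 255.0)

# c1, c2: cipher frames produced from a plain frame before/after a one-pixel change.
c1 = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
c2 = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
print(npcr(c1, c2), uaci(c1, c2))   # ideal values: about 0.996 and 0.334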
H. KEY SPACE AND SENSITIVITY ANALYSIS

1) KEY SPACE ANALYSIS

To withstand brute-force attacks, the employed cryptography technique must have a secret key with a large space [13]; the key space should therefore be large enough to build a robust and secure cryptosystem, and to prevent brute-force attacks on the transmitted video frames it should be at least 2^100. In the suggested HEVC cryptosystem, the secret keys are obtained from the initial values of the employed Arnold map and from the Hamming distance matrices. For the Arnold map, the initial values X0 and Y0, taken from [0, 1], are used to produce the secret sequences of the Y, U, and V channels, together with an iteration count t. For the Hamming distance matrices of the Y, U, and V channels (H_Y, H_U, H_V), each matrix is assumed to have a size of 256 × 256 for an input video frame of size 256 × 256. The number of feasible values for the initial values X0 and Y0 is about (2 × 10^15)^3, and the number of feasible values of the iteration counter t is taken as 10^2. Each generated matrix has 65,536 elements, and each element can take 256 possible values (0-255), so the total number of possible values for the three matrices H_Y, H_U, and H_V is about 256^(65,536×3). The final key space is therefore about 256^(65,536×3) × 10^2 × (2 × 10^15)^3, which is decidedly larger than 2^100; this confirms that the suggested HEVC cryptosystem is highly robust against brute-force channel attacks.

2) KEY SENSITIVITY ANALYSIS

The ciphering algorithm should be sensitive to the initial and control values of the employed chaotic map [17]: the cryptosystem must produce a distinct output for any minor variation of the secret keys, so that a small alteration of the input control values creates a considerable modification at the output, the original video frame remains unrecoverable, and the ciphered video frames cannot be deciphered correctly. Figure 15 illustrates the key sensitivity analysis for the tested video frames: the ciphered frames, deciphered frames, and their histograms are shown for correct and incorrect secret key values. After ciphering the test video frames with the correct keys (keyset1, with X0 = 0.105795019 and Y0 = 0.2685999), one of the initial control values of the Arnold map was marginally modified (X0 changed to 0.105795020) to form keyset2, and the video frames were then deciphered with the altered keyset2. The results show the extreme key sensitivity of the suggested cryptosystem: the frames deciphered with the modified keys (keyset2) are entirely different from the actual video frames, even though only a tiny alteration was applied to the secret keys. This proves that the suggested cryptosystem is highly sensitive to the secret keys, which protects it from numerous channel attacks.
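As a quick arithmetic check of the key-space estimate in Section H.1 above, the product 256^(65,536×3) × 10^2 × (2 × 10^15)^3 can be evaluated in log2 form to avoid handling astronomically large integers:

import math

log2_keyspace = (65_536 * 3) * math.log2(256) \
                + 2 * math.log2(10) \
                + 3 * math.log2(2e15)
print(f"key space ~ 2^{log2_keyspace:.0f}")   # about 2^1,573,023, far above 2^100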
I. EDGES DETECTION ANALYSIS

The suggested HEVC cryptosystem must protect the edge information of the transmitted video frames from channel attacks. The visual distortion of the ciphered video frames produced by the suggested cryptosystem can therefore be quantified by the distortion introduced at the frame edges. The EDR (edge differential ratio) metric is used to estimate this edge distortion and is formulated as in Eq. (35) [24], where P(i, j) is the pixel value of the detected edges in the binary edge map of the original video frame and P'(i, j) is the corresponding pixel value of the detected edges in the binary edge map of the ciphered video frame. Table 9 shows that the EDR values between the ciphered and plain video frames are close to 1, which guarantees that the ciphered and plain frames are extremely dissimilar. Figure 16 shows the Laplacian-of-Gaussian binary edge detection results for the original, ciphered, and deciphered video frames: there is a large difference in edges between the original and ciphered frames, which demonstrates how well the suggested cryptosystem hides the main details of the tested video frames, while the frames are efficiently restored after deciphering.

J. CHANNEL NOISES ATTACK ANALYSIS

This section investigates how the suggested ciphering-deciphering procedures behave in the presence of channel noise. The communication medium always contains several kinds of noise, and during transmission the ciphered video frame will inevitably be affected by it. The deciphering procedure must therefore survive channel noise, so that the deciphered video frames remain comprehensible, or at least in a human-understandable form, even if they are contaminated during video streaming; in other words, the suggested cryptosystem must still produce an identifiable video frame from a ciphered frame that contains channel noise. Four types of channel noise (Gaussian, Poisson, salt-and-pepper, and speckle) are considered in the analysis.

1) GAUSSIAN NOISE ANALYSIS

In digital images and videos, Gaussian noise mainly arises during acquisition: the imaging sensor has an inherent noise that depends on the illumination level and its temperature, and additional electronic circuit noise is injected by the circuits associated with the sensor [14]. A conventional model of image or video frame noise is additive Gaussian noise that is independent at each pixel and independent of the signal intensity. Figure 17 presents the ciphered and deciphered frames of the Gaussian noise analysis for all tested video frames affected by zero-mean Gaussian noise with variance values of 0.001, 0.003, and 0.005. The deciphered video frames remain identifiable even when the corresponding ciphered frames are affected by these different levels of Gaussian noise, so the suggested cryptosystem resists the Gaussian noise attack.
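The following sketch shows how such a noise test can be run: the ciphertext is corrupted with zero-mean Gaussian noise of a given variance, deciphered, and the PSNR of the recovered frame is reported. The encrypt/decrypt arguments are placeholders for the cryptosystem's own routines; the XOR toy cipher at the bottom exists only to make the example runnable.

import numpy as np

def add_gaussian_noise(frame, var):
    """Zero-mean Gaussian channel noise with the given variance (frame scaled to [0, 1])."""
    noisy = frame.astype(float) / 255.0 + np.random.normal(0.0, np.sqrt(var), frame.shape)
    return np.clip(noisy * 255.0, 0, 255).astype(np.uint8)

def noise_robustness_test(plain, encrypt, decrypt, variances=(0.001, 0.003, 0.005)):
    """Cipher the frame, corrupt the ciphertext, decipher, and report the PSNR
    of the recovered frame for each noise variance."""
    results = {}
    cipher = encrypt(plain)
    for var in variances:
        recovered = decrypt(add_gaussian_noise(cipher, var))
        mse = np.mean((plain.astype(float) - recovered.astype(float)) ** 2)
        results[var] = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    return results

# Toy stand-in cipher (XOR with a fixed keystream) just to make the test runnable.
key = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
plain = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
print(noise_robustness_test(plain, lambda f: f ^ key, lambda c: c ^ key))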
2) SHOT/POISSON NOISE ANALYSIS

Poisson noise typically results from the statistical quantum fluctuations of the imaging sensor, and its effect appears in the darker sections of a video frame or image. It can be viewed as a variation in the number of photons sensed at a given exposure level [42], which is why it is also called shot (photon) noise. Its root-mean-square value is proportional to the square root of the frame intensity, the noise at distinct pixels is independent, and its distribution follows a Poisson distribution. Figure 18 presents the ciphered and deciphered frames of the Poisson noise analysis for all tested video frames: the deciphered frames remain identifiable even when the corresponding ciphered frames are affected by Poisson noise, which demonstrates that the suggested cryptosystem withstands the Poisson noise attack.

3) SALT AND PEPPER NOISE ANALYSIS

Salt-and-pepper noise is also known as fat-tail distributed noise, impulsive noise, or spike noise [47]. Its effect on a digital image or video frame is to produce bright pixels in dark regions and dark pixels in bright regions; it is commonly mitigated by techniques such as a hybrid of median and mean filtering. Figure 19 shows the ciphered and deciphered frames of the salt-and-pepper noise analysis for all tested video frames at variance values of 0.001, 0.003, and 0.005. The deciphered frames remain discernible even when the associated ciphered frames are affected by these levels of salt-and-pepper noise, so the suggested cryptosystem also combats the salt-and-pepper noise attack.

4) SPECKLE NOISE ANALYSIS

Speckle noise arises from patterns of constructive and destructive interference that appear as bright and dark dots in the video frame [22]. Figure 20 shows the ciphered and deciphered frames of the speckle noise analysis for all tested video frames at variance values of 0.001, 0.003, and 0.005. The deciphered frames remain discernible even when the corresponding ciphered frames are affected by these levels of speckle noise, so the suggested cryptosystem also lessens the influence of the speckle noise attack.

K. OCCLUSION ATTACK ANALYSIS

During the transmission and streaming of video sequences over the Internet and communication networks, parts of video frames may be dropped as a result of malicious destruction or network congestion [6]. The occlusion attack analysis therefore assesses the capability of recovering the original video frames from ciphered frames in which some portion has been occluded or lost. The results of the occlusion attack analysis for all tested video frames are displayed in Figure 21: the video frames can still be deciphered in a comprehensible manner even when pieces of the ciphered frames are lost in different regions during streaming, which proves that the suggested HEVC cryptosystem resists occlusion attacks.
L. COMPUTATIONAL PROCESSING ANALYSIS

A good cryptography technique is expected to have a fast execution speed and a low processing cost. Video frames of various sizes were used to evaluate the ciphering/deciphering running time of the suggested HEVC cryptosystem. The implementation tests were carried out on a personal laptop with 8 GB RAM, a 1 TB hard drive, and an Intel(R) Core(TM) i7-4500 CPU @ 1.80 GHz/2.40 GHz, running Microsoft Windows 10, with MATLAB R2019a as the computational platform. Table 10 presents the average ciphering/deciphering times taken by the suggested cryptosystem for the tested video frames. These running speeds are acceptable considering the high level of privacy and security provided for video streaming in IoT multimedia applications.

M. DISCUSSION OF CLASSICAL CATEGORIES OF ATTACKS ANALYSIS

Every ciphering-deciphering system is eventually released publicly, so attackers can investigate the steps of the designed cryptosystem; only the secret keys shared between the transmitter and the receiver for the ciphering and deciphering processes remain hidden. Four conventional categories of multimedia attacks are recognized: known-plaintext, chosen-ciphertext, chosen-plaintext, and ciphertext-only. The chosen-plaintext attack is the most severe, since the attacker has somehow gained temporary access to the ciphering and deciphering machinery and can therefore produce the ciphertext corresponding to a chosen plaintext. If the HEVC cryptosystem can defeat chosen-plaintext attacks, it will certainly also defeat the remaining three categories. It was shown in Sections H.1 and H.2 that the suggested cryptosystem is highly sensitive to the control parameters and initial values of the employed Arnold map, and it is likewise sensitive to the initial constant values used with the Mandelbrot set. A particularly important step of the ciphering process is the Hamming distance calculation, which determines the Hamming distance between the plain video frame components and the corresponding secret key sequences; this step plays the main role in the pre-processing of the cryptosystem and means that the suggested cryptosystem depends not only on the secret keys but also on the original video frame. In addition, the iteration count t can be set to a different random value for each input plain frame, so the Arnold cat map generates entirely different outputs whenever the iteration count changes. Thus, even if an adversary manages to acquire some plain-cipher video frame pairs, the suggested HEVC cryptosystem can survive the chosen-plaintext attack, and consequently it also resists the other three types of attacks. The Hamming distance calculation step is therefore one of the main contributions of this work toward building a robust and secure video streaming system.
The suggested cryptosystem can be applied to any multimedia content, such as digital images, in addition to the streamed video frames already tested. To further confirm its efficiency, its performance was analyzed on a standard digital color image (Lena) and compared with a wide range of recent related image cryptography techniques [6]-[23]. Table 12 presents this comparative study in terms of entropy, PSNR, correlation, NPCR, and UACI; all assessment parameters are superior to those of the preceding related algorithms. These results show that the suggested cryptosystem is highly sensitive to the plain images and video frames and can survive known/chosen-plaintext attacks, confirming that it is more robust and secure than the other related cryptosystems.

VI. CONCLUSIONS AND FUTURE WORK

In this work, a hybrid secure and robust HEVC cryptosystem based on DNA sequences, chaotic maps, and Mandelbrot sets has been suggested. The Arnold map was selected as the most suitable map for the cryptosystem because of its security performance, using a straightforward strategy devised for chaotic map selection. In the suggested cryptosystem, the ciphering process is applied independently to each of the three video frame channels to further improve privacy and security. The cryptosystem proved its superior performance for securing video streaming: the ciphered video frames it produces cannot be deciphered by attackers, since the ciphering is performed with secret sequences and keys generated randomly from chaotic maps, and its large key space eliminates the threat of brute-force attacks. An extensive security analysis of the suggested cryptosystem was carried out, including visual, histogram, quality, correlation, noise-effect, differential, attack, and entropy analyses, and the examined assessment indicators give better values than the preceding related works. It has also been shown that the suggested cryptosystem offers a more robust and secure way to communicate different types of multimedia content, such as images and videos. In future work, parallel diffusion and permutation concepts can be incorporated to further improve the computation speed of the suggested cryptosystem. We also intend to design a multilevel security system for reliable HEVC communication by merging watermarking and steganography algorithms to achieve further security of video streaming in IoT multimedia applications, and we aim to develop a smart and secure video streaming security system based on recent deep learning techniques.
v3-fos-license
2016-05-12T22:15:10.714Z
2012-09-07T00:00:00.000
2709240
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1186/2251-6581-11-13", "pdf_hash": "1d9e008bcbdbd10eb2db3ae321ae4f405f78f0e8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2783", "s2fieldsofstudy": [ "Medicine" ], "sha1": "59c7c1052a9abefd55a14608f43160a6deb41f6d", "year": 2012 }
pes2o/s2orc
Gender differences in association between metabolic syndrome and carotid intima media thickness Background Metabolic syndrome (Mets) is a cluster of cardiovascular risk factors which can predicts cardiovascular disease (CVD). Carotid intima-media thickness (CIMT) is known as a surrogate measure of subclinical atherosclerosis and predictor of CVD. Although, it has shown the association between Mets and CIMT, this relation regarding sex differences is limited. We aimed to find out whether gender differences in this association. Methods In this cross-sectional study, we recorded height, weight, waist circumference (WC), blood pressure, and lipid profiles. We used Mets; defined based on NCEP ATP III definition, and traditional cardiovascular risk factors; age, body mass index (BMI), WC, hyperlipidemia, and hypertension, in multivariate regression models which including;. The CIMT measurement < 0.73 or ≥0.73 mm was considered as low- or high risk to CVD. Results Overall, 150 subjects were enrolled to study that their ages were 36-75 years. The 47.3% of them (71 subjects) had Mets. CIMT was increased in Mets group compared non-Mets group (P = 0.001). In logistic regression analysis, a significant association was found between Mets and CIMT in women, but not in men (p = 0.002, and p = 0.364, respectively). After adjustment to age, WC, BMI, hypertension and hyperlipidemia, this association was significant just in women (p = 0.011) independent of WC, BMI, hyperlipidemia and hypertension. Conclusion Our data showed that MetS is a stronger risk factor for subclinical atherosclerosis in women than in men. So, we suggest the assessment of CIMT along with definition Mets in middle-aged women could be lead to earlier detection of at risk individuals to CVD. Introduction Metabolic syndrome (MetS) is defined as a cluster of cardiovascular risk factors including central obesity, hypertension, dyslipidemia, and glucose intolerance [1]. It has been shown that MetS is a predictor of type 2 diabetes mellitus (T2DM) [2] and cardiovascular disease (CVD) [3]. MetS is associated with an approximately two fold increase in CVD [4]. The Third National Health and Nutrition Examination Survey (TNHNES) reported that the prevalence of MetS was 24% in adults older than 20 years and 42% in individuals aged 70 years or older [5]. Its age-adjusted prevalence among adults aged 25-64 years who participated in the MONICA WHO Study in Iran was estimated at 27.5% [6]. The prevalence was significantly higher in women than in men (35.9% vs. 20.3%). Carotid artery intima-media thickness (CIMT) is a non-invasive surrogate marker of atherosclerosis [7]. Its progression is influenced by conventional CVD risk factors [8]. Measurement of CIMT can directly predict the risk of future cardiovascular events [8]. Increasing age, male gender, hypertension, obesity, and T2DM or glucose intolerance are associated with accelerated atherosclerosis in the carotid arteries [8]. Several studies have reported an association between MetS and increased CIMT [9][10][11][12][13]. It has been suggested that MetS is a stronger risk factor for atherosclerosis in women than in men [14,15], although there are other reports which do not confirm this finding [12,16]. To our knowledge possible sex differences regarding this issue have not been studied in Iran and we aimed to investigate this subject in a cross-sectional survey. 
Study population We used data from a subgroup of participants enrolled in the Rapid Atherosclerosis Prevention In Diabetes (RAPID) study. The RAPID study is an ongoing prospective single-center cohort study in subjects aged ≥30 years without any clinical evidence of coronary artery disease (Minnesota codes 1.2.1, 1.2.4, 1.2.5, and 1.2.7) at the time of the investigation. The study was started in September 2010 at Dr. Shariati Hospital/Tehran University of Medical Sciences (TUMS) for early detection of atherosclerosis in T2DM patients. Written informed consent was obtained from all participants. The study was approved by the ethics committee of TUMS. In the present study we compared carotid stiffness between subjects with and without MetS. Participants with clinical evidence of coronary artery disease (angina or ST elevation on ECG) or with CVD endpoints such as myocardial infarction (MI) or stroke were excluded. We recorded the characteristics of 150 participants, of whom 71 (47.3%) had MetS. The mean age of the participants was 49.8 ± 7.5 years (range: 39 years). Definitions MetS was diagnosed according to the NCEP ATP III guidelines [1]. According to the NCEP ATP III definition, MetS was confirmed if at least three of the following criteria were present: waist circumference (WC) ≥102 cm (in men) or ≥88 cm (in women); triglycerides ≥150 mg/dl or a history of previous treatment for dyslipidemia; HDL cholesterol ≤40 mg/dl (in men) or ≤50 mg/dl (in women); blood pressure ≥130/85 mmHg or treatment for hypertension; and fasting blood sugar (FBS) ≥110 mg/dl or a history of previous treatment for diabetes [1]. CVD is a group of heart and blood vessel disorders that includes coronary heart disease, cerebrovascular disease, peripheral arterial disease, and congenital heart disease, among others [17]. Heart attacks and strokes are usually acute events caused by a blockage that prevents blood from flowing to the heart or brain; the most common cause is a build-up of fatty deposits on the inner walls of the blood vessels [17]. The term MI reflects death of cardiac myocytes caused by ischemia, in other words the result of a perfusion imbalance between supply and demand. MI should be diagnosed by symptoms, ECG abnormalities, enzymes and specific serological biomarkers, and imaging [18]. Angina (or angina pectoris) is a symptom of coronary artery disease caused by reduced blood flow to the heart muscle, leading to chest pain [19]. Conventional cardiovascular risk factors Hypertension, WC, and hyperlipidemia were defined according to the NCEP ATP III guidelines [1]. In each subject (in the standing position), WC was measured with a tape in centimeters as the widest value between the lower rib margin and the iliac crest. Blood pressure was measured twice (at a 5-minute interval) using a standard calibrated mercury sphygmomanometer on both the right and left arms after the participants had been sitting for at least 10 minutes; the higher value of the two sides was taken as the blood pressure. An abnormal lipid profile (total cholesterol >200 mg/dl, low-density lipoprotein >100 mg/dl, and triglycerides >150 mg/dl) was considered hyperlipidemia according to the NCEP ATP III guidelines [1]. Diabetes was defined as fasting blood sugar (FBS) ≥110 mg/dl or a history of previous treatment for diabetes. Height was measured in the standing position and weight was measured twice, with minimal clothing and without shoes, for calculating BMI as weight (kg)/height² (m²).
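To make the diagnostic rule above concrete, the following is a minimal sketch (not the study's actual code) of the NCEP ATP III component count and the BMI formula described in this section; the function and field names, and the example record at the end, are hypothetical.

```python
# A minimal sketch of the NCEP ATP III rule as described in the Definitions section.
# All names and the example record are hypothetical illustrations.

def count_mets_components(sex, waist_cm, tg_mg_dl, hdl_mg_dl,
                          sbp, dbp, fbs_mg_dl,
                          treated_dyslipidemia=False,
                          treated_hypertension=False,
                          treated_diabetes=False):
    """Count NCEP ATP III components as defined in the study."""
    components = 0
    # Central obesity: WC >= 102 cm (men) or >= 88 cm (women)
    if waist_cm >= (102 if sex == "male" else 88):
        components += 1
    # Triglycerides >= 150 mg/dl or treatment for dyslipidemia
    if tg_mg_dl >= 150 or treated_dyslipidemia:
        components += 1
    # Low HDL: <= 40 mg/dl (men) or <= 50 mg/dl (women)
    if hdl_mg_dl <= (40 if sex == "male" else 50):
        components += 1
    # Blood pressure >= 130/85 mmHg or antihypertensive treatment
    if sbp >= 130 or dbp >= 85 or treated_hypertension:
        components += 1
    # Fasting blood sugar >= 110 mg/dl or treatment for diabetes
    if fbs_mg_dl >= 110 or treated_diabetes:
        components += 1
    return components


def has_mets(**kwargs):
    """MetS is present when at least three components are met."""
    return count_mets_components(**kwargs) >= 3


def bmi(weight_kg, height_m):
    """BMI = weight (kg) / height^2 (m^2)."""
    return weight_kg / height_m ** 2


if __name__ == "__main__":
    # Hypothetical participant, for illustration only.
    example = dict(sex="female", waist_cm=95, tg_mg_dl=180, hdl_mg_dl=45,
                   sbp=135, dbp=80, fbs_mg_dl=100)
    print(count_mets_components(**example), has_mets(**example))
    print(round(bmi(70, 1.62), 1))
```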
Assessment of CIMT We used a high-resolution B-mode carotid ultrasound scanner equipped with a linear 12 MHz transducer (My Lab 70 X Vision, Biosound Esaote, USA) to examine the right and left carotid arteries. The examinations were performed by an expert at 12 locations: the right and left anterior and posterior internal carotid arteries, the right and left anterior and posterior carotid artery bifurcations, and the right and left anterior and posterior common carotid arteries [20]. A segment of the artery that was most clearly visible to the examiner was magnified to identify a distinct lumen-intima and media-adventitia interface. CIMT was defined as the distance between the leading edge of the lumen-intima interface and the leading edge of the media-adventitia interface. Maximum thickness was measured semi-automatically off-line with artery measurement system software (Vascular Tools 5, Medical Imaging Applications LLC, USA). A cut-off point of ≥0.73 mm was considered high risk for the development of atherosclerotic vascular disease [21]. Laboratory tests Venous blood samples were drawn from the antecubital vein after an overnight fast to measure fasting blood sugar (FBS), total cholesterol, triglycerides, and high-density lipoprotein (HDL). These biochemical tests were determined enzymatically with Pars-Azmon kits (Iran). Low-density lipoprotein (LDL) cholesterol was calculated with the Friedewald formula when the triglyceride level was <400 mg/dl [22]; it was measured directly in participants whose triglycerides were ≥400 mg/dl. Statistical analyses The normality of the data distribution was evaluated with the Kolmogorov-Smirnov test. All values are expressed as mean ± standard deviation (SD). The paired t-test was applied for variables with normal distribution, and the Wilcoxon and Mann-Whitney nonparametric tests for the other variables. Univariate and multivariate logistic regression models were fitted to examine the association of MetS and conventional CVD risk factors with CIMT, separately in men and women. Statistical analyses were performed using SPSS, version 15.0, and a P value ≤0.05 was considered statistically significant. Results The majority of the participants (87; 58%) were female. The prevalence of hyperlipidemia (hypertriglyceridemia and/or hypercholesterolemia) and hypertension was 126 (84%) and 66 (44%), respectively. The distribution of the number of MetS components among the participants was as follows: 10% had no component, 22.7% one, 20% two, 20.7% three, 21.3% four, and 5.3% five components. Table 1 shows the baseline characteristics of participants with and without MetS. In the regression analysis we found an interaction with sex in the association between MetS and CIMT (p = 0.053 among men and p < 0.001 among women). The results of the univariate and multivariate regression models, separately for men and women, are shown in Table 2 and Table 3, respectively. Discussion We found that CIMT in asymptomatic middle-aged adults with MetS was higher than in those without MetS (p = 0.001). Furthermore, regression analyses, both unadjusted and adjusted for conventional risk factors, indicated that this association was significant only in women (p = 0.002 and p = 0.011, respectively).
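As an illustration of the analytic steps described above, the sketch below (not the authors' code) shows the Friedewald LDL calculation, the dichotomization of CIMT at 0.73 mm, and sex-stratified logistic regression models analogous to those reported in Tables 2 and 3. The pandas/statsmodels workflow and all column names are assumptions, not a description of the study's actual scripts.

```python
# Illustrative only; assumes a pandas DataFrame `df` with hypothetical column
# names (tc, hdl, tg, cimt_mm, mets, age, wc, bmi, htn, hyperlipidemia, sex).
import pandas as pd
import statsmodels.formula.api as smf


def friedewald_ldl(tc, hdl, tg):
    """LDL = TC - HDL - TG/5 (mg/dl); valid only when TG < 400 mg/dl."""
    if tg >= 400:
        raise ValueError("Friedewald formula not applicable; measure LDL directly.")
    return tc - hdl - tg / 5.0


def fit_models(df: pd.DataFrame):
    # High-risk CIMT as defined in the study (>= 0.73 mm).
    df = df.assign(cimt_high=(df["cimt_mm"] >= 0.73).astype(int))
    results = {}
    for sex, sub in df.groupby("sex"):  # separate models for men and women
        unadjusted = smf.logit("cimt_high ~ mets", data=sub).fit(disp=0)
        adjusted = smf.logit(
            "cimt_high ~ mets + age + wc + bmi + htn + hyperlipidemia",
            data=sub,
        ).fit(disp=0)
        results[sex] = (unadjusted, adjusted)
    return results
```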
These findings highlight the increased burden of subclinical atherosclerosis in middle-aged women with MetS and suggest an increased risk of future CVD in this group. It has been shown that patients with MetS are at increased risk of developing vascular abnormalities ranging from endothelial dysfunction, followed by arterial stiffness, to overt atherosclerosis [23]. A meta-analysis showed that subjects with MetS have a 61% higher risk of CVD than those without MetS [24]. Several cross-sectional studies have shown a significant association between CIMT and MetS [11][12][13][25]. Hulthe et al. [10] reported an association between MetS and accelerated atherosclerosis in asymptomatic middle-aged adults. The Botnia Study demonstrated that middle-aged adults with MetS have an approximately three-fold increased risk of incident CVD [26]. The Bogalusa Heart Study [27] reported that MetS, defined by either the NCEP ATP III or the WHO criteria, was associated with increased CIMT. We chose the NCEP ATP III criteria for the definition of MetS because of their easy applicability in clinical practice and epidemiological studies. In addition, this method does not require insulin measurement, urinary albumin assessment, or an oral glucose tolerance test (OGTT), as needed for the WHO definition of MetS [28]. Consequently, the NCEP ATP III criteria may be less sensitive than the WHO criteria in predicting T2DM [29]. In contrast, it has been found that subjects with MetS according to the NCEP ATP III criteria were less insulin resistant and at higher risk of future CVD than subjects with MetS by the WHO definition [30]. Because of the differing definitions of MetS across studies, comparison with previous reports is difficult. The Atherosclerosis and Insulin Resistance Study showed that middle-aged white men with MetS according to the WHO criteria had increased CIMT [10], whereas the Bruneck Study demonstrated that CIMT was significantly higher in middle-aged and elderly adults with MetS according to the NCEP ATP III and modified WHO definitions [3]. With other diagnostic criteria, including those of the International Diabetes Federation (IDF) and the American Heart Association/National Heart, Lung, and Blood Institute (AHA/NHLBI), the sex differences in the association between CIMT and MetS were not homogeneous [13]. However, irrespective of definition, CIMT was significantly higher in both men and women with MetS than in those without MetS [13]. The gender difference in the association between MetS and CIMT in our study is in line with the findings of Iglseder et al. [14], who also found this significant association only in women. This finding may be due to higher levels of C-reactive protein (CRP) and inflammatory markers in pre-menopausal women [31]. Higher levels of fasting leptin and lower insulin sensitivity in pre-menopausal women may also play a role in this context [32]. On the other hand, the study by Skilton et al. [13] showed no sex differences in the association between MetS and CIMT by the NCEP ATP III criteria, although this relation was apparent when other criteria were used to define MetS [13]. This finding suggests that female protection against atherosclerosis is lost in the presence of MetS. This suggestion is supported by the identification of sex-specific and sex-independent quantitative trait loci for MetS components in animal studies [33], and by the observed influence of sex on several components of MetS among male and female twins [34].
When we assessed the effect of MetS on CIMT after adjusting for traditional cardiovascular risk factors in the multivariate regression model, MetS was a significant independent predictor of CVD only in women (p = 0.011). We found some differences between the sexes in the traditional risk factors affecting CIMT. As expected, age was the strongest determinant of CIMT in both sexes, in line with previous studies [11,35]. Hyperlipidemia was another determinant, which was significant only in men. The Atherosclerosis Risk in Communities (ARIC) study [7] demonstrated an association between CIMT and the incidence of MI even after adjusting for age, race, diabetes, cholesterol, hypertension, and smoking in a large population of middle-aged adults. In addition, the Cardiovascular Health Study showed a significant association between CIMT and risk of CVD after adjusting for traditional risk factors [36]. In the Muscatine Study, multivariate analysis showed that CIMT was associated with systolic blood pressure, increasing age, and LDL in women, and with smoking in men [37]. In the Health 2000 Survey [9], BMI, WC, LDL, total cholesterol, and diastolic blood pressure had statistically significant univariate correlations with CIMT in women but not in men. However, our results suggest that MetS screening in hyperlipidemic men provides more benefit than in women for identifying subjects at risk of CVD. This finding may be related to gender-specific differences in the association between LDL cholesterol and atherosclerosis [38]. However, we cannot address any cause-and-effect relationship regarding the effect of hyperlipidemia on CIMT, owing to the cross-sectional nature of our study. Furthermore, most subjects were under treatment with several types of medications for hyperlipidemia; these medications have direct effects on vascular function and can halt the progression of, or even reduce, CIMT [39]. Our study had several limitations. Because it was carried out in a small population, the observed effects may not be applicable to the general population. Some studies have observed that CIMT increases with an increasing number of MetS components [40][41][42], which supports the hypothesis that clustering of MetS components has an additive effect on the progression of CIMT [41]. However, we were not able to investigate this hypothesis due to the limited sample size. Thus, prospective studies are required to determine the ability of each component of MetS to predict the occurrence of cardiovascular events in women. Conclusion MetS appears to provide useful information on a patient's cardiovascular risk in addition to the traditional risk factors. The burden of subclinical atherosclerosis increases in middle-aged adults with MetS. In addition, increasing age and the presence of hyperlipidemia are strong predictors of increased CIMT. We suggest assessing CIMT along with identifying individuals with MetS in order to provide appropriate interventions for the prevention of CVD in middle-aged individuals, especially older women.
The Evolution of Urban Spatial Structure in Brasília: Focusing on the Role of Urban Development Policies Many cities evolve over time, but some are designed from scratch. Brasília is presented as a unique case in urban planning for having been built from figuratively nothing, based on a design concept that was the brainchild of the Brazilian urbanist Lucio Costa. The present study aimed to analyze the interrelation between urban planning and spatial structure change over time to understand the role of urban development policies in the spatial organization of Brasília. The study was conducted based on three interrelated aspects: (1) the intentions of the plans, (2) territorial governance, and (3) external conditions. The results showed that the circumstances of territory occupation—characterized by a polycentric development system with dispersed satellite cities economically dependent on Brasília—have been gradually replaced by strategic development policies, mainly influenced by social and political driving forces. Accordingly, this research suggests a reconsideration of the scale of development instruments, based on a better understanding of the metropolitan area of Brasília as a unique structure, by strengthening its interrelations and seeking better coordination of interests and adaptability of governance processes. Introduction In the last century, certain countries have planned to transfer, or have even established, new capital cities, following the steps of Washington and Canberra, or have developed a town that already exists. More recently, Egypt and South Korea began developing their new capital cities. However, the decision to relocate a capital city is not simple, and the cost must be considered, as well as the fact that, as shown by Gottmann [1], capital cities act as articulators between different regions of the country. The rearrangement of these complex structures is complicated. Therefore, it is imperative for planners and policymakers to understand the dynamics of the spatial structure of these new cities and of urban development policy, given the common argument that planning influences spatial structure, particularly in urban areas. This transformation differs according to the speed and intensity of urbanization processes. In fact, the dynamics of transformation result from the relationship between several factors (political, cultural, natural, technological, and economic) and their agents [2,3]. Although a great part of the existing research has focused on natural and economic evidence to analyze spatial changes, recent studies have begun to pay special attention to the role of urban development policies as important drivers of spatial transformation patterns [4,5], highlighting the importance of developing better land use models to support urban planning.
Urban development tools are multidisciplinary and, therefore, have several objectives and scales, including land use plans, master plans or strategic plans.For various purposes, policymakers pursue conducting urban development with the aim of promoting the sustainable growth of regions [6] or in response to rapidly increasing housing demands [7].Regarding spatial planning in Brazil, discussions of systematic public policies toward the process of urbanization first occurred in the 1960s due to massive immigration and rapid urban growth.According to Veloso [8], the model for urban policies in this period was prescribed by the state due to political and financial support from the federal government.Before this period, the Brazilian territory was more simplistically considered a place of work, residence, and exchanges [9]. The work of Deák [10] shows that urban planning in Brazil was recognized as a set of actions around the spatial organization of urban activities, which were not to be settled or guided by the market.In other words, conception and application in urban planning were meant to be assumed by the state, and from this context, the city of Brasília emerged.The plan for a new capital was associated with the process of national integration through the implementation of the first National Road Plan (1951), promoting access and occupation of the hinterlands, which accelerated the emergence of small villages and the construction of new cities [11].According to the numbers of the Brazilian Institute of Geography and Statistics (IBGE) National Census [12], the northern and west-central regions represented only 6.8% of the national population in 1960.In 2010, that percentage reached 15.9% of the national population. Brasília is considered ground zero in the national road system, which explains the close relationship between the road network system and urbanism in the pilot plan of the new capital of Brazil [13] (pp.230-239).As for the role of Brasília in the master plan for regional development, Costa [14] (p.3), the author of Brasília's masterplan, highlighted in the project report that "Brasília would not be shaped in regional planning, but rather would be the cause of it."In other words, development of the Federal District would be defined by state intervention in the territory according to the guidelines proposed by Lucio Costa.Although the evolution of Brasília's spatial structure is typically described as a product of the application of urban development processes over time, we cannot ignore other driving forces at work within the region.Friedmann [15] calls for non-Euclidean planning in a world of "many space-time geographies."He also argues that planning is the "real time" of everyday events, rather than general strategies.Following the same logic, the work of Graham and Healey [16] criticizes one-dimensional treatments, arguing that planning must consider relationships and processes rather than objects and form.Several researchers and experts consider space a social construction [17,18].It is important to understand that the process of spatial production and organization is in constant transformation.Thus, uncertainty is intrinsic and must be considered as an important factor.Recent research shows that the combination of historical maps, geographic images, and population features can provide fundamental information on living space changes, as well as how changes stand to affect our future environment [19].Studies also adopt document-analysis methods to evaluate the impact of 
planning and public policies in narratives of spatial structure transformation [20]. Therefore, the purpose of this paper is to examine the relationships between the transformation of spatial structure and urban development policies in Brasília, enabling the evaluation of potential strategies for future courses of urbanization. This study is organized into five sections. The opening introduction is followed by the literature review and methodology sections. Section 4 examines the urban development policies in the Federal District, together with how those policies influenced the transformation of urban spatial patterns in Brasília over time, and provides an integrated analysis of this relationship. Finally, conclusions and discussion are presented at the end. Literature Review The relocation of capital cities has occurred in countries with differing levels of economic development and under different political systems. According to Vale [21], more than three-quarters of the capital cities in 1900 were not serving as state capitals in 2000. Some purpose-built cities emerged on a tabula rasa, such as Brasília, Abuja (Nigeria), and Putrajaya (Malaysia). In contrast, cities like New Delhi (India) and Islamabad (Pakistan) have been developed adjacent to prior ones. Therefore, the features, actions, and ideas behind new capital cities require special attention. The social and spatial structures of any given area play important roles in the evolution of new human settlements, emerging from both planning and spontaneous circumstances. These definitions are interrelated to the point where each system affects the other regarding characteristics and management. In land-use change science, planning is consistently identified as a political driver [22]. Current studies adopt the idea of driving forces as a framework for analyzing the causes, processes, and consequences of spatial changes. This approach has become an effective tool for evaluating urban development policies [23]. According to Bürgi et al. [2], five driving forces, which can be classified as political, cultural, natural, technological, and economic forces, determine an actor's decision making. In determining land changes, for example, socioeconomic necessities are articulated in political programs and policies; thus, socioeconomic and political driving forces are strongly interconnected. Many analyses of political forces have been conducted through qualitative and quantitative evaluations of the impacts of urban policies on urban spatial changes [2,24,25]. Pagliarin [26] studied the relationships among suburban land-change patterns, political processes, and urban planning regulations, demonstrating how a dispersed metropolitan structure derives from local planning policies conducted by municipal governments via land-use micro transformations. Although several studies have explored the impact of urban policies and other actors on urban spatial changes, it remains difficult to conceptualize the role of spatial planning because of the complexity of the theme.
Due to government control of land within the Federal District and the lack of a regional plan, studies on spatial structure have focused on the pilot plan of Lucio Costa for the city [27,28].In both previous analyses, Bertaud criticized the ideology that drives land use for having produced mild cases of population dispersion.In a different scenario, Moser [29] identified a spatial segregation issue based on racial identities in Malaysia's new capital city.According to this study, Putrajaya's design emphasizes the Muslim identity while excluding non-Muslims, which does not provide a great deal of flexibility for changing needs and demographic change.Other driving forces cannot be ignored; external and socioeconomic conditions influence the development and application of urban planning.The city of Brasília was developed to be a monocentric city, but due to great demographic growth in the early years, the urban area ended up expanding according to a model of polycentric occupation, with satellite cities scattered throughout the territory. In a study on dormitory towns, Goldstein and Moses [30] highlighted the issue of commuting, especially the use of private cars and residential locations (usually distant from the city center), which increase transportation costs and lower housing prices with distance from the center.Ficher [13] (pp.230-239) showed that automobile use was incorporated into the spatial plan of Brasília, thereby promoting a city molded by hierarchical and specialized traffic routes.In a comparative discourse analysis between two distinguished cities, Brisbane and Hong Kong, Leung et al. [31] explored the effects of the peak oil discourse in influencing urban transport policy and showed how transport policy is highly political.Consequently, roads and transportation, representing a key technological driving force in urban planning, play an important role in shaping metropolitan areas.This junction between the use of roads and dispersed satellite cities apparently shaped the unique spatial structure of Brasília. According to Meijers et al. [32] (p.18), "The establishment of a polycentric urban region as an actor has to deal with a large number of public and private actors, all having their own goals and preferences and often having differences in procedures, culture and power, perceived and real."However, Burger and Meijers [33] showed that most metropolitan areas present more morphologically polycentric than functionally polycentric patterns and that this difference is explained by size, external connectivity, and degree of self-sufficiency of the major center. Regarding the local governance of capital cities, some urban centers perform national functions, while others perform more local functions [34].Kaufmann [35] conducted a study comparing the locational policy agendas of Bern, Ottawa, The Hague, and Washington, D.C., revealing that local autonomy constraints, such as city budgets, are more common in purpose-built capitals than in purposely selected capitals.Consequently, secondary capital cities, such as Washington, D.C., Ottawa, and Brasília, tend to request compensation payments, and elaborate development-oriented policies agendas.In addition, local governments are central actors in urban governance arrangements, since they lack an industrial history and strong private agents.The challenge of these cities is to find the equilibrium between government-market interests. 
However, the methodology for evaluating long-term urban development remains necessary.Through GIS retrospective analysis, García-Ayllón [36] examined the evolution of the land market and the resulting urbanization in La Manga, a city created in 1963 out of nowhere as part of a strategy to develop tourism in Southeastern Spain.The analysis criticized the La Manga urban process as a tourism product market from the perspective of supply and demand for land, which ended up slowing the value of land, generating overcrowding, and aggravating the road traffic. Following Adams [37], the relationships and interactions among agents such as developers, politicians, and landowners shape urban development processes.On the other side of the argument, Anas et al. [38] focused their studies on the relationship between urban spatial structure and market forces.The authors claimed that continuing decentralization represents a more polycentric form of urbanization, with subcenters that depended on an old central business district (CBD).According to Anas et al. [38] (p.1), some subcenters are older towns, which were gradually incorporated into expanded urban areas.Others, by contrast, are distant from city centers, having been spawned at nodes of urban transportation networks, and are usually known as "edge cities." To better understand these dynamics, several studies have evaluated and mapped spatial patterns and tendencies in metropolitan areas by analyzing land cover changes [39,40] and population patterns [41,42].Ihlandfeldt [43] investigated the spatial distribution of jobs in Atlanta metropolitan areas, with the results indicating that people-regardless of their race or employment status-have poor access to jobs, a fact that is attributed to residential segregation.In reference to Brasília, Holanda et al. [44] discussed the main attributes of the metropolis concerning the economics of urban sprawl.Through several indices, the authors measured fragmentation, dispersion, and eccentricity within the region, showing how these features have negative consequences for socio-spatial segregation in Brasília.The study revealed a positive correlation between family income and distance from the CBD, and an arrangement wherein low-income families tend to live farther from the city center, which is a typical characteristic of Brazilian cities.In a recent study conducted by Pereira and Schwanen [45] on commuting time in Brazil among metropolitan regions, it was affirmed that in the Federal District, journey-to-work trips are 75% longer between the poorest population decile and the richest decile.In another study conducted in Beijing, Lin et al. [46] suggested that the development of economic and employment clusters could influence employees' commuting times. 
Although there are many studies on the characteristics of urban spatial structures in Brasília, the outcomes of urban development policies are rarely evaluated.Instead, measures of given impacts usually focus on user satisfaction [47].Planning evaluation is an important stage of the spatial planning process and is fundamental to understanding the role of urban policies.Gordon [48] analyzed the 1915 Report of a General Plan for Canada's capital and discussed the transition in planning practices from "city beautiful" to "city scientific," showing that the aesthetic approach ignored significant aspects, such as housing and social issues.Nevertheless, few empirical studies have focused on urban planning implementation, including which of its aspects are important to measure.Lai and Baker [49] claimed to develop a theory of planning process and the need for strategic regional plans and strategic planning bodies in growing economies.Moreover, as pointed out in the work of Kinzer [50], disagreement on a clear definition of what is considered successful planning implementation is a barrier to reaching a better understanding of the role of public policies.Because the relationship between the evolution of spatial structure in Brasília and urban development policy is not addressed in previous articles, this study aims to provide a better linkage between these two topics. Study Case Area Many cities evolve over time, but some are designed from scratch.In 1915, the urban planner Edward Bennett launched the "city beautiful" plan for Canada´s new capital, Ottawa.The plan included the cities of Ottawa and Hull.When Queen Victoria designated Ottawa as Canada's capital in 1857, Ottawa was a small lumber town with the population of 10,000-12,000 [48]. Another case is Pakistan's new capital city.From 1959 to 1963, Pakistan was also conceiving the master plan of a new capital city as a part of a large metropolitan area by integrating it to the city of Rawalpindi as a twin city [51].However, the plan was never put into practice due to the lack of institutional development.Recently, aiming at balanced national growth between the Seoul metropolitan area and local regions, South Korea developed a new administrative city of Sejong, which is 145km away from the city center of Seoul.Sejong is based on the concept of sustainable development with transit-oriented development (TOD) and traditional neighborhood development (TND).The city's expected population is 500,000 [52].Brasília is presented as a unique case on urban planning for having been built from nothing in a depopulated area at the end of the 1950s. 
The initial layout of Brasília was the brainchild of the Brazilian urbanist Lucio Costa. Moreover, the city is the center of the political and administrative power of Brazil, within a region known as the Federal District. Facing geopolitical and economic changes, the region was declared a World Heritage Site by the United Nations Educational, Scientific, and Cultural Organization (UNESCO) in 1987 and has undergone various planning approaches. With its greatly expanded urban area, the city today is much more complex than the city represented in the pilot plan of Lucio Costa (see Figure 1). The new capital of Brazil was intended to be a monocentric city. Instead, the urban area ended up following a model of polycentric occupation via the implementation of satellite cities scattered throughout the territory. The model proposed by Lucio Costa represents only a small part of the urban picture in today's Federal District; the regions officially called "Brasília," "South Lake," and "North Lake" accommodate only 12% of the metropolitan population [46]. The area of influence of Brasília extends to cities of the neighboring state, near the limits of the Federal District. Its status as the third largest Brazilian metropolis, with an estimated population of 3,039,444 in 2017 (IBGE), was achieved through the urban development strategies employed in the Federal District. Consequently, this study shows how urban development processes are fundamental to the connection between urban growth and spatial organization in the region.
Moreover, until 1960, the Brazilian modernist movement was well respected on an international scale. Between the decades of 1930 and 1960, Brazil was known as the world capital of modernism, with the claim that "nowhere else was modernist architecture so enthusiastically adopted as a national style" as in Brazil [53] (p. 2). In 1929 and 1936, Le Corbusier visited Brazil to work on a project in Rio de Janeiro, during which he promoted some conferences that helped to advance his ideas in the country and powerfully informed his own practice [54] (pp. 113-115). The Swiss architect had an enormous influence on the work of Lucio Costa, Oscar Niemeyer, and other architects of that generation.
Data and Methodology Government policies can be considered as the main driving forces of growth in urban areas [55]. Nevertheless, quantifying governance and conceptualizing the role of urban development policies in spatial structure is a great challenge [56], partly due to limited knowledge. In addition, the challenge is due to uncertainty about the definition of the production of space, as Hillier [57] (p. 30) stated that "unexpected elements come into play and things do not work out as expected in strategic planning practice." In this way, space is a social construction. On the other hand, Briassoulis [58] considers policy-making and planning to be technical, stylized, and top-down activities. Concerning analysis of the role of spatial planning on land change, Hersperger et al. [59] proposed a framework based on three important interrelated elements (see Figure 2): namely, (1) the intentions indicated in planning maps or text, together with the built environment as envisioned, (2) territorial governance (in other words, the processes by which policies that involve the coordination of different actors and interests are developed), and (3) any external conditions that might affect the development and/or implementation of a given plan (for example, unstable economic or political situations). These conditions can reinforce path-dependent policies or result in the selection of new paths. Regarding data sources, government policies and political factors were considered as indicators for analysis (see Table 1). In addition, the Territorial and Urban Information System (TUIS) developed maps of the urban expansion of Brasília from 1958 to 2015, from which important information was extracted about urban development changes and fragmentation over time. These maps are suitable for general urban analysis, but they cannot quantify levels of urbanization because they do not provide information on population and housing density [60]. This analysis is contextually rooted in the temporal evolution of the urban characteristics observed in Brasília and is useful in describing the dispersed concentration of the population in specific polycentric patterns. The evolution of Brasília's spatial structure provided an ideal context to examine how space responds to political uncertainty and change, and when development agendas converge with public interests. (From Table 1: the 2009 Land Use Plan, the most recent plan, creates an urban containment zone to control irregular growth. Source: Public Archives of the Federal District, Brasília.) Consequently, this research adopted the framework of Hersperger et al.
[59] as its primary methodology. In other words, this study was mostly conducted through descriptive analyses, including demographic and socioeconomic analysis and spatial analysis of the evolution of urban growth in the metropolitan Brasília area. In addition, this study analyzed the spatial characteristics of the satellite cities as well as of Brasília over time. Finally, this study describes the relationships between urban development policies and the evolution of the urban spatial structure of the metropolitan Brasília area. Population data corresponding to each period were derived from National Census reports (1960, 1970, 1983, 1991, 2000, and 2010) and Household Sample Surveys (2013 and 2015). Other data were borrowed from existing studies, documents, and related figures. The analysis of spatial evolution and population growth was developed by contrasting the numbers for corresponding years in order to obtain long-term growth rates (a minimal computational sketch of this comparison appears below). In this way, this research sought to focus on a critical analysis of the existing literature, tracing the transformation of urban forms and connecting public policies with their consequences for the development of the region. Origination of Brasília Brasília is not a conventional city. It did not originate spontaneously from any previous occupation of space as a result of the economic, social, and political processes inherent in urban dynamics. Instead, it is a city that emerged from an idea transformed into a design. The city was built as a symbol of modernity under the precepts of modernist ideals. The physical shape and structure of the metropolitan area of Brasília have been gradually modified according to zoning regulations, mainly via housing development policies. What is more, Brazilian society faced trends of internal migration on a scale never observed before, especially between the decades of 1950 and 1970. At this point, Brazil had officially become an urban nation due to the intensification of the industrialization process, initiated mainly by the opening of the economy to foreign capital [11]. As its first act in the implementation of the new capital, the federal government required the transfer of the capital from Rio de Janeiro to the new city, together with the creation of the Urbanization Company of the New Capital of Brazil (NOVACAP). The company was established to control land use and to directly execute, or contract companies for, projects on behalf of the state. In addition, NOVACAP was put in charge of a national contest for the elaboration of the master plan of Brasília. The federal government envisioned a plan for a maximum of 500,000 inhabitants, along with a roadway and railway connecting Brasília to Anápolis (taking into consideration the pre-existing location of the airport) in the southwest area of the Federal District. Urban planner Lucio Costa, the author of the winning project, developed a plan that masterfully included all the central elements of the territory. The original project was inspired by concepts of urban modernism, idealized from a rational and functional plan based on a transport system of roadways [61]. Accordingly, the Pilot Plan "Report" on Brasília (1957) is considered the first application of urban regulation in the region. In addition to the creation of the satellite cities, later termed "administrative regions," these first elements, as envisioned by Costa, serve to confirm the role of Brasília as a planned center, which was intended to influence the production of space within the future metropolitan area.
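Before continuing with the plan itself, the following is a minimal sketch of the growth-rate comparison flagged in the methodology above: contrasting population and urban-area figures for corresponding years to obtain long-term (compound annual) growth rates. This is not the authors' code, and the numerical values are hypothetical placeholders rather than the study's census or TUIS figures.

```python
# Hypothetical figures for illustration only; the study's actual values come from
# IBGE census reports and TUIS urban-expansion maps.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two observations `years` apart."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Placeholder values (NOT the study's data).
population = {1991: 1_600_000, 2010: 2_570_000}
urban_area_km2 = {1991: 400.0, 2010: 700.0}

span = 2010 - 1991
pop_rate = cagr(population[1991], population[2010], span)
area_rate = cagr(urban_area_km2[1991], urban_area_km2[2010], span)

# An urban-area rate above the population rate indicates dispersion (falling density),
# the pattern the text attributes to the 1990s and early 2000s.
print(f"population: {pop_rate:.2%}/yr, urban area: {area_rate:.2%}/yr")
```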
Regarding the Pilot Plan's spatial structure, Lucio Costa divided the built space into four sectors: monumental, residential, social, and bucolic (see Figure 3). The architect did not prohibit the mixed use of sectors; however, the distribution applied both in Brasília and in the satellite cities led to the sectorization of functions all over the territory. The theory of functionalism, upheld by Le Corbusier (arguably Costa's greatest influence), established that in a "Contemporary City" everything is classified by function, with discrete functions occupying and characterizing separate sectors [7]. Moreover, in Brasília, the land was controlled by the state and administratively distributed, rather than sold on a free market. Lucio Costa suggested that the NOVACAP urbanization company would act as a real estate developer and that the price index should follow demand. In his own words: "I understand that the blocks should not be landed [and], rather than the sale of lots, the state should provide land quotas, whose value would depend on the location, in order not to impede the current planning and possible future remodeling in the internal delineation of the blocks" [14] (p. 15). Additionally, the author suggested an upfront evaluation of all proposed private projects in two stages, namely "draft" and "definitive" projects, in order to promote better quality control of the built environment. Complementing its applied functionalism, one of the foundations of the new city was its extensive road network, implemented over the whole territory as part of the National Highway Plan, of which Brasília was the center [11]. Carpintero [62] states that the road network of Brasília was based on specific functions established in the Charter of Athens, with the two structuring axes (the Monumental Axis and the South-North Axis) converging on the central area. Le Corbusier used the same concept in the Contemporary City, described as "Two great superhighways (one running east-west, the other north-south) form the central axes, intersecting at the center of the city" [6]. Accordingly, the work of Costa [14] (p. 15) states that "Because the framework is so clearly defined, it would be easy to build." The project prioritized the use of private automobiles as the main means of urban transportation, with highways at the core of the city. Moreover, the project followed the guidelines of the garden city movement, characterized by a great proportion of green and open spaces and low-density occupation, thus lending the urban environment a park feeling [63]. The empty spaces in Brasília were considered elements of the modernist structure, and Lucio Costa justified them by saying that he was inspired by the immense lawns of English landscapes [62] (pp. 132-133). The description above shows the foundation of the current spatial structure of Brasília, whose process of urban evolution can be divided into two distinct periods: the first two decades (the 1960s and 1970s), when the main satellite cities and road structure were implemented, and the second period (from 1980 to the present day), which has seen the consolidation of the metropolitan area.
Residential Sector To apply the principles of highway techniques in urban planning, the curved axis was given the main commuting function, with a broad central highway and side roads for local traffic. The main residential areas are located along this road axis. Social graduation is attained by assigning greater value to certain blocks. In addition, the emergence of slums must be prevented in both peripheral urban areas and rural areas. The responsibility of NOVACAP was to legally regulate and provide decent and affordable housing to all inhabitants. The construction of residential areas is not permitted around the edge of the lake; instead, the area is designated for leisure and entertainment facilities and is preserved as a natural landscape. Regarding population density, the contest specified a plan to accommodate a maximum of 500,000 inhabitants. Construction of satellite cities was foreseen in the plan at the point when the population of Brasília reached this original limit. Monumental Sector Civic and administrative centers, cultural centers, sporting arenas, storage, banking, commercial retail, small local industries, and railway stations were set to be located along the transverse axis, known as the Monumental Axis (shown on the map above). Social Sector The entertainment sector (social sector) is located at the intersection of the Monumental Axis, characterized by cinemas, theaters, and restaurants, and is directly connected to the railway station (later replaced by a bus terminal). Bucolic Sector Parks, great green spaces, a zoo, botanical gardens, and a sports complex comprise the bucolic sector. Modernist and Centralized Period Analysis of the urban policy implementation process is fundamental to understanding the interrelation between urban growth and centralized political control in the Federal District. The pilot plan predicted the long-term creation of orderly planned satellite cities in case the center city reached its limit of 500,000 inhabitants over time [64]. Instead, construction of the satellite cities was precipitated by rapid population growth within the first years of the new city (see Table 2). The immigrants were in great part escaping from the historic drought that occurred in Northeastern Brazil between 1957 and 1958, which contributed to the proliferation of illegal settlements around the territory [10]. In 1960, the inaugural year of the new capital, the first three satellite cities of Taguatinga, Sobradinho, and Gama were already established with the purpose of accommodating residents of illegal settlements. Taguatinga was strategically located near the department of the National Institute of Immigration and Settlement (30 km from the CBD), which was in charge of connecting workers with job opportunities [65] (pp. 85-94). Sobradinho (20 km from the CBD), also a destination for inhabitants of illegal settlements, soon became a common residence for federal workers. The city of Gama (38 km from the CBD) accommodated residents from illegal settlements and was built
around the construction sites of Brasília. The design of Gama was inspired by the project that took third place in the national contest for the new capital. Bertaud [27] states that cities, as they grow in size, tend to change gradually from a monocentric structure to a polycentric structure. The work of Medeiros and Campos [66], however, shows how Brasília was born as a polycentric town. The first satellite cities consisted of urban centers promoted by the state, designed as dormitory cities with most of the economically active population working outside the municipality. This policy continued over the following years, physically isolating low-income residents (see Table 3). Consequently, these dwellers were forced to face long-distance commuting, costly public transportation, and reduced access to urban scale economies [67]. Brasília was initially planned as a car-oriented city with modernist design concepts. Regarding the road system, the government launched the Federal District Highway Plan in 1960 with the aim of integration, circulation, and distribution of local production. Inspired by the American parkway system, the plan included 13 parkways linking regional and federal highways. Among these, one parkway, called the Contorno Park Road (140 km), was built around the city center as a physical barrier to control urban growth; it was intended, in part, to preserve the pilot plan. Moreover, in 1961, a Conservation Unit located in the National Park was created from the need to protect the rivers supplying water to the federal capital and to maintain the natural vegetation. Comprising 30,000 hectares at the time, this unit also contributed to controlling urban growth in the north of Brasília. The patterns of expansion in Brasília and the satellite cities ended up defining the model of dispersed urban growth for the next 20 years. Ferreira [68] argued that by not following the specifications of the original plan (which accounted for peripheral growth at a later phase due to natural expansion), Brasília optimized an organizational strategy of space. In 1964, the country underwent tremendous political change at the hands of a military coup (the first major external condition affecting territorial governance). The period brought resurging interest in consolidating Brasília as the capital of Brazil, following strong resistance from political opponents of Juscelino Kubitschek, the ex-President responsible for the construction of Brasília [11]. Moreover, between 1960 and 1964, the city had seven different mayors, with public institutions facing several administrative and structural changes. During this period of political uncertainty, immigration was intense but was soon controlled by the military. As part of the strategy to restore the investments made in Brasília, the Housing Finance System (Law no. 4380, 21 August 1964) was created to meet the national demand for housing, particularly in the middle- and low-income segments of the population. The National Housing Bank (NHB), as the central instrument of the Housing Finance System, had economic mechanisms for stimulating the acquisition and construction of social-interest housing through private initiatives. The hallmark of the NHB was communication between the public and private sectors, which would oversee the production, distribution, and control of new dwellings to serve the greater need.
Also in 1964, as a strategy to consolidate the polycentric urban model, Law 4.545/64 was created to divide the territory of the Federal District into administrative regions, including Brasília, Gama, Taguatinga, Brazlândia, Sobradinho, Planaltina, Paranoá, and Núcleo Bandeirante. In 1967, Guará was created. In 1969, Ceilândia was developed to accommodate 40,000 residents from an illegal settlement known as Vila IAPI (see Table 4). In contrast to the previous cities, these two new towns marked the inauguration of a new urban strategy insofar as both were located within consolidated regions [10]. This new approach would ultimately be confirmed by the Territorial Organization Structural Plan of 1978. Regarding rising demand for high-income housing, the government expanded the regions of Lago Sul and Lago Norte and oversaw the creation of "Park Way" (all located within the limits of the road-park beltway) near the lake in southwestern Brasília. In the same period, in order to stimulate investment in rural areas, several roads were built or expanded to access the production centers. One year later, in 1968, the first Transportation Master Plan of the Federal District was commissioned by the Ministry of Transport and the Secretary of Planning and was created. This plan was established following the Territorial Organization Structural Plan, excluding the city of Brazlândia, which was located far from the city center and had a small volume of commuting at that time. However, due to the lack of renewal of the proposed guidelines, the first transportation master plan became obsolete. Between the 1970s and 1980s, Brasília faced expansion around the satellite cities due to the construction of large residential neighborhoods through the Housing Finance System. The work of Ferreira [68] points out that the territory's periphery accommodated 91% of all the low-income families in the entire region, which consisted of 570,000 inhabitants in 1973. Moreover, it is important to highlight that housing in Brasília was primarily reserved for public sector employees from the old capital, Rio de Janeiro [69] (pp. 64-65). The region of Cruzeiro was reserved for the military, and the "wings" were reserved for public service workers [70]. Strategic and Decentralized Period In the early 1980s, the state developed two new satellite cities following a gap of 12 years after the establishment of the final city of the preceding wave (Ceilândia in 1969). The second generation of satellite cities started with Samambaia (1981) and Riacho Fundo (1983), which followed the economic block of Taguatinga and Ceilândia bordering the Taguatinga park road. In addition, an industrial sector was installed within Ceilândia's limits to meet the demand for jobs in the region, followed by the construction of an expressway directly connected to the Pilot Plan. In the following years, in order to address the housing shortage, the government of Brasília implemented the administrative municipality of Samambaia, and later those of Santa Maria, Recanto das Emas, São Sebastião, Paranoá, Riacho Fundo 1 and 2, and Candangolândia. In addition, the government oversaw the expansion of the pre-existing satellite cities.
In 1985, during the redemocratization of Brazil (following 20 years of military regime), the local government elaborated upon the Plan of Territorial Occupation. At this point, the new government proposed micro zoning and established the use of land according to two basic categories: urban and rural soils. These new measures were a strict response to previous indiscriminate use of rural areas for urban purposes. In addition, two important features of this phase were the push for the formation of an urban agglomeration from Taguatinga and Ceilândia to Gama (see Figure 4), and criticism of the theory and practice of functionalism. Ten years after the first plan, in 1987, the Public Transportation System of the Federal District was created to accommodate the demands of society. In 1991, the government took the first step towards a mass transport system, which was inaugurated in 2001, connecting the satellite cities of Samambaia, Taguatinga, Águas Claras, and Guará to the Pilot Plan. In 2006, Ceilândia opened its first station and was integrated into the existing transportation system.

In 1988, the new Federal Constitution (the second major external condition affecting territorial governance) granted political-administrative autonomy to the states, cities, and the Federal District. In addition, the constitution forced cities to formulate their own Urban Development Master Plans [71]. The Plan of Land Use, implemented in 1990, consolidated the definition of urban and rural zones, opening for the first time the possibility for private interests to participate in the parceling of land. The period also marks the return on investment in roads within the eastern sector, as well as maintenance of the road-park beltways. The work of Lopes [72] argues that this period was characterized by the dominance of initiatives of metropolitan actions, in which, time and again, will was shown to surpass rationality.

Regarding the impact of territorial governance on urbanization, another aspect of the new constitution (1988) was the decentralization of urban management throughout the whole district toward the aim of greater independence in its localities. From this moment, each municipality was expected to form discrete departments of planning, which were to be in charge of developing zoning codes. In addition, the plan required the creation of jobs all over the district to improve the job-housing balance.
In the 1990s, the administrative regions of Sudoeste (in the central area of Brasília) and Águas Claras (between Guará and Taguatinga) were created to appeal to middle-upper and middle-income inhabitants. Águas Claras, in particular, is characterized by the verticalization of its residential buildings (the limits of which were modified in relation to Lucio Costa's original plan), thereby intensifying the density in this region [70] (see Table 5). Regarding the dispersed expansion (represented by the distant satellite cities), one of the objectives was to preserve the function of the federal government [73]. In 1992, the first Land Use Plan consolidated the region of the pilot plan and Taguatinga as complementary centers, connected by a system of mass transport. Along these lines, the plan also reinforced the satellite cities of Samambaia, Recanto das Emas, Gama, and Santa Maria as poles of secondary development. The strategy of satellite cities (cities developed beyond the limits of the road-park beltway) transformed the Federal District into a more dispersed spatial structure (see Figure 5). Concerning urban expansion along the northeast and southeast vectors, the revision of the land use plan (1997) required rigid control throughout the region, after which housing districts were implemented and urbanization was expanded through this vector. Additionally, the plans confirmed the west/southwest axis as a priority in the evolution of the new spatial structure. Regarding economic decentralization, the plan promoted the creation of an industrial pole in Santa Maria, a fashion pole in Guará, and a wholesale pole in Recanto das Emas (cities located in the southwest). Santa Maria and Recanto das Emas are both situated on the growth vector established in the Plan of Land Use from 1990.

In 2009, the government launched a review of the land use plan (1997), confirming Brasília as a developing regional center and national metropolis. Regarding the novel policies of decentralization, the document reinforced the push for the creation of new projects in consolidated areas. Although past land use plans had fought against dispersed growth, during the decades of the 1990s and early 2000s the urban area growth rate was higher than population growth (see Figure 6, Figure 7, and Table 6).

The new plans insisted on low/medium density and predominantly residential projects on the east side across the lake, an area where an increasing number of condominiums has been observed in recent years. At the same time, the zoning plan was more restrictive, strictly prescribing areas of environmental concern, mostly located on the east side, aiming toward the preservation of agricultural land.
Regarding transportation strategy plans, the government launched the Urban Transportation Plan (law no. 4.566/2011), establishing the plan for "Integrated Brasília," in which the entire transport network of the territory was to be combined into a single regional system, integrating the itineraries of both the bus and subway systems. The plan established "Travel Generator Poles," intending to optimize the impact of mass transport on surrounding urban circulation rather than focusing on the formation or consolidation of these areas [61]. In terms of the area covered in the new plan, the project established both the territory of the Federal District and the eight municipalities in the surrounding state of Goiás as the target area of influence.

According to the last household travel survey conducted in Brasília (2000 and 2009), the implementation of a subway system and improvements in public transport did not result in significant changes in transport mode choice within the Federal District. The percentage of inhabitants using public transport increased from 36.6% in 2000 to 41.0% in 2009, while the use of the private automobile remained practically the same over nine years, from 50.9% to 51.0% [74]. This marginal change in transport mode choice could be partially explained by the distribution of land use in the region (see Figure 7). The signs of functionalism are still present in Brasília, where the separated land use patterns do not contribute to reducing vehicle travel or increasing the use of alternative modes, particularly walking.
Summary and Conclusion

This study examined the evolution of urban spatial structure in Brasília, focusing on the role of urban development policies. From the pilot plan of Lucio Costa (1957) to the most recent land use plan of 2009, this research traced the shift from a modernist and centralized policy to a strategic and decentralized policy in the metropolitan Brasília area. The evolution of the urban form of the Federal District reflects a changing posture due to public policies.

Based on the comparison and analysis of historical facts, collected data, and various theories, political realities from the earliest stage of the development of Brasília must be considered as the foundation of its current spatial structure. The urban expansion of Brasília into peripheral areas occurred before the end of construction of the pilot plan due to public policies that aimed to preserve the original design via the rigid control of land use within the territory. Rapid population growth, however, created additional pressure for residential space in both inner and peripheral areas of the region. Moreover, by not accommodating different social classes or enabling the fostering of social relations, Brasília emerged as an embodiment of top-down planning. In this arrangement, it is possible to observe the origins of social segregation and fragmentation in Brasília, both founded on the intrinsic functional hierarchy in the original plan of Lucio Costa. Accordingly, the social driving forces observed in this study influenced new urban development in both directions, toward decentralization and against decentralization, depending on which social groups or political parties were involved at specific periods in the history of the urban development of Brasília.

During its early years of establishment, the Federal District faced a period of confused territorial governance due to unstable political circumstances on both local and national scales. Brasília went through a long period without any systematic regional approach for development. In addition, local authorities needed to address the problem of illegal residents through the creation of satellite cities far from the central area. These practices resulted in a disconnected network of districts without a properly integrated infrastructure and caused unnecessary consumption of land. These problems were mitigated only after the implementation of both the Special Program for the Geoeconomic Region of Brasília (1975) and the Territorial Organization Structural Plan (1978), the first two signals of strategic development.
The most important aspect observed in this study is the impact of territorial governance on the evolution of urban form in Brasília. In Brazil, urban management tends to be conducted without integrated strategies or instruments of action. This sectoral approach, which accumulates various urban policies that focus on the same territorial base, leads to spatial fragmentation. This fragmentation, observed over the past 40 years, undermines the unity of the territory, which is essential for the local economy and is an important issue in prior purpose-built capital cities, especially secondary capital cities such as Brasília.

Discussion and Policy Implications

Although we agree that the local government's autonomy granted by the new federal constitution of 1988 is undoubtedly a form of democratic progress, the Brasília metropolitan area should be coordinated by regional planning, including central governments and local authorities. Additionally, in order to achieve proper territorial governance, this study considers it essential to give more administrative autonomy to urban development departments in local governments. The current administrative structure, in which urban development strategies are interrelated with political interests, is unfavorable to long-term sustainable urban development.

Furthermore, it is important to reinforce the policy of mixed-use development. Although mixed-use practices have gained attention in current master plan practice, they remain poorly implemented. Transportation infrastructure investments have concentrated exclusively on public transportation, without integrating related sectors such as housing and land use; on their own, such investments are unlikely to achieve socially sustainable development. Moreover, as in other purpose-built capital cities such as Putrajaya, the segregation resulting from prior housing policies, observed through the demographic division between the satellite cities and Brasília, must be avoided in future development plans in Brasília or in any other new capital city under development, such as Sejong in South Korea and the yet-unnamed new capital city of Egypt.

This study indicated the important role of public policies in the evolution of the metropolitan Brasília area. The next step would be coordinated development among local governments and the Federal District through long-term strategies to achieve sustainable growth. This must be done without reducing the power of the municipalities. Instead, public policies should strengthen existing relationships between stakeholders, seeking better coordination of interests and improving the adaptability of governance processes in relation to socioeconomic and environmental demands. This study contributes toward a better reading of long-term urban development policies and their effects on the spatial structure of Brasília over time. The comparison with similar purpose-built capital cities reveals similarities that must be considered when new urban settlements are developed. The findings presented here, therefore, add to the recent literature arguing that strategic development and flexibilization are fundamental for meeting the changing needs of new capital cities.
Due to some limitations in this study, a few questions remained unanswered. It was not possible to conduct an empirical analysis of the impact of spatial structure on housing, transportation, environment, quality of life, and so on. Thus, new studies should address the consequences of the evolution of metropolitan Brasília.

Figure 1. Map of the Federal District. Source: Data collected from the 2015 District Household Sample Survey (Pesquisa Distrital por Amostra de Domicílios: PDAD); urban cover area from the Secretary of State for Territorial and Housing Management (Secretaria de Estado de Gestao do Territorio e Habitacao: SEGETH).
Figure 2. Diagram of the impact of spatial planning on spatial structure change. Source: Diagram adapted from Hersperger et al. (2018).
Figure 3. Functionalism in Brasília described by the urban planner Lucio Costa, the author of the master plan of Brasília in 1957. Source: Costa (1957) [14].
Figure 5. Evolution of the urban spatial structure (metro line works started in 1992 and completed in 2008). Source: Drawn by authors, SEGETH 1956-1994.
Figure 7. Built area land use in the Federal District (2017). Source: Adapted from Urban Land and Territorial Tax Report (SEGETH, 2017).
Table 1. Main urban policies implemented in the Federal District.
Table 2. Population and annual growth rate of Brasília and Brazil from 1960 to 2018.
Table 3. Monthly income range calculated based on the 1980 monthly minimum wage rate.
Table 4. The first generation of satellite cities in the Federal District (2015).
Table 5. The second generation of satellite cities in the Federal District (2015).
Table 6. Population growth rate over time in Brasília.
v3-fos-license
2019-11-20T05:41:00.782Z
2009-01-01T00:00:00.000
607750
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://hqlo.biomedcentral.com/counter/pdf/10.1186/1477-7525-7-88", "pdf_hash": "9a84f05dc8eb9533411e8fe7472674cb54d37545", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2788", "s2fieldsofstudy": [ "Medicine" ], "sha1": "60a18f114b06c495ac82be684e9b77b5fe2c015b", "year": 2009 }
pes2o/s2orc
Health and Quality of Life Outcomes

Background: The AMC Linear Disability Score (ALDS) is a calibrated generic item bank to measure the level of physical disability in patients with chronic diseases. The ALDS has already been validated in different patient populations suffering from chronic diseases. The aim of this study was to assess the clinimetric properties of the ALDS in patients with peripheral arterial disease.

Methods: Patients with intermittent claudication (IC) and critical limb ischemia (CLI) presenting from January 2007 through November 2007 were included. Risk factors for atherosclerosis, ankle/brachial index and toe pressure, the Vascular Quality of Life Questionnaire (VascuQol), and the ALDS were recorded. To compare ALDS and VascuQol scores between the two patient groups, an unpaired t-test was used. Correlations were determined between VascuQol, ALDS and pressure measurements.

Results: Sixty-two patients were included (44 male, mean ± sd age 68 ± 11 years) with IC (n = 26) and CLI (n = 36). The average ALDS was significantly higher in patients with IC (80 ± 10) compared to patients with CLI (64 ± 18). Internal reliability consistency of the ALDS expressed as Cronbach's α coefficient was excellent (α > 0.90). There was a strong convergent correlation between the ALDS and the disability-related Activity domain of the VascuQol (r = 0.64).

Conclusion: The ALDS is a promising clinimetric instrument to measure disability in patients with various stages of peripheral arterial disease.

Background
The impact of a disease on a patient's quality of life and level of activities of daily life (ADL) is an important outcome measure in clinical studies [1]. It is well known that perceived quality of life and ADL are significantly impaired in individuals with peripheral arterial disease (PAD) [2-5]. There are several instruments available to measure quality of life in patients with PAD. Both generic instruments,

participation component. Differences were also observed in how other ICF components (body functions, environmental factors) and health are operationalized in the instruments.

Conclusion: Linking the meaningful concepts in the participation instruments to the ICF classification provided an objective and comprehensive method for analyzing the content. The content analysis revealed differences in how the concept of participation is operationalized and these differences should be considered when selecting an instrument.

Background
Participation is cited as central to a person's quality of life and well-being [1]. The reduction of disabilities and improving participation for individuals with disabilities are therefore important goals of rehabilitation [2]. Working for pay, attending school and joining in community activities are all examples of life situations that comprise participation.
Participation is defined in the International Classification of Functioning, Disability and Health (ICF) as the 'involvement in a life situation' and participation restrictions are defined as 'problems an individual may experience in the involvement in life situations' [3]. Although the idea of participation is not new, participation as defined in the ICF is a relatively new concept and as a result the conceptualization and measurement of participation continues to evolve [4]. Whiteneck [5] in his critique of the ICF recommended that new instruments operationalizing the concepts in the ICF be developed and tested to assess the relationship among the concepts in the ICF model. Instruments should be pure measures and not contain content from other ICF concepts if the intent is to examine the relationship among the concepts in the ICF model [6]. Furthermore, if instruments are to be used to evaluate treatment effects then the content of the individual questions must be clearly understood since there is a chance of not capturing the effect if multiple outcomes are assessed [6]. It is therefore necessary to identify participation instruments developed using the ICF and then examine the content to determine how the concept of participation has been operationalized and if content pertaining to other concepts is included. In 2003 Perenboom and Chorus [2] reviewed the literature and examined how existing generic instruments assess participation according to the ICF. These authors concluded that most of the instruments evaluate one or more domains related to participation but none of them measure all the domains [2]. Since Perenboom and Chorus [2] conducted their review, new instruments have been developed using the ICF. A preliminary version of the ICF was published in 1997 and the first version was published in 2001, as a result few of the instruments included in the Perenboom and Chorus [2] review were based on the ICF model. The methodology for linking content of instruments to the ICF classification has been developed [7,8] and this methodology is recommended since it provides a standardized framework for evaluating content [9]. To date, this methodology has been used to compare the content of both generic and disease-specific instruments [9,10]. The purpose of this study was to build on the work by Perenboom and Chorus [2] and examine the content of instruments measuring participation according to the ICF using the published methodology. Concept of Participation In the ICF model the concepts of activity and participation are differentiated, but in the classification these concepts are combined and there is a single list of domains covering various actions and life areas. The user is provided with four options on how activity and participation can be considered: 1) divide activity and participation domains and do not allow for any overlap; 2) allow for partial overlap between activity and participation domains; 3) operationalize participation as broad categories within the domains and activity as the more detailed categories, with either partial or no overlap; and 4) allow for complete overlap in the domains considered to be activity and participation [3]. Similarly, in the literature there is no consensus regarding how activity is differentiated from participation [2,5,[11][12][13][14]. Some have suggested that participation comprises life roles [2] whereas others have used multiple criteria to differentiate these concepts [5]. 
In this study option number one (described above) was selected to differentiate these two concepts. The following ICF domains (or chapter headings) were considered relevant to the concept of participation: Communication; Mobility; Self-care; Domestic life; Interpersonal interactions and relationships; Major life areas; and Community, social and civic life (Chapters 3 to 9 respectively). For the purpose of this study, chapter headings were used instead of interpreting the individual questions according to criteria since it was felt to be more objective. Chapter 1 Learning and applying knowledge and Chapter 2 General tasks and demands cover content primarily related to the ICF concept of activity, defined as 'execution of a task or action by an individual' [3] and were therefore not included. Instruments A systematic search of seven databases [Medline; CINAHL; EMBASE; HaPI; Psyc (Info, Articles, Books)] was conducted to identify all the instruments that assess participation and were based on the ICIDH-2 or ICF model. The ICIDH-2 was first released in 1997 and so the search included articles published between 1997 and March 2008. Instruments including domains covering a minimum of three chapters in the ICIDH-2 participation dimension, or three chapters from the ICF Chapters 3 to 9 in the activities and participation component, were considered to assess participation. A minimum of three ICIDH-2 participation dimensions or three ICF chapters were required in order to exclude specific instruments (e.g. employment instruments). Instruments which met this definition of participation were then included if they were designed to assess participation in the community, either self-administered or interview administered, generic in content, developed for adults and published in English. A list of the search terms is provided in the Appendix. Linking to the ICF Classification For each instrument all questions were assigned ICF categories or codes, also known as linking or cross-walking. First the content contained within each of the questions and, if applicable, response options (response scale) were identified using standardized linking rules [8]. This content is referred to as the meaningful concept(s) in the published methodology [8]. The meaningful concept(s) capture all of the ideas or information contained within a question and these concepts are used to select the ICF categories in the classification. The ICF consists of two parts: functioning and disability and contextual factors. Functioning and disability contains the following components: body structures, body functions, and activity and participation. Contextual factors comprise the background of a person's life and living which interact with the individual and determine their level of functioning [3]. They include environmental and personal factors. Environmental factors include the physical, social and attitudinal environment in which people live [3]. These factors are external to individuals and can have a positive or negative influence on an individual's performance as a member of society, on an individual's capacity to execute actions or tasks, or on an individual's body functions or structures [3]. Personal factors are the particular details of an individual's life and include factors such as gender, age and coping style [3]. A detailed classification of environmental factors was first introduced in the ICF and currently a classification does not exist for personal factors. 
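As a rough illustration of the inclusion criterion described above (an instrument was considered a participation measure if its domains covered at least three ICF chapters between d3 Communication and d9 Community, social and civic life), the following Python sketch shows one way such a chapter-coverage check could be expressed. The instrument names and their chapter lists are invented for illustration only and are not data from this study.

```python
# Hypothetical sketch of the chapter-coverage screening rule described above:
# an instrument counts as a participation measure if its domains cover at
# least three ICF chapters between d3 and d9. Instrument-to-chapter mappings
# below are invented examples, not study data.

PARTICIPATION_CHAPTERS = {"d3", "d4", "d5", "d6", "d7", "d8", "d9"}

def covers_participation(domain_chapters, minimum=3):
    """Return True if the instrument covers at least `minimum` of the
    participation-relevant ICF chapters (d3-d9)."""
    covered = set(domain_chapters) & PARTICIPATION_CHAPTERS
    return len(covered) >= minimum

# Example (hypothetical) instruments and the chapters their domains map to.
example_instruments = {
    "Instrument A": ["d1", "d4", "d6", "d7", "d8", "d9"],  # qualifies
    "Instrument B": ["d1", "d2", "d4"],                    # does not qualify
}

for name, chapters in example_instruments.items():
    print(name, "qualifies:", covers_participation(chapters))
```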
In addition, the ICF model includes the health condition (disorder or disease) which is classified using the World Health Organization's etiological classification, the International Classification of Diseases-10 (ICD-10) [3]. To determine if contextual factors and health conditions are included in the participation instruments, relevant information stated in the instructions was also used to identify meaningful concepts, which is a modification to the published linking rules. For example, if the instructions state the respondent should consider the impact of his or her health condition or the use of assistive devices when thinking about participating in certain life roles, then 'health conditions' and 'assistive devices' were included as meaningful concepts for each question. The meaningful concepts in the instructions were included for each question since a person should consider the instructions when answering each question and it also ensures the content is comparable among the instruments. Any terms referring to a time period (e.g. in the past four weeks) and qualifiers such as 'difficulty', 'satisfaction' or 'importance' were not considered to be meaningful concepts. To ensure the meaning of each question was captured, meaningful concepts could be repeated within the instruments; as an example, if an instrument has five to six questions which are related to each aspect of participation (e.g. dressing) then 'dressing' was considered a meaningful concept in each of the six questions to determine how many questions ask about dressing. If examples are used to describe an aspect of participation then all the examples were coded as meaningful concepts and linked to ICF categories. Meaningful concepts were also identified in screening questions since these questions ask about aspects of participation. The ICF classification was then used to assign ICF categories to the meaningful concepts. In the ICF classification the components are labeled with letters: body structures (s), body functions (b), activity and participation (d), and environmental factors (e). As mentioned previously, personal factors are not specified. Within each component in the ICF, the categories are organized hierarchically and assigned a numeric code. The categories are nested so the chapters also referred to as domains, include all the detailed subcategories. An example demonstrating the coding from the activities and participation component is d5 Self-care (chapter/first-level category), d540 Dressing (second-level category) and d5400 Putting on clothes (third-level category). The ICF classification allows the meaningful concepts to be linked to very detailed categories and the categories can be rounded up to examine coverage in broad aspects of participation. The meaningful concepts were linked to the most precise ICF category, ranging from the chapter (1 digit code) to the fourth-level (5 digit code). According to the published linking rules [8], the 'other specified' and the 'unspecified' ICF categories should not be used. The meaningful concept was coded as 'not definable' if there was not enough information to select the most precise ICF category and if a meaningful concept was not included in the ICF (e.g. suicide attempts) it was coded as 'not covered' [8]. A meaningful concept was coded as a 'personal factor' if it asks about age or other factors that relate to the background of the person. Meaningful concepts such as health, illness or physical disability were coded as 'health condition'. 
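To make the linking conventions above more concrete, the sketch below shows one way a linked meaningful concept could be recorded, with either an ICF code or one of the special labels used in this study ('not definable', 'not covered', 'personal factor', 'health condition'). The record structure and the example entries are assumptions made for illustration; they are not the authors' actual coding sheet, although the codes used (d5400, e3) appear in the text.

```python
# Minimal sketch of how linked meaningful concepts might be stored, assuming
# one record per concept. The special labels follow the rules described in
# the text; the concrete examples are invented for illustration.

from dataclasses import dataclass
from typing import Optional

SPECIAL_LABELS = {"nd", "nc", "pf", "hc"}  # not definable, not covered,
                                           # personal factor, health condition

@dataclass
class LinkedConcept:
    question_id: str
    concept: str
    icf_code: Optional[str] = None   # e.g. "d5400" (Putting on clothes)
    special: Optional[str] = None    # one of SPECIAL_LABELS when no code fits

    def __post_init__(self):
        # Each concept gets either an ICF code or exactly one special label.
        assert (self.icf_code is None) != (self.special is None)
        if self.special is not None:
            assert self.special in SPECIAL_LABELS

# Hypothetical examples reflecting the coding rules in the text.
examples = [
    LinkedConcept("Q1", "putting on clothes", icf_code="d5400"),
    LinkedConcept("Q2", "assistance from another person", icf_code="e3"),
    LinkedConcept("Q3", "your age", special="pf"),
    LinkedConcept("Q4", "your illness or disability", special="hc"),
    LinkedConcept("Q5", "other activities", special="nd"),
]
```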
Examples of the meaningful concepts extracted from the questions and the assigned ICF categories and codes are provided in Table 1. One coder was primarily responsible for determining the meaningful concepts and two coders linked the meaningful concepts in the instruments. The results were compared and the coders discussed the questions where different ICF categories were selected. Another coder was consulted if there were any questions regarding the meaningful concepts, ICF categories or codes and made the final decisions. All the coders were familiar with the ICF and the linking rules [8]. Analysis First a descriptive analysis was conducted. The total number of meaningful concepts linked to categories in the ICF components (activities and participation; body functions; body structures; environmental factors) and the number of meaningful concepts which could not be linked (coded as not defined, not covered, health condition) were counted for each instrument. In the analyses the third-and fourth-level categories were rounded up and reported as second-level ICF categories. The percentage of agreement between the two coders was calculated for the first-and second-level ICF categories and codes ini-tially selected for the meaningful concepts in each instrument and did not consider any revisions made by the third coder. Second, the content of each instrument was examined. Since there is no consensus on how to operationalize participation, for the content analysis participation was defined broadly and included all domains within the activities and participation component. The content in each of the instruments was examined by reporting the: 1) coverage of the ICF chapters (domains) within the activities and participation component; 2) relevance of the meaningful concepts to the activities and participation component; and 3) context in which the activities and participation component categories are evaluated. Coverage was examined by calculating the number of activities and participation component domains included in each instrument and the percentage of questions containing ICF categories from the activities and participation component. Relevance was examined by determining if all the questions contain a meaningful concept linked to the activities and participation component (d-category). Since an instrument may contain meaningful concept(s) related to participation but an ICF category could not be selected, meaningful concepts coded as 'not defined' and 'not covered' were reviewed by one of the coders to determine if the meaningful concepts were similar to the content included in the activities and participation domains d1 Learning and applying knowledge through to d9 Community, social and civic life. Finally, to determine the context in which the activities and participation categories were evaluated, the percentage of questions containing ICF categories from the ICF components (body functions, body structures, environmental factors, personal factors) as well as those coded as 'health conditions' and 'not defined/not covered' were reported. Identification of the Participation Instruments A review of the literature in September 2007 identified 3087 articles. 
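The descriptive analysis described above rests on two simple computations: rounding detailed third- or fourth-level ICF codes up to their second-level category, and calculating the percentage of observed agreement between the two coders. The Python sketch below illustrates both steps on invented codes; it assumes second-level codes consist of the component letter plus three digits, consistent with the examples in the text (d5400 rounds up to d540), but it is not the authors' actual analysis script.

```python
# Sketch of the two computations from the Analysis subsection above, using
# invented example data. Assumes ICF codes are a component letter followed by
# digits (chapter = 1 digit, second level = 3 digits, third/fourth level =
# 4-5 digits), matching the examples given in the text.

def round_to_second_level(code: str) -> str:
    """Truncate a third- or fourth-level ICF code (e.g. 'd5400') to its
    second-level category (e.g. 'd540'); chapter- and second-level codes
    are returned unchanged."""
    letter, digits = code[0], code[1:]
    if len(digits) <= 3:
        return code
    return letter + digits[:3]

def percent_agreement(coder_a, coder_b) -> float:
    """Percentage of concepts for which both coders selected the same code."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Hypothetical codes assigned by two coders to the same five meaningful
# concepts ('hc' is the special label for health condition).
coder_1 = ["d5400", "d750", "d4", "e3", "hc"]
coder_2 = ["d5400", "d7500", "d4", "e3", "hc"]

rounded_1 = [round_to_second_level(c) if c[0] in "bdes" else c for c in coder_1]
rounded_2 = [round_to_second_level(c) if c[0] in "bdes" else c for c in coder_2]

print(percent_agreement(coder_1, coder_2))      # agreement on the exact codes
print(percent_agreement(rounded_1, rounded_2))  # agreement after rounding up
```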
After reviewing the articles based on the two stage eligibility process ten instruments were included: Impact on Participation Autonomy (IPA) [15,16], Keele Assessment of Participation (KAP) [17], PAR-PRO [18], Participation Measure-Post Acute Care (PM-PAC) [19], Participation Objective Participation Subjective (POPS) [20], Participation Scale (P-Scale) [21], Participation Survey/Mobility (PARTS/M) [22], Perceived Impact of Problem Profile (PIPP) [23], Rating of Perceived Participation (ROPP) [24], and World Health Organization Disability Assessment Schedule II (WHODAS II) [25]. The Participation Measure-Post Acute Care-Computerized Adaptive Test version (PM-PAC-CAT) [26] was added when the systematic search was updated in March 2008. For eight of the instruments (IPA, KAP, PARTS/M, PM-PAC, POPS, P-Scale, ROPP, WHODAS II) a copy of the instrument was available and so these instruments were included in the content analysis. Linking the Meaningful Concepts to the ICF A total of 1351 meaningful concepts were identified in the eight instruments. In the P-Scale there are a total of 36 questions, however only 18 questions were assessed in this study since the meaningful concepts are not explicitly stated in 18 questions which ask 'how big a problem is it to you?' and follows the first question. In addition, there was no impact on the results by only including 18 questions from the P-Scale. The percentage of observed agreement between the two coders ranged between 91% (P-Scale) to 100% (ROPP) for the first-level ICF categories and codes and 77% (P-Scale) to 95% (ROPP) for the second-level ICF categories and codes. Level of agreement could not be reported for the IPA since this instrument was linked to the ICF classification using a similar methodology by the same coders in a previous study but coder agreement was not assessed. The PARTS/M has the highest number of meaningful concepts (n = 545). Sixty nine percent (933/1351) of the meaningful concepts were linked to categories in the component activities and participation (see Table 2 . A summary of the results based on the criteria used to examine the instrument content is described in Table 3. Overview of the Content in the Participation Instruments Impact on Participation and Autonomy (IPA) The IPA contains 41 questions and 206 meaningful concepts. The activities and participation domains d6 Domestic life, d7 Interpersonal interactions and relationships, d8 Major life areas have the most coverage, with 22% of questions (n = 9 questions) covering each domain. In the IPA many questions ask the respondent to consider the use of assistance or the use of aids and these meaningful con- Participation Measure-Post Acute Care (PM-PAC) The PM-PAC instrument contains 51 questions. One hundred and twenty six meaningful concepts were identified and 117 of these were linked to the ICF. The PM-PAC has two questions which ask about 'filing your taxes' and 'completing forms for insurance or disability benefits' where the instructions ask the respondent to consider any assistance (e3 Support and relationships) or services (e5 Services, systems and policies) available to them. There are also meaningful concepts which were coded as 'not defined', for example 'other activities' and 'days away from your home'. Although the PM-PAC has questions which do not contain any ICF categories from domains in the activities and participation component, there is at least one meaningful concept in each question related to these domains. 
Examples of meaningful concepts which were coded as 'not defined' or 'not covered' but considered related to the concept of participation include 'days away from your home', 'accomplishing tasks', 'filing taxes' and 'completing forms for insurance or disability benefits'. World Health Organization Disability Assessment Schedule II (WHODAS II) The WHODAS II contains 36 questions and a total of 81 meaningful concepts. Forty-two meaningful concepts were linked to the ICF classification. The meaningful concepts covered all of the activities and participation domains with the exception of d2 General tasks and demands. Meaningful concepts were also linked to body functions as well as environmental factors. In terms of body functions, three questions which ask about 'remembering to do important things', 'being emotionally affected' and 'living with dignity', were linked to b144 Memory functions, b152 Emotional functions and b1Mental functions, respectively. There were 39 meaningful concepts which could not be linked to the ICF classification. Instructions in the WHODAS II state the respondent should consider his or her health for each question, resulting in 36 'health condition' codes. Three meaningful concepts were considered to be 'not defined' ('staying by yourself for a few days') or 'not covered' ('impact on your family'). In the WHODAS II there are five questions which do not contain any categories in the activities and participation domains and were also not considered to be related to participation; these questions include meaningful concepts related to body functions (b1 Mental functions, b144 Memory functions, b152 Emotional functions), 'not covered' ('impact on your family') or 'not defined' ('barriers or hindrances in the world around you'). Concept of Participation By linking the meaningful concepts identified in the participation instruments, it was possible to determine which ICF categories the instruments include. In this study an instrument was considered to assess the concept of participation and included if its domains cover a minimum of three chapters (domains) between d3 Communication and d9 Community, social and civic life in the ICF component activities and participation. This broad definition of participation was used since there is no consensus regarding how activity is differentiated from participation [2,5,[11][12][13][14] and selecting chapter headings provided objective criteria. In considering which activities and participation domains the instruments cover, an even broader definition of participation was used by also including d1 Learning and applying knowledge and d2 General tasks and demands since these domains may have been considered relevant to the concept of participation by the instrument developers. Perenboom and Chorus [2], however, considered a question to be assessing participation if it asks about "actual or perceived participation (involvement, autonomy, social role)" (page 578) and so different results would be obtained using this definition. Content of the Participation Instruments Although all the instruments cover six to eight of the nine activities and participation domains, there are differences in the actual content. All of the instruments include content from domains d6 Domestic life, d7 Interpersonal interactions and relationships, d8 Major life areas and d9 Community, social and civic life. 
However, there are differences in whether the domains d3 Communication, d5 Selfcare and certain aspects of d4 Mobility are considered aspects of participation. Four instruments (PM-PAC, P-Scale, ROPP, WHODAS II) intend to assess d3 Communication based on the materials describing their development and ICF categories from d3 Communication were noted for all these instruments. Meaningful concepts linked to categories in d3 Communication were also identified in the KAP and POPS which is likely not the major focus, as the questions have meaningful concepts linked to multiple ICF domains. For example, in the POPS the question 'How many times do you speak with your neighbour?' includes the meaningful concept 'conversation' which was coded as d350 Conversation but it is only a minor meaningful concept and the major meaningful concept is 'relationship with neighbour(s)', coded as d7501 Informal relationships with neighbours. In some instruments, such as the PM-PAC, assessing communication is a major focus ('How much are you limited in watching or listening to the television or radio?'). Empirical findings suggest that it is difficult to demonstrate discriminant validity among participation domains [15,17] and this may be a result of overlapping content. In future studies it may be beneficial to identify and code the major and minor meaningful concepts, since this could assist with developing a priori hypotheses regarding expected correlations between instrument domains. All of the instruments contain meaningful concepts linked to categories in d5 Self-care with the exception of the POPS. When the POPS was developed self-care was not included since participation was operationalized as "engagement in activities that are intrinsically social, that are part of household or other occupational role functioning, or that are recreational activities occurring in community settings" (page 463) and self-care did not qualify [20]. The PM-PAC does not intend to assess self-care [19] but there were two meaningful concepts linked to d5 Selfcare. One question in the PM-PAC asks about 'exercising' which was coded as d5701Managing diet and fitness and the other question asks about 'providing self-care to yourself', which was coded as d5 Self-care. In terms of mobility, all of the instruments contain meaningful concepts linked to categories in d4 Mobility and all the instruments intend to include content from this domain. Three instruments (IPA, PARTS/M, WHODAS II) operationalize moving in the home using specific phrases such as 'getting out of bed', 'getting out of a chair' (PARTS/M) or 'getting up and going to bed' (IPA). In the other instruments, mobility includes broader statements such as 'moving or getting around the home' (KAP, PM-PAC, P-Scale, ROPP) and in the POPS mobility includes only using transportation. Two instruments, the P-Scale and WHODAS II, were considered to have content not related to the concept of participation, which was defined broadly as ICF categories in the activities and participation domains d1 Learning and applying knowledge to d9 Community, social and civic life. The P-Scale has one question which only asks about the observable attitudes of others ('In your home, are the eating utensils you use kept with those used by the rest of the household?'). The WHODAS II contains five questions which ask about content related to body functions (e.g. 'remembering' which was linked to b144 Memory functions) or were not covered/not defined (e.g. 'barriers or hindrances in the world around you'). 
By linking the meaningful concepts to the ICF classification it was evident that not all questions appear to assess participation as defined in the ICF. This information may assist users in understanding what the questions assess and aid in selecting an instrument depending on his or her purpose, since this may or may not be an issue. Linking the Meaningful Concepts to the ICF The methodology published by Cieza et al. [7] was used to identify and link meaningful concepts to the ICF. Our results for the activities and participation codes for the WHODAS II can be compared to a study by Cieza and Stucki [10], which also linked the WHODAS II to the ICF. It is difficult to compare the results from these two studies directly since Cieza and Stucki [10] used an older version of the linking rules [7] and we modified the linking rules by including 'health condition' as a meaningful concept if it was included in the instructions. Cieza and Stucki [10] identified 38 meaningful concepts and in our study we had 45 not including coding 'health condition', however, we did not include the five questions in the WHODAS II on general health and it appears that Cieza and Stucki [10] did. Both studies had the same number of meaningful concepts linked to body functions (n = 3), environmental factors (n = 1) and 'not defined' (n = 2). There were some differences. We linked 38 meaningful concepts to categories from activities and participation and Cieza and Stucki [10] linked 30 meaningful concepts and we linked one meaningful concept to 'not covered' whereas these authors linked two meaningful concepts. The implications of not reliably determining if the meaningful concepts can be linked to the ICF classification or differences in the ICF categories and codes selected can impact the results and how the questions in the instruments are interpreted. It has been recognized that there are a number of challenges with using the linking rules (e.g. establishing the meaningful concepts contained in the assessment items) [27]. Offering on-line training on how to use the ICF linking rules and presenting difficult coding examples are types of initiatives that could help improve the standardization of this methodology. Participation and Other ICF Categories and Codes Meaningful concepts included in the instructions as well as within each question were examined to determine the context in which aspects of participation are assessed. The ICF states that disability is a dynamic process which results from the interaction of the ICF components (body structures, body functions, activities and participation) and the contextual factors (environment, personal factors) [3]. It is helpful to identify what is asked in relation to participation; for example, for every participation topic area (e.g. dressing, working inside the home) included in the PARTS/M, a question is asked if participation is impacted by pain and/or fatigue. Clinically it is useful to determine the impact of factors such as pain and fatigue, since similar to environmental factors they can be potentially modified in order to enhance participation. As stated by Nordenfelt [13] and others [28], activity and participation must occur in an environment. In the ICF there is reference to a 'standard environment' versus 'usual environment' and this distinction is one way activity is differentiated from participation [3]. 
It is interesting how environmental factors asking about assistance or equip-ment are included in some instruments (IPA, KAP, PARTS/M, PM-PAC, POPS, P-Scale) but not in other instruments (ROPP, WHODAS II). The PARTS/M specifically assesses the use of assistance and the frequency which accommodations, adaptations or special equipment is used. Asking about the use of equipment and assistance is important clinically since a person's environment can often be modified to enhance their participation. Further qualitative and quantitative studies will determine if respondents inherently consider their environment when answering the questions. Similar to environmental factors, there is variation in whether a participation restriction is attributed to a health condition. In the WHODAS II and IPA the instructions state that the respondent should consider their health condition or disability. In the PARTS/M there are specific questions which ask if the person's participation is limited by their illness or physical impairment. Dubuc et al. [29] demonstrated the importance of specifying whether the participation restriction is a result of a health condition or not, especially for areas which are highly influenced by environmental factors. By asking if the participation restriction is a result of a health condition, it underestimated the influence of the environment since subjects focussed on the implications due their health and did not often consider the restrictions in the physical and social environment [29]. More research should determine the best way to assess these influencing factors. The PARTS/M offers the advantage of asking specific questions with and without the influence of health and the environment which may help determine the causes of the participation restrictions and also provide potentially 'pure measures' of participation. None of the instruments have meaningful concepts coded as personal factors, which is not surprising since this data is often collected separately (e.g. age, gender) in research studies. Further studies should compare questions that either attribute or do not attribute participation to factors such as the environment or health conditions to determine if these phrases influence a person's response. Study Limitations There are several limitations to this study which need to be considered when interpreting the results. In this study only instruments which were developed using the ICF were included and the meaningful concepts were linked to the ICF classification, which limits the findings to how participation is conceptualized in the ICF. In addition, the criteria assume it is desirable to have an instrument cover the majority of areas within a multidimensional concept such as participation and so it may not be suitable for instruments which focus on selected areas such as employment. By linking the meaningful concepts in the questions to the ICF classification it provided an objective evaluation, however, it is possible that we did not capture the correct meaning of the questions. Since very few studies have linked the instruments used in this study to the ICF classification, the results from this study should be confirmed in other studies. Interpreting the questions and determining the meaningful concepts can be influenced by culture and the experience of the coders and enhancements to the ICF linking rules will help improve the assessment of content validity in these types of studies. 
Conclusion In summary, this study linked eight instruments measuring participation to the ICF. The benefits of linking content from instruments to the ICF have been described in various studies [9,10,30]. These benefits include enabling users to review the content as part of the selection process, providing a standardized approach to comparing the content and informing future revisions of existing instruments. An enhancement to the linking methodology used in this study enabled the role of contextual factors as well as attribution of the participation restriction due to health to be further examined within each question. Including contextual factors in the ICF is an important step forward and empirical research comparing results from instruments which either include and or do not include contextual factors will further advance the measurement of participation. The instruments all contain content from the domains d6 Domestic life to d9 Community, social and civic life but there is variability in whether content from domains d1 Learning and applying knowledge, d3 Communication and d5 Self-care is included. Two instruments, P-Scale and WHODAS II have questions which did not contain any ICF categories related to the domains in the activities and participation component, which suggest these questions may not measure aspects of participation. The differences in content, attributing participation restrictions to health and asking about aspects of the environment should be considered when selecting a participation instrument as it may or may not be desirable depending on the intended purpose. ▪ disability evaluation ▪ outcome assessment ▪ rehabilitation Additional material
v3-fos-license
2023-07-01T06:16:09.758Z
2023-06-29T00:00:00.000
259296535
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11845-023-03432-4.pdf", "pdf_hash": "4f1584cd31951912427ac1d54cf2b510bda89c76", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2791", "s2fieldsofstudy": [ "Medicine" ], "sha1": "d1f3b89003e4455adfdb3fa12c152036758b9074", "year": 2023 }
pes2o/s2orc
Patient–healthcare provider communication and age-related hearing loss: a qualitative study of patients’ perspectives Background The prevalence of age-related hearing loss (ARHL) significantly increases in people aged 60 and older. Medical errors are frequently reported because of communication breakdown, especially for patients with ARHL. Aims This qualitative study focuses on identifying the communication challenges faced by people aged over 65 with ARHL and potential ameliorative strategies based on the participants’ personal experiences. Methods Thirteen participants, attending a support service for older adults with hearing loss in the South of Ireland, were recruited using convenience sampling. Semi-structured interviews were conducted with participants. Interviews were audio-recorded and transcribed using NVivo 12 software. Braun and Clarke’s thematic analysis methodology was used to identify themes arising from two main study domains: difficulties faced during the most recent healthcare interaction and suggestions for improving overall healthcare communication. Results Older adults with hearing loss identified general mishearing, lack of awareness and use of medical terminology to be the cause of ineffective communication. Raising awareness of the impact of presbycusis on clinical interaction among healthcare professionals was cited as being of crucial importance. Other helpful strategies include repeat and rephrase, use of written information, providing context, minimizing ambient noise, continuity of care, longer consultation length and good body language. Conclusion Effective clinical communication can be achieved through a clear understanding of the patient’s perspective. Healthcare providers should be made aware of the hearing issues and associated communication difficulties posed, within the context of the development of patient-centred strategies to improve patient safety. Introduction Interpersonal communication has been described as a critical tool for life adjustment, linking people with their environment [1].Communication challenges are commonly reported by older people, either due to normal ageing or communication disorders related to various conditions [2].Despite the cause, communicating well with older adults remains a significant challenge for many healthcare providers and is often complicated by sensory impairments and/or cognitive problems [2].Previous studies have also identified hearing loss as a modificable risk factor for cognitive decline [3,4].According to the National Institute on Deafness and Other Communication Disorders in the USA, hearing loss is ranked as the third most prevalent chronic condition in older adults [4].Age-related hearing loss (ARHL), also known as presbycusis, is the second most common illness in aged people worldwide [5].It affects approximately one-third of people aged 65 to 74 and almost half of those over the age of 75 [6].This presents a significant challenge in delivering healthcare, as the number of older adults continues to grow. 
Presbycusis is sensorineural, meaning the primary damage happens in the hair-like cells within the cochlea or the hearing nerve.It is characterized by decreased hearing sensitivity, especially for high-frequency sound, and most often affects both ears.Previous quantitative studies have confirmed that presbycusis has a negative impact on clinical communication, across both hospital and primary care clinical settings [2].People with this type of hearing loss tend to exhibit difficulties with speech perception and comprehension because of associated difficulties with high-frequency consonants, which are fundamental for word discrimination, such as distinguishing between words such as "time" and "dime" [7].These difficulties are exacerbated in a noisy environment and slowly progresses to loss of hearing sensitivity at lower frequency sound, which makes it harder to understand words in a quieter setting.This results in a loss of clarity of speech sounds; increasing the volume of the speech may or may not improve the condition. Older adults with specific communication needs are significantly more likely to experience preventable adverse events and functional decline in hospital [8].This can be challenging during an inpatient stay and may limit a person's confidence to participate in their care and their ability to follow instructions [9].Lower ratings of patient-physician communication and overall healthcare have been reported among older adults with self-reported hearing loss [10].In most instances, patients would feel embarrassed and frustrated having to ask the others to repeat words and sentences, which ultimately causes them to withdraw from social activities [11].Due to poorly adapted communication strategies, it is reported that people with hearing loss perceive their social skills as poor.Consequently, the combination of hearing impairment and a poor coping strategy contributes to poor self-esteem in these patients [12]. In 2001, the Institute of Medicine highlighted the importance of effective communication in facilitating knowledge transfer and shared decision-making involved in patient-centred care [13].A recent study found that only 44% of older adults using multiple medications have spoken to a healthcare provider about possible drug interactions, suggesting an important gap in communication [14].This was exacerbated by the observations that older adults often receive prescriptions from multiple providers who are not all using a shared electronic health record system [14].Even more concerning, 30% only partially understood the healthcare provider's explanation for the requirement for a medication, tests or procedure, and 10% did not understand it at all [15].Thus, it is no surprise that patients who are deaf or hard of hearing are at high risk of breakdowns in healthcare communication, which is the leading cause of medical errors.The present study aims to explore the experiences of patients with ARHL in interacting with healthcare providers, including perceived communication challenges as well as, importantly, their views on how such communication problems can be addressed.Through learning about patients' experiences of clinical communication, we aimed to suggest improvements to how patient-healthcare practitioner interaction can be delivered. 
Study design A qualitative study, employing one-to-one semi-structured interviews, was conducted during October 2019.The one-toone format, as opposed to group interviews or focus groups, was chosen as a method of collecting data, as it enables participants with presbycusis the freedom and comfort to express their views in regard to patient-healthcare communication, without the challenge of communicating in group settings [16]. Participants All older adults who attended the "Hard of Hearing" support services for older adults with hearing loss provided by the Cork Deaf Association were invited to participate in the semi-structured one-on-one interview with the following inclusion criteria: participants aged 65 and above, with agerelated hearing impairment and attending services provided by the Cork Deaf Association.The Cork Deaf Association is a local charitable organization that is part-funded by the Irish Health Service Executive.The selected age group was chosen based on the known fact that presbycusis has the highest incidence among the over-60 s [17].The Hard of Hearing support group provides opportunities for older adults with hearing loss to participate in social outings, receive access to information talks, attend coffee mornings etc. Data collection Thirteen participants were recruited using convenience sampling.Informed consent was obtained before conducting the semi-structured 1-to-1 interview.All interviews were conducted at the office of the Cork Deaf Association.Each interview lasted between 10 and 15 min.Participants were asked to complete a demographic profile form at the beginning of each interview session.The sessions were audio-recorded.The moderator of the interview chaired the sessions with a topic guide.The topic guide consisted of seven questions asking participants to share their thoughts regarding (i) difficulties faced during the most recent interaction and (ii) suggestions for improving overall communication.Figure 1 provides a summary of the interview topics.Participants were advised to approach the researchers or supporting staff of participating organizations if they experienced any distress arising from study participation. Research instruments and data analysis Audio recordings were transcribed, and the final verbatim was analysed using Braun and Clarke's thematic analysis method [18].Thematic analysis is a flexible and distinctive systematic method that has been used to explore patients' experiences of healthcare services through identifying, analysing and organizing qualitative data.In this approach, data was read and re-read several times by two researchers (COT, LLML) to become familiar with the data (step 1).Individual interview transcripts were uploaded onto NVivo 12, which was used for generating initial codes (step 2) and developing initial themes (step 3).A single response could be coded to more than one theme.Two researchers (COT, LLML) were involved in revising themes (step 4) and determining and designating themes (step 5).Lastly, two major domains and their respective themes were generated before researchers worked together in yielding the final report (step 6) [18].All the quotes were condensed for clarity. Ethical considerations and data privacy Ethical approval was obtained from the Social Research Ethics Committee (SREC) of the School of Medicine, University College Cork.Written informed consent was obtained from the participants before the study commenced.Participants were designated a study identity to anonymize all personal data. 
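As a concrete illustration of steps 2 to 5 described above, the sketch below shows one way coded interview excerpts might be organized into domains and themes after coding, with a single response allowed to carry more than one code. The participant IDs, excerpts and theme labels are invented; the actual analysis was performed in NVivo 12 following Braun and Clarke's method.

```python
# Sketch of organizing coded interview excerpts into domains and themes after
# qualitative coding (e.g. following an export from NVivo). Participant IDs,
# codes and excerpts are invented for illustration; a real analysis would use
# the study's own codebook.
from collections import defaultdict

# Each excerpt may be coded to more than one theme.
coded_excerpts = [
    {"participant": "P01", "excerpt": "Background noise made it hard to hear.",
     "themes": [("Difficulties", "General mishearing")]},
    {"participant": "P03", "excerpt": "The doctor wrote the dose down for me.",
     "themes": [("Suggestions", "Repeating information")]},
    {"participant": "P01", "excerpt": "They spoke too fast and used medical words.",
     "themes": [("Difficulties", "General mishearing"),
                ("Difficulties", "Use of medical jargon")]},
]

# Group excerpts by (domain, theme) and count how many participants mention each.
by_theme = defaultdict(list)
for item in coded_excerpts:
    for domain, theme in item["themes"]:
        by_theme[(domain, theme)].append(item)

for (domain, theme), items in sorted(by_theme.items()):
    participants = {i["participant"] for i in items}
    print(f"{domain} / {theme}: {len(items)} excerpts from {len(participants)} participants")
```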
Participant characteristics Table 1 provides an overview of participants' socio-demographic and hearing loss characteristics. The mean (± SD) age was 75.1 ± 6.20 years; 53.8% were male and 46.2% were female. 92.3% had bilateral hearing loss, and 30% of the 76.9% of participants with acquired hearing loss had previously been exposed to a noisy work environment. All reported using hearing aids. [Figure 1. Topics for the interviews with study participants (partial): with regards to your most recent interaction with a healthcare provider: 1) Where was the location? 2) Was the environment noisy or quiet? 3) Were you by yourself or with someone else, and what difference did it make? 4) Did you need to have any information repeated? Give me an example.] Overview of themes We identified themes arising from questions related to both study domains. The first domain, "Difficulties faced during the most recent healthcare interaction", explored the challenges participants faced in a clinical setting. They often misheard the conversation, which can be attributed to healthcare providers' lack of understanding of their communication needs and their use of medical terminology. The second domain, "Suggestions for improving healthcare communication", relates to participants' opinions regarding patient-led solutions to promoting a better quality of care. This includes repeating back information, displaying positive non-verbal cues and enhancing awareness of the communication needs of older adults. Domain 1: Difficulties faced during the most recent healthcare interaction Theme 1: General mishearing Several participants mentioned background noise as the primary cause of hearing difficulty in a hospital setting. In contrast, consultations in GP offices were regarded as almost always pleasant and quiet, an experience participants attributed to the privilege of having a private room that enables one-to-one interaction: I think that on one-to-one, it's very easy. But when there's a group around, it's difficult. For hearing people, it's difficult with all the noise. I say that like in a meeting, like a noisy ward. (P13, female). Furthermore, participants noted the impact of foreign accents on comprehension. Despite their attempts to understand accents, some people have more difficulty understanding speakers with strong and unfamiliar accents. It is an unavoidable phenomenon, particularly as it relates to healthcare providers for whom English is not their native language: A foreign doctor, I mean they speak the best they can, but I'm lost. (P9, female). It is not uncommon for fast-paced speech to make it difficult for the other party to grasp the meaning of a conversation. This occurs if the person is a naturally fast talker, and it can be exacerbated by nervousness, urgency and mental fatigue. A lack of verbal pauses leaves no time for patients, particularly those with hearing impairment, to comprehend the content and concepts. If the person talks too fast, I find it hard to understand. (P4, male).
Theme 2: Lack of awareness It was suggested that people generally equate presbycusis with total deafness despite it being a gradual loss of hearing that occurs in most people as they grow older.Participants reported frustration with having to constantly explain the slight yet crucial difference between these forms of hearing loss.Participants reported that as they have already experienced such situations numerous times, it is not uncommon for a trained healthcare professional to be confused too: I put it down to people who were not aware of what it's like to have hard of hearing.There is a perception 'oh, you're deaf', full stop.But there are two sections -deafness and hard of hearing.Hard of hearing is the problem.You go into any doctor but there's nothing about hearing, even the family doctors are not aware of it.(P1, male).Despite being well informed regarding the issue of hard of hearing, participants revealed that they often felt that little to no accommodation has been provided to maximize the patients' benefits during the consultation.This is reflected in various disruptions including constant interruptions and the absence of a loop system: One time I just had to say to them that 'I have hard of hearing now'.I said 'call my name loud' or whatever like that, but they don't like to be told that.(P2, male).Some participants also highlighted the impact of insufficient consultation time as they require extra time to fully understand and process the conversation compared to a normal hearing individual: They don't give enough time for you to explain.(P4, male). Theme 3: Use of medical jargon The common consultation content misinterpreted by patients with hearing loss related to medication and the use of medical terminology: Information about medication, mostly.(P4, male).See, if you go to a clinic now and you're talking and they might call in the other person and they are talking too.They are staying there and I don't have a clue what they are talking about.(P9, female). 
Domain 2: Suggestions for improving healthcare communication Theme 1: Repeating information One way to upgrade patient-healthcare provider communication is by focusing on promoting patient comprehension of health information.This can be achieved through clear communication, confirming understanding, asking questions and gaining clarity.Verbal confirmation is cited as often vital to gauge the patient's understanding at the end of the consultation: Because he was aware of my hearing problem and he said to me a couple of times now 'tell me if I'm not getting through'.(P1, male).In most cases, written notes are required to help patients better retain medical information.It is notably useful in instances such as information overload, patients with memory impairment, coverage of complex topics and where there is emphasis on certain details.One of the participants revealed the limitation of verbal confirmation as it is dependent on the receiver's capacity to process and retain data: There'll be also stages where I know no matter how many times the person repeats it, I won't hear it.They have to write it down.(P10, male).Some participants acknowledged the challenge of coping with changing subjects amid discussion, yet some find it more troubling.They admitted getting lost in the conversation when the topic is unwillingly changed.Focus on one topic at a time is ideally the best option when dealing with an older adult with hearing impairment: I am speaking about something and it's grand, I can understand.I can follow points then let's say there's a third party comes in and they communicate, they changed the context, I'm lost.(P10, male). Theme 2: Raise awareness Healthcare providers should be made aware of the hearing issue and the difficulties posed by it.Continuity of care with the same doctor is cited as the easiest and straightforward way to address this challenge: I think that's important to continue care with the same doctor, that's important.(P3, female).Also, the likelihood of getting a longer consultation duration is higher with the same doctor who is aware of the patient's hearing problem as compared to those who are not.They are presumably more attentive to patients' needs without rushing through the consultation.Most participants noted that extra time is very much needed even with new doctors since they generally need the information to be repeated or vice versa: You know because they have to repeat themselves which takes up that bit of time.(P9, female). Participants' perceptions towards attending a consultation with company were also explored.Surprisingly, every one of the participants preferred to be alone.Reasons relating to personal preferences and convenience were then disclosed, yet the most striking one being self-autonomy.They elaborated it as a sense of empowerment and confidence without feeling like an "ill person" in the absence of another person: He has come up with me in the past but I found it better when I present my own on a one to one because the audiologist and whoever is with my son tends to, they don't realize that I suppose, they tend to exclude you and they talk to one another as if you are a thing, you know, that has to be discussed ….So, I was better one to one with the audiologist myself.(P3, female). 
Notification may be the key to raising awareness.Just like any known drug allergy, participants recommended labelling "hearing impairment" on top of their medical records.This will bring immediate attention to the doctor and appropriate adjustments can be made before seeing the patient: The doctor knows because on my thing on the computer is 'deaf', profoundly deaf or something so he knows.I think once he looks at my thing and sees 'deaf' on top, they take that into account.(P3, female). Theme 3: Non-verbal cues Great emphasis was placed on maintaining eye contact when it comes to effective communication.The power of good eye contact in the context of patient-healthcare provider relationship includes but is not limited to building a strong rapport and trust.It makes them feel listened to which in turn leads to the willingness to share their problem, promoting better patient engagement and medication adherence: As long as they look at me, as long as they don't talk to me with their face down and they do sometimes forget and they do that, you know, or sometimes they move over to get something and they talk with their back to me.Now I have to be facing the person.Eye contact definitely.(P3, female). One of the participants mentioned that facing the other person when talking would make lip reading much easier: Whereas I will be completely dependent on lip reading so I have to see its full face.(P10, male).Another participant indicated that communication errors can be reduced if doctors position themselves nearer to the patients: I have to sit very close to listen to the consultant.(P1, male).Participants reportedly observe subtle body language to quickly "read between the lines" and interpret the meaning behind the silent clues.The display of positive body language is therefore highly encouraged in every setting: I often think two people could say the same thing.You could say something to me now, some very offhand, you know, hurtful.On the other hand, you could use the same words to me but it's how the context and that, I would think a lot of body language.(P5, female). Discussion This study aimed to understand perceptions of older adults with hearing loss regarding patient-healthcare practitioner interactions and how they might be improved.Nearly half of older adults, including hearing aids users, reported mishearing healthcare providers in clinical settings [19].Studies concluded that background noise, multiple people talking at the same time and poor pronunciation between two similar words are barriers to achieving clear communication [11,19].When background noise is present, consonant confusion can easily give rise to communication misunderstanding.Many consonants contain high-frequency sounds, which are often lost, while low-frequency sounds remain full and clear.This explains why participants typically report mishearing the content despite being able to hear the speaker's voice.The speech becomes unclear, and people talking appears mumbled.Their clarity of speech comprehension has diminished as they are missing a large portion of important speech signals [20].Fortunately, participants report fewer problems when visiting their general practitioner (GP) due to its communication-friendly environment (i.e. a quiet, well-lit room with furniture arranged for face-to-face interactions) [21]. 
Ageism has been cited as an important barrier to good communication between older patients and healthcare providers [22].These participants highlighted the lack of understanding and adaptation from their healthcare providers who are either primary care doctors or internists in their respective medical fields.Unlike clinicians involved in geriatric care, who are often more educated and sensitive to the unique healthcare needs of older adults, they are more likely to engage in overaccommodation known as elderspeak -addressing the elderly in an overly simple and patronizing way.Overaccommodation occurs when the speaker is over-reliant on negative stereotypes of ageing [23].Often, the first instinct when facing older people with hearing loss is to increase the volume of the speech by talking loudly or shouting, which has a paradoxical effect in those with presbycusis, as it raises the sound pitch.It is more beneficial to slow down the rate of speech which in turn improves articulation and clarity.Indeed, it has been reported that individuals with hearing loss suffer important communication problems if the speaker fails to articulate slowly or deliberately [24].Additionally, mild and moderate accents can affect a patient's ability in recognizing monosyllabic and multisyllabic words [25]. Currently, within primary care, the minimum consultation length recommended is 10 min.However, a survey of the British Medical Association found that 92% of 15,560 GPs perceived that 10 min was inadequate for primary care consultations [26].This is especially true given that GPs are increasingly dealing with a growing elderly population with chronic and complex conditions.Patients believe that insufficient consultation time results in poorer quality of care, a higher chance of needing a prescription or attending more frequently [27].In particular, some participants revealed feeling rushed and not able to get their point across without "clock watching" behaviour exhibited by some GPs.Thus, alternatives such as telephone calls and email consultations should be offered according to individual clinical needs.A longer consultation can then be provided for those with special needs, including patients with presbycusis, as they require visual cues and assistive technology when communicating [28]. Medical jargon and medication are the two most misinterpreted consultation content as pointed out by the participants.A false understanding of commonly used medical terms can jeopardize patient-healthcare provider communication and decision-making [10,29]. 
Diagnosis, treatment and prognosis are commonly misinterpreted by patients which include misunderstandings around medical information due to complicated medication dose and regimen [11,30].It is therefore the prescriber's responsibility to inquire about the patient's understanding of medical information to prevent detrimental drug adverse events.Given the importance of this matter, surprisingly, few studies exist that guide healthcare providers on how to approach this task.One study [31] offers some insight into three types of approach: yes-No, tell back-directive and tell back-collaborative.The yes-no approach focused on closeended questions, whereas the tell back-directive method used open-ended questions that were physician-centred.The tell back-collaborative is a patient-centred open-ended approach, making it clear that it is a shared responsibility between the patient and practitioner.Patients showed a significant preference for the tell back-collaborative as they view the request for tell back as evidence of practitioners' care and concern for them personally.In short, it is critical to invite the patients to restate their understanding of the information in their own words within a shame-free environment. Unsurprisingly, participants wish to continue receiving care from the same doctors who are familiar with their condition.Relationship continuity has been shown to increase security and trust within the patient-doctor relationship, which in turn increases patient's willingness to accept medical advice and adherence to long-term preventive regimens [31].Despite participants favoured attending their consultation alone, Adelman et al. [32] revealed the importance of having a frequent third-party present when it comes to decision-making and conduit for education.The third-party may play the role of advocate, passive participant or antagonist.However, it can sometimes feel awkward for the practitioner to have an additional person in the room acting as an interpreter [33].Nevertheless, it is crucial for all parties involved to respect the patient's autonomy in any circumstances. Interestingly, many sounds can be seen that are hard to hear.Lip reading significantly improved speech discrimination in older adults with hearing loss; it was undoubtedly greater in individuals with presbycusis [34].A relevant study highlighted the use of visual cues in recognizing voice sounds as the production of each phoneme is associated with a particular facial expression pattern [11].These visual hints can be hidden if the speaker is looking away while taking notes on the computer or addresses the recipient while performing another task as highlighted by one of the participants.This explains why some participants repeatedly position themselves in front of their healthcare providers to allow a better view of lip reading.Moreover, maintaining eye contact is just as important as keeping the mouth and face visible when interacting with any patient.It 1 3 shows attentiveness and interest in what is being said.Eye contact has been positively linked to patient's assessment of clinician empathy and rating of attributes, for instance, connectedness and how much they liked the clinician [35].Participants agreed that eye contact opens up communication and helps build rapport and trust. 
The qualitative study allows broad, open-ended inquiry which encourages participants to raise issues that matter most to them and helps the researchers to determine the idiographic causes. It is restricted to older adults attending a specialist service, which might suggest recruitment bias. The results of this study may lack generalizability because the sample was restricted to participants who were available to proceed with the interview at a specific date and time. Hence, they are not necessarily representative of the population of interest due to the small sample size and context specificity. Additionally, the presence and degree of hearing loss, as well as hearing aid use, were based on self-report rather than objective data, precluding exploration of the relationship between the level of hearing loss and the difficulties faced during the clinical encounter. The use of hearing aids during clinical consultations was also not explored during the study interview. Conclusion Effective communication assists health professionals in identifying individual needs in a holistic approach. Key targets for intervention arising from this study include putting an alert or code on the patient record to ensure that the healthcare provider knows that the patient has specific communication needs due to their hearing loss. From recognizing the challenge faced by hard-of-hearing patients to making adjustments such as rephrasing and assessing for understanding, good body language, face-to-face orientation, a communication-friendly environment, written information and adaptable consultation length, these measures will improve patient outcomes and lead to greater patient-healthcare provider satisfaction. [Table 1. Baseline socio-demographic and hearing loss characteristics. SD, standard deviation; n, frequency.]
v3-fos-license
2017-06-10T07:12:47.236Z
2014-03-01T00:00:00.000
2684246
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.scielo.br/j/ibju/a/tcvrbDT8SjxcSRjpZw43kyN/?format=pdf&lang=en", "pdf_hash": "aef21b82fc7aae0c6cedcc70da850cd4ee9e3eb5", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2793", "s2fieldsofstudy": [ "Medicine" ], "sha1": "5299207ddeb2b059ed395825f9e5f4c60cd894bf", "year": 2014 }
pes2o/s2orc
Qualitative analysis of the deposit of collagen in bladder suture of rats treated with tacrolimus combined with mycophenolate-mofetil ARTICLE INFO ______________________________________________________________ ______________________ Purpose: To evaluate the synthesis of type I (mature) and type III (immature) collagen in bladder suture of rats treated with a combination of tacrolimus and mycophenolate mofetil for 15 days. Materials and Methods: Thirty rats were divided into 3 groups: the sham, control and experimental groups. All the animals underwent laparotomy, cystotomy and bladder suture in two planes with surgical PDS 5-0 thread. The sham group did not receive treatment. The control group received saline solution, and the experimental group received 0.1mg/kg/day of tacrolimus with 20mg/kg/day of mycophenolate mofetil, for 15 days. From then on, the tacrolimus was dosed. The surgical specimens of the bladder suture area were processed so that the total type I and type III collagen could be measured by the picrosirius red technique. Results: There was a predominance of type I collagen production in the sham and control groups compared to the experimental group, in which type III collagen was predominant. The production of total collagen did not change. Conclusion: The association of tacrolimus and mycophenolate mofetil in animals qualitatively changes the production of collagen after 15 days with a predominance of type III collagen. INTRODUCTION Urological complications increase the morbidity and mortality of kidney transplantation by increasing the length of hospital stays and the need for re-surgery (1).The incidence of urological complications after kidney transplantation ranges from 2.5% to 14.7% (2).The main urological complications in kidney transplants are ureteral strictures and urinary fistulas (3).Most urinary fistulas appear early, within the first 90 days postoperatively (4). After tissue damage, the process of restoring the tissue through a series of biochemical and physiological cellular processes begins (5).Collagen has a special feature.It is the main protein of connective tissue, responsible for the mechanical strength and resistance of the scar tissue.Regardless of the injured tissue, collagen is the most important component in tissue repair (6).Type I (mature) collagen is the most frequent.It is synthesized by fibroblasts and predominant in bones and tendons.Type III (immature) collagen is most commonly found in soft tissues, such as blood vessels, dermis and fascia.The physical characteristic that best distinguishes type I collagen from type III is the interlacing of their fibers.The fibers of type I collagen are more intertwined and compacted than those of type III collagen, which has little interlacing, which results in lower tensile strength for scar tissue.The strength of a suture can be evaluated by the ratio of immature and mature collagen (7).These qualitative characteristics of deposited collagen are important for the structural support of an anastomosis.The maximum deposition of collagen in healing tissue is found on the fifteenth day (8).Every organ has a varying capacity for tissue repairs.The bladder has different characteristics when compared to gastrointestinal tract regeneration (9). 
The picrosirius red staining technique stands out due to its greater selectivity for conjunctive tissue (10).This staining is specific for collagen, since there are no strong stains on the glycoprotein fibers (11).The less interwoven collagen fibers, representing type III collagen, are represented in green.The more interwoven fibers, aligned and with strong staining, representing type I collagen are orange-red (12).The calculation of the percentage of fibers, classified as type I or type III according to their color, enables a qualitative assessment of collagen fibers (13). Among the various factors that may affect wound healing, immunosuppression is an important factor that hinders the healing process (14).There are various immunosuppressive regimens, and these drugs are based on calcineurin inhibitors, with cyclosporine and tacrolimus being the most commonly used.The most studied adjuvant drugs are mycophenolate mofetil and sirolimus.A combination of tacrolimus with mycophenolate mofetil is more commonly used nowadays (15).This experimental study with rats aimed to verify the effect of the combination of Tacrolimus and Mycophenolate Mofetil on the synthesis of types I and III collagen in bladder wound healing. Animals and groups We observed the ethical principles in animal experimentation established by the Brazilian School of Animal Experimentation (COBEA).Thirty Wistar rats, aged 120-140 days and weighing 265.34 ± 23.73 grams were used.They were divided into 3 groups of 10: the sham, control and experimental groups. Surgical technique The rats were weighed and submitted to inhalation of halothane sedation and anesthesia by intramuscular injection of ketamine and xylazine hydrochloride.A four-centimeter longitudinal midline incision was made in the following sequence: skin, subcutaneous tissue, rectus abdominis muscles and peritoneum.The isolated urinary bladder of the animal was subjected to a three-centimeter longitudinal cystotomy in the anterior bladder wall.The defect was closed with 5-0 polydioxanone suture in two planes.The closure of the abdominal cavity was done with Polyglactin 910 (3-0) thread, and the skin closure with simple colorless nylon (3-0) thread. The animals in the sham group received no specific treatment after the surgery procedure.The animals in the control group were subjected to the same conditions of sedation and received daily subcutaneous injections and oral saline solution in volume proportional to their weight.The rats in the experimental group received daily treatment with tacrolimus and mycophenolate mofetil.The tacrolimus was administered subcutaneously on a daily basis at a dose of 0.1mg/kg/day for 15 days and mycophenolate mofetil daily dose of 20mg/ kg/day for fifteen days, administered orally (4). By the fifteenth day of evolution, all the rats were sedated and underwent cardiac puncture for blood collection.The blood samples were sent to the laboratory in order to perform a clinical analysis of tacrolimus (16).After the death of the animals, samples were collected from the bladder wall.The sample was then sent for the determination of total type I and type III collagen tissue by the histological technique of picrosirius red (17). 
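As a worked example of the regimen described above, the sketch below converts the per-kilogram doses into approximate per-animal daily amounts using the reported mean body weight. The stock concentrations are assumptions added purely to show the volume calculation and are not taken from the study.

```python
# Worked example: converting the per-kilogram regimen described above into
# per-animal daily doses, using the reported mean body weight. Stock
# concentrations are assumed values for illustration only.

body_weight_g = 265.34               # mean body weight reported for the rats
body_weight_kg = body_weight_g / 1000.0

tacrolimus_dose_mg_per_kg = 0.1      # subcutaneous, daily
mmf_dose_mg_per_kg = 20.0            # mycophenolate mofetil, oral, daily

tacrolimus_mg = tacrolimus_dose_mg_per_kg * body_weight_kg
mmf_mg = mmf_dose_mg_per_kg * body_weight_kg

# Assumed stock concentrations (not from the paper), to show the volume math.
tacrolimus_stock_mg_per_ml = 0.05
mmf_stock_mg_per_ml = 10.0

print(f"Tacrolimus: {tacrolimus_mg:.4f} mg/day "
      f"(~{tacrolimus_mg / tacrolimus_stock_mg_per_ml:.2f} mL of assumed stock)")
print(f"MMF: {mmf_mg:.3f} mg/day "
      f"(~{mmf_mg / mmf_stock_mg_per_ml:.2f} mL of assumed stock)")
```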
Optical microscopy We assessed the area, density and percentage of type I and type III collagen. For identification of type III and type I collagen, the sections were analyzed with an Olympus® optical microscope at 400 times magnification under polarized light. The images were captured by an optical system, frozen and scanned. Image analysis was performed using Image-Pro Plus version 4.5 for Windows (RGB). This program identifies the type of collagen based on color: red, yellow and orange correspond to type I collagen (mature), whereas green corresponds to type III collagen (immature). Three fields were evaluated (upper, middle and lower), perpendicular to the bladder suture. The result was expressed as a percentage area. A descriptive analysis of the data was applied to graphs and charts. The Student t and ANOVA parametric tests were used with the GraphPad application, and a significance level of less than 5% (p < 0.05) was adopted. RESULTS Regarding the dosage of tacrolimus, no serum levels of the drug were detected in the sham and control groups. In the experimental group an average of 11.3 ± 2.07 ng/mL of tacrolimus was detected. In the histometric assessment of the areas of total collagen, when the values of the areas occupied by total collagen were compared between the groups, there was no statistical difference, as shown in Figure 1. The control group had a mean of 22,728,734.89 ± 8,535,056.23 μm2 of total collagen, the sham group 20,280,575.18 ± 6,637,851.96 μm2, and the experimental group 20,467,537.37 ± 8,946,377.93 μm2. There was no significant difference between the groups (p = 0.7558). The histological assessment of the areas of type I collagen (Figure 2) showed that the average mature collagen detected was 95.94 ± 2.28% in the control group, 94.76 ± 4.05% in the sham group, and 4.95 ± 3.97% in the experimental group (percentage area). There was no statistically significant difference between the control and sham groups (p = 0.4362). Comparing the sham and experimental groups, there was a statistically significant difference (p < 0.0001), as there was between the control and experimental groups (p < 0.0001). Concerning the histological evaluation of the areas of type III collagen (Figure 2), the average immature collagen detected was 4.06 ± 2.28% in the control group, 5.23 ± 4.05% in the sham group and 95.04 ± 3.97% in the experimental group (percentage area). Between the control and sham groups, there was no statistically significant difference (p = 0.3307). Comparing the sham and experimental groups, there was a statistically significant difference (p < 0.0001), as there was between the control and experimental groups (p < 0.0001). Figure 3 shows the histological sections stained with Sirius red F3BA (40x): on the left, shown in red, type I collagen, and on the right, in blue-green, type III collagen, under polarized light.
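The color-based classification described above (red, yellow and orange counted as type I collagen and green as type III, expressed as a percentage area) can be sketched as a simple per-pixel hue threshold. The code below is a minimal illustration assuming an RGB image array; the hue and brightness cutoffs are assumptions and do not reproduce the Image-Pro Plus 4.5 settings used in the study.

```python
# Minimal sketch of a color-based percent-area measurement in the spirit of the
# picrosirius red analysis described above: red/yellow/orange pixels are counted
# as type I collagen and green pixels as type III, then expressed as a
# percentage of all collagen-positive pixels. The hue thresholds and the
# brightness cutoff are assumptions, not the Image-Pro Plus settings.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def collagen_percent_area(rgb_image):
    """rgb_image: float array of shape (H, W, 3) with values in [0, 1]."""
    hsv = rgb_to_hsv(rgb_image)
    hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]

    # Ignore dark background and unsaturated pixels (assumed cutoffs).
    stained = (val > 0.2) & (sat > 0.2)

    # Hue is in [0, 1): red/orange/yellow roughly < 0.17 or > 0.9; green ~0.22-0.45.
    type1 = stained & ((hue < 0.17) | (hue > 0.90))
    type3 = stained & (hue > 0.22) & (hue < 0.45)

    total = type1.sum() + type3.sum()
    if total == 0:
        return 0.0, 0.0
    return 100.0 * type1.sum() / total, 100.0 * type3.sum() / total

# Example with a synthetic image (top half reddish, bottom half greenish).
img = np.zeros((100, 100, 3))
img[:50] = [0.8, 0.2, 0.1]   # red-orange, type I-like signal
img[50:] = [0.2, 0.7, 0.3]   # green, type III-like signal
print(collagen_percent_area(img))   # (50.0, 50.0) for this synthetic image
```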
DISCUSSION The main contribution of this study is that it demonstrates the qualitative change in the synthesis of collagen in a bladder wound in rats sub-jected to pre-defined immunosuppression drugs.There are several studies showing the complications and healing changes in the presence of immunosuppression, but few are prospective and well controlled.There have been many studies of other tissues such as the skin and gut, but little in urothelium (6).In this study, there was significant reduction in the production of type I collagen, using immunosuppression with tacrolimus and mycophenolate mofetil after fifteen days of the experiment.In the present study, the choice of immunosuppression was based on numerous studies that demonstrate the advantages of the combination of tacrolimus and mycophenolate mofetil (15).International study protocols (SWTC) see no statistical difference between immunosuppression with tacrolimus and cyclosporine as indices of acute rejection, but there is a trend of longer survival with the use of tacrolimus.The use of mycophenolate mofetil significantly reduces the incidence of rejection when compared with azathioprine (3).Tacrolimus should be monitored to prove the therapeutic concentration of the tacrolimus (16).In this study, drugs were only detected in the samples of the experimental group, with all doses falling within the therapeutic range of the drug.Many authors have reported the deleterious effect of immunosuppression on wound healing and most of these studies do not analyze type I and type III collagen separately.In practice, we observed that in tissue healing, type III collagen can indeed be a precursor of type I collagen as it has a lower quantity of fibers, less intertwined fibers and a lower quantity of local cellularity.Among the articles that studied tissue healing in non-urothelial tissue, one study that stands out is that of Kita et al. (18), who looked at the healing of the small intestine and colon of rats, observed that the tensile strength (bursting pressure of the anastomosis) of colonic anastomoses was less resistant in animals treated with tacrolimus at the end of seven days of treatment with tacrolimus for via intra-peritoneal.Furthermore, Schaffer et al. (19) studied the effects of tacrolimus in the healing of intestinal tissue and dermis and observed that the administration of 2mg/kg of tacrolimus led to a reduction in the healing dermis of animals.On the other hand, regarding the study of urothelial tissue, Ekici et al. (20) looked at the effects of immunosuppression with sirolimus in the healing of sutures in the bladder of rats and concluded that sirolimus affects all stages of healing of the bladder, including reducing the number of inflammatory cells, angiogenesis and the proliferation of myofibroblasts, thus delaying the healing process. Some more recent works are using immunosuppressant drugs such as tacrolimus in the study of the treatment of diseases involving cellular proliferation disorders.Of these articles on cellular biology, one that deserves to be mentioned is that of Wu et al. (21).Concerning the behavior of keloid fibroblasts activated with tissue growth factors (TGF-β1), this study concluded that tacrolimus inhibits the growth factor action on the fibroblast in vitro.Inhibiting the proliferation of fibroblasts and their tissue migration, the entire protein synthesis of tissue collagen is impaired.Following the same line of research, Nankoong et al. 
(22) studied the effect of the topical tacrolimus therapy in the healing of cutaneous wounds in the backs of mice.They observed that after 3, 7 and 11 days of healing, there was no significant alteration in the healing of tissue between the groups under study, but the group treated with tacrolimus had slightly reduced levels of expression of mRNA of IL-1α and TGF-β. Even topical therapy with tacrolimus appears to reduce local fibrosis.Ismailoglu et al. observed that topical therapy with tacrolimus in the dura-mater of rats submitted to laminectomy reduced the occurrence of local fibrosis.The animals treated after thirty days with tacrolimus had a reduced amount local distribution of fibroblasts and reduced local fibrosis (23).But the most interesting study was certainly that in which Raptis et al. (24) observed that tacrolimus, when employed in healing the colons of rats, after 4 and 8 days of study reduced the occurrence of inflammatory reactions and the presence of local type I collagenase, although it increased the hydroxyproline concentration, neo-angiogenesis and the bursting pressure of anastomosis in the colons of the rats.Finally, Que et al., studying the regeneration of sciatic nerves in rats observed that tacrolimus reduces the formation of scar tissue in the area of the wound.These authors also observed that this reduction is associated with reduced proliferation and the apoptosis of fibroblasts induced by tacrolimus (25). A joint analysis of our experiment with the literature shows that the immunosuppressive scheme that uses calcineurin inhibitors such as tacrolimus leads to a reduction in the proliferation of fibroblasts and the production of collagen.This reduces the amount of residual scar tissue.However, not all the studies that analyzed the bursting tension of anastomosis in these animals found that the animals treated with the immunosuppression scheme saw worsened bursting tension in their anastomosis, with some even noting increased bursting tension of anastomosis with the use of tacrolimus, in studies with a shorter trial period.More studies correlating the presence of type I collagen, type III collagen and tissue bursting pressure in the urothelium are required. However, we need to take into account that our study is experimental, conducted in rats and with a short time frame for evaluating the results.We do not know whether these alterations in qualitative production of collagen will be maintained beyond the fifteen days of this study.For the time being, we cannot consider these results directly for clinical practice on human beings, where the scenario tends to be more complex and involves some variables that were not evaluated during the present study. 
EDITORIAL COMMENT Nowadays the association of a calcineurin inhibitor (CNI) with Mycophenolate mofetil (MMF) represents the backbone of solid-organ transplant immunosuppression.Although CNIs [Cyclosporine A (CsA) and Tacrolimus (FK506)] remain the most effective and widely used immunosuppressive agents in organ transplantation, their prolonged use may result in renal toxicity, renal dysfunction and irreversible renal failure characterized by extensive tubulo-interstitial fibrosis.The immunosuppressive effect of CNIs depends on the formation of a complex with their cytoplasmic receptor that inhibits calcineurin and impairs the expression of several cytokine genes that promote T-cell activation such as IL-2, IL-4, INF-γ and TNF-α (1).Moreover CNIs induce the expression of TGF-β1, which contribute to IL-2 inhibition but it is the main responsible for the development of CNI-associated interstitial fibrosis.TGF-β1 is well recognized as the major inducer for tissue fibrosis due to its stimulatory effect on extracellular matrix (ECM) production and inhibitory effect on matrix metalloproteinases.Recently it has been suggested that epithelial-mesenchymal transition (EMT) could play a role in the progression and maintenance of fibrosis in many pathological conditions, including tubulo-interstitial fibrosis (2).EMT is defined as the acquisition by epithelial cells of the phenotypic and functional characteristics of mesenchymal cells, intermediate between fibroblast and smooth muscle cells.These myofibroblasts have the ability to produce and secrete the extracellular matrix components such as collagen I and III, fibronectin and express α-smooth muscle actin (α-SMA).It has been shown that long-term exposure to CsA, induces EMT in human proximal tubular cells and that this event is mediated by CsA-induced TGF-β1 secretion (2).Moreover it has been observed that Tacrolimus up-regulates the expression of TGF-β and Smad2 in renal graft, while MMF has opposite effects (3).In fact it has been reported that MMF can reduce transplant fibrosis in a rat model of chronic rejection possibly by reducing the expression of α-SMA, collagen and connective tissue growth factor (CTGF), a matricellular protein with an important role in fibrosis and EMT (3).In accordance with these findings, Jiang et al. showed that MMF treatment prevented the deterioration of renal function and interstitial fibrosis in a renal ischemia-reperfusion injury model (4).In particular MMF significantly reduced the macrophages infiltration and the tissue expression of TGF-β1 and MCP-1, a diagnostic marker of renal injury (5). In this scenario Wu et al. investigated the effects of Tacrolimus in wound healing, especially in a particular pathological process characterized by aberrant fibroblast activity with development of keloids (6).These authors showed that Tacrolimus could inhibit the TGF-β1-stimulated cell proliferation, migration and type I collagen production in keloid fibroblasts via Smad-related pathways inhibition (6).A fundamental characteristic of tissue fibrosis is the deregulated deposition of ECM, especially type I and III collagen.The imbalance between matrix metalloproteinases (MMPs) and their specific inhibitors (TIMPs: tissue inhibitors of MMFs) may lead to ECM accumulation and tissue fibrosis.Lan et al. showed that the use of Tacrolimus increased MMPs production and decreased TIMPs, with abrogation of TGF-β1-induced type I collagen synthesis (7). 
The present study provides new insights into the biological effect of the Tacrolimus-MMF combination on collagen synthesis in bladder wound healing. Even though the specific effects of the individual immunosuppressive drugs were not addressed separately, a qualitative alteration in collagen synthesis, characterized by a switch from type I to type III deposition, has been clearly shown for the first time. Understanding the mechanisms involved in tissue fibrosis may lead to the development of novel strategies for the treatment of CNI-associated nephrotoxicity, with the aim of increasing graft survival. Figure 1. Means and standard deviations of the areas of total collagen. Figure 2. Means and standard deviations of the densities of type I and type III collagen (percentage area).
v3-fos-license
2018-04-21T14:05:50.978Z
2018-04-01T00:00:00.000
4942493
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.2002907&type=printable", "pdf_hash": "32f4b007b8ffbc8c77f83904ad6cc0c6a8099f3a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2795", "s2fieldsofstudy": [ "Biology" ], "sha1": "f80584befe940dbab10dba9aeff84bc3d1975437", "year": 2018 }
pes2o/s2orc
Organic cation transporter 1 (OCT1) modulates multiple cardiometabolic traits through effects on hepatic thiamine content A constellation of metabolic disorders, including obesity, dysregulated lipids, and elevations in blood glucose levels, has been associated with cardiovascular disease and diabetes. Analysis of data from recently published genome-wide association studies (GWAS) demonstrated that reduced-function polymorphisms in the organic cation transporter, OCT1 (SLC22A1), are significantly associated with higher total cholesterol, low-density lipoprotein (LDL) cholesterol, and triglyceride (TG) levels and an increased risk for type 2 diabetes mellitus, yet the mechanism linking OCT1 to these metabolic traits remains puzzling. Here, we show that OCT1, widely characterized as a drug transporter, plays a key role in modulating hepatic glucose and lipid metabolism, potentially by mediating thiamine (vitamin B1) uptake and hence its levels in the liver. Deletion of Oct1 in mice resulted in reduced activity of thiamine-dependent enzymes, including pyruvate dehydrogenase (PDH), which disrupted the hepatic glucose–fatty acid cycle and shifted the source of energy production from glucose to fatty acids, leading to a reduction in glucose utilization, increased gluconeogenesis, and altered lipid metabolism. In turn, these effects resulted in increased total body adiposity and systemic levels of glucose and lipids. Importantly, wild-type mice on thiamine deficient diets (TDs) exhibited impaired glucose metabolism that phenocopied Oct1 deficient mice. Collectively, our study reveals a critical role of hepatic thiamine deficiency through OCT1 deficiency in promoting the metabolic inflexibility that leads to the pathogenesis of cardiometabolic disease. Introduction Hepatic energy metabolism is a major determinant of systemic glucose and lipid levels as well as total body adiposity, which in turn are key risk factors for cardiovascular and metabolic diseases [1,2]. Genome-wide association studies (GWAS) have provided a wealth of information on the genes and pathways involved in hepatic energy metabolism, including apolipoprotein E (APOE), proprotein convertase subtilisin/kexin type 9 (PCSK9), and low-density lipoprotein receptor (LDLR) [3][4][5]. In follow-up studies in cells and in preclinical animal models, most of these genes have been linked mechanistically to lipid metabolism [6]. In contrast, the mechanisms responsible for the genome-wide-level significant association of SLC22A1 (encoding the organic cation transporter, OCT1) with total and low-density lipoprotein (LDL) cholesterol [3] remains unexplored. In humans, the OCT1 gene is highly polymorphic. A number of reduced-function variants with high prevalence in European populations have been characterized [7][8][9]. In particular, 40% of Caucasians carry one and 9% carry two reduced-function OCT1 variants [7,8]. OCT1, which is highly expressed in the liver, has been widely characterized as a drug uptake transporter. Reduced-function polymorphisms of OCT1 have been associated with changes in the pharmacokinetics and pharmacodynamics of several drugs, including the opiate receptor agonist, morphine, and the anti-diabetic drug, metformin [10][11][12]. Recently, GWAS and fine mapping analysis showed that OCT1 functional variants are associated with acylcarnitine levels through efflux mechanism [13]. 
Previously, through metabolomic studies in Oct1 -/mice and in cells overexpressing human OCT1, our laboratory identified thiamine, vitamin B1, as a major endogenous substrate for OCT1, and Oct1 knockout mice were shown to exhibit hepatic thiamine deficiency [14]. Although systemic thiamine deficiency is well known to cause nerve damage and lead to beriberi and Wernicke-Korsakoff syndrome [15,16], the pathophysiologic effects of thiamine deficiency in the liver are not understood. Thiamine pyrophosphate (TPP), the active metabolite of thiamine, is an essential cofactor for several metabolic enzymes, including pyruvate dehydrogenase (PDH), α-ketoglutarate dehydrogenase (α-KGDH), and transketolase (TK), which have fundamental roles in regulating cellular energy metabolism [15]. In particular, in 1963 Randle proposed that PDH acts as a key metabolic switch in the glucose-fatty acid cycle, which underlies the metabolic disturbance of diabetes. Under the theory of substrate competition between glucose and fatty acids, an increase in fatty acid oxidation and a reduction in glycolytic flux result in a critical imbalance in energy metabolism in tissues. As noted by Randle, regulation of PDH activity greatly influences selection of fuel source [17,18]. Failure to flexibly adjust the choice of fuel (e.g., fatty acids or glucose) for metabolic energy production has recently been proposed to underlie metabolic inflexibility and lead to the pathogenesis associated with metabolic disorders [19]. Metabolic inflexibility and indeed metabolic syndrome have been linked to an excess of macronutrients (e.g., carbohydrates or fat); however, the role of micronutrients such as thiamine in metabolic syndrome has been largely ignored. Although many reports have identified a high prevalence of thiamine deficiency in patients with diabetes or obesity [20][21][22][23] and a beneficial effect of thiamine supplementation in these patient populations [24][25][26], the molecular mechanisms contributing to thiamine-associated metabolic disturbance are unknown. Here, we hypothesize that reduced OCT1 function or reduced dietary thiamine intake leading to decreases in hepatic thiamine levels modulates the activity of multiple enzymes and the levels of key metabolites involved in glucose and lipid metabolism. These effects result in dyslipidemias, increases in circulating glucose levels, and peripheral adiposity. Through extensive experiments in Oct1 -/mice, our data show that Oct1 deficiency results in substantial changes in hepatic energy metabolism, i.e., reduction in glucose utilization, increased gluconeogenesis, and alterations in lipid metabolism. Similarly, feeding wild-type mice a thiamine deficient diet (TD) results in comparable effects on hepatic energy metabolism. Taken together, our studies suggest that hepatic thiamine deficiency, through deletion of Oct1 in mice, results in the development of metabolic inflexibility. Our studies provide a mechanistic explanation for the striking metabolic findings in large-scale human genetic studies, demonstrating that common OCT1 reduced-function polymorphisms are associated with dyslipidemias, obesity, and increased risk for type 2 diabetes. 
OCT1 reduced-function variants are strongly associated with human lipid levels

The GWAS Catalog, the Database of Genotypes and Phenotypes (dbGaP) Association Results Browser, and the Genome-Wide Repository of Associations Between SNPs and Phenotypes (GRASP) identified two major phenotypes (total cholesterol and LDL cholesterol levels) that were associated with genetic variants in SLC22A1 (OCT1) (Fig 1 and S1A and S1B Fig). In particular, rs1564348 and rs11753995 were associated with LDL cholesterol (p = 2.8 × 10−21) and total cholesterol (p = 1.8 × 10−23), respectively (Fig 1 and Table 1). Using HaploReg v4.1 to obtain linkage disequilibrium information from the 1000 Genomes Project, we noted that these two SNPs are in linkage disequilibrium with the OCT1 methionine 420 deletion (420Del), a common genetic variant in OCT1 that shows reduced uptake and altered kinetics of its substrates. Thus, the results suggest that reduced OCT1 function is significantly associated with higher total cholesterol and higher LDL levels. The GRASP database identified other phenotypes with significant, but weaker, p-values, relevant to glucose traits and coronary artery disease. Recent results from the UK Biobank cohort (http://geneatlas.roslin.ed.ac.uk/), available in the Gene ATLAS database and from the Global Lipids Genetics Consortium, are also included in Table 1. As shown, the reduced-function OCT1 nonsynonymous variants, OCT1-R61C, OCT1-G401S, OCT1-420Del, and OCT1-G465R, were significantly associated with high total cholesterol, LDL cholesterol, and/or TG levels in at least one study (Table 1). In addition, two of the missense OCT1 variants, OCT1-P341L and OCT1-V408M, which are associated with lower SLC22A1 expression levels in several tissues [13,27,28], were also associated with higher cholesterol levels in at least one study. The OCT1 nonsynonymous variants in Table 1, except OCT1-P341L, are not in linkage disequilibrium (r² < 0.1) with SNPs in the lipoprotein(a) (LPA) and lipoprotein(a) like 2 (LPAL2) genes (a known locus for plasma lipoprotein levels) [29][30][31] (S1C Fig), indicating that OCT1 constitutes an independent locus for association with plasma lipids, which was also recently shown in other studies [32,33]. Notably, the effect sizes of the OCT1 variants for associations with lipid traits are small; thus, larger sample sizes are needed for genome-wide level significance (p < 5 × 10−8) (Table 1). In the Type 2 Diabetes Knowledge Portal, weaker but significant associations (p < 0.05) between OCT1 reduced-function variants and higher 2-hour glucose levels, higher fasting insulin levels, increased risk for type 2 diabetes, increased risk for coronary artery disease, and higher BMI were cataloged (Table 1). We performed burden test analysis using the data available in the portal. Interestingly, in the analysis in which we included possibly or probably deleterious missense or protein-truncating variants of OCT1, we observed strong associations of the reduced-function OCT1 variants with increased body weight (p = 0.0002-0.0005, beta = 0.23-0.3). When we performed a similar burden test analysis with type 2 diabetes, the significance was weaker, and the results were only significant when we included only protein-truncating variants of OCT1 (p = 0.015, odds ratio = 2.10).

Deletion of Oct1 altered hepatic and peripheral energy homeostasis

Consistent with our previous studies, deletion of Oct1 protected the mice from hepatic steatosis [14] (Fig 2A, S2A Fig).
In this study, we observed that glycogen content was 3.3-fold greater in livers from Oct1-/- mice compared to livers from Oct1+/+ mice after an overnight fast (Fig 2A and S2B Fig). Consistent with these results, hepatic glucose levels were 5.9-fold higher (p = 0.0006) in Oct1-/- mice compared to Oct1+/+ mice (S2C Fig). Significantly greater body weights were observed for Oct1-/- mice compared to their wild-type counterparts, starting at the age of 6 weeks (Fig 2B and S2D Fig). Body composition also differed, with dual-energy X-ray absorptiometry (DEXA) scans showing a higher percent of body fat in Oct1-/- compared to Oct1+/+ mice (p = 0.001) (Fig 2C). Consistent with the greater proportion of body fat, Oct1-/- mice had greater epididymal fat pad weights and reduced liver weight compared to Oct1+/+ mice (p < 0.0001) (Fig 2D and S2E Fig). To further assess the potential mechanism leading to increased weight gain in Oct1-/- mice, we analyzed energy expenditure, food intake, and activity by the comprehensive laboratory animal monitoring system (CLAMS). Before placing the mice into the CLAMS, the body composition of all mice was measured by EchoMRI. As shown in Fig 2E, Oct1-/- mice had greater fat and lower lean mass in comparison to Oct1+/+ mice (p < 0.0001).

Fig 1 legend (excerpt): Genome-wide and regional association plots for plasma lipids. Over 100 loci were associated with lipids at p < 5 × 10−8, with SLC22A1 the top locus on chromosome 6. In the regional plots (chromosome 6, hg19; meta-analysis in up to 188,577 individuals), rs1564348 and rs11753995 (purple circles) are the top signals for (C) LDL cholesterol (p = 2.8 × 10−21) and (D) total cholesterol (p = 1.8 × 10−23), respectively, and both are in strong linkage disequilibrium with the SLC22A1-420 deletion (rs202220802; r² = 0.78, D′ = 0.99; http://archive.broadinstitute.org/mammals/haploreg/haploreg.php). The red arrow marks the nonsynonymous SNP rs12208357 (SLC22A1-R61C), associated with LDL cholesterol (p = 6.6 × 10−10) and total cholesterol (p = 1.3 × 10−8); blue arrows mark the intronic SNP rs662138, which is included in many genome-wide genotyping platforms, is also in strong linkage disequilibrium with the 420 deletion, and whose associations with other traits are shown in Table 1. Estimated recombination rates (cM/Mb), pairwise r² values from the HapMap CEU data, and gene annotations from the UCSC Genome Browser indicate the local linkage disequilibrium structure.
When normalized to total body weight, Oct1-/- mice had significantly lower respiratory oxygen (O2) consumption and energy expenditure (Fig 2F and 2G), indicating lower metabolic rates of Oct1-/- mice in comparison to Oct1+/+ mice. These data are consistent with the lower lean mass of the Oct1-/- mice compared to Oct1+/+ mice, because lean mass contributes more to energy expenditure than more inert tissue, such as adipose tissue [39,40]. In fact, no differences in respiratory O2 consumption or energy expenditure normalized to lean mass were observed between Oct1+/+ and Oct1-/- mice (S2F and S2G Fig). Thus, the differences in metabolic rate between Oct1+/+ and Oct1-/- mice appear to be due to significant differences in body composition. Additionally, our Oct1-/- mice had no difference in activity but had slightly lower food intake and respiratory exchange ratio (RER) during the dark cycle compared to Oct1+/+ mice (S2H-S2J Fig). There were no deleterious effects of Oct1 deficiency on hepatic function and, in fact, some of the liver function tests improved in the Oct1 knockout mice in comparison to wild-type mice (S2K Fig). There were no major differences in the expression levels of the thiamine transporters (Slc19a2 and Slc19a3) in the liver. In contrast, levels of the organic cation transporter Oct2, which also transports thiamine, were increased, although the expression levels of Oct2 in the liver were extremely low relative to Oct1 and Slc19a2 (S2L Fig). Collectively, our data suggest that Oct1 deletion had a significant effect on hepatic and peripheral energy homeostasis.

Deletion of Oct1 altered thiamine disposition and protected mice from beriberi

We hypothesized that the systemic plasma levels of thiamine are higher in Oct1-/- mice as a result of reduced hepatic extraction of dietary thiamine (Fig 3A). As expected, Oct1-/- mice had significantly higher plasma levels of thiamine (Fig 3B and S3A Fig) compared to Oct1+/+ mice on thiamine-controlled and thiamine-enriched diets. In addition, Oct1 deletion preserved plasma thiamine levels in mice on TDs (Fig 3B). Thiamine deficiency is associated with life-threatening diseases, such as beriberi and Wernicke-Korsakoff syndrome [15,41]. We hypothesized that preserved circulating thiamine levels would delay the development of severe thiamine deficiency syndromes and increase the rate of survival when mice were challenged with a TD. As shown in Fig 3C, there was a significant improvement in the overall survival of Oct1-/- mice (p = 0.012, Gehan-Breslow-Wilcoxon test; p = 0.018, log-rank test) compared to Oct1+/+ mice. Modulation of Oct1 expression levels provides a means of studying the effect of hepatic thiamine levels per se as opposed to systemic thiamine levels or thiamine levels in other tissues. Manipulation of dietary thiamine may have additional effects, for example, in the central nervous system.

Fig 3 legend (excerpt): A single intraperitoneal injection of 2 mg/kg thiamine (with 4% 3H-thiamine) was administered to four groups of mice (Oct1+/+ with control shRNA, n = 6; Oct1+/+ with Oct1 shRNA, n = 6; Oct1-/- with control shRNA, n = 3; and Oct1-/- with Oct1 shRNA, n = 3); data are normalized to Oct1+/+ mice treated with control shRNA and shown as mean ± SEM (unpaired two-tailed Student t test; * p < 0.05, ** p < 0.01, *** p < 0.001; underlying data in S1 Data).
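The survival comparison in Fig 3C above rests on standard time-to-event statistics (Gehan-Breslow-Wilcoxon and log-rank tests). As a minimal illustration of how such a comparison can be reproduced in code, the sketch below uses the third-party lifelines package with invented survival times on a thiamine-deficient diet; it is not the study's data or analysis code.

```python
# Sketch of a survival comparison like Fig 3C, using the third-party
# "lifelines" package. The survival times below are invented placeholders,
# not the study's data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Days on the thiamine-deficient diet until the humane end point was reached
# (event=1) or the experiment ended (event=0). Hypothetical values.
days_wt = [18, 20, 21, 22, 24, 25]
event_wt = [1, 1, 1, 1, 1, 1]
days_ko = [24, 27, 30, 32, 35, 35]
event_ko = [1, 1, 1, 1, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(days_wt, event_observed=event_wt, label="Oct1+/+")
print("median survival, Oct1+/+:", kmf.median_survival_time_)
kmf.fit(days_ko, event_observed=event_ko, label="Oct1-/-")
print("median survival, Oct1-/-:", kmf.median_survival_time_)

# Log-rank test for a difference between the two survival curves.
result = logrank_test(days_wt, days_ko,
                      event_observed_A=event_wt, event_observed_B=event_ko)
print(f"log-rank p = {result.p_value:.3f}")
```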
Notably, Liu and colleagues determined that reduced levels of thiamine in the systemic circulation in mice resulted in neurological effects in the hypothalamus, with anorexia and resultant reduction in peripheral adiposity [42]. In human populations, the OCT1 gene is highly polymorphic [7][8][9]43]. Many loss-of-function polymorphisms of OCT1 have been characterized and found to affect hepatic uptake of drugs, leading to altered treatment response [43]. Here, in the uptake studies, cells expressing human OCT1 genetic variants (420Del or 420Del+G465R) had significantly reduced uptake of thiamine compared to the reference allele (Fig 3D), although they have comparable levels of OCT1 transcript (S3C Fig). In kinetic studies performed at 4 minutes, the maximum velocity (V max ) of thiamine in cells expressing human OCT1 with methinone 420 deletion (hOCT1-420Del) was 70% lower than in cells expressing the human OCT1 reference (hOCT1-Ref) (1.80 ± 0.09 nmol/mg protein/minute versus 5.36 ± 0.30 nmol/mg protein/minute) (Fig 3D). In contrast to humans, who express OCT1 primarily in the liver, mice express Oct1 in both the liver and the kidney; therefore, deletion of Oct1 in the kidney could potentially affect systemic levels of thiamine in mice. To address this limitation of the Oct1 knockout mice as a model for humans, we used hydrodynamic tail vein injection of mouse Oct1 short hairpin RNA (shRNA) lentiviral particle (or empty vector shRNA lentiviral particle as control) to specifically knock down Oct1 in the liver in both Oct1 +/+ and Oct1 -/mice. Following a single intraperitoneal injection of 2 mg/kg thiamine (with 4% 3 H-thiamine), we observed that the area under the plasma concentration-time curve (AUC) of thiamine was significantly greater in wild-type mice treated with Oct1 shRNA lentiviral particles compared to wild-type mice treated with vector control shRNA lentiviral particles ( Fig 3E). Although not significant, similar trends were observed in the maximum concentration (C max ) values (S3D Fig). Notably, the Oct1 shRNA did not affect Oct1 expression levels in the kidney (S3D Fig). Compared to wild-type mice with Oct1 shRNA lentiviral particle knockdown, higher systemic levels of thiamine were observed in Oct1 -/mice (Fig 3E), potentially reflecting an incomplete Oct1 knockdown (50% liver Oct1 expression reduction, S3D Fig) or an additive effect of renal Oct1 deletion in Oct1 -/mice. The data provide strong evidence that reduction of OCT1 expression in the liver alone can result in increased systemic thiamine exposure. Although the liver plays a role in pre-systemic thiamine metabolism, it should be noted that thiamine is metabolized in most tissues in the body; therefore, other tissues, such as the intestine, may contribute to pre-systemic metabolism of the vitamin. Collectively, alterations in OCT1 function through genetic polymorphisms affect thiamine uptake and disposition. Deletion of Oct1 disrupted hepatic glucose metabolism Our previous studies indicated that Oct1 deletion resulted in reduced hepatic thiamine levels and levels of TPP [14], the cofactor of PDH. It is shown that reduced TPP levels directly affect the activity of PDH [44,45]. As PDH plays a key role in energy metabolism linking glycolysis to the tricarboxylic acid (TCA) cycle and fatty acid metabolism [17], we hypothesized that the activity of hepatic PDH was impaired in Oct1 -/mice. 
Because phosphorylation of PDH results in inactive forms of the enzyme [46], we measured levels of phosphorylated PDH (at two phosphorylation sites, Ser232 and Ser300) and mRNA levels of pyruvate dehydrogenase kinase 4 (PDK4). Both phosphorylated PDHs and PDK4 transcripts were significantly higher in livers from Oct1-/- mice (Fig 4A and S4A Fig). In addition, in Oct1-/- mice, glycogen synthase (GS) and glucose transporter 2 (Glut2) were present at significantly higher levels (Fig 4A). Although glycogen phosphorylase (PYGL), which plays a key role in breakdown of hepatic glycogen, was also expressed at higher levels, the ratio of GS to PYGL was significantly higher in livers from Oct1-/- mice (Fig 4A and S4B and S4C Fig). These data suggest that Oct1-/- mice had higher rates of glycogen synthesis, which could explain the higher hepatic glycogen content in Oct1-/- mice. Our data suggested that livers from Oct1-/- mice would have less activity of PDH, which in turn would result in a lower rate of conversion of pyruvate to acetyl-CoA entering the TCA cycle [47,48] and thus an overall reduction in oxidative phosphorylation of glucose. We hypothesized that the reduction of oxidative phosphorylation of glucose would increase the accumulation of the intermediates of gluconeogenic substrates. These intermediates would lead to increased gluconeogenesis, as glycolysis and gluconeogenesis are reciprocally regulated and highly depend on the availability of gluconeogenic substrates [1,49]. The levels of glucose-6-phosphate (G6P), a strong allosteric activator of GS [50], were 2.3-fold (p < 0.0001) higher in the livers of Oct1-/- mice (Fig 4B). In addition, the ratio of phosphorylated GS to total GS was significantly lower in Oct1-/- mice (Fig 4C), consistent with a higher activity of GS in Oct1-/- mice. To further investigate the role of OCT1 in hepatic glucose metabolism, we performed three standard tests related to glucose homeostasis [51]. In the glucose tolerance test (GTT), the blood glucose rose following oral glucose dosing and fell back to normal in both Oct1+/+ and Oct1-/- mice (S4D Fig), although the Oct1-/- mice had higher blood glucose levels at baseline. After adjusting for baseline, there was a trend toward higher blood glucose levels and an overall greater glucose AUC after a bolus dose of glucose in Oct1-/- mice (Fig 4D). The GTT indicated that both Oct1+/+ and Oct1-/- mice could produce insulin in response to rising glucose. In contrast, pyruvate tolerance tests (PTTs) were different between Oct1+/+ and Oct1-/- mice (Fig 4E and S4E Fig).

Fig 4 legend (excerpt): (E) PTT in mice fasted for 16 hours, adjusted for baseline, with the associated glucose AUC (n = 6 per genotype); (F) ITT in mice fasted for 5 hours with the associated glucose AUC (n = 6 per genotype). Data are mean ± SEM and were analyzed by unpaired two-tailed Student t test (* p < 0.05, ** p < 0.01, *** p < 0.001; underlying data in S1 Data).
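The GTT, PTT, and ITT comparisons above and below are summarized as baseline-adjusted glucose AUCs (Fig 4D-4F). A minimal sketch of that summary statistic is given here, assuming a trapezoidal AUC of the baseline-subtracted glucose curve (the exact AUC convention is not spelled out in the text) and using invented glucose readings.

```python
# Minimal sketch: baseline-adjusted glucose AUC for a tolerance test.
# Assumes a trapezoidal AUC of the baseline-subtracted curve; the exact
# convention used in the study is not stated, and the readings are invented.
import numpy as np

time_min = np.array([0, 15, 30, 60, 90, 120])          # sampling times (min)
glucose = np.array([110, 240, 210, 180, 150, 130])     # blood glucose (mg/dL)

baseline = glucose[0]
adjusted = glucose - baseline                           # subtract the t = 0 value

auc = np.trapz(adjusted, time_min)                      # mg/dL * min
print(f"baseline-adjusted AUC = {auc:.0f} mg/dL*min")
```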
In particular, blood glucose was significantly higher at each time point after pyruvate injection in Oct1 -/mice, which suggested that Oct1 -/mice had higher rates of hepatic gluconeogenesis. In the insulin tolerance test (ITT), there was a trend toward higher blood glucose levels after insulin injection in the Oct1 -/mice and an overall greater glucose AUC (Fig 4F). Blood glucose levels are maintained by glucose uptake mainly in peripheral tissues and glucose output primarily from the liver [52]. Data from the PTT suggested that the knockout mice had significantly higher hepatic gluconeogenesis, which may have contributed to the higher glucose exposure in Oct1 -/mice following the ITT. Thiamine deficiency impaired glucose metabolism To understand the role of thiamine in regulating glucose metabolism, age-matched mice were placed on dietary chow containing three different doses of added thiamine, following the experimental design shown in Fig 5A. Wild-type mice fed a TD for 10 days had higher levels of hepatic glycogen, hepatic glucose, and plasma glucose compared to mice fed control diets ( Fig 5B-5D). In contrast, varying thiamine content in the diet resulted in no significant differences in hepatic glycogen, hepatic glucose, or plasma glucose levels among Oct1 -/mice (Fig 5B-5D). Furthermore, wild-type mice fed TDs had similar levels of hepatic glycogen, hepatic glucose, and plasma glucose as Oct1 -/mice irrespective of the thiamine content in their diets, consistent with the idea that Oct1 deficiency mimics thiamine deficiency in wild-type mice. Levels of G6P, an activator of GS, were significantly higher in livers from wild-type mice fed a TD diet and were comparable to liver levels of G6P in Oct1 -/mice irrespective of thiamine content in the diet (Fig 5E). As shown by western blotting (Fig 5F), livers from Oct1 -/mice in the control thiamine diet group and from both Oct1 +/+ mice and Oct1 -/mice in the TD group had higher GS and Glut2 protein levels compared to Oct1 +/+ mice in the thiamine control group. Taken together, our data suggest that thiamine deficiency impairs glucose metabolism in wild-type mice and that Oct1 deficiency phenocopies thiamine deficiency in wild-type mice. Oct1 -/mice had higher adiposity and altered lipid metabolism Oct1 -/mice exhibited increased adiposity (Fig 2C and 2D), and examination of fat cells through staining revealed significantly larger adipose cells in the epididymal fat pad (epididymal white adipose tissue [eWAT], p = 0.004) and a trend toward larger adipose cells in retroperitoneal adipose tissue (rpWAT) from Oct1 -/mice ( Fig 6A). To probe the mechanism of increasing adiposity and adipose cell size in the Oct1 -/mice, we measured the mRNA expression levels of genes related to adipose metabolism. Fat gain may be due to imbalances between rates of TG synthesis and lipolysis. The mRNA expression of patatin-like phospholipase domain-containing protein 2 (Pnpla2) and lipase, hormone sensitive (Lipe) involved in adipose lipolysis was reduced in adipose tissue from Oct1 -/mice compared to adipose tissue from Oct1 +/+ mice (Fig 6B). In contrast, levels of genes involved in TG synthesis were similar between the two strains of mice (S5A Fig). Pnpla2 (coding for adipose triglyceride lipase [ATGL]), Lipe (coding for hormone sensitive lipase [HSL]), and Mgll (coding for monoglyceride lipase [MGLL]) are responsible for three major steps in mobilizing fat through hydrolysis of TGs to release free fatty acids from the adipocytes [53]. 
Lower expression levels of these genes are consistent with lower rates of lipolysis in adipose tissue from Oct1 -/mice. Insulin has antilipolytic effects in adipose tissue, regulating ATGL expression and promoting lipid synthesis, and chronic insulin treatment results in increased adipose mass [54,55]. Corresponding to the higher levels of glucose (Fig 6C), we observed higher circulating levels of insulin in the Oct1 -/mice (Fig 6D and S5B Fig), which suppressed lipolysis. Furthermore, fasting free fatty acid levels were lower in the plasma of Oct1 -/mice (Fig 6E), which may reflect the lower rates of lipolysis in adipose tissue [56]. Data in Oct1 knockout mice were corroborated by data from inbred strains of mice. In particular, Oct1 mRNA levels in the liver inversely associated with percent fat growth and fat mass among various strains of mice (S1 Table). In addition, down-regulation of mitochondrial uncoupling protein 2 (Ucp2) was observed in brown adipose in Oct1 -/mice (S5G Fig), which may associate with the reduced energy expenditure. Examination of total cholesterol, HDL cholesterol, LDL cholesterol, and TG in plasma samples revealed significant differences in the two strains of mice. Notably, Oct1 -/mice had higher plasma levels of total cholesterol and LDL cholesterol compared to Oct1 +/+ mice, without significant differences in TG and HDL cholesterol (Fig 6F). The increase in LDL was due primarily to smaller LDL particles (Fig 6G). We observed no differences in the transcript levels of lipoprotein lipase (Lpl) and Ldlr in livers from Oct1 +/+ mice and Oct1 -/mice. However, livers from Oct1 -/mice had higher transcript levels of 3-hydroxy-3-methylglutaryl-CoA reductase (Hmgcr), and Acyl-CoA: cholesterol acyltranferase 2 (Acat2) (S5E Fig). Consistent with lower activity of PDH, pyruvate levels were significantly higher in the livers from Oct1 -/mice, as less pyruvate was converted to acetyl-CoA. Interestingly, contrary to our expectation, Oct1 -/mice had higher levels of acetyl-CoA in their livers (Fig 6H and 6I). The higher accumulated acetyl-CoA may have resulted from higher fatty acid β-oxidation in Oct1 -/mice [17,48]. Our data suggest that up-regulation of enzymes involved in cholesterol synthesis and higher levels of the substrate precursor, acetyl-CoA, in the liver of Oct1 -/mice result in alterations in hepatic cholesterol metabolism, leading to increased production of LDL particles. Furthermore, lower thiamine levels were correlated with higher levels of cholesterol in plasma and liver in male mice from various inbred strains of mice (S2 Table and Discussion Through extensive characterization of Oct1 knockout mice, our data provide compelling evidence that Oct1 deficiency leads to a constellation of diverse effects on energy metabolism that are consistent with GWAS demonstrating strong associations between OCT1 polymorphisms and a variety of metabolic traits in humans (Fig 1 and Table 1). Our data support the notion that hepatic thiamine deficiency is the underlying mechanism for the phenotypes associated with reduced OCT1 function. 
Five major effects of OCT1 deficiency emerge from the current study: (1) a shift in the pathway of energy production from glucose to fatty acid oxidation due to lower activity of key thiamine-dependent enzymes in the liver; (2) increased gluconeogenesis and hepatic glucose output, with associated increases in liver glycogen and glucose levels; (3) increased peripheral adiposity stemming from alterations in energy metabolism; (4) changes in hepatic cholesterol homeostasis and plasma lipids that may contribute to cardiovascular disease risk; and (5) beneficial effects on life-threatening thiamine deficiency syndromes.

Fig 6 legend (excerpt): (I) Hepatic acetyl-CoA levels (n = 4 per genotype in the 5-hour fasted group; n = 6 per genotype in the 16-hour fasted group); data are mean ± SEM, unpaired two-tailed Student t test (* p < 0.05, ** p < 0.01, *** p < 0.001; underlying data in S1 Data). (J) Scheme of the overall mechanism, illustrating how OCT1 deficiency affects the disposition of thiamine and hence triggers a constellation of effects on hepatic and overall energy homeostasis.

As the major energy-generating organ, the liver has high metabolic flexibility in selecting different substrates to use in energy production in response to various metabolic conditions. The glucose-fatty acid cycle, first proposed by Randle in 1963, plays a key role in regulating metabolic fuel selection, and impairment in metabolic flexibility contributes to insulin resistance and metabolic syndrome [17,18,47,57,58]. Many studies have shown that there is a failure to shift from fatty acid to glucose oxidation during the transition from fasting to feeding in individuals with obesity or diabetes [48,57]. We observed lower hepatic steatosis (Fig 2A), largely due to increases in fatty acid oxidation in the liver [1,59,60]. Importantly, the observation that the Oct1-/- mice had significantly lower RERs during the dark cycle than their wild-type counterparts suggests that, overall, Oct1-/- mice have a greater reliance on energy production from fatty acids over glucose during feeding than wild-type mice [61,62] (S2J Fig). PDH is the key enzyme switch for the glucose-fatty acid cycle [17,63]. Thus, alterations in its activity by reduced levels of the cofactor TPP in Oct1-/- mice (Fig 4A) disrupt the hepatic glucose-fatty acid cycle, resulting in an impairment of hepatic energy homeostasis. In livers from Oct1-deficient mice, β-oxidation of fatty acids becomes a major source of energy production, leading to impaired homeostasis in both lipid and carbohydrate metabolism.
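The dark-cycle RER argument above can be made concrete with the standard interpretation of the respiratory exchange ratio from CLAMS gas-exchange data: RER = VCO2/VO2, with values near 1.0 indicating predominantly carbohydrate oxidation and values near 0.7 indicating predominantly fat oxidation. The sketch below computes RER from invented VO2/VCO2 traces; the numbers are placeholders, not the study's measurements.

```python
# Sketch: respiratory exchange ratio (RER = VCO2/VO2) from CLAMS-style
# gas-exchange data. RER near 1.0 -> mostly carbohydrate oxidation;
# near 0.7 -> mostly fat oxidation. All values below are invented.
import numpy as np

vo2 = np.array([3100, 3250, 3300, 3150, 3050, 2980], dtype=float)   # mL/kg/h
vco2 = np.array([2450, 2600, 2640, 2480, 2380, 2300], dtype=float)  # mL/kg/h

rer = vco2 / vo2
print("mean dark-cycle RER =", round(rer.mean(), 2))

# A lower mean RER in one genotype than another (e.g., ~0.78 vs ~0.88)
# would indicate a greater reliance on fatty acid oxidation.
```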
In both the current study and our previous study [14], we observed increased levels of phosphorylated 5 0 adenosine monophosphate-activated protein kinase (AMPK)and its downstream target, acetyl-CoA carboxylase (ACC), in livers from Oct1 -/mice compared to livers from wild-type mice, indicative of a lower hepatic energy status in the knockout mice. As a result, fatty acid β-oxidation was stimulated in the livers from Oct1 -/mice. However, as was evident by lower hepatic ATP content, the increased rates of fatty acid oxidation were not sufficient to compensate for normal rates of ATP production. Reduced glucose oxidation as well as a reduction in flux through the TCA cycle due to reduced activity of α-KGDH, another TPP-associated enzyme, may have contributed to the lower ATP production. Consistent with a lower flux through the TCA cycle as well as increases in β-oxidation of fatty acids, we observed higher levels of acetyl-CoA in livers from Oct1 -/mice. Studies have shown that acetyl-CoA allosterically inhibits PDH, which results in further inhibition of glucose utilization [48,58,63]. Thus, in the livers from Oct1 -/mice, this loop continued to stimulate fatty acid β-oxidation, which further suppressed hepatic glucose utilization, shifting the major energy source from glucose to fatty acids. Increases in hepatic glycogen and glucose content in the Oct1 -/mice (Fig 2A, Fig 5B and 5C) were likely due to changes in intermediary metabolites resulting directly from alterations in the activity of the TPP-dependent enzyme, PDH (Fig 4A). Reduced PDH activity resulted in higher levels of pyruvate in the liver of Oct1 -/mice (Fig 6H), which is consistent with results from previous studies [63,64]. Higher pyruvate levels can drive hepatic gluconeogenesis, resulting in increased hepatic glucose production and associated increases in hepatic glucose and glycogen levels [1,52]. Whereas our data suggested that the glycogen accumulation in the livers of Oct1 -/mice resulted from changes in key intermediate metabolites that are involved in hepatic energy metabolism, other regulatory paths such as hormonal, transcriptional, and neural regulation need to be further studied. Oct1 -/mice exhibited increased adiposity (Fig 2), which was more likely due to downstream effects of reduced transporter expression in the liver rather than in extra-hepatic tissues. Multiple factors contributed to the increased adiposity, such as hyperinsulinemia, hyperglycemia, increased hepatic glycogen, and reduced energy expenditure. Consistent with the increased adiposity observed in the Oct1 -/mice, hepatic expression levels of Oct1 are inversely correlated with fat growth and fat mass in inbred strains of mice (S1 Table). In addition, hyperinsulinemia in Oct1 knockout mice may further result in increasing storage of TGs and suppression of lipolysis in peripheral adipose tissue, which reduced flux of fatty acids to the liver. Chronic insulin treatment has been shown to result in increased adipose mass due to suppression of lipolysis and increased lipid storage [54,55], and in the current study, a high correlation between plasma insulin levels and fat mass in both wild-type and Oct1 -/mice (S5D Fig) was observed. Furthermore, high insulin levels have been associated with low expression levels of the lipolytic enzyme Pnpla2 in adipose tissue [55], consistent with results in the Oct1 -/mice (Fig 6B), which is in agreement with the increased adiposity in these mice. 
In addition, high hepatic glycogen levels may have contributed to the increased adiposity. In particular, hepatic glycogen levels regulate the activation of the liver-brain-adipose axis [65]. Glycogen shortage during fasting triggers liver-brain-adipose neurocircuitry that results in stimulation of fat utilization. In contrast, in mice with elevated liver glycogen resulting from overexpression of GS or knockdown of PYGL, the liver-brain-adipose axis action is turned off, which preserves fat mass [65]. The greater stores of glycogen in the livers of Oct1 -/mice may have shut off the liver-brain-adipose axis, contributing to the increased peripheral adiposity in the mice. Overall, our data in Oct1 -/mice suggest that OCT1 plays a key role in regulation of peripheral metabolism, likely because of its effects on circulating glucose and insulin as well as increased stores of hepatic glycogen triggering a feedback loop mechanism between the liver, the brain, and the adipose tissue. Parallels between phenotypes observed in GWAS in humans and those in the Oct1 -/mice were striking. High plasma LDL, total cholesterol, and TG levels were observed in individuals with reduced-function polymorphisms of OCT1 (R61C, F160L, G401S, V408M, 420del, and G465R) ( Table 1) as well as in the Oct1 -/mice (Fig 6F and 6G). In particular, the Oct1 -/mice, and humans with reduced-function polymorphisms of OCT1 (S3 Table) [37], have increased levels of small dense LDL particles (Fig 6G), which in humans are predictive of increased risk of cardiovascular disease [66] and are a characteristic feature of the dyslipidemia associated with excess adiposity [67] and insulin resistance [68]. We speculate that the increased LDL levels in Oct1 -/mice may result from a relative deficiency of hepatic thiamine. Specifically, livers from rats with thiamine deficiency have been shown to have lower TG but higher cholesterol content [69], consistent with our data in inbred strains of mice, in which inverse correlations were observed between plasma thiamine and cholesterol levels in both plasma and liver (S2 Table and S6 Fig). Importantly, our in vivo studies in mice showed that reduction of liver Slc22a1 expression levels resulted in higher systemic thiamine levels (Fig 3B and 3E). Furthermore, individual OCT1 polymorphisms were nominally associated with systemic plasma levels of thiamine in humans (S4 Table) [70] as well as when combining six of the OCT1 nonsynonymous variants that were genotyped in the cohort by Rhee and colleagues (see S5 Table) [71]. Published studies in animals and humans suggest that thiamine supplementation may improve blood lipid profiles [24,25]. Higher levels of the precursor for cholesterol synthesis, acetyl-CoA, as well as higher expression levels of enzymes involved in cholesterol synthesis [59,72] (S5E Fig), potentially leading to greater rates of hepatic cholesterol production, could have contributed to the higher cholesterol and LDL levels in plasma. Although we were unable to detect differences in total hepatic cholesterol content between Oct1 +/+ mice and Oct1 -/mice (S5F Fig) corresponding to the observed differences in plasma cholesterol levels between the mouse strains, many factors that can modulate hepatic cholesterol content [73,74], including perhaps increased export, need further investigation. Further studies are warranted to investigate the mechanisms underlying the effects of reduced OCT1 function as well as thiamine bioavailability on cholesterol and lipoprotein metabolism. 
In addition to the metabolic changes that were observed in Oct1 -/mice, the knockout mice were found to survive substantially longer on TDs. This may have been due to the higher systemic levels of thiamine (Fig 3B), which would spare essential organs such as the brain and heart from thiamine deficiency, as well as to the increased adiposity in the knockout mice ( Fig 2C), which could protect the mice from the starvation that ensues from thiamine deficiency [42,75,76]. Nevertheless, the results have implications for human ancestors who harbored reduced-function genetic polymorphisms of OCT1. Because of differences in the tissue distribution of OCT1 between humans and mice, our study has limitations in directly extrapolating all the results obtained in mice to humans. In particular, because Oct1 is also abundantly expressed in the kidney of mice, OCT1-mediated renal secretion of thiamine is another important determinant of systemic thiamine levels in mice. In contrast, in humans, OCT1 plays a role in modulating thiamine disposition largely in the liver and not in the kidney. Deletion of Oct1, particularly in the kidney of the knockout mice, therefore, may have modulated systemic thiamine levels and, thus, survival during TDs as well as other phenotypes observed in the current study. Today, thiamine deficiency is associated with aging, diabetes, alcoholism, and poor nutritional status [23,[77][78][79]. In the setting of thiamine deficiency, OCT1 reduced-function polymorphisms today would have mixed effects. On the one hand, individuals who harbored reduced-function variants would have higher systemic levels of thiamine, which may protect essential organs from thiamine depletion. On the other hand, the individuals would have low hepatic thiamine levels, which may predispose them to the deleterious effects of dysregulated plasma lipids and to obesity and diabetes (Table 1). In fact, lower thiamine levels have been reported in individuals with diabetes [23,78] and, as noted, some studies have shown that high-dose thiamine supplementation has beneficial effects on diabetes [25,80,81]. Overall, the current study shows that OCT1 deficiency triggers a constellation of effects on hepatic and overall energy homeostasis (Scheme in Fig 6J). That is, reduced OCT1-mediated thiamine uptake in the liver leads to reduced levels of TPP and a decreased activity of key TPPdependent enzymes, notably PDH and α-KGDH. As a result, there is a shift from glucose to fatty acid oxidation, which leads to imbalances in key metabolic intermediates, notably, elevated levels of pyruvate, G6P, and acetyl-CoA. Because of these imbalances, metabolic flux pathways are altered, leading to increased gluconeogenesis and glycogen synthesis in the liver. In addition, the increased acetyl-CoA levels along with elevated expression levels of key enzymes involved in cholesterol synthesis likely contribute to increases in plasma levels of total and LDL cholesterol observed in mice with Oct1 deficiency and in humans with reduced-function genetic polymorphisms of OCT1. Although many of the details of the mechanisms have still to be worked out, our study provides critical insights into the role of thiamine in the liver in maintaining metabolic balance among energy metabolism pathways. Finally, our studies provide mechanistic insights into findings from GWAS implicating reduced-function variants in the SLC22A1 locus as risk factors for lipid disorders and diabetes. 
Ethics statement

Animal experiments were approved by the Institutional Animal Care and Use Committee (IACUC) of the University of California, San Francisco (AN119364), in accordance with the requirements of the National Research Council Guide for the Care and Use of Laboratory Animals and the Public Health Service Policy on the Humane Care and Use of Laboratory Animals. Humane end points were determined by a body condition score of 2 or less or 15% body weight loss. Animals were euthanized once the humane end points were reached during the treatment, in accordance with the IACUC-approved protocol. To limit pain and stress, mice were anesthetized deeply by isoflurane vaporizer and intraperitoneal injection of a ketamine/medetomidine cocktail (75/1 mg/kg) prior to euthanasia by physical cervical dislocation.

Mining genetic association studies to identify phenotypes associated with SLC22A1 reduced-function variants

Various publicly available databases were used to determine whether there are significant genetic associations of SLC22A1 reduced-function variants with human diseases and traits. The following databases were used: the GWAS Catalog, the dbGaP Association Results Browser, GRASP (Genome-Wide Repository of Associations Between SNPs and Phenotypes) [82], the GIANT Consortium, the Type 2 Diabetes Knowledge Portal, and Genome Wide Association Studies for Lipid Genetics. The first three databases, the GWAS Catalog, the dbGaP Association Results Browser, and GRASP, provide an easy-to-use interface to allow first-step information gathering of the types of human diseases and traits that have been reported in all published GWAS. Based on the results from these three databases, other specific databases relevant to the findings were then used. This included searching for specific databases that have the GWAS summary statistics (beta coefficients and p-values): the GIANT Consortium for body weight, the Type 2 Diabetes Knowledge Portal for glucose and insulin traits, and all GWAS for lipid traits. These databases allow investigators to download the association studies for obtaining the p-values and the beta coefficients for the associations. In this study, we focused our search on nonsynonymous variants of OCT1 (SLC22A1) with minor allele frequencies ≥1% in populations with European ancestries (1000 Genomes Project): R61C (rs12208357), F160L (rs683369), P341L (rs2282143), G401S (rs34130495), M408V (rs628031), 420Del (rs202220802) (rs662138 and rs1564348, which are in linkage disequilibrium with 420Del with r² ≥ 0.77, D′ > 0.95), and G465R (rs34059508).

Animal studies

All experiments on mice were approved by the IACUC of UCSF. Oct1-/- mice were generated as previously described [83] and backcrossed more than 10 generations onto the FVB/N background. Mice were housed in a pathogen-free facility with a 12-hour light and 12-hour dark cycle and given free access to food and water. Five- or six-week-old experimental mice were fed with thiamine control diet Cat# TD.09549 (thiamine 5 mg/kg) containing 17.5% protein, 65.8% carbohydrate, and 5.0% fat by weight (Envigo, Madison, WI). Other thiamine diets had the same composition as the thiamine control diet but differed in thiamine levels (TD Cat# TD.81029, 0 mg/kg; adjusted thiamine diets with different thiamine doses added, Cat# TD.120472, 25 mg/kg; Cat# TD.140164, 50 mg/kg). The periods of dietary treatment and the times at which mice were humanely killed are indicated in the Results section and figure legends.
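For the "Mining genetic association studies" step described above, the look-ups reduce to filtering downloaded summary-statistics tables for the listed SLC22A1 rsIDs. The sketch below shows one way to do this with pandas; the file name and the column names ("rsid", "beta", "p_value") are assumptions about the download format rather than a documented schema, so they would need to be adjusted to the actual release being queried.

```python
# Sketch: pull the OCT1 (SLC22A1) nonsynonymous variants listed above out of a
# downloaded GWAS summary-statistics table. The file name and column names
# ("rsid", "beta", "p_value") are assumptions, not a documented schema.
import pandas as pd

OCT1_SNPS = {
    "rs12208357": "R61C",
    "rs683369": "F160L",
    "rs2282143": "P341L",
    "rs34130495": "G401S",
    "rs628031": "M408V",
    "rs202220802": "420Del",
    "rs662138": "420Del proxy",
    "rs1564348": "420Del proxy",
    "rs34059508": "G465R",
}

def extract_oct1_hits(path: str) -> pd.DataFrame:
    """Return beta and p-value rows for the OCT1 variants of interest."""
    df = pd.read_csv(path, sep="\t")
    hits = df[df["rsid"].isin(OCT1_SNPS)].copy()
    hits["variant"] = hits["rsid"].map(OCT1_SNPS)
    return hits.sort_values("p_value")[["rsid", "variant", "beta", "p_value"]]

# Example call on a hypothetical LDL summary-statistics download:
# print(extract_oct1_hits("lipid_gwas_LDL_summary_stats.txt"))
```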
The animal studies were conducted in male mice; however, overall body weight and liver weight were assessed in female mice. Mice treated with TD developed thiamine deficiency syndromes, resulting in reduced food intake and body weight loss. During the treatment period, mice were closely monitored and weighed daily. Animals were euthanized once the humane end points (body condition score of 2 or less or 15% body weight loss) were reached during the treatment. To limit pain and stress, mice were anesthetized deeply by isoflurane vaporizer and intraperitoneal injection of a ketamine/medetomidine cocktail (75/1 mg/kg) prior to euthanasia by physical cervical dislocation.

Hydrodynamic tail vein injection

The hydrodynamic tail vein injection procedure was conducted as described previously [84], with minor modifications. Briefly, the body weights of the mice were used to calculate the total volume (mL) required for injection based on the formula: body weight (g) × (1 mL/10 g) + 0.1 mL (dead volume). The injection solution included 5 × 10^7 TU of virus per mouse and saline to adjust the final volume. Instead of anesthetizing the mice, we used the TransIT-QR kit MIR5210 (Mirus Bio LLC, US) and followed the online protocol (https://www.mirusbio.com/delivery/tailvein/). Dosing of 3H-thiamine was performed 48 hours after hydrodynamic tail vein injection. Mouse Slc22a1 shRNA lentiviral particles (TRCN0000070156) and the nonmammalian shRNA control pLKO.1 (SHC002V) were purchased from Sigma. Viruses were verified in HEK-293 cells. Briefly, pcDNA5 containing mouse Slc22a1 was cotransfected with the Slc22a1 knockdown vector or the pLKO.1 control virus. mRNA was isolated after 48 hours of transduction, and mRNA expression of Oct1 was measured.

Body composition and metabolic cages

Before and during dietary treatment, body composition was determined either by quantitative magnetic resonance on the EchoMRI-3in1 body composition analyzer (EchoMRI, Houston, TX) or by DEXA. For DEXA, live animals were anesthetized with isoflurane and scanned on the Lunar PIXImus densitometer (Lunar PIXImus Corporation Headquarters, Madison, WI). After 8 weeks of diet treatment, mice were placed in single-housing cages for 3 days before initiating the CLAMS (Columbus Instruments, Columbus, OH) experiments. CLAMS was used to monitor food and water intake, oxygen consumption (VO2) and carbon dioxide production (VCO2), and locomotor activity for a period of 96 hours. All these experiments were performed in the Diabetes and Endocrinology Research Center Mouse Metabolism Core at UCSF.

In vivo studies

Blood glucose levels from mice were measured using the FreeStyle Freedom Lite blood glucose meter (Abbott Laboratories, Chicago, IL) in samples obtained by the tail milking method. For oral glucose tolerance tests (OGTTs), mice were fasted for 5 hours and dosed with glucose 2 g/kg (Sigma-Aldrich, St. Louis, MO) by oral gavage. For ITTs, mice were fasted for 5 hours and dosed with 0.75 U/kg Humulin R insulin 100 U/mL (Henry Schein Animal Health, Dublin, OH) by intraperitoneal injection. Blood was sampled at 0, 15, 30, 60, 90, and 120 minutes. For PTTs, mice were fasted overnight for 16 hours and dosed with pyruvate 2 g/kg (Sigma-Aldrich, St. Louis, MO) by intraperitoneal injection. Blood was sampled at 0, 15, 30, 60, 90, 120, 150, and 180 minutes.

Tissue staining and histology

For adipose tissue and liver glycogen staining, mice were fasted for 16 hours and perfused with 20 mL of 4% paraformaldehyde (PFA) in PBS.
Epididymal fat pads, rpWATs, and livers were incubated in 4% PFA for 48 hours at 4°C and transferred to 70% ethanol. For Oil Red-O (ORO) staining of liver, mice were fasted for 16 hours and perfused with 10 mL PBS. Livers were fixed via sucrose infiltration steps prior to freezing. After incubation in 30% sucrose in PBS at 4°C for 24 hours, tissues were frozen in OCT molds. Fixed or frozen tissues were transferred to the histology and light microscopy core at Gladstone Institutes for staining, imaging, and analysis. For the cell-size analysis, hematoxylin and eosin-stained paraffin-embedded sections (https://labs.gladstone.org/histology/pages/section-staining-haematoxylin-and-eosinstaining) of mouse adipose tissues were imaged using a Nikon Eclipse E600 upright microscope equipped with a Retiga camera (QImaging, Vancouver, BC, Canada) and a Plan Fluor 20×/0.3NA objective. For each sample, four independent fields were imaged for analysis, and adipocyte size was determined using ImageJ (v.2.0.0-rc-3) software (US National Institutes of Health) and the Tissue Cell Geometry macro (http://adm.irbbarcelona.org/image-j-fiji). For quantifying the lipid droplets, ORO-stained frozen sections of mouse liver were imaged as above. For each sample, four independent fields were imaged for analysis. RGB images were then color thresholded to ORO, and the total area of ORO-positive pixels was summed for each image using the Analyze Particles function. For quantifying the glycogen levels in liver sections, periodic acid-Schiff-stained mouse liver was imaged as above. For each sample, four independent fields were imaged, and the mean intensity and integrated density values were averaged for each image using the Analyze Particles function.

Metabolic parameter measurements

Mice were humanely killed, and blood was collected via the posterior vena cava into BD Microtainer tubes with dipotassium EDTA (365974) or heparin (365985). Plasma was sent to the Clinical Laboratory of San Francisco General Hospital for measurement of total, LDL, and HDL cholesterol and TGs and a liver panel. Plasma was sent to Children's Hospital Oakland Research Institute for the measurement of lipoprotein particle size, as described previously [85]. Glucose (GAGO20), glycogen (MAK016), free fatty acid (MAK044), acetyl-Coenzyme A (MAK039), and G6P (MAK014) quantification kits were purchased from Sigma-Aldrich (St. Louis, MO). Pyruvate (ab65342), cholesterol (ab102515), and TG (ab65336) quantification kits were purchased from Abcam (Cambridge, MA). Plasma insulin was measured by ELISA (EMINS) from Thermo Fisher Scientific (Waltham, MA). Plasma was sent to Molecular MS Diagnostics, Inc. (Warwick, RI), for thiamine quantification, as previously described [14].

Gene expression analysis

Total RNA from mouse tissues or cell lines was isolated using the RNeasy Mini kit (Qiagen, Valencia, CA). Total RNA (2 μg) from each sample was reverse transcribed into cDNA using the SuperScript VILO cDNA Synthesis kit (Life Technologies, CA). Quantitative real-time PCR was carried out in 384-well reaction plates using 2X Taqman Fast Universal Master Mix (Applied Biosystems, Foster City, CA), 20X Taqman gene-specific expression probes, and 10 ng of cDNA template. The reactions were carried out on an Applied Biosystems 7900HT Fast Real-Time PCR System (Applied Biosystems, Foster City, CA). The relative expression level of each mRNA transcript was calculated by the comparative method (ΔΔCt method), normalized to the housekeeping gene β-actin.
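The comparative ΔΔCt calculation described above is a one-line formula once the Ct values are averaged: fold change = 2^-ΔΔCt, with each ΔCt taken relative to β-actin. The sketch below applies it to invented triplicate Ct values, assuming approximately 100% amplification efficiency (the usual premise of the 2^-ΔΔCt form).

```python
# Minimal sketch of the comparative (delta-delta Ct) method described above,
# normalizing to beta-actin and assuming ~100% amplification efficiency
# (fold change = 2**-ddCt). Ct values are invented placeholders.
import statistics

def fold_change(ct_target_sample, ct_actin_sample, ct_target_ref, ct_actin_ref):
    """Relative expression of a target gene in a sample vs. a reference group."""
    d_ct_sample = ct_target_sample - ct_actin_sample
    d_ct_ref = ct_target_ref - ct_actin_ref
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** (-dd_ct)

# Hypothetical triplicate Ct values for one knockout liver vs. wild-type livers.
ct_pdk4_ko = statistics.mean([24.1, 24.3, 24.0])
ct_actb_ko = statistics.mean([18.2, 18.1, 18.3])
ct_pdk4_wt = statistics.mean([26.0, 25.8, 26.1])
ct_actb_wt = statistics.mean([18.0, 18.2, 18.1])

print(f"Pdk4 fold change (KO vs WT) = "
      f"{fold_change(ct_pdk4_ko, ct_actb_ko, ct_pdk4_wt, ct_actb_wt):.2f}")
```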
Transporter uptake studies

The stably overexpressing pcDNA5 empty vector, mouse OCT1, human OCT1-reference, OCT1-420del, and OCT1-420del+G465R cell lines were maintained in Dulbecco's Modified Eagle Medium (DMEM H-21) supplemented with hygromycin B (100 µg/mL) (Thermo Fisher Scientific, Waltham, MA), penicillin (100 U/mL), streptomycin (100 mg/mL), and 10% fetal bovine serum. Cell culture supplies were purchased from the Cell Culture Facility (UCSF, CA). Cells were cultured on poly-D-lysine-coated 96-well plates for 24 hours to reach 95% confluence. Before the uptake experiments, the culture medium was removed and the cells were incubated in Hank's balanced salt solution (HBSS) (Life Technology, CA) for 15 minutes at 37°C. Radiolabeled thiamine [3H(G)] hydrochloride (20 Ci/mmol) was purchased from American Radiolabeled Chemicals Incorporation (St. Louis, MO). Thiamine hydrochloride was purchased from Sigma-Aldrich (St. Louis, MO). Chemicals and radiolabeled compounds were diluted in HBSS for the uptake experiments. The details of drug concentrations and uptake times are described in the Results section and figure legends. The uptake was performed at 37°C, and then the cells were washed three times with ice-cold HBSS. After that, the cells were lysed with buffer containing 0.1 N NaOH and 0.1% SDS, and the radioactivity in the lysate was determined by liquid scintillation counting. For the transporter study, the Km and Vmax were calculated by fitting the data to a Michaelis-Menten equation using GraphPad Prism software 6.0 (La Jolla, CA).

Statistical analysis

All mice were randomly assigned to the control or each treatment group. No statistical method was used to predetermine sample size, and sample size was determined on the basis of previous experiments. Numbers of mice for each experiment are indicated in figure legends. Mice that were dead or sick before the end of experiments were excluded from the final analysis. Investigators were not blinded during experiments. Data were expressed as mean ± SEM. Appropriate statistical analyses were applied, as specified in the figure legends. Data were analyzed using GraphPad Prism software 6.0 (La Jolla, CA). Differences were considered statistically significant at p < 0.05; * p < 0.05, ** p < 0.01, and *** p < 0.001.

S1 Fig legend (excerpt): (A, B) Genome-wide association results for lipids in individuals with European ancestry, plotted using data from the Global Lipids Genetics Consortium (http://csg.sph.umich.edu/abecasis/public/lipids2013/), with the top locus on each chromosome labeled; over 100 loci were associated with lipids at p < 5 × 10−8, including SLC22A1, the top locus on chromosome 6. (C) Pairwise correlation (R²) among SNPs in the SLC22A1, SLC22A2, SLC22A3, LPAL2, and LPA genes, generated with LDlink (https://analysistools.nci.nih.gov/LDlink/?tab=ldmatrix) using 1000 Genomes genotype data from populations of European ancestry; darker red indicates R² > 0.8 (a block of LPA/LPAL2 SNPs in the bottom right region), and the missense OCT1 SNP rs2282143 (P341L) shows only a weak correlation (r² = 0.54) with the missense variant Ile1891Met (rs3798220).
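The Km and Vmax values quoted in the transporter uptake studies above were obtained by fitting initial uptake rates to the Michaelis-Menten equation in GraphPad Prism. An equivalent fit can be done with scipy, as in the sketch below; the concentrations and uptake rates are invented placeholders, not the study's measurements.

```python
# Sketch of the Michaelis-Menten fit described above (done in GraphPad Prism in
# the study), here with scipy.optimize.curve_fit. Uptake values are invented.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * [S] / (Km + [S])"""
    return vmax * s / (km + s)

# Substrate concentrations (uM) and initial uptake rates (nmol/mg protein/min).
conc = np.array([1, 2.5, 5, 10, 25, 50, 100, 250], dtype=float)
rate = np.array([0.35, 0.80, 1.40, 2.30, 3.60, 4.30, 4.80, 5.10])

popt, pcov = curve_fit(michaelis_menten, conc, rate, p0=[5.0, 10.0])
vmax, km = popt
vmax_se, km_se = np.sqrt(np.diag(pcov))
print(f"Vmax = {vmax:.2f} +/- {vmax_se:.2f} nmol/mg protein/min")
print(f"Km   = {km:.1f} +/- {km_se:.1f} uM")
```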
Supplementary figure legend (excerpt), shRNA lentiviral particle knockdown experiments in mouse OCT1-overexpressing cells and wild-type mice: data show mRNA levels in the livers and kidneys of control mice and mice that received a hydrodynamic tail vein injection of shRNA to OCT1, and the maximal plasma concentration of thiamine after a single intraperitoneal injection of 2 mg/kg thiamine (with 4% 3H-thiamine) administered to four groups of mice (Oct1+/+ with control shRNA, n = 6; Oct1+/+ with Oct1 shRNA, n = 6; Oct1-/- with control shRNA, n = 3; and Oct1-/- with Oct1 shRNA, n = 3). Data are normalized to Oct1+/+ mice treated with control shRNA and shown as mean ± SEM (unpaired two-tailed Student t test; * p < 0.05, ** p < 0.01, *** p < 0.001; underlying data in S1 Data). Abbreviations: EV, empty vector; hOCT1-Ref, human OCT1 reference; hOCT1-420Del, human OCT1 with the methionine 420 deletion; hOCT1-420Del+G465R, human OCT1 with the glycine-465-to-arginine mutation in addition to 420Del; OCT1, organic cation transporter 1; shRNA, short hairpin RNA; Slc, solute carrier.
v3-fos-license
2017-09-17T19:25:26.895Z
2012-09-27T00:00:00.000
19566674
{ "extfieldsofstudy": [ "Geology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://cdn.intechopen.com/pdfs/41478.pdf", "pdf_hash": "7a9a1a8f0a08d0b09cb2583864ae05956e945124", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2796", "s2fieldsofstudy": [ "Geology" ], "sha1": "41761db14d67f6d4cd5aafbd071c90513ab9a86d", "year": 2012 }
pes2o/s2orc
Monogenetic Basaltic Volcanoes: Genetic Classification, Growth, Geomorphology and Degradation Plate motion and associated tectonics explain the location of magmatic systems along plate boundaries [1], however, they cannot give satisfactory explanations of the origin of intra‐ plate volcanism. Intraplate magmatism such as that which created the Hawaiian Islands (Figure 1, hereafter for the location of geographical places the reader is referred to Figure 1) far from plate boundaries is conventionally explained as a result of a large, deep-sourced, mantle-plume [2-4]. Less volumetric magmatic-systems also occur far from plate margins in typical intraplate settings with no evidence of a mantle-plume [5-7]. Intraplate volcanic sys‐ tems are characterized by small-volume volcanoes with dispersed magmatic plumbing sys‐ tems that erupt predominantly basaltic magmas [8-10] derived usually from the mantle with just sufficient residence time in the crust to allow minor fractional crystallization or wallrock assimilation to occur [e.g. 11]. However, there are some examples for monogenetic eruptions that have been fed by crustal contaminated or stalled magma from possible shal‐ lower depths [12-19]. The volumetric dimensions of such magmatic systems are often com‐ parable with other, potentially smaller, focused magmatic systems feeding polygenetic volcanoes [20-21]. These volcanic fields occur in every known tectonic setting [1, 10, 22-28] and also on other planetary bodies such as Mars [29-33]. Due to the abundance of monogen‐ etic volcanic fields in every tectonic environment, this form of volcanism represents a local‐ ized, unpredictable volcanic hazard to the increasing human populations of cities located close to these volcanic fields such as Auckland in New Zealand [34-35] or Mexico City in Mexico [36-37]. . Overview map of the location of the volcanic field and zones mentioned in the text. The detailed location of specific volcanic edifices mentioned in the text can be downloaded as a Google Earth extension (.KMZ file format) from http://www.intechopen.com/. Eruption of magma on the surface can be interpreted as the result of the dominance of magma pressure over lithostatic pressure [50,[75][76]. On the other hand, freezing of magma en route to the surface are commonly due to insufficient magma buoyancy, where the lithostatic pressure is larger than the magma pressure, or insufficient channelling/focusing of the magma [50,[76][77][78]. Once these small-volume magmas (0.001 to 0.1 km 3 ) intrude into the shallow-crust, they are vulnerable to external influences such as interactions with groundwater at shallow depth [79][80][81][82]. In many cases, the eruption style is not just determined by internal magma properties, but also by the external environmental conditions to which it has been exposed. Consequently, the eruption style becomes an actual balance between magmatic and environmental factors at a given time slice of the eruption. However, a combination of eruption styles is responsible for the formation of monogenetic volcanoes with wide range of morphologies, e.g. from conical-shaped to crater-shaped volcanoes. The morphology that results from the eruption is often connected to the dominant eruptive mechanisms, and therefore, it is an important criterion in volcano classifications. 
Diverse sources of information regarding eruption mechanism, edifice growth and hazards of monogenetic volcanism can be extracted during various stages of the degradation when the internal architecture of a volcano is exposed. Additionally, the rate and style of degradation may also help to understand the erosion and sedimentary processes acting on the flanks of a monogenetic volcano. The duration of the construction is of the orders of days to decades [83][84]. In contrast, complete degradation is several orders of magnitude slower process, from ka to Ma [68,71,73]. Every stage of degradation of a monogenetic volcano could uncover important information about external and internal processes operating at the time of the formation of the volcanic edifice. This information is usually extracted through stratigraphic, sedimentary, geomorphic and quantitative geometric data from erosion landforms. In this chapter, an overview is presented about the dominant eruption mechanism associated with subaerial monogenetic volcanism with the aim of understanding the syn-and post-eruptive geomorphic and morphometric development of monogenetic volcanoes from regional to local scales. Typical ascent of the magma feeding eruptions through a monogenetic volcano starts in the source region by magma extraction from melt-rich bands. These melt-rich bands are commonly situated in a low angle (about 15-25°) to the plane of principal shear direction introduced by deformation of partially molten aggregates [95,[102][103]. The degree of efficiency of melt extraction is dependent on the interconnectivity, surface tension and capillary effect of the solid grain-like media in the mantle, which are commonly characterized by the dihedral angle between solid grains [104][105]. When deformation-induced strain takes place in a partially molten media, it increases the porosity between grains and triggers small-scale focusing and migration of the melt [104]. With the continuation of local shear in the mantle, the total volume of melt increases and enhances the magma pressure and buoyancy until it reaches the critical volume for ascent depending on favourable tectonic stress setting, depth of melt extraction and overlying rock (sediment) properties [16,42]. The initiation of magma (crystals + melt) ascent starts as porous flow in deformable media and later transforms into channel flow (or a dyke) if the physical properties such as porosity/permeability of the host rock are high enough in elastic or brittle rocks in the crust [50,75,[106][107]. The critical vol-ume of melt essential for dyke injections is in the range of a few tens of m 3 [76], a volume which is several orders of magnitude less than magma batches feeding eruptions on the surface, usually ≥0.0001 km 3 [39,108]. An increase in melt propagation distance is possible if small, pocket-fed initial dykes interact with each other [50,76], which is strongly dependent on the direction of maximum (σ 1 ) and least principal stresses (σ 3 ), both in local and regional scales [109] and the vertical and horizontal separation of dykes [50,76,110]. These dykes move in the crust as self-propagating fractures controlled by the density contrast between the melt and the host rock from the over-pressured source zone [50]. The dykes could remain connected with the source region or propagate as a pocket of melt in the crust [111][112]. The geometry of such dykes is usually perpendicular to the least principal stress directions [108,111]. 
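The role of the melt-host rock density contrast in driving dyke propagation, noted above, can be illustrated with a back-of-the-envelope estimate: for a dyke that remains connected over a vertical extent h, the buoyancy-derived driving pressure scales as Δρ·g·h. The densities and dyke heights in the sketch below are assumed round numbers for basaltic melt and crustal rock, not values from the chapter.

```python
# Back-of-the-envelope estimate of the buoyancy-derived driving pressure of a
# dyke: delta_P ~ (rho_rock - rho_magma) * g * h for a dyke connected over a
# vertical extent h. Input values are assumed round numbers, not data from
# the chapter.
G = 9.81  # gravitational acceleration, m/s^2

def driving_pressure_mpa(rho_rock, rho_magma, height_m):
    """Buoyancy overpressure (MPa) across a dyke of vertical extent height_m."""
    return (rho_rock - rho_magma) * G * height_m / 1e6

for h_km in (1, 5, 10, 30):
    dp = driving_pressure_mpa(rho_rock=2800.0, rho_magma=2600.0,
                              height_m=h_km * 1e3)
    print(f"h = {h_km:2d} km -> driving pressure ~ {dp:5.1f} MPa")
```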
The lateral migration of magma en route is minimal in comparison with its vertical migration. This implies that the vent location at the surface is a good approximation to the location of melt extraction at depth, i.e. the magma footprint [42, 54, 108]. An important implication of this behaviour is that interactions between magma and pre-existing structures are expected within the magma footprint area [54, 108]. Correlations between pre-existing faults and dykes are often recognized in volcanic fields [10, 53, 108, 113-115]. Channelization of magma by a pre-existing fracture such as a fault is most likely in the case of high-angle faults (i.e. 70-80°) and shallow depths [53], when the magma pressure is less than the tectonic strain taken up by faulting [42, 53]. These monogenetic eruptions have a wide variation in eruptive volumes. Volumetrically, two end-member types of volcanoes have been recognized [5, 109, 116]. Large-volume (≥1 km³ or polygenetic) volcanoes are formed by multiple ascents of magma that use more or less the same conduit system over a long period of time, usually ka to Ma, and have complex phases of construction and destruction [86, 117-119]. The spatial concentration of melt ascents, and the longevity of such systems, are usually caused by the formation of magma storage systems at various levels of the crust beneath the volcanic edifices [120-122]. In these magma chambers, stalled magma can evolve by differentiation and crystallization on ka timescales [123]. On the other hand, a small-volume (≤1 km³ or monogenetic) volcano is referred to as one that "erupts only once" [e.g. 116]. The relationship between large and small volume magmatic systems and their volcanoes is poorly understood [1, 5, 109, 124-127]. Nevertheless, there is a wide volumetric spectrum between small and large (monogenetic and polygenetic) volcanoes, and these two end-members naturally offer the potential for transitional types of volcanoes to exist. An ascent event is not always associated with a single batch of magma, but commonly involves multiple tapping events (i.e. multiple magma batches), creating a diverse geochemical evolution over even a single eruption [9, 11, 16-17, 45, 128]. Multiple melt batches involved in a single event may be derived from the mantle directly or from stalling magma ponds around high density contrast zones in the lithosphere, such as the upper-mantle/crust boundary [9, 128] and/or the ductile/brittle boundary zone in the crust [16]. A volcanic eruption at the surface is considered to be the result of a successful coupling between internal processes, such as melt extraction rate and dyke interaction en route to the surface [50, 76, 110], and external processes, such as local and regional stress fields in the crust.

Construction of monogenetic volcanoes

The ascent of magma from source to surface usually involves thousands of interactions between external and internal processes, thus the pre-eruptive phase works like an open system. Once single or multiple batch(es) of magma start their ascent to the surface, there is continuous degassing and interaction with the environment at various levels en route. At the surface, the ascending magma can feed a volcanic eruption that may be explosive or effusive. Important characteristics of a volcanic explosion are determined at shallow depth (≤1-2 km) by the balance between external and internal factors such as chemical composition or the availability of external water.
Volcanic eruptions are usually characterized by discrete eruptive and sedimentary processes that are important elements of the formation and emplacement of a monogenetic vent itself.

Internal versus external-driven eruptive styles

The current classification of volcanic eruptions is based mainly on characteristics such as magma composition, magma/water mass ratio, volcanic edifice size and geometry, tephra dispersal, dominant grain-size of pyroclasts and (usually eye-witnessed) column height [e.g. 149]. If the ascending melt or batches of melt reach the near-surface or surface region, they will behave either explosively or intrusively/effusively. Explosive magma fragmentation is triggered either by the dissolved magmatic volatile content [150] or by the conversion of thermal to kinetic energy and expansion during magma/water interaction [151-152], producing distinctive eruption styles. These eruption styles can be classified on the basis of the dominance of internal or external processes. Internally-driven eruptions are promoted by dissolved volatiles within the melt that exsolve into a gas phase during decompression of the magma [153-155]. The volatiles are mainly H₂O with minor CO₂, the latter exsolving at higher pressure and therefore greater depth than H₂O [e.g. 156]. Expansion of these exsolved gases to form bubbles in the magma suddenly lowers the density of the rising fluid, causing rapid upward magma acceleration and eventually fragmentation along bubble margins [150, 155, 157-159]. The growth of gas bubbles by diffusion and decompression in the melt occurs during magma rise until the bubble volume fraction exceeds 70-80% of the melt, at which point magma fragmentation occurs [160-161]. Magmas with low SiO₂ contents, such as basalts and undersaturated magmas, have low viscosity, allowing bubbles to expand easily in comparison to andesitic and rhyolitic magmas. Thus these low-silica magmas generate mild to moderately explosive types of eruptions such as Hawaiian [e.g. 162], Strombolian [e.g. 153], violent Strombolian [e.g. 163] and, in very rare instances, sub-Plinian types [e.g. 164, 165]. There is a conceptual difference between Hawaiian and Strombolian-style eruptions: in the former case magmatic gases rise together with the melt [154], whereas in Strombolian-style eruptions an essentially stagnant magma has gas slugs that rise and bubble through it, generating large gas-slug bursts and foam collapse at the boundary of the conduit [153, 166]. According to the rise speed-dependent model, bubbles form during magma ascent [150], while in the foam collapse model, bubbles up to 2 m in diameter are generated deeper, in the upper part of a shallow magma chamber, based on acoustic measurements at the persistently active Stromboli volcano in the Aeolian Islands, Italy [153]. A Hawaiian eruption results from one of the lowest-energy styles of magma fragmentation, driven mostly by the dissolved gas content of the melt, which produces lava fountaining along fissures or focussed fountains up to 500 m in height [150, 162, 167-168]. The lava fountaining activity ejects highly deformable lava 'rags' at about 200-300 m/s exit velocity with an exit angle that typically ranges between 30-45° from vertical [169-170]. The nature and distribution of the deposits associated with lava fountaining depend on the magma flux and the magma volatile content [162, 171-172].
Magmatic discharge rates during lava fountain activity range typically between 10 and 10⁵ kg/s [162, 166]. The duration of typical lava fountaining activity may last from only days up to decades. An example of the former is Kilauea Iki, which erupted in 1959 [167, 170], while an example of the latter is Pu'u 'O'o-Kupaianaha, which began to erupt in 1983 [173]. Both are located on Kilauea volcano on the Big Island of Hawaii, USA. Pyroclasts generated by lava fountaining are coarsely fragmented clots of magma which do not travel far from and above the vent [170-171]. They commonly land close to the vent and weld (i.e. mechanical compaction of fluid pyroclasts due to overburden pressure), agglutinate (i.e. flattening and deformation of fluid pyroclasts) or coalesce (i.e. homogeneously mixed melt formed from individual fluidal clots) due to the high emplacement temperature of the fragmented lava lumps on the depositional surface and/or the fast burial of lava fragments, which can retain heat effectively for a long time [171-172, 174]. The degree of welding and agglutination of lava spatter depends on the clast accumulation rate and temperature [170-172, 174]. As a result of the limited energy involved in this type of magma fragmentation, the coarsely fragmented lava clots are transported ballistically, while the finer fraction is carried in a low eruption column, analogous to column transport in Plinian eruptions [162, 176]. The fragments tend to accumulate in a proximal position, forming a cone-shaped pile, a spatter cone (Figure 2), <100 m in diameter and a few tens of metres in height, built up by an alternation of lava spatter and lava fountain-fed flows [170-172, 177-179]. Based on the grain size and the limited areal dispersion of tephra associated with typical Strombolian-style eruptions, they are considered the result of mild magma fragmentation [149, 155]. However, larger volumes of tephra are produced than in Hawaiian-style eruptions [159, 180]. Tephra production is derived from relatively low, non-sustained eruption columns [111, 153, 158, 181]. Individual explosions last <1 min and eject 0.01 to 100 m³ of pyroclasts to <200 m in height with particle exit velocities of 3-100 m/s [180]. Magma discharge rates of 10³ to 10⁵ kg/s are based on historical examples of volcanoes erupting water-rich, subduction-related magma [156]. The near-surface fragmentation mechanism and the limited energy released in a single eruption result in coarse lapilli- to block-sized pyroclasts, predominantly between 1 and 10 cm in diameter, accumulating in close proximity to the vent [84, 182-183]. The exit velocity and the exit angle of the ballistic particle trajectories (20-25°) determine the maximum height of the edifice and produce a limited size range of clasts in these volcanic edifices [184-185]. The repetition of eruptions produces individual, moderately to highly vesicular pyroclasts, called scoria or cinder, that do not agglutinate in most situations, but tend to avalanche downward, forming talus deposits on the flanks of the growing cone [185-187]. Due to the mildly explosive nature of the eruptions, and the relatively stable pyroclast exit angles and velocities, a well-defined, conical-shaped volcano is constructed, commonly referred to as a scoria or cinder cone (Figure 3). These cones have a typical basal diameter of 0.3 to 2.5 km, and they are up to 200 m high [153, 179, 182, 185, 188-189]. More energetic magma fragmentation than is normally associated with Strombolian activity causes violent Strombolian eruptions [163, 190].
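As a rough back-of-the-envelope illustration of the discharge rates and durations quoted above, the sketch below integrates an assumed constant mass discharge rate over an eruption duration and converts the result to a dense rock equivalent (DRE) volume; the chosen rate, duration and melt density are illustrative assumptions, not values from any specific eruption.

```python
# Illustrative sketch: total erupted mass and dense rock equivalent (DRE) volume
# for an assumed constant mass discharge rate and eruption duration.
# The numbers below are assumptions for illustration only.

SECONDS_PER_DAY = 86_400.0
DRE_DENSITY = 2700.0  # kg/m^3, assumed density of non-vesicular basaltic rock


def erupted_mass(discharge_rate_kg_s: float, duration_days: float) -> float:
    """Total erupted mass (kg) for a constant discharge rate."""
    return discharge_rate_kg_s * duration_days * SECONDS_PER_DAY


def dre_volume_km3(mass_kg: float, density: float = DRE_DENSITY) -> float:
    """Convert an erupted mass to a dense rock equivalent volume in km^3."""
    return mass_kg / density / 1e9


if __name__ == "__main__":
    # Hypothetical Hawaiian-style fountaining episode: 1e4 kg/s for 30 days.
    mass = erupted_mass(1e4, 30)
    print(f"mass = {mass:.2e} kg, DRE volume = {dre_volume_km3(mass):.4f} km^3")
    # -> roughly 2.6e10 kg, i.e. ~0.01 km^3 DRE, within the small-volume range
```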
In 'normal' Strombolian-style eruptions, the magma is separated by gas pockets, which rise periodically through the conduit, forming a coalescence of gas pockets, or a slug flow regime [153]. When gas segregation increases, the eruptions become more explosive due to episodic rupture of the liquid films of large bubbles, causing the flow regime to change from slug flow to churn flow, which is a typical characteristic of violent Strombolian activity [163]. Based on numerical simulations, the increase in gas flux that creates the "churn flow" is caused by factors such as an increased conduit length, a change in magma flux from 10⁴ to 10⁵ kg/s, the gas content, and/or ascent speed variations that allow the magma to vesiculate variably within the conduit [156, 163, 191]. The larger energy release during more explosive eruptions produces a higher degree of fragmentation, and hence finer-grained, ash-lapilli dominated beds [191], as well as higher eruption columns (<10 km) that disperse tephra efficiently over longer distances [83, 163]. Externally-driven fragmentation occurs when the melt interacts with external water, leading to phreatomagmatic or Surtseyan-style eruptions [152, 192-195]. These explosive interactions take place when magma comes into contact with porous- or fracture-controlled groundwater aquifers or surface water [151, 194, 196-202]. In special cases, when explosive interactions take place between lava and lake or sea water or water-saturated sediments, littoral cones [203-204] and rootless cones [205-208] are generated. Processes and eruption mechanisms associated with these eruptions are not discussed in the present chapter. The evidence for the role of water in the formation of tuff rings and maars has been documented by many studies [e.g. 152, 201, 209]. However, similar eruptive processes and eruption styles have been described from eruptions of silica-undersaturated magmas (e.g. foidite, melilitite and carbonatite) in environments where the role of external water on the eruptive style is limited [e.g. 210-212].

Figure 4. Schematic illustration showing the typical volcano-sedimentary processes and geomorphologic features. Note that the left-hand side represents the characteristics of a maar-diatreme volcano formed in a hard-substrate environment, while the right-hand side represents a soft-rock environment. Abbreviation: PDC – pyroclastic density current.

Phreatomagmatic eruptions (rarely called Taalian eruptions) are defined by some as occurring in subaerial environments [194]. These eruptions may produce a series of volcanic craters which vary in size between 0.1 km and 2.5 km in diameter [213]. The largest ones are very likely to be generated by multiple eruptions forming amalgamated craters, such as Lake Coragulac maar, Newer Volcanics Province, south-eastern Australia [214], and/or to be formed in specific environments, such as Devil Mountain maar, Seward Peninsula, Alaska [215]. The fragmentation itself is triggered by molten fuel-coolant interaction (MFCI) processes requiring conversion of magmatic heat to mechanical energy [151, 193-194, 216-220]. The MFCI proceeds as follows [151, 220]: 1. coarse premixing of magma and water, producing a vapour film between fuel and coolant; 2. collapse of the vapour film, generating fragmentation of magma and producing shock waves; 3. rapid expansion of superheated steam to generate thermohydraulic explosions; as well as 4. post-eruption (re)fragmentation of molten particles.
Surtseyan-style eruptions occur when the external water is 'technically' unlimited during the course of the eruption, i.e. when eruptions take place through a lake or the sea [222, 251-254]. In contrast with phreatomagmatic eruptions, Surtseyan-style eruptions require a sustained bulk mixing of melt and coolant, which generates more abrupt and periodic eruptions [194, 196]. During Surtseyan-style eruptions, water is flashed to steam, which tears apart large fragments of the rising magma tip [222, 252, 254-256]. This process is far less efficient than self-sustained typical MFCI and causes a near-continuous ejection of tephra [194]. This tephra feeds subaqueous pyroclastic density currents, which build up a subaqueous volcanic pile that may emerge to become an island in the course of the eruption [222, 253-254, 257-259], as was the case during the well-documented eruption of Surtsey tuff cone, Vestmannaeyjar Islands, Iceland, in 1963-1967 AD [260-261]. After emergence, a conical volcano can cap the edifice and build a typical steep-sided tuff cone (Figure 5). The tuff cone grows gradually by rapidly expelled and frequent (every few seconds) tephra-laden jets that eject muddy, water-rich debris, which may later initiate mass flows on the inner crater wall and on the outer, steepening flank of the growing cone [253, 260, 262-265]. These shallow explosions eventually produce a cone form, although it often has an irregular geometry, with a crater breached or filled by late-stage lava flows, or an asymmetric crater rim [223, 234, 259, 264]. The crater diameters of these tuff cones are comparable to those of tuff rings and maars, but the elevations of the crater rims are higher, reaching up to 300 m [223]. Monogenetic volcanoes formed by Surtseyan eruptions typically have no diatreme below their crater; however, some recent research suggests that diatremes may exist beneath a few tuff cones, such as Saefell tuff cone, south Iceland [266], or Costa Giardini diatreme, Iblean Mountains, Sicily [267].

Spectrum of basaltic monogenetic volcanoes

As documented above, five types of monogenetic volcanoes are conventionally recognized [177, 179]. This classification is primarily based on the morphological aspects and dominant eruption styles of these volcanoes. Furthermore, there is a strong suggestion that a given eruption style results in a given type of volcanic edifice, e.g. that Strombolian-style eruptions create scoria cones [e.g. 111, 182]. The conventional classification also fails to account for the widely recognized diversity or transitions in eruption styles that may form 'hybrid' edifices, e.g. intra-maar scoria cones with lava flows, or scoria cones truncated by late-stage phreatomagmatism [229, 244, 268-271]. The variability in the ways a monogenetic volcano can be constructed also means that the conventional classification hides important details of complexity that may be important from a volcanic hazard perspective (e.g. a volcano built up by initial phreatomagmatic eruptions and later, less dangerous, Strombolian eruptions). The diversity of pyroclastic successions relates to fluctuations in eruption style that may be triggered by changing conduit conditions, such as geometry, compositional change, and variations in magma and/or groundwater supply [41, 52, 150, 163, 272-273]. Due to the abundance of intermediate volcanoes, a classification scheme is needed in which the entire eruptive history can be parameterized numerically.
In the present study, the construction of a small-volume volcano is based on two physical properties (Figure 6): 1. the eruption style and associated sedimentary environment during an eruption, and 2. the number of eruption phases. A given eruption style is a complex interplay between internal and external controlling parameters at the time of magma fragmentation. The internally-driven eruption styles are, for example, controlled by the ascent speed, composition, crystallization, magma degassing, number of magma batches involved, rate of cooling, dyke and conduit wall interactions, depth of gas segregation and volatile content such as H₂O, CO₂ or S [9, 11, 17, 111, 128, 150, 154-156, 163, 188, 191, 274-276]. These processes give rise to eruption styles in basaltic magmas that are equivalent to the Hawaiian, Strombolian and violent Strombolian eruption styles. However, due to the small volume of the ascending melt, the controls on magma fragmentation can be dominated by external parameters, including conduit geometry, substrate geology, vent stability/migration, climatic setting, and the physical characteristics of the underlying aquifers [39, 82, 234, 277-279]. Another important parameter in the construction of a monogenetic volcanic edifice is the number of eruptive phases contributing to its eruption history (Figure 6). The complexity of a monogenetic landform increases with an increasing number or combination of eruptive phases. These can be described as "single", "compound" and "complex" volcanic edifices or landforms [280-281]. In this classification, the volcano is the outcome of combinations of eruption styles repeated over m phases. For example, a one-phase volcano requires only one dominant eruption mechanism during its construction. Due to the single eruption style, the resulting volcano is considered to be a simple landform with possibly simple morphology. However, monogenetic volcanoes tend to involve two or more phases (Figure 6). Their construction requires two or more eruption styles and the result is a compound or complex landform, respectively, e.g. maar-like scoria cones truncated by late-stage phreatomagmatic eruptions [e.g. 82, 270, 282] or a tuff cone with late-stage intra-crater scoria cone(s) [e.g. 265, 283]. These phases may occur at many scales, from a single explosion (e.g. a few m³) to an eruptive unit comprising the products of multiple explosions of the same eruption style.

Figure 6. Eruption history (E) defined by a spectrum of eruptive processes determined by internal and external parameters at a given time. The initial magma (in the centre of the graph, in red) is fragmented with the help of internal and external processes, which determine the magma fragmentation mechanism and eruption style (phase 1). If a change occurs (e.g. sudden or gradual exhaustion of groundwater, a shift in vent position or the arrival of a new magma batch), it will trigger a new phase (phase 2, 3, 4, …, n; moving away from the pole of the diagram) until the eruption ceases. Note that black circles with a white "L" denote lava effusion. If the erupted magma is dominantly basaltic in composition, the colours correspond to Surtseyan (dark blue), phreatomagmatic (light blue), Strombolian (light orange), violent Strombolian (dark orange) and Hawaiian (red) eruption styles.

To put this into a quantitative context, this genetic diversity can be expressed as a set of matrices, similar to Bishop [280].
In Bishop [280], the quantitative taxonomy is represented by matrices of volcanic landforms based on surface morphologic complexity and eruption sequences. The role of geomorphic signatures is reduced by the fact that, in the case of an eruption centre built up by multiple styles of eruption, not all eruption styles contribute to the geomorphology. For example, a scoria cone constructed by tephra from a Strombolian-style eruption might be destroyed by a late-stage phreatomagmatic eruption, as documented from the Pinacate volcanic field in Sonora, Mexico [82, 284] and Al Haruj in Libya [282]. In these cases, the final geomorphologies resemble maar craters, but the formation of such volcanoes is more complex than that of a classical, simple maar volcano. In the proposed classification scheme, the smallest genetic entity (i.e. the eruption styles and their order) is used to define the eruption history of a monogenetic volcano quantitatively. Considering only the typical, primitive basaltic compositional range (SiO₂ ≤ 52 wt.%), the internally-driven eruption styles are the Hawaiian, Strombolian and violent Strombolian eruption styles [111]. At the other end of the spectrum, the externally-driven eruption styles are the phreatomagmatic and Surtseyan types [81, 285]. In addition, effusive activity can also be included in this genetic classification. The abovementioned eruption/effusion styles can build up a volcano in combinations described by 6×1, 6×6, 6×6² or 6×m (or n×m) matrices, depending on the number of volcanic phases involved in the course of the eruption. This means that the eruption history (E) of a simple volcano (E_simple) can be written as a 6×1 matrix, where the elements 1, 2, 3, 4 and 5 correspond to explosive eruption styles, namely Hawaiian, Strombolian, violent Strombolian, Taalian (or phreatomagmatic) and Surtseyan-type eruptions, respectively, while element 6 is the effusive eruption. More complex eruption histories involving two (E_compound) or multiple (E_complex) eruption styles can be written as 6×6 and larger (6×m) matrices, respectively. For instance, a monogenetic volcano with an eruption history of fire-fountain activity associated with a Hawaiian-type eruption followed by effusive activity could be described as having a compound eruption history (or E_16 in Figure 6). An example of a monogenetic volcano with a complex eruption history could be a volcanic edifice with a wide, 'maar-crater-like' morphology, but built up from variously welded or agglutinated scoriaceous pyroclastic rock units (e.g. E_1264 in Figure 6), similar to Crater Elegante in the Pinacate volcanic field, Sonora, Mexico [284]. In some cases, gaps, a paucity of eruptions, or the opening of a new vent site after vent migration between eruptive phases is observed or expected based on reconstructed stratigraphy [e.g. 52, 279] and geochemistry [e.g. 9, 286]. The recently recognized polymagmatic or polycyclic behaviour of monogenetic volcanoes, e.g. an eruption fed by more than one batch of magma with distinct geochemical signatures [17, 23, 287], can also be integrated in this classification system. For example, such a volcano could be E_44 if the controls on eruption style remained the same, or E_42 if the chemical change is associated with a change in eruption style. The number of rows and columns in these matrices can be increased until all types of eruption style are described numerically, so that an n×m matrix is created. Increasing the number of volcanic phases increases the range of volcanoes that could possibly be created.
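To make the coding scheme above concrete, the short sketch below encodes an eruption history as an ordered sequence of style codes (1 = Hawaiian, 2 = Strombolian, 3 = violent Strombolian, 4 = Taalian/phreatomagmatic, 5 = Surtseyan, 6 = effusive) and labels it as simple, compound or complex from the number of phases, a simplification consistent with the E_16 and E_1264 examples in the text; the function and variable names are illustrative choices, not part of the published scheme.

```python
# Minimal sketch of the code-based eruption-history notation described above.
# Style codes follow the text: 1 Hawaiian, 2 Strombolian, 3 violent Strombolian,
# 4 Taalian (phreatomagmatic), 5 Surtseyan, 6 effusive.

STYLE_NAMES = {
    1: "Hawaiian",
    2: "Strombolian",
    3: "violent Strombolian",
    4: "Taalian (phreatomagmatic)",
    5: "Surtseyan",
    6: "effusive",
}


def eruption_code(phases: list[int]) -> str:
    """Concatenate phase style codes into a label such as 'E_1264'."""
    if not all(p in STYLE_NAMES for p in phases):
        raise ValueError("unknown style code")
    return "E_" + "".join(str(p) for p in phases)


def classify(phases: list[int]) -> str:
    """One phase -> simple, two phases -> compound, more -> complex."""
    if len(phases) == 1:
        return "simple"
    if len(phases) == 2:
        return "compound"
    return "complex"


if __name__ == "__main__":
    for history in ([4], [1, 6], [1, 2, 6, 4]):
        print(eruption_code(history), classify(history),
              [STYLE_NAMES[p] for p in history])
    # -> E_4 simple, E_16 compound, E_1264 complex
```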
Of course, the likelihoods of the various eruptive combinations described by these matrices are not the same, because there are 'unlikely' (e.g. E_665) and 'common' eruptive scenarios (e.g. E_412). In summary, a volcano from monogenetic to polygenetic can be described as a matrix with elements corresponding to the discrete volcanic phases occurring through its evolution. The major advantage of these matrices is that their size is unbounded (n×m), thus an infinite number of combinations of eruption styles can be described (Figure 6). In this system each volcano has a unique eruptive history; in other words, each volcano is a unique combination of n eruption styles through m volcanic phases (Figure 6). This matrix-based classification scheme helps to solve terminological problems and to describe volcanic landforms numerically. For example, the diversity of scoria cones from spatter-dominated to ash-dominated end-members [68, 288] cannot easily be expressed within the previous classification scheme. This fully quantitative coding of volcanic eruption styles into matrices could be used for numerical modelling or volcanic hazard models, e.g. the spatial intensity of a given eruption style.

Historical perspective

The combination of eruption styles (listed above) and the related sedimentary processes are often considered to be the major controls on a monogenetic volcano's geomorphic evolution [84]. Thus, the quantitative topographic parameterization of volcanoes is an important source of information that helps to reveal details about their growth, eruptive processes and associated volcanic hazards, and it is applicable to both conical [119, 190, 289-292] and non-conical volcanoes [199, 293]. These methods are commonly applied to both polygenetic [119, 289, 294-297] and monogenetic volcanic landforms [68-69, 298-299]. Morphometric measurements on monogenetic volcanoes began with the pioneering work of Colton [300], who noticed a systematic change in the morphology of volcanic edifices over time due to erosional processes such as surface wash and gullying. A surge of research in volcanic morphometry, focused mostly on scoria cones, occurred from the 1970s to the 1990s, when the majority of morphometric formulae were established and tested [70-71, 84, 179, 185, 199, 213, 223, 293, 301-304]. This intense period of research was initiated by the National Aeronautics and Space Administration (NASA) in the 1960s and 1970s due to an increasing interest in the extraterrestrial surfaces that could be expected to be encountered during landings on bodies such as the Moon or Mars [e.g. 33]. Additional interests were to understand magma ascent, the lithospheric settings of extraterrestrial bodies, the evolution of volcanic eruptions, the geometry of volcanoes under different atmospheric conditions, surface processes, and the search for H₂O on extraterrestrial bodies [30, 33, 305-308]. Given the lack of field data from extraterrestrial bodies, parameters that could be measured remotely, such as edifice height (H_co), basal diameter (W_co) and crater diameter (W_cr), were introduced. These were measured manually from images captured by the Mariner and Viking orbiter missions [e.g. 33] and the Luna or Apollo missions for the Moon [e.g. 309], in order to compare these data with the geometry of volcanic landforms on Earth [e.g. 179, 293].
Dimensions such as crater diameter were measured directly from these images, while the elevation of the volcanic edifices was estimated by photoclinometry (i.e. from the shadow dimensions of the studied landform) [33, 309-310]. Because elevation measurements were indirect, horizontal dimensions such as W_co and W_cr were preferred in the first morphometric parameterization studies [179]. The increased need for Earth analogues led to intense and systematic study of terrestrial small-volume volcanoes [179, 185, 189, 293]. The terrestrial input sources, such as topographic/geologic maps and field measurements, were more accurate than the extraterrestrial input sources; however, they were still below the accuracy required (i.e. contour line intervals of ≥20 m were not dense enough to capture the topography of a monogenetic volcano having an average size of ≤1500-2000 m horizontally and ≤100-150 m vertically). The extensive research on monogenetic volcanoes identified general trends regarding edifice growth, eruption mechanism and subsequent degradation [71, 84, 185, 293]. In addition, morphometric signatures were recognized that associated a certain type of monogenetic volcanic landform with the discrete eruption style that formed it. The morphometric signatures of Earth examples were then widely used to describe and identify monogenetic volcanoes on extraterrestrial bodies such as the Moon and Mars [179]. Basic morphometric parameters were calculated and geometrically averaged to obtain morphometric signatures for four types of terrestrial monogenetic volcanoes [179]; for spatter cones, for example, W_co = 0.08 km and W_cr/W_co = 0.36.

Morphology quantified via morphometric parameters can be a useful tool to address such questions in volcanology, geology and geomorphology. The morphology of a volcanic edifice contains useful information from every stage of its evolution, including the eruptive processes, edifice growth and degradation phases. However, the geomorphic information extracted through morphometric parameters often shows bi- or even multi-modality, i.e. the morphometry is a mixture of primary and secondary attributes [e.g. 338]. The following section explores the dominant volcanological processes that influence the geomorphology of a monogenetic volcano.

Syn-eruptive process-control on morphology

The eruption styles shaping a volcanic edifice may undergo many changes during the eruption history of a monogenetic volcano (Figure 6). A given volcano's morphology and the grain size distribution of its eruptive products are generally viewed as the primary indicators of the eruption style that forms a well-definable volcanic edifice (e.g. "Strombolian-type scoria cones"). This oversimplification of monogenetic volcanoes, together with the widely used definition that "they erupt only once" [116], suggests simplicity in terms of magma generation, eruption mechanism and sedimentary architecture. This supposedly simple and homogeneous inner architecture of each classical volcanic edifice, such as spatter cones, scoria cones, tuff rings and maars, led to the identification of a "morphometric signature". The morphometric signature of monogenetic volcanoes has been used in the terrestrial environment, e.g. to ascribe a relationship between morphometry and "geodynamic setting" [337], as well as in extraterrestrial environments, e.g. for volcanic edifice recognition [31, 179, 339-341].
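As a simple illustration of how such signature parameters are derived from basic edifice dimensions, the sketch below computes the H_co/W_co and W_cr/W_co ratios and an average outer slope, treating the edifice as an idealized truncated cone; the input values are hypothetical and the truncated-cone geometry is an assumption, not the exact procedure used in the cited studies.

```python
import math

# Hypothetical sketch: basic morphometric ratios for a cone-type edifice,
# assuming an idealized truncated-cone geometry.

def morphometric_signature(h_co_m: float, w_co_m: float, w_cr_m: float) -> dict:
    """Return H_co/W_co, W_cr/W_co and an average outer flank slope (degrees)."""
    if not (w_co_m > w_cr_m >= 0 and h_co_m > 0):
        raise ValueError("expected W_co > W_cr >= 0 and H_co > 0")
    flank_half_width = (w_co_m - w_cr_m) / 2.0  # horizontal extent of one flank
    slope_deg = math.degrees(math.atan(h_co_m / flank_half_width))
    return {
        "H_co/W_co": h_co_m / w_co_m,
        "W_cr/W_co": w_cr_m / w_co_m,
        "avg_outer_slope_deg": slope_deg,
    }


if __name__ == "__main__":
    # Illustrative scoria-cone-sized edifice: 120 m high, 900 m base, 300 m crater.
    print(morphometric_signature(120.0, 900.0, 300.0))
    # -> H_co/W_co ≈ 0.13, W_cr/W_co ≈ 0.33, average outer slope ≈ 22°
```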
Certain types of volcanoes can be discriminated from each other based on their morphometric signature, but some general assumptions need to be made. The morphometric signature concept is based on the assumptions that a volcanic landform relates directly to a certain well-defined eruption style and that the resultant volcanic landform is emplaced in a closed system, with no transitions between eruption styles, especially from externally to internally-driven eruption styles and vice versa. Consequently, the edifice studied is assumed to have a relatively simple eruption history, which is the classical definition of a "monogenetic volcano". As demonstrated above, however, monogenetic volcanoes develop in an open system. This section explores the volcanological/geological constraints on the geomorphic processes responsible for the final volcanic edifice and the morphometric development of two end-member types of monogenetic volcanoes: crater-type (4.2.1) and cone-type edifices (4.2.2).

Crater-type monogenetic volcanoes

Crater-type monogenetic volcanoes, such as tuff rings and maar volcanoes (Figure 4), are characterized by a wide crater with the floor above or below the syn-eruptive surface, respectively [81, 152, 223, 234]. Their primary morphometric signature parameters are the major/minor crater diameter and depth, crater elongation and breaching direction, volume of the ejecta ring, and the slope angle of the crater wall [313, 342-345]. Of these morphometric parameters, the crater diameter has been used widely for interpreting crater growth during the formation of a phreatomagmatic volcano. For the genetic interpretation of crater growth and, consequently, of the crater diameter values of terrestrial, dominantly phreatomagmatic volcanoes, there are fundamentally two end-member models. The first is the incremental growth model (Figure 7A). In this model, the crater's formation is related to many small-volume eruptions and subsequent mass wasting, which shape the crater and the underlying diatreme [81, 151, 199-200, 209, 221, 285, 346-347]. Growth initiates when the magma first interacts with external water, possibly groundwater along the margin of the dyke intrusion, triggering molten fuel-coolant interactions (MFCI) [192-193, 220, 348]. These initial interactions excavate a crater at the surface, while the explosion loci along the dyke gradually deepen the conduit beneath the volcano towards the water source, resulting in a widening crater diameter [199]. This excavation mechanism initiates gravitational instability of the conduit walls, triggering slumping and wall-rock wasting, which contribute to the growing crater [81, 195, 199, 223, 229-230, 349]. This classical model suggests that 1. crater evolution is related to diatreme growth underneath, and 2. the crater's growth is primarily a function of the deep-seated eruptions at the root zone. However, it is more likely that the pyroclastic succession preserved at the rim of the crater records only a certain stage of the evolution of the whole volcanic edifice. For instance, it is highly unlikely that juvenile and lithic fragments from a deep explosion (i.e. at the depth of a typical diatreme, about 2 km) would be erupted and deposited within the ejecta ring. Rather than being dominated by deep-seated eruptions, explosions can occur at variable depths within the diatreme [347].
The individual phreatomagmatic eruptions from various levels of the volcanic conduit create debris jets (solids + liquid + magmatic gases and steam), which are responsible for the transportation of tephra [285, 347, 350]. Every small-volume explosion causes upward transportation of fragmented sediment in the debris jet, giving rise to small, continuous subsidence/deepening of the crater floor [198-199]. This is in agreement with stratigraphic evidence from eroded diatremes, such as Coombs Hills, Victoria Land, Antarctica [285, 350] and the Black Butte diatreme, Missouri River Breaks, Montana [351]. The second model is one in which the crater geomorphology is dominated by the largest explosion event during the eruption sequence. In this case, the crater size directly represents the 'peak' (or maximum) energy released during the largest possible shallow explosion [202, 313, 343-344, 352]. This model of crater growth (Figure 7B) for phreatomagmatic volcanoes has been proposed on the basis of analogues from phreatic eruptions, such as the Usu craters, Hokkaido, Japan, in 2000 [352], and experiments on chemical and nuclear explosions [344]. In this model, the crater diameter (D) scales with the total volume of ejected tephra (V_ejecta) [313], a relationship that can further be converted into an explosion energy (E) as:

E = 4.45 × 10⁶ D^3.05 (5)

which approximates the largest energy released during the eruptions. This relationship between crater size and ejected volume was based on historical examples of phreatomagmatic eruptions [313]. These historical eruptions are, however, usually associated with polygenetic volcanoes, such as Taupo or Krakatau, and not with classical monogenetic volcanoes, except for the Ukinrek maars, near Peulik volcano, Alaska. In this model, the largest phreatomagmatic explosion governs the final morphology of the crater, so the crater size correlates directly with the peak energy of the maar-forming eruption [343]. Scaled experiments have shown that there is a correlation between the energy and the crater depth and diameter [353] if the explosions take place at the surface [344]. Most of the explosions modelled by Goto et al. [344] were single explosions; only a few cratering experiments involved multiple explosions at the same point [344], which is more realistic for monogenetic eruptions. In these multiple explosions, the crater did not grow by subsequent smaller explosions, possibly because the blast pressure was lower than the rock strength when it reached the previously formed crater rim [344]. As noted by Goto et al. [344], such experimental explosions on cratering are not applicable to underground eruptions; therefore, they do not express the energy released by deep-seated eruptions generating three-phase (solid, gas and fluid) debris jets during diatreme formation [e.g. 350]. Morphologically, these deep-seated eruptions have a minor effect on crater morphology and diameter, and their deposits rarely appear within the ejecta ring around the crater. Theoretically, both emplacement models are possible, because both mechanisms can contribute significantly to the morphology of the resulting landform. The incremental growth model is based on stratigraphy, eye-witnessed historical eruptions and experiments [81, 152, 198-199, 346-347], while the largest explosion model is based on analogues from chemical or nuclear explosion experiments, phreatic eruptions and impact cratering [313, 344, 353-354].
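For orientation on the magnitudes implied by equation (5), the snippet below evaluates the explosion energy for a few crater diameters; it simply applies the empirical relation quoted above (D in metres, E in joules), and the chosen diameters are arbitrary examples.

```python
# Explosion energy from crater diameter using the empirical relation quoted in
# the text (equation 5): E = 4.45e6 * D**3.05, with D in metres and E in joules.

def explosion_energy_joules(crater_diameter_m: float) -> float:
    """Peak explosion energy implied by a given crater diameter."""
    if crater_diameter_m <= 0:
        raise ValueError("crater diameter must be positive")
    return 4.45e6 * crater_diameter_m ** 3.05


if __name__ == "__main__":
    for d in (100.0, 500.0, 1000.0, 2500.0):  # arbitrary example diameters (m)
        print(f"D = {d:6.0f} m  ->  E ≈ {explosion_energy_joules(d):.2e} J")
    # e.g. a 1 km wide crater corresponds to ~6e15 J in this scaling
```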
Based on eye-witnessed eruptions and the geological record, the crater diameter as a morphometric signature for maar-diatreme and tuff ring volcanoes is the result of a complex interplay between the eruptions and the substrate. The dominant controls on the final crater diameter are the many small-volume explosions with various energies migrating vertically and horizontally within the conduit system, together with gradual mass wasting that depends on the rock strength. On the other hand, the substrate beneath the volcano also plays an important role in defining crater morphology, as highlighted for terrestrial volcanic craters [209, 229, 355], as well as for extraterrestrial impact craters [356]. In different substrates, different types of processes are responsible for the mass wasting. For example, an unconsolidated substrate tends to be less stable because explosion shock waves may liquefy water-rich sediments and induce grain flow and slumping, enlarging the crater [229]. In a hard rock environment, by contrast, the explosions and associated shock waves tend to fracture the country rock, depending on its strength, leading to rock falls and the sliding of large chunks from the crater rim [229]. The crater walls in these two contrasting environments show different slope angles [229, 345]. These differences in the behaviour of the substrate during volcanic explosions may cause some morphological variation in the ejecta distribution and the final morphology of the crater (Figure 8). The crater diameter is an important morphometric parameter in volcanic landform recognition; however, its final value is the result of a complex series of processes, usually involving syn-eruptive mass wasting of the crater walls, e.g. during the 1977 formation of the Ukinrek maars in Alaska [357-358]. This makes the direct interpretation of crater diameter values more complicated than predicted by simple chemical and nuclear cratering experiments [313, 344]. Thus, the incremental growth model (multiple eruptions + mass wasting) seems to be a better explanation of the growth of a crater during phreatomagmatic eruptions [198-200, 267, 346, 359] and kimberlite volcanism [360-361]. The morphometric data of a fresh maar or tuff ring volcano therefore contain cumulative information about the eruption energy, the location and depth of the (shallow) explosion loci, as well as the stability of the country rock and the associated mass wasting. The largest-explosion-dominated model may only be suitable for expressing energy relationships without the effects of mass wasting from the crater walls. Such conditions probably exist at only a few sites. For example, these eruptions should involve a small volume of magma, limiting the duration of volcanic activity and reducing the possibility of the development of a diatreme underneath (Figure 8). Additionally, these eruptions should occur in a consolidated hard rock environment, with high rock strength and stability. The following model for the interpretation of crater diameter and morphology data can be applied to phreatomagmatic volcanoes (Figure 8). This model integrates both conceptual models for crater and edifice growth, but the majority of craters experience complex development rather than being dominated by the largest explosion event. The likelihood of a morphology dominated by the largest explosion event is limited to simple, short-lived eruptions, due to limited magma supply or vent migration, in a hard rock environment (Figure 8).
If the magma supply persists, a diatreme starts to develop underneath, which has a further effect on the size, geometry and morphology of the crater of the resultant volcano. The crater diameter is often expressed as a function of the basal edifice diameter (W_co) or height (H_co), creating ratios which are commonly used in landform recognition in extraterrestrial environments [177, 179, 340]. W_co is often difficult to measure because of the subjectivity in boundary delimitation [293]. This high uncertainty in delimiting the edifice boundary is a result of the gradual thinning of tephra with distance from the crater, usually with no distinct break in slope between the ejecta ring's flanks and the surrounding tephra sheet, e.g. at the Ukinrek maars, Alaska [357-358]. Any break in slope may also be smoothed away by post-eruptive erosional processes. Crater rim height estimates vary greatly for maar-diatreme and tuff ring volcanoes, but they are usually ≤50 m [199, 223, 293]. This small elevation difference from the surrounding landscape gives rise to accuracy issues, particularly regarding the establishment of the edifice height. To demonstrate the limitations (e.g. input data accuracy, data type, and genetic oversimplification) of morphometric signature parameters for phreatomagmatic volcanoes, two examples (Pukaki and Crater Hill) were selected from the Quaternary Auckland volcanic field in New Zealand. Both volcanoes were used to establish the average morphometric signature of an Earth-analogue phreatomagmatic volcano [e.g. 293]. The early morphometric parameters were measured from topographic maps having coarse contour line intervals (e.g. 20-30 m), which cannot capture the details of the topography accurately. Cross-checks were made between the basic morphometric parameters established from topographic maps and those from Digital Elevation Models (DEMs) derived from an airborne Light Detection And Ranging (LiDAR) survey. The results showed that the differences in each parameter could be as high as ±40%. In addition, both the Pukaki and the Crater Hill volcanoes were listed as "tuff rings" [293], due to the oversimplified view of monogenetic volcanism in the 1970s and 80s. Their eruption histories, including volume, facies architecture and morphology, are completely different. The present crater floor of Pukaki volcano is well below the syn-eruptive surface, thus it is a maar volcano sensu stricto following Lorenz [199]. It was formed by a magma-water interaction driven phreatomagmatic eruption from a small volume of magma, 0.01 km³, estimated from a DEM and corrected to a Dense Rock Equivalent (DRE) volume [362-363]. The present facies architecture of this volcano appears quite simple (e.g. like an E_4 volcano in Figure 6). On the other hand, Crater Hill has a larger eruptive volume of 0.03 km³ [362-363] and experienced multiple stages of phreatomagmatism (at least 3) and multiple stages of magmatic eruptions (at least 5), with many transitional layers between them, forming an initial tuff ring and an intra-crater scoria cone [80]. Later eruptions formed an additional scoria cone and an associated lava flow that filled the crater with lava up to 120 m in thickness [80, 364]. Consequently, Crater Hill is an architecturally complex volcano with a complex eruption history, i.e. at least an E_4226 volcano.
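The DRE correction mentioned for the Pukaki and Crater Hill volume estimates is, in essence, a density re-scaling of the bulk deposit volume; the sketch below shows that standard conversion with assumed bulk-deposit and dense-rock densities (the specific density values are illustrative assumptions, not those used in the cited studies).

```python
# Sketch of a dense rock equivalent (DRE) correction: rescale a bulk deposit
# volume by the ratio of deposit density to dense (non-vesicular) rock density.
# Density values are illustrative assumptions only.

def dre_volume_km3(bulk_volume_km3: float,
                   deposit_density: float = 1500.0,    # kg/m^3, loose tephra (assumed)
                   dense_rock_density: float = 2700.0  # kg/m^3, basalt (assumed)
                   ) -> float:
    """Convert a bulk (vesicular, porous) deposit volume to a DRE volume."""
    return bulk_volume_km3 * deposit_density / dense_rock_density


if __name__ == "__main__":
    bulk = 0.018  # km^3, hypothetical DEM-derived bulk deposit volume
    print(f"bulk = {bulk} km^3  ->  DRE ≈ {dre_volume_km3(bulk):.3f} km^3")
    # -> ~0.010 km^3, of the same order as the Pukaki estimate quoted above
```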
The important implication of the examples above, and of the complex patterns and processes involved in the establishment of the final geomorphology (e.g. incremental crater growth), is that a morphometric signature (if it exists) can only be used for phreatomagmatic volcanoes if the eruption history of the volcano has been reconstructed. In other words, parameters expressing a morphometric signature cannot be compared between volcanoes with different eruption histories (i.e. volcanoes characterized by different phases with different eruption styles). Comparison without knowledge of the detailed eruption history could be misleading. Furthermore, the morphometric signature properties used for crater-type volcano recognition on extraterrestrial bodies should be reviewed, using volcanological constraints on the selection of reference volcanoes.

Cone-type monogenetic volcanoes

Cone-type monogenetic volcanoes, such as spatter (Figure 2), scoria (or cinder; Figure 3) and tuff cones (Figure 5), are typically built up by the proximal accumulation of tephra from low to medium (0.1-10 km high) eruption columns and associated turbulent jets, as well as from blocks/bombs that follow ballistic trajectories [188, 223, 288, 365-367]. Deposition from localized pyroclastic density currents is possible, mostly in the case of tuff cones [223, 265, 283] and rarely in the case of scoria cones formed by violent Strombolian eruptions [188]. The primary morphology of cone-type monogenetic volcanoes can be expressed by various morphometric parameters, including height (H_co), basal (W_co) and crater diameter (W_cr) and their ratios (H_co/W_co or W_cr/W_co), inner and outer slope angle, and elongation. On a fresh edifice, where no post-eruptive surface modification has taken place, these morphometric parameters are related to the primary attributes of eruption dynamics and syn-eruptive sedimentary processes. There are, however, two potentially valid models to explain the dominant construction mechanisms: ballistic emplacement with drag forces, and fallout from turbulent, momentum-driven jets in the gas-thrust region [84, 182, 185, 368-369]. In both models, the angle of repose requires loose, dry media. This criterion is rarely fulfilled in the case of tuff cones [e.g. 223, 234] and littoral cones that form during explosive interactions between lava and water [e.g. 203, 204]. In these cases the ejected fragments have high water contents that block the free avalanching of particles upon landing [223, 252, 259, 283, 286]. Because this is inconsistent with the behaviour of other, magmatic cone-type volcanoes, the growth processes of tuff and littoral cones are not discussed in further detail here. The ballistic model, with and without drag, for scoria cone growth was proposed as a result of eye-witness accounts of eruptions of the NE crater at Mt. Etna in Sicily, Italy [185]. This model is based on the assumption that the majority of the ejecta of a volcanic cone is coarse lapilli and blocks/bombs (≥8-10 cm in diameter), and thus follows a (near) ballistic trajectory after exiting the vent (Figure 9A). Consequently, the particle transport is momentum-driven, as documented for the bomb/block fraction during the bursting of large bubbles in the upper conduit during Strombolian-style explosive eruptions [180, 183, 370]. The particle velocity of such bombs/blocks was up to 70-80 m/s for a sensu stricto Strombolian-style eruption, measured from photoballistic data [371-372]. However, recent studies have found that typical exit velocities are about 100-120 m/s [180, 183] and may reach as high as 400 m/s [373].
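To give a feel for the length scales implied by these exit velocities, the sketch below evaluates a simple drag-free ballistic trajectory (maximum height and range) for an assumed exit speed and an exit angle measured from vertical; real pyroclasts experience significant drag, as the ballistic model discussed above acknowledges, so these numbers are upper-bound illustrations only.

```python
import math

# Drag-free ballistic trajectory for a pyroclast, as a rough upper bound.
# Exit angle is measured from vertical, following the convention in the text.

G = 9.81  # m/s^2


def ballistic_height_and_range(exit_speed_m_s: float,
                               angle_from_vertical_deg: float) -> tuple[float, float]:
    """Return (maximum height, horizontal range) in metres, neglecting drag."""
    elev = math.radians(90.0 - angle_from_vertical_deg)  # angle above horizontal
    v_vert = exit_speed_m_s * math.sin(elev)
    height = v_vert ** 2 / (2.0 * G)
    rng = exit_speed_m_s ** 2 * math.sin(2.0 * elev) / G
    return height, rng


if __name__ == "__main__":
    # Assumed values: 80 m/s exit speed, 20 degrees from vertical.
    h, r = ballistic_height_and_range(80.0, 20.0)
    print(f"max height ≈ {h:.0f} m, range ≈ {r:.0f} m (drag neglected)")
    # -> roughly 290 m height and 420 m range; drag reduces both considerably
```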
These studies also showed that the typical particle diameter is cm-scale or less, rather than dm-scale [180, 183], which cannot be explained purely by impact breakage of clasts upon landing [182, 368]. Especially during paroxysmal activity at Stromboli [374] or more energetic violent Strombolian activity [83, 163], the dominance of fine particles in the depositional record contradicts a purely ballistic emplacement model for the cones. To resolve the debate about cone growth, the jet fallout model was proposed [182], based on the fact that there is a considerably high proportion of fines in cone-building pyroclastic deposits [60, 182, 288]. This proposed behaviour is similar to proximal sedimentation from convective plumes, which forms cones with geometry similar to scoria cones, e.g. in the 1886 eruption of Tarawera, Taupo Volcanic Zone, New Zealand [375] or the 1986 eruption of Izu-Oshima volcano, Japan [368]. The fines content does not fulfil the criteria of a purely ballistic trajectory, thus turbulent, momentum-driven eruption jets (Figure 9A) must be part of the cone growth mechanism [182, 368, 376]. As documented above, scoria cones exhibit a wider range of granulometric characteristics [182] than previously thought [185]. These slight or abrupt changes of grain size within an edifice imply that the term "scoria cone" is not as narrow and well-defined as proposed in earlier studies [71, 185, 189, 301]. Consequently, there is a spectrum of characteristics within "scoria cones", indicating the existence of spatter-, lapilli- and ash-dominated varieties [68, 288]. Such switching from classical lapilli-dominated to ash- or block-dominated cone architectures reflects syn-eruptive reorganization of conduit-scale processes, including 1. multiple particle recycling and re-fragmentation during conduit cleaning [180, 377] or 2. changes in magma ascent velocity (i.e. an increase or decrease in the efficiency of gas segregation) that in turn affect the viscosity of the magma [150, 163, 378]. The latter case may or may not cause a change in eruption style (e.g. a shift from normal Strombolian-style to violent Strombolian-style eruption) that could effectively lead to changes in the grain size distribution of the ejecta, possibly skewing it towards finer fractions (≤1-2 cm). This switching has significant consequences for pyroclast transport as well. Higher efficiency of magma fragmentation and the production of finer pyroclasts (e.g. ash) cause more effective pyroclast-to-gas heat transfer in the gas-thrust region. A buoyant eruption column is created when the time of heat transfer is shorter than the residence time of fragments in the lowermost gas-thrust region [111, 182, 379]. Thus, particle transport shifts from momentum- to buoyancy-driven modes [182-183]. Based on numerical simulations, these changes in the way pyroclasts are transported are consistent with modelled sedimentation trends from jet fallout as a function of vent distance [182]. When the eruption produces medium (Mdϕ ≤ 10-20 mm) and coarse fragments (Mdϕ ≤ 50-100 mm), pyroclasts show an exponential decrease in sedimentation rate away from the vent [182]. This is in agreement with the trend predicted by the ballistic emplacement model [185]. However, once the overall fragment size is dominated by fines (Mdϕ ≤ 2-3 mm), the position of maximum sedimentation rate shifts further towards the crater rim [182]. The threshold particle launching velocity is about ≥50 m/s [182].
The fragment diameter of Mdϕ ≤ 2-3 mm is consistent with the calculated threshold for the formation of buoyant eruptive columns during violent Strombolian activity [111], but is significantly finer than the fragments (Mdϕ ≤ 10-12 mm) generated during some paroxysmal events recorded at Stromboli in Italy [374]. In the case of scoria cone growth, the final morphology depends not only on the mode of pyroclast transport through the air, but also on significant post-emplacement redeposition. If particles are sufficiently molten and hot, their post-emplacement sedimentation usually involves some degree of welding/agglutination, and rootless lava flows may form [171-172]. In this case, high irregularity in the flank morphology is expected due to the variously coalescent large lava clots and spatter [171-172, 380]. If the particles are brittle and cool, and have enough kinetic energy to keep moving, they tend to avalanche down the inclined syn-eruptive depositional surface [84, 141, 172, 185, 234, 288]. The avalanching grain flows often give rise to inversely-graded horizons or segregated lenses within the otherwise homogeneous, clast-supported successions of the accumulating pyroclast pile, while hot particles produce spatter horizons in the stratigraphy (Figure 10). In the former case, the criterion for sustaining efficient grain flow processes on the initial flank of a pyroclastic construct is that the particles have to behave as granular media (i.e. loose and sufficiently chilled). Properties such as grain size, shape and surface roughness determine the angle of repose, which is a material constant [381]. Together, these are responsible for the formation of the usually smooth cone flank morphologies. Classically, scoria cones are referred to as being formed by Strombolian-style eruptions, in spite of the fact that Stromboli volcano, Aeolian Islands, is not a scoria cone. In reality, scoria cones are formed by "scoria cone-forming" eruptions; the term scoria cone thus includes every sort of small-volume volcano with a conical shape and a basaltic to andesitic composition. Additionally, during scoria cone growth, three major styles of internally-driven eruption can be distinguished, Hawaiian, Strombolian and violent Strombolian, and an additional externally-driven, phreatomagmatism-dominated eruption style is also possible (e.g. Figure 6). Of the eruption styles listed above, at least the first three could individually form a "scoria cone", or a similar looking volcano, which is rarely or never taken into account during the interpretation of geomorphic data from a monogenetic volcano. Cone growth is here considered to be a complex interplay between many contrasting modes of sedimentation of primary pyroclastic material, including transport through the air (by turbulent jets and as ballistics) and subsequent redistribution by particle avalanching (Figure 9A). It is also important to note that cone growth is not only a constructive process; there can also be destructive phases. These processes (e.g. flank failure or crater breaching) alter the morphology in a short period of time. Consequently, edifice growth is not a straightforward process (e.g. a simple piling up of pyroclastic fragments close to the vent), but rather a combination of constructive and destructive phases at various scales. The spatial and temporal context of such constructive and destructive processes is an important factor from a morphometric standpoint.
In this chapter, two modes of cone growth are recognized (Figures 9B and C): cones formed by 1. a distinct and stable eruption style (e.g. E_simple), and by 2. various eruption styles with transitions between them (e.g. E_compound or E_complex) during their eruption histories. The simple cone growth model is applicable to cone growth from a single and stable eruption style, e.g. Strombolian, Hawaiian or violent Strombolian styles only (Figure 9B). Theoretically, if an edifice is formed by the repetition of one of these well-defined and stable eruption styles, without fluctuations in efficiency or other changes, the crater excavation and diameter, as well as the mode of pyroclast transport, should vary within a narrow range, i.e. the fragmentation mechanism is 'constant' (Figure 9B). The first explosions, when the gas bubbles manage to escape from the magma leading to explosive fragmentation of the melt, usually take place at the pre-eruptive surface or at a few tens of metres depth [57, 84, 185, 382-383]. After the first eruption, some time is needed either to excavate the crater or to pile up ejecta around the vent, which in turn depends on the eruption style and the tephra accumulation rate. Once the threshold crater rim height is reached, the rim height and its position are tied to the properties of the particular eruption dynamics (Figure 9B). In other words, the eruption style and the related pyroclast transport distribute tephra over limited vertical and horizontal distances. Due to the steadiness of the eruption style, particle fallout from the near-vent dilute jets in the gas-thrust region and from ballistics has an 'average' vertical travel distance. This 'average' determines the width and the relative offset of the crater rim above the crater floor, which grows rapidly during the initial establishment of the crater morphology (first cartoon in Figure 9B) and then slows down (second and third cartoons in Figure 9B). Of course, the location and morphology of the crater rim and floor are not dependent only on the efficiency of magma fragmentation, but also on subsequent wall rock failure, as documented by Gutmann [369], similar to the development of maar-diatreme volcanoes (Figure 7). Due to the stability of the conduit and the single eruption style, the role of such failures in controlling the morphology is minimal in comparison with complex modes of edifice growth (see later). This growth model is applicable to simple eruptions, with steady eruption styles and possibly steady magma discharge rates, such as the violent Strombolian eruptions during the Great Tolbachik fissure eruptions in Kamchatka, Russia [84, 382] or the Strombolian-style eruptions during the growth of the NE crater at Mt. Etna [185]. During the Tolbachik fissure eruptions, the rim-to-rim crater diameter of Cone 1 grew rapidly from 56 m to 127 m during the first 5 days, and later slowed and stayed within a narrow range of around 230-280 m during the rest of the eruption [84, 382-385]. This is similar to certain stages of growth of the NE crater at Mt. Etna [84, 185]. This means that the crater widens initially until a threshold width is reached, which corresponds to the maximum strength of the pyroclastic pile and the limits of the eruption style (Figure 9B). This trend appears consistent with an exponential growth of crater width over time until the occurrence of lava flows, as documented at Cones 1 and 2 of the Tolbachik fissure eruptions [e.g. 383].
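The 'rapid widening followed by a plateau' behaviour described for Tolbachik Cone 1 can be sketched with a saturating growth curve; the functional form below (an exponential approach to a maximum width) and its parameters are purely illustrative assumptions used to visualize the idea, not a fit to the published measurements.

```python
import math

# Illustrative saturating growth curve for rim-to-rim crater width over time:
# W(t) = W_max * (1 - exp(-t / tau)).  The functional form and the parameter
# values are assumptions chosen only to mimic "rapid initial widening, then a
# plateau"; they are not fitted to the Tolbachik observations.

def crater_width_m(t_days: float, w_max_m: float = 260.0, tau_days: float = 6.0) -> float:
    """Crater width (m) after t_days for an assumed saturating growth model."""
    return w_max_m * (1.0 - math.exp(-t_days / tau_days))


if __name__ == "__main__":
    for t in (1, 5, 10, 20, 40):
        print(f"day {t:3d}: width ≈ {crater_width_m(t):5.0f} m")
    # widths climb quickly in the first days and level off near w_max_m
```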
The maximum range of the crater width appeared to be reached once lava outflowed from the foot of the cones. This can be interpreted as an actual decrease in the magma flux fuelling the explosions, and therefore in the pyroclast supply for flank growth. Assuming that the magma is torn apart into small particles that are launched into the air with sufficiently high initial velocity, e.g. in Strombolian eruptions, the particles have enough time to cool down, and upon landing their kinetic energy therefore initiates avalanching. These processes will smooth the syn-eruptive surface to the angle of repose of the ejected pyroclasts if the pyroclast supply is high enough to cover the entire flank of the growing edifice. If the angle of repose of the tephra, θ, which depends on the granulometric characteristics, and the height of the crater rim, H, are known at every stage of the eruption (i = 1, 2, ..., n), the flank width (W_i) at a given stage of growth follows from simple trigonometry (Figure 9B; see the relation sketched after this passage). When the eruption style is constant during the eruption and produces pyroclasts with the same characteristics, the pyroclasts behave as granular media (i.e. they are controlled by the angle of repose); the aspect ratio between the height and the flank width should stay within a narrow range until the 'tandem' relationship is established between the crater rim and the explosion locus. Once the eruption is in progress, the explosion locus either stays at the same depth (e.g. at the pre-eruptive ground level) or migrates upwards if enough material piles up within the crater, resulting in a relative up-migration of the crater floor over time (Figure 9B). Consequently, this upward migration of the explosion locus should result in an elevation increase of the crater rim, if the eruption style remains the same. Once the crater rim rises, the majority of the pyroclasts avalanche downward from a higher position, which creates a wider flank and increases the overall basal width of the edifice.

Figure 9. Growth of cone-type volcanoes (e.g. scoria cones) through ballistic emplacement and jet fallout models. After the initiation of the volcanism (A), there are two different types of cone growth: simple and complex. The simple cone growth model (B) supposes a steady fragmentation mechanism and associated eruptive style and sedimentary processes, thus the angle of repose is near constant (θ_1 = θ_2 = θ_3) over the eruption history. The variation of the relative height of the crater rim (H) above the location of the explosion locus and of the radius of the crater (R) is 'fixed' or varies within a narrow range (H_1 ≤ H_2 = H_3 and R_1 ≤ R_2 = R_3) after the initial rapid growth of rim height and crater width. The simple cone growth model implies that the constructional and flank morphologies are the result of one major pyroclast transport mechanism (e.g. grain flows in the case of a scoria cone, or welding and the formation of rootless lava flows in the case of spatter cones). On the other hand, the complex cone growth model (C) involves gradual or abrupt changes in eruption style, triggering multiple modes of pyroclast transport and possible changes in the relative height and diameter of the crater rim (H_1 ≠ H_2 ≠ H_3). The granulometric diversity allows for post-emplacement pyroclast interactions, which permit or block the free avalanching of particles on the outer flanks. Therefore, the angle of repose is not constant and may not always be reached (θ_1 ≠ θ_2 ≠ θ_3), especially when the clast accumulation rate and temperature are higher, producing welded and agglutinated spatter horizons.
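From the geometry described above, in which the flank stands at the angle of repose θ and the crater rim sits a height H_i above the surrounding base, a first-order sketch of the flank width at stage i is simply

W_i = H_i / tan θ,

so a rim standing, say, 50 m above its base with tephra at θ ≈ 30° implies a flank roughly 85-90 m wide. For a whole cone of height H_co, basal width W_co and crater width W_cr, the same geometry gives H_co/W_co ≈ tan θ · (1 − W_cr/W_co)/2; taking θ ≈ 30° and an assumed crater-to-base width ratio of about 0.4 (an illustrative value, not one taken from the cited studies) yields H_co/W_co ≈ 0.17, broadly consistent with the narrow H_co/W_co range discussed below in connection with Eq. 3.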
The repetition of such crater rim growth and flank formation takes place until the magma supply is exhausted. Due to the dominance of loose and brittle scoria in the edifice, the H_co/W_co ratio falls within a narrow range, in accordance with Eq. 3. There is, however, a slight difference between H_co/W_co and H_i/W_i, because the former includes the crater. The crater is possibly the most sensitive volcanic feature of a cone and can be modified easily (e.g. by shifts in eruption style in the course of the eruption and/or by vent migration). Finally, this growth process is in agreement with the previously documented narrow ranges of the H_co/W_co ratio, which are governed by the angle of repose of the ejecta [84,301]. The complex cone growth model assumes that the cone is the result of many distinct eruption styles and of the changes between them, which together trigger a complex cone growth mechanism involving various eruptive and sedimentary processes (Figure 9C). Such changes usually involve shifts from one eruption style to another, which have consequences for the morphological evolution of the growing cone. Switches in the efficiency of magma fragmentation can be triggered by changes in the relative influence of externally- and internally-governed processes, or by a reorganization of the internal or external controls without shifting from one to the other. An example of the first type of change is a gradual alteration in the abundance of groundwater and a consequent shift from phreatomagmatic to magmatic eruption styles. On the other hand, a reorganization of processes within either the internally- or the externally-driven eruption styles could be related to changes in the degree of vesiculation and the efficiency of gas segregation in the conduit system. Each of these changes could modify the dominant eruption style, which determines the grain size, the pyroclast transport and, in turn, the edifice growth processes. Internal gas-driven magma fragmentation leading to a Hawaiian eruption produces larger (up to a few meters) magma clots that are emplaced ballistically, while fines are deposited from turbulent jets and a low eruption column, in agreement with the processes observed at Kilauea Iki in Hawaii [e.g. 176]. In this eruption style, coarse particles (e.g. lava clots up to 1-2 m in diameter) are commonly dominant. These large lava clots cannot solidify during their ballistic transport, and therefore after landing they may deform plastically, weld and/or agglutinate together or with other, smaller pyroclasts [171,380], depending on the accumulation rate and the clast temperature [172]. As a result of these efficient welding processes, landing is not usually followed by free avalanching, in contrast to the behaviour of loose, sufficiently cooled, brittle particles produced by other eruption styles, e.g. normal Strombolian activity. At the other end of the spectrum, energetic eruption styles, such as violent Strombolian or phreatomagmatic eruptions, tend to generate localized sedimentation from pyroclastic density currents such as base surges. As with spatter, pyroclasts deposited from pyroclastic density currents do not conform to the angle of repose. In contrast, pyroclasts from sensu stricto Strombolian eruptions are sufficiently fragmented to cool rapidly during transport, either as ballistics or as fallout from turbulent jets and eruption columns (Figure 9C). Thus, particles retaining sufficient kinetic energy after emplacement can feed grain avalanches on the outer flanks of the growing pyroclastic pile.
The active grain flows deposit pyroclasts on the flanks in accordance with the properties of the syn-eruptive depositional surface and the granulometric characteristics that set the kinetic and static angles of repose of the granular pile. This is sustained as long as the ejected material retains the same granulometric characteristics. With an increasing degree of magma fragmentation, the dominant grain size of the tephra decreases. Increased efficiency of magma fragmentation (e.g. violent Strombolian eruptions) commonly results in a higher eruption column, and therefore a broader dispersion of tephra. Highly variable and fluctuating eruption styles produce a wide range of textural varieties of edifice, such as Lathrop Wells in the Southwest Nevada Volcanic Field, Nevada [60,188], Pelagatos in the Sierra Chichinautzin, Mexico [14,378] or Los Morados, Payún Matru, Argentina [273]. Due to the variability in eruption styles, particles have different surface or granulometric characteristics. These differences induce fine-scale, post-emplacement pyroclast interactions. Such syn-eruptive pyroclast interactions could prevent effective grain flow processes, or 'reset' or delay (previous) sedimentary processes on the flanks of a growing conical volcano. Thus they could be a key control on the flank morphology of the resulting volcanic edifice. Examples of syn-eruptive granulometric differences can be found in the pyroclastic succession of the Holocene Rangitoto scoria cone in Auckland, New Zealand (Figure 10). During the deposition of beds with contrasting dominant particle sizes, the angle of repose is not always a function of the granulometric properties alone, but can also result from the mode of pyroclast interaction with the syn-eruptive depositional surface. There is a wide range of combinations of pyroclast interactions among ash, lapilli, blocks and spatter (Figure 10). Such transitions in eruption style, and the resulting pyroclast characteristics, are important because they prevent freshly landed granular particles from (immediately) conforming to the angle of repose that would be expected if they were rounded, dry and hard grains. For instance, an ash horizon deposited on a lapilli-dominated syn-eruptive surface must first fill the inter-particle voids, causing ash 'intrusions' into the lapilli media (Figure 10). These pyroclast interactions probably slow down the important cone-growing mechanisms (e.g. grain flow efficiency), creating a sedimentary delay before the angle of repose is established on the syn-eruptive surface. In the complex cone growth model, changes of eruption style affect the relative position of the crater rim by shifting the locus of maximum sedimentation and the mode of pyroclast transport [182]. For example, switching from normal Strombolian to violent Strombolian style eruptions could increase the initial exit velocity from the usual 60-80 m/s [180,183,372,386] to higher values of ≥150-200 m/s [111,382,387-388]. This shift in eruption style causes a further decrease in the grain size of the ejecta [163,367,389]. Altogether, this change could shift the point of maximum sedimentation closer to the crater rim if the tephra is transported by jets rather than on purely ballistic trajectories [182]. Such changes in eruption style introduce significant horizontal and vertical wandering of the crater rim during the eruption history of a monogenetic volcano (Figure 9C); a simple illustration of how strongly exit velocity alone controls where ballistic clasts can land is given below.
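To put numbers on the exit velocities quoted above, the drag-free ballistic range R = v² sin(2α)/g can be evaluated for the two velocity classes; real clasts experience drag and a spread of launch angles, so these figures are upper-bound illustrations rather than values from the cited studies.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def dragfree_range(exit_velocity, launch_angle_deg=45.0):
    """Maximum horizontal range (m) of a ballistic clast, neglecting air drag."""
    angle = math.radians(launch_angle_deg)
    return exit_velocity ** 2 * math.sin(2.0 * angle) / G

if __name__ == "__main__":
    for label, v in [("normal Strombolian", 70.0), ("violent Strombolian", 175.0)]:
        print(f"{label:20s} (v = {v:5.1f} m/s): ~{dragfree_range(v):6.0f} m")
```

Because the range scales with the square of the exit velocity, the potential reach jumps from a few hundred metres to a few kilometres between the two cases, which illustrates why a shift in eruption style can markedly redistribute where clasts and jet-transported tephra accumulate and, in turn, where and at what height the rim is built.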
This wandering in turn modifies the sedimentary environment, leading to the formation of cones with complex inner facies architectures that may or may not be reflected in the morphology. Consequently, the major control in this cone growth model is the eruption styles themselves and their fluctuations over the eruption history. In both cone growth models, lava flows can occur as a passive by-product of the explosive eruptions, 'draining' the degassed magma away from the vent. Once the magma reaches the crater without major fragmentation, it can form lava lakes and/or (later) intrude into the flanks, increasing the stress [111,298,369]. When the magma pressure exceeds the strength of the crater walls, the crater wall may collapse or be rafted outwards [111,369]. Sometimes the magma flows out from the foot of the cone, possibly fed by dykes [188,273,367,369,388,390-391]. In this case, flank collapse is initiated by inflation of the lava flow through discrete pulses of magma injected beneath the cooler dyke margins [392-393] under a certain flank sector of the cone [111]. If the lava yield strength is reached and the pressure exerted overcomes that generated by the total weight of a given flank sector, flank collapse and subsequent rafting of the remnants are common [111,273,390], leading to complex morphologies with breached craters and an overall horseshoe shape [322,394]. The direction of crater breaching of scoria cones may not always be a consequence of effusive activity, but may coincide with regional/local principal stress orientations or fault directions [298,315,395-396]. If the magma supply is sufficient, an edifice that has been (partially) truncated by slope failure could be (partially) rebuilt or 'rehealed', as documented at the Los Morados scoria cone in Payún Matru, Argentina [273] or at Red Mountain, San Francisco Volcanic Field, Arizona [397]. The direction of lava outflow from a cone commonly coincides with the overall inclination direction of the background syn-eruptive surface. This overall terrain tilt could cause differences in the tension within the downhill flank sector [82,298,391,398]. Changes of many kinds during basaltic monogenetic eruptions could cause sudden decompression of the conduit system, leading to a change in eruption style [273,399]. Consequently, these destructive processes are more likely to occur during complex edifice growth than during simple cone growth. These changes could account for the fine-scale morphometric variability and architectural diversity observed in granular pile experiments and field observations [68,182,298,314,398]. Evidence of fine-scale morphometric variability due to lava outflow and crater breaching has been obtained through systematic morphometric analysis of young (≤4 ka) scoria cones on Tenerife [398]. Two types of morphometric variability were found: intra-cone and inter-cone variability. Intra-cone variability was characterized between individual flank facets. The slope angle variability was calculated to be as high as 12° between flank sectors aligned along (±45°) and perpendicular (±45°) to the main axis of the tilt direction of the syn-eruptive terrain [398]. This is about a third of the entire natural spectrum of angles of repose of loose, granular material, i.e. scoria-dominated flanks [84,182]. Inter-cone variability was detected on cones of the same age. According to Wood [71], such fresh cones should fall within a narrow morphometric range (e.g.
slope angle of 30.8 ± 3.9°), based on the fresh and pristine scoria cones analysed in the San Francisco Volcanic Field, Arizona [71,324]. In contrast to this expected high value, the average slope angles of the studied cones on Tenerife turned out to vary from 22° to 30°, which has a significant impact on many traditionally used interpretations of morphometric data, including morphometry-based dating [e.g. 338], the morphometric signature concept and erosion rate calculations. Both inter- and intra-cone variability were interpreted as signs of differences in syn-eruptive processes, coupled between internal controlling factors, such as changes in the efficiency of fragmentation, magma flux, effusive activity and associated crater breaching [398], and external controlling factors, such as the interaction between pre-existing topography and the eruption processes [398]. All of these diverse eruptive and sedimentary processes are somehow integrated into the fresh morphology that is the subject of morphometric parameterization. Some of these processes can be detected using morphometric parameters at one or multiple scales, while others cannot. For instance, part of this morphometric variability is usually undetectable using topographic maps and manual geomorphic analysis. This fine-scale variability, possibly associated with syn-eruptive differences in cone growth rates and trends, is in some instances within the accuracy of the morphometric parameterization technique. For example, the uncertainty of a manual slope angle calculation from a 1:50 000 topographic map with 20 m contour intervals is about ±5° [e.g. 312]. Fine-scale morphometric variability cannot be assessed accurately with such a large analytical error. On a DEM (either contour-based or airborne-based) with high vertical and horizontal accuracies (e.g. a Root Mean Square Error under a few meters), this small-scale variability can be detected [e.g. 398]. An important consequence of this variability is that the initial geometry of cone-type volcanoes, such as scoria cones, is not confined to a narrow range, as previously expected [e.g. 71]. In other words, the morphometric signature of cone-type volcanoes is wider than previously described, limiting the possibility of morphometric comparison of individual edifices (especially eroded edifices) due to the lack of control on their initial geometries. On the basis of the eruptive diversity presented here, it seems that comparative morphologic studies should focus on comparing cones that had similar formative processes (i.e. E_simple, E_compound or E_complex) and have experienced limited post-eruptive surface modification (i.e. are younger than a few ka). The morphology of a fresh (≤ a few ka) cone-type volcano is the result of primary eruptive processes; therefore, the morphometric parameters should be interpreted as the numerical integration of such eruptive diversity and the mode of edifice growth. As stated by Wood [84], only fresh cones should be used for detecting the causes and consequences of changes in morphology. When cone-type volcanoes spanning a larger age spectrum, e.g. up to a few Ma [e.g. 312,337], are studied, the primary, volcanic morphometric signatures are modified by post-eruptive processes. Thus, they contain a mixture of syn- and post-eruptive morphometric signatures. Hence, the interpretation of large morphometric datasets should be handled with care. Furthermore, it is also evident that not all geomorphic changes experienced by the edifice during the eruption history are preserved in the final volcano morphology.
This could be due to, for instance, rehealing of the edifice after a collapse event, or to a changing eruption style, reflecting the complex nature of cone growth. Future research should focus on finding the link between eruptive processes and morphology, as well as on how syn-eruptive constructive and destructive processes can be discriminated from each other on an 'unmodified', fresh cone.

Degradation of monogenetic volcanoes

Once the eruption ceases, a bare volcanic surface is created with all of the primary morphologic attributes that have been determined by the temporal and spatial organization of the internally- and externally-controlled eruptive and sedimentary processes during the eruption history (Figure 6). The fresh surface is usually 'unstabilized' and highly permeable due to the unconsolidated pyroclasts, although there is often some degree of welding/agglutination, a cover of compacted ash, or a lava flow cover. The degradation processes acting on a volcanic landform have a significant effect on the alteration of the primary, volcanic geomorphic attributes (e.g. lowering of H_co/W_co or slope angle values). The transition from a primary (pristine) volcanic landform to an erosional landform fundamentally starts when, for example, soil forms, vegetation succession develops or the surface of the primary eruptive products becomes dissected. However, modification of the pristine, eruption-controlled morphology can also occur through non-erosional processes, for example rapid, post-eruptive subsidence of the crater of a phreatomagmatic volcano due to diagenetic compaction or lithification of the underlying diatreme infill during and immediately after the eruptions [400-401]. This usually leads to deepening of the crater or thickening of the sediments accumulating within it. Some compaction and post-eruptive surface fracturing, due to the gradual cooling of the conduit and fissure system, is also expected at cone-type volcanoes, for example after the formation of the Laghetto scoria cone at Mt. Etna, Italy [184] or the Pu'u 'O'o spatter/scoria cone in Hawaii [173]. These processes could cause some geomorphic modification that may affect the morphometric parameters. On the other hand, the long-term surface modification of a monogenetic volcanic landform is related to degradation and aggradation processes over the erosion history. The degradation processes that operate on volcanic surfaces can be classified into two groups on the basis of their frequency of occurrence and efficiency: 1. long-term (ka to Ma), slow mass movements, called 'normal degradation', and 2. short-term (hours to days), rapid mass movements, called 'event degradation'. In the following sections a few common degradation and aggradation processes are discussed briefly.

Long-term, normal degradation of monogenetic volcanoes

Normal degradation is a long-term (ka to Ma) mass-wasting process that occurs through a combination of various sediment transport mechanisms and erosion processes such as rill and gully erosion, raindrop splash erosion, abrasion or deflation. Normal degradation requires the initiation of an erosion agent, which is usually the 'product' of the actual balance between many internal and external degradation controls at various levels, such as the climate or the inner architecture (Figure 11). The external environment (e.g. annual precipitation, temperature or dominant wind direction etc.)
is recognized as a major control on degradation [68,71,324-325,337-338]. The abovementioned controls and the processes of chemical weathering interact in many ways, depending on the internal composition and characteristics of the volcanics exposed to the environment (Figure 11). The importance of internal controls on degradation seems to have been neglected by earlier studies of monogenetic volcanic edifices [e.g. 71], in contrast with more recent work [e.g. 68]. The facies architecture and granulometric characteristics of a volcanic surface govern how the edifice reacts to environmental impacts, e.g. whether the flanks drain rainwater 'overground', leading to the formation of rills and gullies, or allow infiltration [71,408-412]. The pyroclast-scale properties are determined by the fluctuation of eruption styles during the eruption history, leading to the accumulation of pyroclasts with contrasting geochemical, textural and granulometric characteristics. This pyroclast diversity is responsible for the varying rates of chemical weathering. Additionally, this diversity affects the mode and efficiency of sediment transport during the course of degradation. For instance, the 'stability' (or the amount of loose particles on the flanks) causes slight differences in the rates and styles of, and the susceptibility to, erosion. The stability could increase with the formation of mature/immature soil, a thick accumulation of weathering products, the denudation of a lava-spatter horizon and/or heavy vegetation cover, all of which help to stabilize the landscape. These changes at the mineral to pyroclast scale lead to transitions from 'unstabilized' to 'stabilized' stages. The duration of the transition depends on many factors (e.g. Figures 12A and B), such as the initial surface morphology, granulometric characteristics, volcanic environment and climatic setting [408-410,413-417]. The transition could be as short as a couple of years if the volcanic surface is dominated by fines, e.g. the ejecta ring around a tuff ring, and is exposed to a humid, tropical climate [408]. In arid climates the lag time before soil formation is significantly longer (if soil forms at all), up to 0.1-0.2 Ma [188,413]. There are extreme environments where soil/vegetation cover can barely develop, either because of high rates of volcanic degassing and acid rain, e.g. the intra-caldera environment of Ambrym, Vanuatu [e.g. 408], or because of cold polar conditions, e.g. Deception Island, Antarctica [e.g. 418,419]. Changes in surface stability could also be governed by the gradual denudation of inner, texturally compacted pyroclastic units (e.g. welded or agglutinated spatter horizons or zones). This leads to rock-selective erosion and a higher long-term preservation potential of the edifice [e.g. 68]. Consequently, the degradation processes cannot be separated from the architecture of the degrading volcanic edifice, and therefore the erosion history is strongly tied to the eruption history. In this respect, the erosion history and rates seem to be governed (at least in part) by the time-lagged denudation of pyroclastic beds with varying susceptibility to erosion. In other words, the rate and style of degradation are theoretically the 'inverse' of the eruption history if the external controls are steady over the erosion history.
The actual balance between the internal and external controls determines the dominant rates and mode of the sediment transport mechanism at a given point on the flanks of a monogenetic volcano (Figure 11). The mode and style of erosion of monogenetic volcanic landforms can be subdivided into 'overground' and 'underground' erosion. The long-term, overground degradation of volcanic surfaces can be accounted for by water-gravity erosion agents (including rainfall, sea or freshwater, underground water or ice, and various lateral movements of the sediment/soil cover due to gravity and water) and by wind. Raindrop splash erosion, for instance, corresponds to a long-term erosion rate of about 0.1 t/km/yr, calculated for a 2.5 mm median raindrop diameter on a flank with a 10° slope angle [422]. If the rainfall intensity exceeds the soil's infiltration capacity at any time during a rainfall event, overground flow, such as unchannelized sheet flow, can be generated [e.g. 423]. The erosion capacity of sheet flow is higher than that of rain-splash, but it is significantly lower than that of mass wasting once a rill and gully network has developed (e.g. Figures 12E and F). Drainage system development on the flanks of monogenetic volcanoes has been found to correlate with the age of the volcanic edifice [70,321,324,424]. However, the time required for drainage formation can be as short as a couple of months or years, and drainage can develop even on gentle flanks (e.g. a tuff ring with slope angles of 5-10° in Figures 12E and F); such an anomalously short period could introduce errors into relative, morphology-based dating [425]. Fluvial erosion can remove sediment at rates in the range of 10-100,000 t/km²/yr [e.g. 426]. These overground surface processes, such as rills and gullies and various types of soil/sediment creep [70-71,323,408,425], have been regarded as the major mass-wasting mechanisms over the degradation history of a monogenetic volcano. The effects of these overground degradation processes have been modelled mostly on scoria cones [324-325,327,337]. All varieties of soil and sediment creep and solifluction processes, such as soil and frost creep and gelifluction, are usually slow [e.g. 427] in comparison with surface runoff. Thus, these processes modify the morphology of the volcano flanks continuously and over a longer period of time. The rates of erosion vary depending on the topography (e.g. slope gradient), the sediment/soil properties (e.g. proportion of fines, moisture content) and the predominant climate (e.g. amount and type of precipitation, annual temperature), but rarely exceed downhill movement rates of 1 m/yr and volumetric rates of between 1×10⁻¹⁰ and 1×10⁻⁸ km³/km/yr [e.g. 427]. The sediment transport rates produced by solifluction are many orders of magnitude smaller than the erosional loss of a volcano by fluvial processes. In cold semi-arid and arid regions, ice plays the major role in sediment transport [e.g. 428].

Figure 11. Conceptualized model for the configuration of internal and external degradation controls in determining erosion agents and sediment transport processes at a given point on 'unstabilized' and 'stabilized' volcanic surfaces over the erosion history. Degradation of a monogenetic volcano takes place by both long-term (ka to Ma; dark and light yellow boxes) and short-term, event-degradation processes (hours to years; green boxes) with different rates over the erosion history. The effects of such degradation processes take place both 'overground' and 'underground'.
Consequently, surface modification and sediment movement are related to diurnal and annual frost activity, such as ground freezing and thawing cycles. On the scoria cones of periglacial Marion Island, southern Indian Ocean, the dominant sediment transport process on the flanks is needle-ice-induced frost creep related to diurnal and possibly annual frost cycles [428]. The frost creep rates are 53.2 cm/yr for ash (≥70% of grains ≤2 mm), 16.1 cm/yr for lapilli (≥30% of grains between 2-60 mm), and 2.6 cm/yr for bombs/blocks (≥70% of grains ≥60 mm), based on measurements on painted rocks [428]. The rates are primarily controlled by the predominant grain size of the sediment, the slope angle of the underlying terrain and the altitude [428]. Based on these transport rates, the processes are as fast as, or an order of magnitude faster than, rain-splash-induced pyroclast transport in a semi-arid environment such as the San Francisco volcanic field in Arizona [323]. Probably the most effective overground degradation process on a freshly created volcanic surface is wave-cut erosion. In rocky coastal regions, the wave-cut notch, moving backwards and sideways, removes mass by hydraulic action and abrasion [e.g. 429]. Wave-cut erosion mostly affects tuff cones that are located at the coast or offshore (Figures 12C, D, E and F). In this environment abrasion is a common syn-eruptive [430] or post-eruptive erosion process, e.g. at Surtsey [431-432] or during the early formation of Jeju Island, Korea [433]. For instance, post-eruptive coastal modification through abrasion generated an area loss of about 0.2 km² between 1975 and 1980 on Surtsey island, Iceland [431]. The rate of mass loss from non-volcanic coastal regions is in the range of 10,000 to 100,000 t/km/yr [e.g. 434]. This enhanced rate of mass removal is in agreement with the advanced state of erosion of a tuff cone that formed in 2005 within Lake Vui, in the caldera of Ambae volcano, Vanuatu [263]. Its initial surface was intensively modified by wave-cut erosion and slumping of the crater walls, leading to an enormous enlargement of the crater and to crater breaching within 10 months (Figures 12C and D). The effect of wind deflation is often limited, especially in humid climates. However, there are examples in volcanic environments where, in spite of high annual precipitation, sediment transport rates due to wind action are still significant, e.g. around 600 t/km/yr in some parts of southern Iceland [435]. Expressing the long-term effect and rate of sediment transport by wind is complicated by the high variability of wind intensities (i.e. storm events versus normal background intensities) and directions [e.g. 436]. Sediment transport fluxes can vary over a wide range as a function of wind energy and surface characteristics (e.g. sediment availability, vegetation cover or water saturation). Long-term sediment transport by wind in volcanic areas (e.g. Iceland) is in the range of 100 to 1000 t/km/yr [435-436]. However, this sediment transport rate is related to, but not equivalent to, the erosion rate. Furthermore, efficient wind transport as bed load, by creep and saltation, and as suspended load is limited to particles generally ≤8 mm in diameter [e.g. 436]. This granulometric limit is crucial for the long-term erosion of volcanic landforms built up from coarser pyroclasts, such as coarse lapilli-dominated scoria cones.
A significant increase is observed during storm events, when sediment transport rates can reach a few per cent of the annual flux within an hour [435]. Thus, long-term approximations of sediment transport rates can be interpreted as cumulative values of normal, background rates and increased, storm-related erosion rates [e.g. 435]. In addition, there are a few settings, such as Surtsey, where wind deflation is considerable: the strong wind is responsible for polishing palagonitized tephra surfaces and for transporting and redistributing unconsolidated tephra on the freshly created island [431]. Short-term volumetric change of young scoria cones (e.g. Laghetto or Monte Barbagallo, ca. 2700-2800 m asl) through wind deflation has been observed directly in the summit region of Mt. Etna, Sicily [326]. Surface modification is inferred to occur on the windward side of the Monte Barbagallo cones [326]. The wind likely induces some minor pyroclast disequilibrium on the flanks that may lead to minor rock fall events or initiate grain flows [326]. In truly semi-arid areas, wind erosion is an important transport agent and surface modifier on unstabilized volcanic surfaces, such as the Carapacho tuff ring in the Llancanelo Volcanic Field, Mendoza, Argentina (Figure 12A). The layer-by-layer stripping of the volcanic edifice is clearly visible on the windward side of the erosional remnant facing the Andes. On the other hand, wind-blown sediments can accumulate over time, leading in some cases to appreciable aggradation on volcanic surfaces (Figure 13A). The accumulation of aeolian material can contribute significantly to soil formation by supplying extra components, e.g. quartz or mica [415,437]. Due to the generally high roughness of pyroclast- or lava-rock-dominated surfaces (e.g. highly vesicular scoria or a'a lava flows), the wind slows down, leading to the deposition and later accumulation of wind-transported particles [415-416,437-438]. Wind-induced aggradation helps to shorten the transition from an 'unstabilized' to a 'stabilized' surface by developing desert pavements in semi-arid/arid environments [415-416]. The generally long-term, overground degradation processes mentioned above often account for most of the volumetric loss and surface modification of monogenetic volcanoes [e.g. 71]. In the case of underground degradation, surface water leaves the system through the groundwater if the soil infiltration capacity exceeds the rainfall intensity [423]. This underground water can remove weathering products (e.g. by leaching cations from the regolith). From such chemical weathering and erosion rates, the time scale of complete erosion of a scoria cone can be calculated. Assuming an average density of 1200 kg/m³ for moderately to highly vesiculated scoria deposits (i.e. volume loss rates of 3.4 to 341.6 m³/yr), the time scale of complete degradation would be between 6.4 Ma and 0.06 Ma at the constant degradation rates mentioned above. Of course, the rates of chemical weathering tend to slow down as the soil cover becomes thicker and the amount of weatherable parent rock is reduced [e.g. 442]. Apart from this, the underground erosion of volcanic edifices by infiltrating surface water (e.g. during the
initial stages of scoria cone degradation) and by groundwater flow could be a very important and effective long-term degradation process that should have some influence on the morphology of monogenetic volcanoes, especially in volcanic areas with high chemical weathering rates (e.g. humid, tropical areas with high annual temperatures). In summary, the degradation of a monogenetic volcano takes many orders of magnitude longer (≥100 ka to ≤50,000 ka) than its formation (≤0.01 ka). For example, the degradation of a small-volume (≤0.1 km³) volcanic edifice usually takes a couple to tens of Ma for welded and/or spatter-dominated edifices, such as those in the Bakony-Balaton Highland Volcanic Field in Hungary [68] or the Sośnica Hill volcano in Lower Silesia, Poland [335]. Phreatomagmatic volcanoes, especially those with diatremes, could degrade over an even longer period of time due to their significant vertical extent, e.g. the Oligocene Kleinsaubernitz maar-diatreme volcano in Eastern Saxony, Germany [401]. Over such long degradation times, the rates and style of post-eruptive surface modification of monogenetic volcanic landforms are generally sensitive to changes in the configuration and balance of the internal and external degradation controls (Figure 11). These changes could be triggered internally, e.g. by the denudation of a spatter-dominated or a fine ash horizon (e.g. Figure 13B), or externally, e.g. by long-term climate change or climate oscillation [39], initiating a gradual shift in the dominant mode of sediment transport. Each of these gradual changes (e.g. soil formation, granulometric and climatic changes etc.) causes a partial or complete reorganization of the controls on degradation. This adjustment of the erosion settings could result in a change of erosion agent that may increase or decrease the sediment yield on the flanks of the volcano, or leave it unchanged. All of these changes over the long erosion history systematically open new potential 'pathways' for erosion, leading to diverse erosion scenarios. Long-term degradation therefore appears to be an iterative process of repeated erosion-agent adjustment, triggered by many gradual changes over the erosion history of a volcano. In many previous erosion studies, the edifices are treated as individuals sharing the same internal architecture (i.e. configuration of pyroclastic successions) and initial geometry [e.g. 71] and degrading in accordance with the climate of the volcanic field [324-325,337]. Of course, climate is in general an important control on degradation, but climatic forces are in continuous interaction with the volcanic surface, which elevates the importance of the architecture and granulometric characteristics of the pyroclasts and lava rocks exposed in the volcanic edifice. In extreme cases, such as the Pukeonake scoria cone in the Tongariro Volcanic Complex, New Zealand (which has typical monogenetic edifice dimensions of 150 m in height and 900 m in basal width), there is an unusually wide granulometric contrast within the pyroclastic succession (Figure 13B). Additionally, the trends and processes of degradation are closely related to the characteristics of the exposed pyroclasts, which determine the rates of chemical weathering, the soil characteristics, the surface permeability and, in turn, the mode of sediment transport on the flanks (e.g. Figures 13C and D).
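As a rough consistency check on the complete-degradation time scales quoted in this section, the volume-loss rates can be converted into an erosion lifetime for an assumed edifice volume. The 0.022 km³ volume used below is an illustrative assumption (well within the ≤0.1 km³ small-volume class mentioned above), chosen only to show that the quoted rates and the 0.06-6.4 Ma range are mutually consistent; it is not a value taken from the cited studies.

```python
EDIFICE_VOLUME_KM3 = 0.022              # assumed small scoria cone volume (km^3)
VOLUME_LOSS_RATES_M3_YR = (3.4, 341.6)  # volume-loss rates quoted in the text

edifice_volume_m3 = EDIFICE_VOLUME_KM3 * 1e9  # convert km^3 to m^3

for rate in VOLUME_LOSS_RATES_M3_YR:
    lifetime_ma = edifice_volume_m3 / rate / 1e6  # years converted to Ma
    print(f"{rate:6.1f} m^3/yr -> complete removal in ~{lifetime_ma:5.2f} Ma")
```

At the slow end (3.4 m³/yr) this gives roughly 6.5 Ma and at the fast end (341.6 m³/yr) roughly 0.06 Ma, matching the quoted range and suggesting that those time scales refer to an edifice of a few hundredths of a cubic kilometre.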
Short-term, event degradation of monogenetic volcanoes

Event degradation processes take place within a short time frame (hours to days), but they can cause sudden disequilibrium in the degradational and sedimentary system. A monogenetic magmatic system tends to operate inhomogeneously both spatially, forming volcanic clusters, and temporally, forming volcanic cycles. Additionally, there are monogenetic volcanoes that occur as parasitic or satellite vents on the flanks of larger, polygenetic volcanoes, such as Mt. Etna in Sicily [317] or Mauna Kea in Hawaii [314]. The spatial and temporal closeness of volcanic events, however, poses a generally overlooked problem for the degradation of monogenetic volcanic landforms, namely tephra mantling or geomorphic truncation by the eruptive processes of a neighbouring volcano. Tephra mantling is considered to be an important process in the degradation of volcanic edifices, as stated by Wood [71] and White [412]. The average distance between neighbouring volcanoes in intraplate settings, such as Auckland in New Zealand, is about 1340 m (equivalent to an area of about 5.6 km² per volcano), while on the flanks of a polygenetic volcano/volcanic island, such as Tenerife in the Canary Islands, the average is about 970 m (about 2.9 km² per volcano); a short back-of-the-envelope check of these figures is given at the end of this paragraph. On the other hand, the typical area of a tephra blanket 1-2 cm thick ranges from 10 km² for Hawaiian eruptions [162] to 10³ km² for violent Strombolian eruptions [163,443-444] and for phreatomagmatic eruptions [358]. Consequently, individual edifices commonly lie within each other's eruption footprints (i.e. the area affected by primary sedimentation from the eruptions), showing the importance of tephra mantling. Furthermore, there are monogenetic volcanic edifices that develop on the flanks of larger polygenetic volcanoes, where mantling by tephra could be more frequent and more significant than in intraplate volcanic fields (e.g. Mt. Roja at the southern edge of Tenerife in the Canary Islands, Figure 13E). A tephra cover only a few centimetres thick can cause complete or partial damage to the vegetation canopy [411,445-447]. Mantling can reset all of the dominant surface processes, including the sediment transport systems, erosion agents, vegetation cover and soil formation processes. This leads to a reorganization of the degradation controls similar to that caused by long-term, gradual changes of external or internal factors during normal degradation, but on much shorter time scales (hours to years). The sedimentary response to mantling can occur instantly or with a slight delay. Increased erosion rates on older cones were documented immediately after tephra mantling by fine/coarse ash from Paricutin, Michoacán-Guanajuato, Mexico, between 1943 and 1952 [411]. The tephra that mantled the topography was fine (Mdϕ = 0.1-0.5 mm) and relatively impermeable, which led to the formation of new, extensive incisions by rill channels and to significant deepening of older gullies through the increased sediment yield [411]. In contrast, the sedimentary response to the Tarawera eruption in New Zealand was delayed because the tephra accumulated over the landscape was well sorted, coarse and highly permeable [448]. Mantling may also affect the vegetation cover (e.g. by burying or burning the vegetation) and the erosional agent responsible for shaping the morphology of the volcano. The long-term effect of this may be a longer preservation of the landform, or increased dissection that temporarily enhances the overall rates of erosion.
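The per-volcano areas quoted above correspond to circles whose radii equal the average vent spacing; the snippet below simply recomputes those areas and contrasts them with the quoted tephra-blanket footprints. It is an illustrative back-of-the-envelope comparison, not a reproduction of the cited spacing analyses.

```python
import math

def area_from_spacing(spacing_m):
    """Area (km^2) of a circle whose radius equals the average vent spacing."""
    return math.pi * (spacing_m / 1000.0) ** 2

for field, spacing in [("Auckland (intraplate)", 1340.0), ("Tenerife flank", 970.0)]:
    print(f"{field:22s}: ~{area_from_spacing(spacing):4.1f} km^2 per volcano")

# Tephra blankets 1-2 cm thick cover ~10 km^2 (Hawaiian) up to ~1000 km^2
# (violent Strombolian or phreatomagmatic), i.e. roughly 2-3 orders of
# magnitude more than the per-volcano areas computed above.
```

The recomputed values (about 5.6 and 3.0 km²) match the quoted figures to within rounding, and since even the smallest quoted blanket (~10 km²) exceeds the per-volcano area in both settings, overlap of eruption footprints, and hence mantling of neighbouring edifices, is to be expected wherever vents cluster.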
These changes will have an important influence on the majority of the morphometric parameters and on their pattern of change over the erosion history. Surface modification of an already formed monogenetic volcanic edifice could also be triggered by the formation of another monogenetic vent close by [52,412,449]. Amalgamated or nested volcanic complexes whose components formed with some time delay between them are common in volcanic fields, for example Tihany in the Bakony-Balaton Highland, Hungary [41], the Rockeskyllerkopf volcanic complex in the Eifel, Germany [450] or Songaksan on Jeju Island, Korea [9,278]. The eruption of nearby volcano(es) appears to be a common process that may produce minor truncation of surfaces, bomb/block-dominated horizons and discordances in the stratigraphic log. This type of 'event' degradation by a monogenetic eruption can modify the previously formed topography instantly. On pyroclastic surfaces with limited permeability (e.g. fine ash, lava spatter horizons or lava flows), rainwater tends simply to run off, depending on the actual infiltration rate and the rainfall intensity [409-410,447,451-452]. On such fine ash surfaces, infiltration rates are an order of magnitude lower than on a loose, lapilli-covered flank. This is visible on the flank of the La Fossa cone on Vulcano, Aeolian Islands (Figure 13C). The La Fossa cone is not a typical monogenetic volcano, but it has a geometry and size similar to those of a typical monogenetic volcanic edifice. Erosion on the La Fossa cone is characterized by surface runoff on the upper, steeper flanks (≥30°) built up of fine, indurated ash (Mdϕ = 100 µm), while erosion of the lower flanks (≥28° and Mdϕ = 1-2 mm) is usually due to debris flows forming levees and terminal lobes [410]. The strikingly different styles of mass wasting are interpreted to result from the lack of vegetation cover and the strong contrast in permeability and induration of the underlying pyroclastic deposits [410]. Erosion by debris flows forms deep and wide gullies even on flanks built up of permeable rocks, e.g. La Fossa on Vulcano [410] or the Benbow tuff cone on Ambrym, Vanuatu [408]. The triggering mechanism for a volcaniclastic debris flow is limited to periods of intense, heavy rainfall [408,410,451,453]. Thus it operates infrequently and typically redistributes pockets of a few tens of m³ of sediment [410]. Similarly to volcaniclastic debris flows, landslides can also be part of the event degradation processes, especially on steep flanks (e.g. cone-type morphologies). The susceptibility to landslides that remove large portions of the original volcanic edifice increases with the diversity of pyroclasts in the succession. In other words, the layer-cake, usually bedded, inner architecture of either the ejecta ring around a phreatomagmatic volcano or a scoria cone is highly susceptible to landsliding triggered, for instance, by heavy rain, earthquakes, animal activity or the surface instability of freshly deposited mantling tephra [e.g. 411]. Another event degradation process is wildfire, which is responsible for temporary increases (by as much as 100,000 times the 'background' sediment yield) in erosion rates and sediment yields on steep flanks [e.g. 456,457]. The major effects of a wildfire on a surface include the accumulation of ash, partial or complete destruction of the vegetation and of the organic matter in the soil, and modification of the soil structure (if any) and its nutrient content [e.g. 456].
These changes in surface properties lead to modifications of the porosity, bulk density and infiltration rates of the surface, promoting overground flow that is able to carry the increased sediment yield [e.g. 456]. The overground flow removes the fine sediment (e.g. volcanic ash and lapilli and non-volcanic ash) and the topsoil, causing an enrichment of coarser sediment on the surface. Animal activity is a commonly recognized agent of erosion, acting through compaction of the uppermost soil [e.g. 458,459] and/or linear dissection of the surface by trampling [e.g. 460,461]. As a result of compaction, rainwater cannot penetrate the soil cover easily, leading to overground flow that increases the erosion rate and sediment yield [e.g. 458]. Trampling by wild animals can be a source of rill and gully formation on flanks, mostly in semi-arid and arid environments [e.g. 460], particularly at crater-type volcanoes that commonly host post-eruptive lakes within their crater basins, such as Laguna Potrok Aike in the Pali Aike Volcanic Field, Patagonia, Argentina [462] or the Pula maar, Bakony-Balaton Highland, western Hungary [195,463]. These maar lakes create special habitats that could increase animal activity, creating more opportunities for animal-induced erosion. Event degradation processes, such as heavy-rain-induced debris flows, tephra mantling, landslides, post-wildfire runoff or animal activity, can individually trigger rapid geomorphic modification that affects the long-term degradation rates of volcanic edifices (Figure 11). Sediment yields can be a thousand times larger in response to event degradation than in the case of normal degradation. These events are usually randomly or inhomogeneously distributed over the erosion history of a volcanic landform, which makes quantifying their contribution to the total erosional loss complicated. Because of the significant surface modification involved, their effect on the morphometric parameters can be large and difficult to quantify.

Post-eruptive erosion of monogenetic volcanoes by normal and event degradation processes

The geomorphic state of (monogenetic) volcanoes is commonly expressed by various morphometric parameters, including edifice height, slope angle or H_co/W_co ratios. The values of these morphometric parameters usually show a decreasing trend over the course of the erosion history [70-71,290,299,317,320,464-465]. Consequently, the morphometric parameters show a strong time-dependence. This systematic change in morphology has been observed mostly on scoria cones in many classical volcanic fields, such as the San Francisco volcanic field in Arizona [71] or the Cima volcanic field in California [70]. This recognition led to the most obvious interpretation: the morphology depends on the degree of erosion, which is a function of time and climate [e.g. 71]. Therefore, morphometric parameters may be used as a dating tool for volcanic edifices if the final geometry and internal architecture are similar among the edifices being compared [e.g. 71,324]. These fundamental assumptions (whether stated or not) underlie all comparative morphometric studies of volcanoes. The concept outlined above is, however, sometimes oversimplified and the assumptions are not always fulfilled. The concerns about the classical interpretation of morphometric parameters and their change over time derive from various sources:
• The architecture of a single monogenetic edifice is commonly inhomogeneous in terms of its internal facies characteristics. This architectural diversity usually results in diversity in the erosion resistance and the susceptibility to chemical weathering of the pyroclastic rocks exposed to the external environment (e.g. Figures 10 and 13B, or [e.g. 68]). Due to the continuous denudation of internal beds, these internal architectural irregularities could cause different rates of weathering and erosion, leading to poorly predictable trends in degradation.
• The previous concept of monogenetic volcanism implied that the morphology of a volcanic landform is linked only to a specific eruption style (i.e. the Strombolian-type scoria cone). However, this is an oversimplification that belies the complex patterns of edifice growth (e.g. Figures 6-9).
• The final, pristine edifice morphology is mostly controlled by syn-eruptive processes (e.g. explosion energy, substrate stability, mass wasting and mode of pyroclast transport, e.g. Figures 7-9). Any change in either the internal or the external controls during the course of an eruption could modify the final morphology of the edifice partially or dramatically. This is in agreement with the measured high variability of slope angles [e.g. 398] or aspect ratios [e.g. 314] on relatively fresh edifices. This supposedly eruptive-process-related morphometric variability is observed on both the intra-edifice scale (e.g. between various parts of an edifice) and the inter-edifice scale [398].
• There is a large difference in the rates and modes of chemical weathering and sediment transport operating on different types of pyroclastic deposits or lava rock surfaces (e.g. Figures 12 and 13). For instance, there is a contrast between mass-wasting rates by under- and overground flow processes; e.g. spatter or a higher degree of welding/agglutination could cause asymmetric patterns in permeability and, therefore, in the subsequent initiation of erosion on a freshly created surface.
• One theoretical concern about morphometric parameters, such as edifice height, aspect ratio or slope angle, is that they are intra-edifice, 'static' descriptors. Thus, they only express the current geomorphic state of the volcano. In contrast, they are often used to reveal and describe 'dynamic' processes, such as erosion patterns over time. It is obvious that trends in erosion processes cannot be seen from these intra-edifice, 'static' parameters unless they are compared with other edifice parameters, or unless direct geomorphic modification by erosion processes is measured over short periods of time (e.g. Figures 12C and D).
• Comparative morphometric studies often lack, or have only limited, age constraints on the morphology (e.g. only a few per cent of the total population of studied edifices are dated), or, conversely, in special cases the dating itself is the purpose of the comparison. There are only a few studies with complete age constraints [e.g. 68,70,326].
• Long-term surface modification is often believed to result from climatic forces and climate-induced erosion processes. Wood [71] noted the importance of tephra mantling as a possible source of accelerated erosion rates, but such event degradation processes (e.g. tephra mantling, edifice truncation by a nearby eruption, landsliding, wildfire, animal activity etc.) are usually neglected. They occur infrequently, but they can cause rapid and significant modification that may influence the patterns of future degradation.
Due to the concerns and arguments listed above, the morphometric parameters and their classical interpretations should be revised. Referring back to the complexity of the construction of monogenetic volcanoes (Figure 6) and their primary geomorphic development (Figures 7-9), it is clear that the geomorphic features of a fresh volcanic landform are determined by syn-eruptive processes, which in turn are governed by the internally- and externally-driven processes operating during the eruption history. Once the eruption ceases, the 'input' configuration of a monogenetic volcano, in terms of architecture, pyroclast granulometric characteristics, geometry and geomorphology, is set. The erosion agents at the start of the erosion history are determined by the interactions between the internal properties (e.g. the pyroclastic rocks on the surface) and external processes (e.g. climate; Figure 11). Such series of interactions lead to surface and subsurface weathering, soil formation and the development of vegetation succession over time. Each of these developments on the flanks of a monogenetic volcano feeds back into the original controls, shifting the balance towards one side. This leads to disequilibrium in the system and a subsequent adjustment mechanism. These processes are called normal degradation, operating on longer time scales (ka to Ma). However, the degradation mechanism sometimes does not function as 'normal'. During the erosion history of a monogenetic volcano, there are environmental effects called 'events', such as tephra mantling or heavy-rain-induced grain flows. These 'events' are documented to cause orders-of-magnitude larger surface modification and may initiate new rates and trends in the dominant sediment transport system and increase the sediment yield [e.g. 411-412,447-448,457]. Consequently, the erosion history of a monogenetic volcano comprises both normal ('background') and event degradation processes (Figure 11). The cumulative result of many interactions, reorganizations of erosion agents and the effects of event degradation processes over the erosion history is integrated into the geomorphic state at the time of examination. The degradation of the volcanic edifice leads to aggradation at its foot and to the development of a debris apron (Figure 13A). Based on the behaviour and the changes in intensity of the major sediment transport processes mentioned above, it is evident that the individual contributions of these erosion processes are not constant over time. It is more likely that they enhance or dampen one another at certain stages of the degradation. The gradual changes in the style, rate and mode of sediment transport on the flank of a monogenetic volcanic edifice are likely triggered by shifts in the dominant external (e.g. climate change) and internal environment (e.g. the variability of erosion-resistant layers within the edifice, as observed in Figure 14A). Consequently, the degradation of the monogenetic edifice as a whole cannot be linear (or is linear only over certain parts of the erosion history); the edifice must erode faster at the beginning and more slowly towards the end of its degradation [71,324], in accordance with the wide range of rates and time scales of the sediment transport processes. A single geomorphic agent therefore cannot account for a volcano's degradation. Instead, degradation appears to result from the overall contribution of all processes, with a complex temporal distribution.
Even without event degradation processes, the fact that the erosion history of a typical monogenetic volcano lasts at least a couple of ka increases the likelihood that some changes in the external environment will modify the degradation trend. These surface modifications and degradation processes should correlate with the values of morphometric parameters, but their interpretation is probably not straightforward. The pristine, unmodified geomorphic state of a monogenetic volcano is predominantly controlled by the processes that operated during the eruption history. As degradation proceeds (e.g. erosional surface modification, soil formation or the development of vegetation cover), these primary geomorphic attributes are gradually overprinted by the 'signatures' of the various post-eruptive processes. This results in 'noise' superimposed on the original, syn-eruptive values of the morphometric parameters extracted from the topographic attributes. The soil cover on the surface creates a buffer zone between the pyroclastic deposits and the environment. Most of the weathering and erosion processes (e.g. overground flow) take place in this buffer zone. During degradation, the actual erosion surface, regardless of whether it is 'unstabilized' or 'stabilized', can contain pyroclasts with contrasting granulometric and textural characteristics (e.g. Figure 13B). For instance, the rates of weathering, of weathering-product transport and of soil formation could differ between the base of a volcanic cone and the crater rim, due to differences in flank morphology, aspect or microclimate. Such differences have been demonstrated for various sectors of a cone-type volcano through the variation in microclimatic setting, e.g. insolation, freeze-thaw cycles or snow cover [437]. A difference of a couple of meters in sediment accumulation/loss, chemical weathering and soil formation could cause a variation of a few degrees in the slope angle values. In extreme cases, these differences could cause misinterpretation of the morphometric parameters; they should therefore be taken into account or stated as an assumption of the interpretation. The post-eruptive 'noise' in the morphometric parameters will probably grow over the erosion history and is probably largest in the late-stage degradation of the edifice (e.g. Ma after its formation). In the case of older scoria cones, the architectural control could increase as the well-compacted and welded units become exposed, leading to rock-selective erosion styles and a longer preservation potential for the volcanic landform. This has been found to be important for the good preservation of Pliocene (2.5-3.8 Ma) scoria cones such as Agár-tető or Bondoró in the Bakony-Balaton Highland, Hungary [68]. These scoria cones are old, but they resemble considerably younger cone morphologies due to their higher morphometric values (e.g. heights of about 40-80 m, slope angles of 10-15°). Such parameters could be similar to the degradation signatures of much younger cones, e.g. Early Pleistocene cones (slope angle of 13 ± 3.8°) from the Springerville volcanic field, Arizona [324]. Many lines of evidence suggest that the commonly neglected internal architecture, the initial variability in geomorphic state and the effects of 'event' degradation processes play an important role in edifice degradation rates and trends. Once the degradation histories of various edifices are characterized by 1. different 'input' morphometric conditions (e.g.
Figure 14B) and (2) large variability of rates and trends in mass wasting processes, in accordance with the susceptibility of the underlying volcanic rocks to chemical weathering and the total capacity of sediment transport, it is possible that the same geomorphic state can be reached not only by ageing of the edifice but via a combination of other processes. This further implies that each monogenetic volcanic edifice has a unique eruptive (e.g. Figure 6) and erosion history (e.g. Figures 12-14). As a result of this eruptive diversity, the erosion history is not independent of the eruption history (i.e. the complexity of the monogenetic edifices). In this interpretation, there is a chance to have edifices showing the same 'geomorphic state' (in terms of the basic geometric parameters) reached through different 'degradation paths'. An example of this could be the case of the two scoria cones in Figure 14C and D. In Figure 14C, the geometry of the edifice is strongly tied to the erosion resistance and to the position of the spatter-dominated collar along the crater rim. With this eruptive history and subsequent erosion, the slope angle can increase due to the undermining of the flanks. Consequently, the morphology of the cone becomes 'younger' over time, that is, the slope angle or H_co/W_co ratio will increase rather than decrease. On the other hand, a classical-looking cone (Figure 14D) that has a homogeneous inner architecture experiences different rates and degrees of erosion over different time scales. Therefore, the two cones degrade through different patterns and at different rates. To confidently say that a decreasing trend in morphometric parameters is associated with age, it is important to reconstruct the likely environment in which the edifice degradation has taken place, including the number of 'events' and the major changes in the degradation controls. This includes understanding the combination and diversity of facies architecture [68,468], the stratigraphic position of the edifice within the stratigraphic record of the volcanic field [412,469], the approximate likelihood of aggradation by, for example, tephra mantling [411], and the spatial and temporal combination and fluctuation of 'normal' and 'event' degradation processes over the erosion history.
Figure 14. (A) Difference in mode of erosion (rock fall or surface runoff) due to spatter accumulation on the crater rim of a 1-3 Ma old scoria cone in the Al Haruj Volcanic Field in Libya [288,466]. (B) Variability in slope angle on the flanks of spatter-dominated and lapilli-dominated cones (1256 AD) of the last eruptions at the Harrat Al-Madinah Volcanic Field, Saudi Arabia [467]. Due to the young ages, these differences could be the result of differences in syn-eruptive processes (e.g. fragmentation mechanism, degree of welding and granulometric properties). These different 'input' geomorphic states alone can also lead to the large variability of degradation paths of monogenetic volcanic landforms. (C and D) Architecturally controlled erosion patterns on Pleistocene scoria cones in the Harrat Al-Madinah Volcanic Field, Saudi Arabia. The ages are between 1.2 and 0.9 Ma for the cone in Figure 14C, and only a couple of ka for the cone in Figure 14D [467]. The geomorphic contrast between the edifices is striking in the slope angles, θ, calculated as θ = arctan(H_max/W_flank) from basic morphometric data. The erosion-resistant collar on the crater rim changes the erosion patterns by keeping the crater rim at the same level over even Ma.
This results in the 'undermining' of the flanks (small black arrows at the foot of the cones, represented by a dashed line in Figure 14D), leading to a gradual increase of the slope angles, in contrast to all previously proposed erosion models for cone-type monogenetic volcanoes. The white arrow near the rim (Figure 14C) indicates significant surface modification by event degradation (e.g. mass wasting of the erosion-resistant spatter collar). It is speculative, but the consequence of this irreversible and possibly 'random' event may have initialized the formation of a deeper gully (white dashed lines), leading to crater breaching over a longer time-scale.
Conclusions: towards understanding the complexity of monogenetic volcanoes
A typical monogenetic volcanic event begins at the magma source region, usually in the mantle, and ends when the volcanics have been fully removed by, for example, erosion processes. Within this conceptualized life cycle of a monogenetic volcano, there is an active stage (e.g. propagation of the magma towards the surface, feeding a monogenetic eruption; Figure 15) and a passive stage (e.g. post-eruptive degradation until the feeder system is exposed; Figure 15). The active stage of evolution depends on many interactions between internally- and externally-driven factors. The magma (left-hand side of Figure 15) intrudes into shallow parts of the crust and can be fragmented in accordance with the actual balance between the magmatic and external conditions at the time of fragmentation. This could result in 6 varieties of volcanic eruption if the composition is dominantly basaltic, which are responsible for the construction of a monogenetic volcanic edifice with a simple eruption history (E_simple, that is, 6^1 combinations of eruption styles). Once there is some disequilibrium in the system during the course of the eruption, the eruption style changes to adjust the balance in the system, forming compound eruption histories (E_compound, that is, 6^2 combinations of eruption styles). Each shift in dominant eruption style opens a new phase of edifice growth and therefore increases the complexity of the eruption history towards E_complex (that is, 6^3 or more combinations of eruption styles). There could be even thousands of theoretical combinations of eruption styles if the volcano is built up by more than 4 phases with different eruption styles, until the magma supply is completely exhausted or a new vent is established by migration of the magma focus. With increasing complexity of the eruption history, the complexity of the facies architecture of the volcanic edifice increases. Conceptually, these eruption histories can be numerically described by matrices, based on the spatial and temporal characteristics of eruption styles (e.g. Figure 6). The coding of eruption styles could be, for example, 1 for Hawaiian, effusive activity, if the erupting melt is basaltic to basaltic andesitic in composition. This system can be modified by adding further eruption styles, such as sub-Plinian. The syn-eruptive geomorphology of a volcano is, however, not only the result of the eruption style and associated pyroclast transport mechanism; there are also stages of destructive processes, such as flank collapse during scoria cone growth (e.g. Figure 3) or wall-rock mass wasting during excavation of a maar crater (e.g. Figure 4).
These common syn-eruptive processes (constructive and destructive phases during the eruption history) have an important role on the resulting morphology, but they are not always visible/detectable in the morphology of the edifice. On the other hand, after the eruption ceases a passive stage of surface modification takes place (right hand side on Figure 15). In the passive stage, the erosion history is also governed significantly by a series of interactions between the exposed pyroclastic deposits and lava rocks (and their textural and granulometric characteristics determining permeability) and external influences such as climate, location or hydrology of the area. These interactions determine the long-term (ka to Ma) degradation processes and rates. However, it is important to note that the erosion history is often a function of 'normal' and 'event' degradation. The effect of event degradation is expected to be larger, in some cases, than the cumulative surface modification by normal degradation processes. The relationship between event and normal degradation should be a subject for future studies. Due to the large number of combinations of eruption styles that can generate edifices with different pyroclastic successions and different initial geometries (at least a broader range than previously thought), volcanoes can have very different susceptibilities for erosion. This implies that degradation trends and patterns of monogenetic volcanoes should be individual volcano-specific (right hand side on Figure 15). In addition, the combination of erosion path of individual monogenetic volcanoes is an order of magnitude larger than during the eruption history due to the larger number of controlling factors (6 eruption styles versus varieties of 'normal' and 'event' mass wasting processes) and the longer time-scale of degradation (<<ka versus >Ma). This has an important practical conclusion: there are certain stages during the degradation when some morphometric irregularity occurs if two or more volcanic edifices are compared. The morphometric irregularity refers to the state when two volcanoes appear similar through morphometric parameters such as H co /W co ratio or slope angle, but they have different absolute ages (black double-headed arrows of the top graph and black circles on the bottom graph in Figure 15). An important practical application of the volcano-specific degradation is that the correlation between the morphology of the edifice is not always a function of the time elapsed since formation of the volcanic edifice. As a consequence of the diverse active and passive evolution of a volcanic edifice, age grouping based on geomorphic parameters, such as H co /W co ratio or average or maximum cone slope angle, should be avoided. In terms of interpretation of the morphometric data, the post-eruptive surface modification causes unfortunate 'noise' in the primary morphometric signatures, which can be only reduced by using edifices with absolute age constraints. Due to the long-lived evolution of monogenetic volcanic fields (Ma-scale), there are usually volcanoes that are freshly formed, sometimes close to volcanoes with no primary morphological features at the time of the examination. The large contrasting and dynamic geological environment of such monogenetic volcanoes makes the interpretation of available topographic information more complicated than previously thought. 
Future studies should target this particular issue and define the meaning of morphology of these monogenetic volcanic edifices at many scales.
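As a rough, self-contained illustration of two quantitative notions used in this section, namely the combinatorial growth of possible eruption histories (E_simple = 6^1, E_compound = 6^2, E_complex = 6^3 or more combinations of six basic styles) and the slope-angle estimate θ = arctan(H_max/W_flank) quoted in the Figure 14 caption, the short Python sketch below counts style sequences and computes θ. Only the Hawaiian code is named in the text; the remaining codes and the cone dimensions used here are illustrative assumptions.

```python
import math
from itertools import product

N_STYLES = 6  # six basic eruption styles for basaltic magma, coded 1..6
              # (only code 1, Hawaiian/effusive activity, is named in the text)

def n_eruption_histories(n_phases: int) -> int:
    """Number of theoretical eruption-style sequences with n_phases phases."""
    return N_STYLES ** n_phases

def slope_angle_deg(h_max_m: float, w_flank_m: float) -> float:
    """Average flank slope from basic morphometric data: theta = arctan(H_max / W_flank)."""
    return math.degrees(math.atan(h_max_m / w_flank_m))

print([n_eruption_histories(n) for n in (1, 2, 3, 4)])   # 6, 36, 216, 1296
# Hypothetical cone, ~80 m high with a 300 m wide flank:
print(f"{slope_angle_deg(80.0, 300.0):.1f} deg")

# Enumerating the 36 two-phase (E_compound) histories explicitly:
compound_histories = list(product(range(1, N_STYLES + 1), repeat=2))
assert len(compound_histories) == n_eruption_histories(2)
```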
Sustainability of Vertical Farming in Comparison with Conventional Farming: A Case Study in Miyagi Prefecture, Japan, on Nitrogen and Phosphorus Footprint : The reduced requirement for nutrients in vertical farming (VF) implies that the potential for lower environmental impact is greater in VF than in conventional farming. In this study, the environmental impacts of VF were evaluated based on a case study of VF for vegetables in Miyagi Prefecture in Japan, where VF has been utilized in post-disaster relief operations in the wake of the 2011 Great East Japan Earthquake. The nitrogen (N) and phosphorus (P) footprints of these VFs were determined and analyzed to quantify the potential reduction in N and P emissions. First, the N and P footprints in conventional farming were calculated. Then, those footprints were compared with three different scenarios with different ratios for food imports, which equate to different levels of food self-sufficiency. The results show a decrease in the N and P footprints with increased prefectural self-sufficiency due to the introduction of VF. In addition to reducing the risks to food supply by reducing the dependence on imports and the environmental impacts of agriculture, further analysis reveals that VF is suitable for use in many scenarios around the world to reliably provide food to local communities. Its low vulnerability to natural disasters makes VF well suited to places most at risk from climate change anomalies. Importance of Nutrient Management Nutrient input and water use for crop production result in the environmental pollution of aquatic ecosystems. One of the most studied environmental pollution problems is eutrophication, which occurs in water bodies due to excess nitrogen (N) and phosphorus (P) [1,2]. In the 21st century, one of the largest global challenges is to continuously increase crop production to ensure adequate food supply for the growing population while protecting the environment. In order to achieve this goal, it is essential to improve the nutrient use practices in agriculture, with particular emphasis on N and P [3][4][5]. For the conservation of aquatic ecosystems and food systems, it is important to manage nutrient inputs and outputs and reduce nutrient loss in production from agricultural systems by an integrated assessment based on life cycle processes [6,7]. Vertical Farming as an Emerging Technology in Agriculture Vertical farming (VF) is an indoor method of growing crops with a controlled nutrient solution and recycled water in several layers with stable productivity (e.g., plant factories) [8][9][10][11][12]. The crops productivity of VF is higher than in conventional farming, and that of most other developed countries: in Canada, self-sufficiency based on calories is 255%, and based on production capacity, it is 120%, while the numbers for Australia are 233% and 133% [45]. Countries, such as Japan, that rely on other countries to meet their food demand are effectively outsourcing great environmental impacts to those countries [46]. However, supply from abroad is not guaranteed: to ensure domestic food supply, some countries have reduced their food exports, which has resulted in increasing food prices and has affected the global food supply [47]. To address this situation, a plan, known as "The food, agriculture, and village basic plan in Japan" was proposed in 2018. 
This plan is based on the need to increase selfsufficiency based on caloric needs to 45%, and one of the goals is to increase the production capacity to 75% by 2030 [48]. Although the average import ratio of all kinds of vegetables in Japan was only 22% in 2018 [49][50][51], a survey of the local production and consumption of vegetables in all 47 prefectures of Japan reveals the distribution of vegetables was unbalanced [49,50]. For example, in 2018, Miyagi Prefecture was not able to meet the demand for certain vegetables without relying on supplies from outside the prefecture and imports from other countries [52,53]. To increase food self-sufficiency in Japan, it was proposed that domestic vegetable production should be expanded with the introduction of sustainable agriculture [54,55]. While domestic crop production increases by the introduction of VF, a higher level of food self-sufficiency relieves the dependency on imports. It is important to improve food self-sufficiency using sustainable agriculture. In order to ascertain the sustainability of VF on a long-term basis in terms of N and P, the indispensable nutrients for crop production [56,57], the N and P environmental emissions associated with VF need to be monitored. The N and P footprints are defined as quantitative indicators of the total environmental emissions of N and P at a prefectural level or in certain areas based on consumption in a one-year period [58,59]. While the footprint concept considers the whole supply chain, it has been reported that crop cultivation is the largest contributor to N and P footprints [60]. The importance of confirming N and P emissions in agriculture using a footprint analysis to evaluate the sustainability of VF and provide data on how the agricultural environment is affected has been highlighted in earlier studies [58,59]. To date, little research of this nature has been conducted on VF in Japan. In addition, it was highlighted in a recent review paper on sustainable agricultural practices that N and P use efficiencies (NUE, PUE) should be determined in efforts to optimize nutrient use: these indicate the proportions of N and P that are absorbed and used by the plants from the total N and P inputs [61]. In other words, the challenge is to increase crop production while reducing environmental impacts and minimizing resource depletion due to agricultural demand by utilizing N and P more effectively and sustainably [62]. The NUE and PUE have been increasingly used as indicators to assess the nutrient balances of N and P in nutrient use practices [61][62][63][64]. Objective Within the context of considering the environmental impacts of replacing imported vegetables with production by VF in Japan, the objective of this study was to quantify the extent of the reduction in the N and P footprints with VF as a replacement of conventional farming from the footprint perspective. The feasibility and effectiveness of VF is assessed for its ability to increase NUE, PUE, and food self-sufficiency; prevent water degradation; and stabilize crop production. The role of VF in disaster-resilient and post-disaster reconstruction is also discussed in areas not only damaged by the triple disaster of March 2011 in Miyagi Prefecture (earthquake, tsunami, and nuclear accident), but the results are also expected to be applicable to other areas of the world affected by natural disasters. To achieve this objective, the trends in VF in Miyagi Prefecture were assessed. 
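Referring back to the NUE and PUE definitions above (the proportion of the applied N or P that is actually taken up and used by the crop), a minimal sketch of the calculation is shown below; the input and uptake quantities are placeholder numbers, not values from this study.

```python
def nutrient_use_efficiency(uptake_kg: float, input_kg: float) -> float:
    """Fraction (0-1) of the applied nutrient taken up and used by the crop."""
    if input_kg <= 0:
        raise ValueError("nutrient input must be positive")
    return uptake_kg / input_kg

# Hypothetical field: 120 kg N and 30 kg P applied, 30 kg N and 6 kg P harvested.
nue = nutrient_use_efficiency(uptake_kg=30.0, input_kg=120.0)   # 0.25
pue = nutrient_use_efficiency(uptake_kg=6.0, input_kg=30.0)     # 0.20
print(f"NUE = {nue:.0%}, PUE = {pue:.0%}")
```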
The first step was to conduct a survey to determine how widespread VF has become in post-disaster Miyagi Prefecture and to create a distribution map. Then by considering 36 different vegetables consumed in Miyagi Prefecture (strawberries, melon, and watermelon are classified as vegetables in Japan) [50], the extent of the reduction in the N and P footprints for increasing food self-sufficiency by introduction of VF was quantified. In all, nine vegetables were chosen as target vegetables: these nine vegetables represented 22% of vegetable imports in Japan in 2018 (including frozen and processed products) [52]. The N and P footprints in conventional farming and VF were calculated based on consumption within the prefecture with a focus on crop cultivation both within and outside the prefecture, including abroad. Here, "the prefectural self-sufficiency" of food in Miyagi Prefecture is defined as the proportion of food produced locally (that is, the crops produced within the prefecture) of the total prefectural consumption, whereas "the self-sufficiency" in the context of international trade is defined as the proportion of the domestic production (that is, the crops produced within Japan) of the total consumption. To evaluate the extent to which the N and P footprints were reduced in VF, a scenario analysis was conducted with changes in the dependencies of conventional farming and VF based on food self-sufficiency focusing on the nine target vegetables with relatively lower self-sufficiency at the national level. Management of Vertical Farming in Japan In 2020, over 85% of the tomato and strawberry market was represented by VF crops, while cucumbers, bell peppers, and asparagus grown in VF facilities represented between 60% and 70% of the market [50,65]. VF is becoming more widespread around Japan in recent years, but it is not possible for VF to replace conventional farming. At this point in time, the crop variety suitable for cultivation in VFs is severely limited, and VF techniques for crop production require further development. There are roughly three types of VF in Japan: VF using natural light, VF using artificial lighting, and a combination of both. According to an annual survey in 2020 based on 2019 VF practices in Japan [66], a total of 164 factories used natural light, 187 used artificial lighting, and 35 used a combination of both. The number of factories using artificial lighting increased until 2015 and remained stable from 2015 to 2020, while those using natural light gradually increased [66]. Distribution of Vertical Farming in Miyagi Prefecture Miyagi Prefecture, with an area of 7282 km 2 and population of 2,303,100 in 2018, is located in the Tohoku region in Japan ( Figure 1). The six prefectures of the Tohoku region have a population of 8,842,610, and Sendai City, the capital of Miyagi Prefecture, is the largest city in this region with a population of approximately one million people (1,062,585 in 2018) [67]. There were 21 VF operators in Miyagi Prefecture in 2019 [66]. A total of 15 operators utilized natural light, 5 used artificial lighting, and 1 used both. The VF operators were concentrated in coastal areas, with two major areas, the surrounding area of Sendai City and the Yamamoto-cho area in the south of Miyagi Prefecture (Figure 1). Fourteen operators were established after the Great East Japan earthquake in 2011 ( Table 1). 
The cultivated area per operator was more than 8000 m² for those using natural light, while those using artificial light used less than 5000 m². Considering the damage done to the soils of Miyagi Prefecture by the tsunami [68,69], the soilless nature of VF makes it highly suitable for agriculture in damaged areas. The uptake of VF has been supported by government subsidies, creating employment opportunities and helping with regional development [70,71].
Footprint Calculation
In this study, the conventional farming method was set as the current condition of agricultural production in 2018. First, the loss of N and P in the production of the 36 vegetable crops mainly consumed in Japan was calculated (Table S1). Due to data limitations, we assumed that all the vegetables were grown by conventional farming in 2018. The prefectural-level N and P footprints were then estimated based on the amount of prefectural consumption of the target vegetables, including those grown and consumed in Miyagi Prefecture, those grown in the 46 other Japanese prefectures and consumed in Miyagi Prefecture, and those grown overseas and consumed in Miyagi Prefecture. The losses of N and P in the production of imported vegetables, in Mg (i.e., 10⁶ g) of N or P lost per Mg of annual production, were assumed to be the weighted average of the other Japanese prefectures. Three scenarios were developed with a focus on the 9 vegetable crops with import ratios higher than the average import ratio of the 36 vegetable crops in Japan in 2018, and the 9 vegetables were compared considering the changes in the N and P footprints from conventional farming under the various scenarios. To estimate the N and P footprints of the 36 vegetables, 2018 data were used as the baseline. Nine vegetables with high import ratios in Japan were identified, and the scenarios described in Section 2.3 were developed with the assumption that various percentages of those vegetables were grown using VF rather than conventional farming. The vegetable N footprint of Miyagi Prefecture was calculated using Equations (1)-(5), and the vegetable P footprint of Miyagi Prefecture was calculated similarly, with an adjustment for the differences in the chemical nature of N and P explained after Equation (5). In Equation (1), F_PP is the one-year N footprint of crop j produced within prefecture α, and F_DIM is the one-year N footprint of crop j imported from outside prefecture α, including transport from other prefectures and international imports from overseas, defined as "domestic import".
Here, F_PP,jα is defined as in Equation (2), where L is the loss of N in production (Mg N year −1), Q is the prefectural production amount (Mg year −1) taken from the Statistical Survey on Crops [50], and C_PP is the local consumption of crop j produced locally in prefecture α (Mg year −1) taken from a wholesale market survey [53]. Assuming that N fertilizer input ratios were as recommended by the prefectural governments, L of crop j in the target prefecture α is calculated by subtracting the harvested N, the N taken out of the field, and the N plowed into the soil with residue from the total N input by fertilizer, as in Equation (3), where f_Chem and f_Org are the chemical fertilizer and the organic fertilizer applied per unit area (10⁴ g N ha −1) [72], S is the cultivated area (ha) taken from the Statistical Survey on Crops [50], and (f_Chem + f_Org) × S is the fertilizing amount as input. c_H and c_R are the ratios of N content in the harvested product, taken from government reports and other literature [73-75], and in the residue, taken from the National Greenhouse Gas Inventory Report [76], respectively; b is the fraction of the area that is burnt on a field, µ is the combustion factor [76], and w is the rate of residue to production [75,77]. Here, c_R × Q × w × b × µ is the amount of N in burned residue, counted as a loss of N in production [78], and c_R × Q × w × (1 − b × µ) is the N taken out of the field or plowed into the soil as non-burned residue. The N amount plowed into the soil with residue was also counted as utilization. The N input by fertilizers was assumed to either go to harvested crops or residues or be directly lost to the environment. Supposing that the consumption ratio of imported commodities in the target prefecture α is the same as it is at the national level, C_IMPORT,jα, the consumption of imports in the target prefecture α, can be defined as in Equation (4), where C_DOMESTIC,jα is the consumption in the target prefecture α of crop j produced domestically, C_DOMESTIC,j is the national consumption of domestically produced crop j, and C_IMPORT,j is the national import of crop j. Supposing that the N loss per unit production of commodities transported from outside Miyagi Prefecture and of imported commodities is the same as the national average, F_DIM can be expressed by Equation (5), where L and Q are based on the data of prefecture k, i.e., the 46 prefectures other than Miyagi Prefecture in Japan, and C_TRANS and C_IMPORT are the consumption in Miyagi Prefecture of crop j transported from production in other prefectures and from overseas, respectively. Due to the limitation of data from each import country, the N footprint from imports was calculated from the average of that of the other prefectures. On this basis, the vegetable N footprint of Miyagi Prefecture was calculated based on consumption data [53] and population statistics in 2018 [67]. The footprints for Miyagi Prefecture were estimated by multiplying the population of Miyagi Prefecture by the averages of the per capita footprints of the Tohoku region and Sendai City, due to the limitation of the consumption data. Note that the footprints for strawberries, watermelons, and melons were calculated based on Sendai City only in this study. The above method was used for the calculation of the N footprint; the P footprint of Miyagi Prefecture was also calculated according to Equations (1)-(5), but the value of µ was set at µ = 0 for P in Equation (3) because there is no volatilized P in burned residue [79].
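A compact sketch of the footprint bookkeeping described around Equations (1)-(5) is given below. It follows the prose reading of those equations (fertilizer N input minus harvested N and non-burned residue N, scaled by the locally consumed share of production, plus a "domestic import" term computed at an average loss intensity); the function names are ours and every numeric input is a placeholder, not data from the paper.

```python
# Hedged sketch of the prefectural N footprint described in Equations (1)-(5).
# All quantities are illustrative placeholders; variable names are ours.

def n_loss_in_production(f_chem, f_org, area_ha, c_h, c_r, q_prod, w, b, mu):
    """Equation (3), as we read it: N lost in production (Mg N/year) for one crop."""
    n_input = (f_chem + f_org) * area_ha * 1e-2          # (10^4 g N/ha * ha) -> Mg N
    n_harvest = c_h * q_prod                             # N removed in harvested product
    n_residue_kept = c_r * q_prod * w * (1.0 - b * mu)   # residue taken off-field / plowed in
    # Burned-residue N (c_r * q_prod * w * b * mu) is not subtracted: it counts as a loss.
    return n_input - n_harvest - n_residue_kept

def footprint_local(loss, q_prod, c_pp):
    """Equation (2), as we read it: loss intensity times locally consumed amount."""
    return loss / q_prod * c_pp

def footprint_domestic_import(avg_loss_per_mg, c_trans, c_import):
    """Equation (5), simplified: transported plus imported consumption times an
    average national loss intensity (Mg N lost per Mg produced)."""
    return avg_loss_per_mg * (c_trans + c_import)

# Placeholder inputs for a single hypothetical crop:
loss = n_loss_in_production(f_chem=20.0, f_org=5.0, area_ha=1200.0,
                            c_h=0.003, c_r=0.004, q_prod=30000.0,
                            w=0.3, b=0.1, mu=0.9)
f_total = (footprint_local(loss, q_prod=30000.0, c_pp=8000.0)
           + footprint_domestic_import(avg_loss_per_mg=0.004,
                                       c_trans=5000.0, c_import=2000.0))
print(f"N footprint ~ {f_total:.1f} Mg N per year")
```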
In the calculation of the N and P footprints, none of the crops planted that would fix N, such as legumes or alfalfa, were considered. These were the limitations of the methodology in this study. Comparison Analysis In order to verify the extent to which the N and P footprints are reduced due to the wider introduction of VF in Miyagi Prefecture, three scenarios at different import ratios were established. These scenarios assumed the international imported vegetables were substituted with the vegetables produced locally by VF. Scenario 1 is to halve the import ratios of the target vegetables in 2018. Scenario 2 is to halve the import quantities of the target vegetables in 2018. Scenario 3 is to have all target vegetables produced domestically. As the import quantities and ratios of each vegetable show in Table 2, the targeted nine types of vegetables with an import ratio of over 22% were divided into three groups: currently VF grown (grown extensively in VF in 2018), possibly VF grown (planted in conventional farms mainly but possibly grown in VF, e.g., lettuce in Japan), and potentially VF grown (potentially grown in VF with a high risk of failure due to insufficient social and economic acceptance [80][81][82][83][84]). Due to the limitation of the consumption data, the import ratios of vegetables in Miyagi Prefecture were assumed to be the same as the ratios for the entire Japan, and the scenarios and vegetables groups were established based on the assumed import ratios. The N and P footprints on scenarios were calculated through Equations (1)- (5). The nutrient solution is recycled with controlled water cycling and does not run off [18]. Therefore, it is reasonable to assume that the N and P losses in VF are negligible. Then, the differences in the N and P footprints for conventional farming and VF were determined for Miyagi Prefecture. Prefectural-Level N and P Footprints The N and P footprints of all 36 investigated vegetables for conventional farming in Miyagi Prefecture were calculated in this study. The total N footprint of the vegetables was 3119 Mg N year −1 , while the total P footprint was 626 Mg P year −1 . The proportional footprint of the nine vegetables we propose could be primarily grown in VF was 32% each for both N and P. The results of the nine target vegetables in the scenario analysis is shown in Table 3. In the conditions of 2018, the total N and P footprints were 992 Mg N year −1 and 198 Mg P year −1 , respectively, while the proportion of the possibly VF grown group (such as Welsh onions) was over 60% of the total N and P footprints. The trends of the N footprints for each vegetable were similar to those of the P footprints. Among the target vegetables, Welsh onions accounted for the highest N and P footprints, at 238 Mg N year −1 and 58 Mg P year −1 , whereas celery accounted for the lowest, at 9.4 Mg N year −1 and 2.4 Mg P year −1 , respectively. These results reveal great differences in the N and P footprints of each vegetable. The total of the per capita N and P footprints of the nine target vegetables were 431 g N capita −1 year −1 and 86 g P capita −1 year −1 , respectively, in 2018 ( Figure 2). The per capita N footprints were 71 g N capita −1 year −1 , 264 g N capita −1 year −1 , and 97 g N capita −1 year −1 in the currently VF grown (such as tomatoes), possibly VF grown, and potentially VF grown (such as pumpkins) groups, respectively, while the per capita P footprints were 15 g P capita −1 year −1 , 55 g P capita −1 year −1 , and 16 g P capita −1 year −1 . 
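The scenario substitution described above (part of the imported supply replaced by local VF production whose N and P losses are assumed to be negligible) can be sketched as follows, assuming for simplicity a uniform loss intensity across supply sources; the base footprint and import share below are invented placeholders.

```python
# Hedged sketch of the scenario substitution: imports replaced by VF production
# with (assumed) negligible N and P loss. Numbers are placeholders, not study data.

def scenario_footprint(base_footprint, import_share, replaced_fraction):
    """Footprint after a fraction of the imported share is supplied by VF instead.
    VF production is treated as contributing roughly zero N/P loss."""
    return base_footprint * (1.0 - import_share * replaced_fraction)

base = 100.0          # Mg N/year for one hypothetical vegetable
import_share = 0.4    # assumed 40% of consumption is imported

for label, replaced in (("half of imports replaced (scenario 1/2 style)", 0.5),
                        ("all imports replaced (scenario 3 style)", 1.0)):
    print(label, "->", scenario_footprint(base, import_share, replaced), "Mg N/year")
```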
While the per capita N and P footprints of each vegetable exhibited similar trends between their total N and P footprints, these differed greatly between different kinds of vegetables.
Results of the Scenario Analysis
The total N and P footprints of nine target vegetables in Miyagi Prefecture reduced by 234 Mg N year −1 (24%) and 45 Mg P year −1 (22%) in scenario 1, with import ratios half of the ratio in 2018; by 184 Mg N year −1 (19%) and 35 Mg P year −1 (18%) in scenario 2, with import quantities half of the quantity in 2018; and by 368 Mg N year −1 (37%) and 71 Mg P year −1 (36%) in scenario 3, with all target vegetables domestically produced (Table 3 and Figure 3a). By introducing VF to substitute the imported vegetables, it is possible to reduce the total N and P footprints by more than 35%. The reduction ratios of the N and P footprints of each vegetable were similar in the same scenario. The reduction ratio of the footprints for pumpkins was the highest, whereas the lowest was for Welsh onions in each scenario. Compared to the N footprint reduction, the reduction ratios for P footprint were over 1% higher for spinach and over 0.7% lower for Welsh onions (Figure 3b). Among three vegetable groups, the reduction in the N and P footprints in the possibly VF grown group, such as Welsh onions, was the largest, whereas the reduction ratio in this group was the lowest in each scenario. The reduction ratio in the footprint of the potentially VF grown group was the highest in each scenario. This reveals that the potentially VF grown group has a better reduction effect due to large reduction of the N and P footprints for pumpkins and melons. The reduction effect of N and P footprints in the possibly VF grown group was the lowest.
Results for N and P Use Efficiency
As shown in Figure 4, in conventional farming, the NUE was the highest for broccoli (30%) and the lowest for pumpkin (5%), while the PUE was the highest for melons (23%) and the lowest for asparagus (4%). By introducing VF, NUEs for each vegetable increased to 30-60% in scenario 1, 28-57% in scenario 2, and 41-72% in scenario 3 (Figure 4a), while PUEs increased to 27-53% in scenario 1, 25-50% in scenario 2, and 39-68% in scenario 3 (Figure 4b).
This reveals that the NUE and PUE increased in each scenario when VF was introduced instead of importing food, and that there were significant differences between the NUE and PUE for each vegetable. The NUE for bell peppers, celery, asparagus, broccoli, and Welsh onions was higher than the PUE for these vegetables, whereas the NUE for pumpkin and melons was significantly lower than the PUE in each scenario.
Discussion
The contributions of the reductions in the N and P footprints by introducing VF were considered in terms of the following: the water environment, food self-sufficiency, and disaster-resilient agriculture.
Impact of VF on Averting the Risk of Water Degradation
One of the environmental impacts of agriculture is the pollution caused by N and P losses [85]. In conventional farming, N can be lost to the environment in the forms of N2O, NO3−, or NH3 [86,87]. NH3 is released to the atmosphere through volatilization [78], and NO3− goes to waterways as leaching or runoff [86,88]. N2O is a significant GHG [3]. N2O emissions in agriculture are significant due to the utilization of fertilizers and manure [86]. Together with the P compounds discharged in dissolved and particulate forms by runoff [78], N lost to the environment contributes to the eutrophication of aquatic ecosystems [3,89-92]. One important factor in reducing the N and P loss in production to the environment in VF is the reuse of the nutrient solution. A nutrient solution including N and P is recycled in VF by controlling the nutrient composition rather than discharging the nutrients into the environment. This study confirmed that the total N and P footprints of the target vegetables of conventional farming in Miyagi Prefecture were 992 Mg N year −1 and 198 Mg P year −1 in 2018, and that VF effectively reduced N emissions by 368 Mg N year −1 (37%) and P emissions by 71 Mg P year −1 (36%) (Table 3). This means that the N and P loss in the production of food to be consumed in Miyagi Prefecture can be reduced in remote places by introducing more VF, while maintaining the environment of the local production areas within Miyagi Prefecture, protecting both local and remote ecosystems [93]. The study of reducing N and P footprints by VF provides a reference for N and P emission standards in agriculture. However, the actual application ratios of chemical and organic fertilizers in agricultural production may be higher than the recommended application ratios provided by prefectural governments used in this article. According to the results of this study, the nutrient use efficiencies of conventional farming were low, from 5% to 30% for the NUEs and from 4% to 23% for the PUEs, whereas they can be increased by introducing VF to 41-72% for the NUEs and 39-68% for the PUEs. In earlier studies, the NUE for duckweed in a hydroponics system was increased to 67% from 25% in conventional farming, while the PUE in the hydroponics system was 33% [94,95]. Those for water hyacinth increased to 63% for total N and 79% for total P [94]. Clearly, VF contributes to improving the quality of water by removing N and P from runoff and because there is no leaching water, unlike in conventional agriculture. Therefore, VF is an effective approach to agriculture with a mitigated risk of water-quality degradation.
Impact of VF on Food Self-Sufficiency and Urban Agriculture
The estimated prefectural food self-sufficiencies of the nine target vegetables ranged from 1% (melons) to 83% (Welsh onions) in conventional farming and can be increased by 27-111% by introducing VF.
The results show that VF is an option to reduce N and P footprints and promote self-sufficiency for major food importers such as Japan. It is foreseeable that exclusive reliance on improving conventional farming to ensure food security will one day change due to resource shortage caused by rapid urban expansion and industrial development [96]. As shown in this study, VF, as a type of urban agriculture, allows for food cultivation in areas where farmland is scarce or damaged. VF is a sustainable urban agricultural technique with great potential to improve self-sufficiency in countries lacking in farmland or with barren land [97]. In Singapore, VF was promoted in a new policy in 2019 designed to promote an improvement in self-sufficiency from 10% to 30% by 2030 [98]. In another case study of Lyon, France, the positive environmental and social benefits of VF were highlighted by the increase in self-sufficiency and improved adaptability of the city [99]. In the United States, several VF facilities have been established in Chicago, and the world's largest VF facility is in New Jersey [100]. VF has also become more common in other countries such as Italy and Brazil. Because of the characteristics of soilless cultivation, VF is a viable option in countries with insufficient farmland and also in regions that cannot engage in conventional farming due to the limitations of topography and poor soil fertility. These studies point to a trend that VF will gradually replace conventional farming in urban areas in the future, in effect contributing to higher self-sufficiency and a reduction in land use for agricultural purposes [93]. Potential of VF as a Disaster-Resilient Agriculture Natural disasters such as landslides, heavy rainfalls, and floods, particularly in the rainy season and the typhoon season, threaten the food security of Japan. In the past decade, there have been 10 earthquakes with a magnitude of over 6.0 in Japan, including the massive Great East Japan Earthquake in 2011 [101]. Furthermore, persistent rainstorms and super typhoons have become increasingly common in the years from 2012 to 2020 [101]. The damage to agriculture from massive rainfall events, typhoons, and violent earthquakes in 2018 was estimated at JPY 568 billion (USD 5 billion) including JPY 112 billion (USD 1 billion) in crop production, and it was the second worst year in the decade after 2011 [68]. In another survey, it was reported that the total damage to crop production in the 25 prefectures affected by unseasonably heavy rainstorms in July 2020 was JPY 1.4 billion (USD 12 million) [102]. As global warming accelerates, natural disasters are likely to become more frequent and intense globally. According to an international disaster database, the number of annual disasters in 2018 in developing Asia and the Japan region was the highest since records began in the 1970s [101]. The widespread application of VF is considered a way to offset the damage caused by disaster to the food supply. According to the survey results, which were discussed in Section 2.1.2, the post-disaster promotion of VF was revealed. The main advantage of VF is that the crops are grown completely indoors and are, therefore, unaffected by rain, drought, and most other natural disasters, and the cultivation conditions are controlled [11]. From this perspective, VF can be considered disaster resilient [33]. 
However, it is not reasonable to claim that VF is disaster-proof since disruption to the electricity and water supply due to a violent earthquake or a flood would pose an immediate risk to VF, with both time and cost required for recovery [33]. The risk posed by violent disasters to VF needs to be assessed and efforts to mitigate the potential damage need to be considered. To be less susceptible to disaster, a back-up generator system would enable VF production to continue in the event of electricity outages, for example. After the Great East Japan Earthquake in 2011, VF was widely adopted in agricultural reconstruction efforts in the areas damaged by the earthquake and tsunami. As a response to the devastation of regional agriculture due to these multiple concurrent natural disasters, it was essential to restore agricultural capacity with no further environmental impacts. Based on the results in Section 2.1.2, 14 of the 21 VF operators in Miyagi Prefecture began their operations in the period from 2011 to 2017. This uptake in VF was at almost twice the pace outlined in the Tohoku reconstruction strategy, and the operators have been in business for at least 5 years. The extensive implementation of VF in Miyagi Prefecture provides the opportunity to investigate the specific impacts of VF as a form of agricultural reconstruction. The results of this study confirm the suitability and stability of VF for post-disaster agriculture from the perspective of reducing the N and P footprints and also for its potential to restore agricultural capacity. The suitability of VF as a system to provide locally grown food with a reliable high production rate, with high efficiency, and without occupying farmland has been demonstrated in several studies [94,95,[103][104][105]. It has the potential to be used anywhere, and planting can be done at any time regardless of the location of the VF facility or the season [8,106,107]. For example, in Bangladesh, where cyclones occur frequently, an adaptation will be implemented to both increase productivity and reduce the risks posed by natural disasters to conventional farming [108]. Similar to the case in Japan, in Aceh, Indonesia, which was damaged by the earthquake and tsunami in 2004, VF has been a central part of the postdisaster recovery program, since it is both disaster resilient and sensible for post-disaster scenarios [28]. This reveals that post-disaster agricultural development is a key factor in mitigating the impacts of natural disasters in the future. That is, VF is suitable not only for Miyagi Prefecture in Japan but should be considered a necessary new farm technology for use in scattered islands and other countries with frequent natural disasters, such as Indonesia and Bangladesh, or where food security needs to be improved. Considering the extreme risks to the food supply posed by natural disasters, VF has the potential to decrease the dependence on conventional farming and to accelerate the move toward more sustainable agriculture. Conclusions The N and P footprints of vegetables in VF and conventional agriculture for Miyagi Prefecture were compared based on the change in replacing imported vegetables with production from VF in Japan. In the case of VF, the footprints of the target vegetables were reduced. The N footprint was reduced by 37%, at 363 M g N year −1 , and the P footprint was reduced by 36%, at 71 Mg P year −1 . 
The results indicate that expanding the scale of production in VF has the potential to reduce pollution due to excessive N and P in the aquatic environment, to improve prefectural and even national self-sufficiency, and to prevent water quality decline while saving water resources. The vital role played by VF in the regional agricultural reconstruction of Miyagi Prefecture after the Great East Japan Earthquake in 2011 was also shown. Further analysis revealed that VF is well suited for use in disaster-prone regions in Japan and in other parts of the world. The data provided by this study have potential for use in the formulation of policies designed to reduce N and P emissions by the introduction of VF. In the future, this research can be expanded by conducting a life-cycle analysis of the environmental footprint and carbon emissions of VF and comparing the results with those of conventional agriculture with agricultural imports taken into consideration. Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/su14021042/s1, Table S1: The categories of 36 vegetables consumed in Miyagi Prefecture, Japan, produced via conventional farming, including 9 target vegetables whose import ratios were above average in Japan in 2018.
Using a grey multivariate model to predict impacts on the water quality of the Zhanghe River in China In order to assess the social factors affecting the water quality of the Zhanghe River and predict the potential impact of growth in primary, secondary, tertiary industries and population on water quality of the Zhanghe River in the next few years, a deformation derivative cumulative grey multiple convolution model (DGMC(1,N)) was applied. In order to improve the accuracy of the model, the accumulation of deformation derivatives is introduced, and the particle swarm optimization algorithm is used to solve the optimal order. The DGMC(1,N) model was compared with GM(1,2) and GM(1,1) models. The results show that the DGMC(1,N) model has the highest prediction accuracy. Finally, DGMC(1,N) model is used to predict the potential impact of growth in primary, secondary, tertiary industries and population on water quality in the Zhanghe River (using chemical oxygen demand (COD) as the water quality indicator). INTRODUCTION Due to the continuous development of the economy, growing human activities increase the potential to pollute waterways and degrade the environment. Water pollution not only affects human health and the health of ecosystems, it also restricts social and economic development. Therefore, it is important to understand the relationship between economic development and the pollution of waterways in order to identify water pollution mitigation strategies. In order to improve water quality, extensive research has been conducted on the relationship between water quality and its influencing factors. Kyei & Hassan (2019) analyzed the economic and environmental impact of water pollution taxes in the Olifants Basin, South Africa, using an environmentally scalable general equilibrium model. Nguyen et al. (2018) developed a model to evaluate the relationship between economic activity and water pollution in Viet Nam in order to identify water pollution mitigation strategies within the context of economic development. In the case of the Samarinda River in East Kalimantan in Indonesia, Vita et al. (2018) adopted a field observation method based on interviews of the government of society, industry, public welfare activities along the river and environmental departments, and further used an analytic hierarchy process to establish data, and identify countermeasures for controlling the water pollution. Choi et al. (2015) assessed the relationship between the economic growth in four major river basins in Korea and two key water quality indicators. Li & Lu (2020) tested the impact of regional integration on cross-border pollution under the auspices of the Yangtze River Economic Belt by using the difference-in-difference model. The results showed that regional integration could significantly reduce cross-border water pollution. Liu et al. (2020) used qualitative and quantitative analyses to study the relationship between water pollution and economic growth in the Nansihu River basin in China. Cullis et al. (2019) discuss the increasing risks to water quality in the Begg River basin in South Africa as a result of climate change and rapid urban development, as well as the direct and indirect economic impacts this may have on the agricultural sector. Based on the threat to water quality in the American Midwest posed by agricultural runoff, Floress et al. 
(2017) proposed and tested a structural equation model based on the dual interest theory to test whether, and to what extent, the relationship between awareness and agribusiness attitudes is regulated by management attitudes. Qualitative assessments of the Lake Merrill basin by Pires et al. (2020), which were performed using discriminant analysis methods, concluded that seasonality mainly affected anthropogenic sources such as agricultural activities and household emissions. An assessment of the current state of water quality in Lake Wadi El-Rajan, particularly following the increase in uncontrolled economic activity within its borders, was presented by Goher et al. (2019). Similarly, De Mello et al. (2020) outline the relationship between land use/land cover and water quality in Brazil and its impact on water quality. Kuwayama et al. (2020) examined long-term trends in surface water quality, nutrient pollution and its potential economic impact in Texas, USA while Du Plessis et al. (2015) quantified the complex relationship between land cover and specific water quality parameters and developed a unique model equation to predict water quality in the Grootdraai Dam catchment due to the importance of water quality within the basin to the country's future economic growth. While researchers have analyzed the impact of the economy on water pollution from different perspectives, few predict the impact of economic development on water resources in the future. The accurate prediction of levels of water pollution can inform the identification of countermeasures that are needed in response to the direction of future economic development. Extreme learning machine was used by Saberi-Movahed & Mehrpooya (2020) to predict longitudinal dispersion coefficients and evaluate the pollution status of water pipelines. Najafzadeh & Emamgholizadeh (2019) estimated the biochemical oxygen demand, dissolved oxygen and chemical oxygen demand (COD) using gene expression programming, evolutionary polynomial regression and a model tree while estimated biochemical oxygen demand and chemical oxygen demand using multivariate adaptive regression splines and least squares support vector machines. Mustafa et al. (2021) used support vector machines to build prediction models of water quality in the Kelantan River based on historical data collected from different sites. Xinzi et al. (2020) applied correlation analysis and path analysis to identify the causal relationship between urbanization and water quality indicators, and then comprehensive water quality indicators and related urbanization parameters were input into a back-propagation neural network for water quality prediction. Bao et al. (2020) predicted the water quality index for free surface wetlands by using three soft computing techniques, namely adaptive neurofuzzy systems, artificial neural networks and group data processing. Liu & Wu (2021) used a new adjacent non-homogeneous grey model to predict renewable energy consumption in Europe Bilgaev et al. (2020) analyzed the environmental and socio-economic development indicators of Baikal Island region with the method of constructing time series and structural transfer. Zhang et al. (2020) used the grey water footprint to estimate the different water bodies in 31 provinces (autonomous regions) in China. The poor information principle of the grey system theory was used to predict the rural water environment with a network search method to provide support for rural water environmental governance. Shen et al. 
(2020) compared a residual correction grey model with a grey topology prediction method in order to predict the water quality of the artificial reef area in Haizhou Bay. Yuan et al. (2019) used a fractional-order grey power model to predict water consumption, while Jiang et al. (2019) used grey multivariate forecasting models to predict the long-term electricity consumption of power companies. Sahin (2019) combined linear and non-linear metabolic models with optimization techniques to accurately predict Turkey's greenhouse gas emissions. Zhong et al. (2017) used a grey model optimized with a particle swarm optimization algorithm to predict short-term photovoltaic power generation, which improved the prediction accuracy compared with the traditional grey model. Utkucan (2021) used genetic algorithms to optimize parameters and proposed a new nonlinear grey Bernoulli model for energy analysis and forecasting. Hu (2020) used a grey multivariate forecasting model to predict bankruptcy and a genetic algorithm to reduce the influence of time on the result. Wang & Hao (2016) nonlinearly optimized the background value of the grey convolution model and compared the prediction of industrial energy consumption with the traditional model. Although the research is extensive, few researchers have used a grey multivariable model to analyze the impact of future economic development on water quality. A key feature of a grey model is that it can be used when little information is available. In this study, a grey multivariable model was used to analyze and forecast the water pollution in the upper reaches of the Zhanghe River from the added value of the primary, secondary and tertiary industries and the population. A new accumulation method was adopted to improve the prediction accuracy of the original grey multivariable model. In this paper, Section 2 describes the study area and the indicators. Section 3 outlines the forecasting method, while Section 4 analyzes the impact of local socioeconomic factors on the water quality of an upstream reach of the Zhanghe River in China. The study area The Zhanghe River is a tributary of the Hai River. Its source is located in Shanxi Province, and the river flows through Shanxi, Hebei and Henan Provinces. The upper reaches of the river mainly comprise two tributaries, namely the Qingzhanghe River and the Zhuozhanghe River. The study area is shown in Figure 1. Water quality indicator and data When assessing the level of pollution of a river, chemical oxygen demand is an important and quickly determinable indicator of organic pollution. The COD is a measure of the water quality of a river: the higher the COD level, the greater the degradation of water quality. The COD value is reported once a month, and this article takes the average of the 12 monthly values as the research object. The COD levels in the upper reaches of the Zhanghe River from 2013 to 2018 were reported by the Handan Ecological Environment Bureau. Primary industry is the foundation of the national economy, while secondary industry is a leading industry of the national economy and tertiary industry is the key to providing employment in China. Population is the main indicator of social factors. The data on socio-economic indicators were obtained from the 'China County Statistical Yearbook' and the 'Hebei Economic Yearbook' from 2013 to 2018. Consequently, the relationship between the socio-economic indicators and COD levels was analyzed. Deformable grey multivariable convolution (DGMC) model The DGMC(1,N) model replaces the conventional first-order accumulation of the original sequences with a tunable a-order accumulation.
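The accumulation step that grey models build on can be made concrete with a short sketch. The snippet below is an illustration in Python, not the authors' code, and the sample series is hypothetical; it implements the conventional first-order accumulated generating operation and its inverse, which the DGMC(1,N) model generalizes by making the accumulation order a a tunable parameter.

```python
import numpy as np

def ago_1(x0):
    """First-order accumulated generating operation (1-AGO):
    x1(k) = x0(1) + x0(2) + ... + x0(k)."""
    return np.cumsum(np.asarray(x0, dtype=float))

def iago_1(x1):
    """Inverse 1-AGO: restore the original sequence by first differences."""
    x1 = np.asarray(x1, dtype=float)
    return np.concatenate(([x1[0]], np.diff(x1)))

# Hypothetical annual observations, used only to exercise the functions.
x0 = np.array([10.0, 12.0, 13.5, 15.0, 17.0])
x1 = ago_1(x0)
assert np.allclose(iago_1(x1), x0)
print(x1)  # [10.  22.  35.5 50.5 67.5]
```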
Considering the definition of the deformable derivative (Wu & Zhao 2019), the a-order accumulation (0 < a ≤ 1) generalizes the conventional first-order accumulated generating operation, with the order a treated as a tunable parameter. The DGMC(1,N) modelling process is described below. (1) The non-negative system sequence is x1^(0) = (x1^(0)(1), x1^(0)(2), ..., x1^(0)(n)), and the sequences of the related factors are xi^(0) = (xi^(0)(1), xi^(0)(2), ..., xi^(0)(n)), i = 2, 3, ..., N; applying the a-order accumulation to each sequence gives xi^(a). (2) The a-order DGMC(1,N) equation is dx1^(a)(t)/dt + b1·x1^(a)(t) = Σ(i=2..N) bi·xi^(a)(t) + u, where b1, b2, ..., bN and u are the parameters to be estimated. (3) The parameters can be obtained by using the least squares method, which minimizes the sum of the squared residuals; writing the discretized equations in matrix form Y = Bθ, the unknown parameters can be solved by θ̂ = [b1, b2, ..., bN, u]^T = (B^T·B)^(-1)·B^T·Y. (4) The time response formula is obtained from the whitening equation, with the convolution of the driving term evaluated numerically by the Gauss (trapezoid) formula; this yields the fitted sequence x̂1^(a). (5) The a-order accumulative reduction (the inverse of the a-order accumulation) then restores the predicted sequence x̂1^(0). The model is evaluated using the mean absolute percentage error (MAPE), as follows: MAPE = (1/n)·Σ(k=1..n) |x̂1^(0)(k) − x1^(0)(k)| / x1^(0)(k) × 100%. 2.4. GM(1,1) (1) A non-negative original sequence is x^(0) = (x^(0)(1), x^(0)(2), ..., x^(0)(n)) (Yin et al. 2017), and its first-order accumulation is x^(1). Then the differential (whitening) equation of GM(1,1) is dx^(1)(t)/dt + a·x^(1)(t) = m. (2) With â = [a, m]^T the parameter vector to be estimated, the least squares estimation minimizes the sum of the squared residuals, and the unknown parameters can be solved by â = (B^T·B)^(-1)·B^T·Y. (3) The time response formula is x̂^(1)(k+1) = (x^(0)(1) − m/a)·e^(−ak) + m/a, and the predicted original values are restored by the accumulative reduction x̂^(0)(k+1) = x̂^(1)(k+1) − x̂^(1)(k). 2.5. GM(1,2) (1) GM(1,2) represents a first-order differential equation with two variables (Li et al. 2016): dx1^(1)(t)/dt + a·x1^(1)(t) = b·x2^(1)(t). (2) With â = [a, b]^T the parameter vector to be estimated, the parameters are again obtained by the least squares method, â = (B^T·B)^(-1)·B^T·Y. COMPARATIVE PREDICTION ACCURACY DGMC(1,2), GM(1,2) and GM(1,1) models were fitted to the annual COD concentrations for the period 2013 to 2018 reported by the Handan Ecological Environment Bureau. The COD concentrations and the model fitting results are shown in Table 1 and Figure 2. The MAPE value for the DGMC(1,2) model is 4.9%, in comparison to 31.8% for the GM(1,2) model and 5.7% for the GM(1,1) model. Compared with the traditional grey models, the DGMC(1,2) model improves the prediction accuracy. Consequently, the DGMC(1,2) model was used to predict the annual average COD values. THE INFLUENCE OF SOCIAL DEVELOPMENT ON WATER QUALITY The next step was to assess the relationship between COD levels and primary, secondary, and tertiary industries and population, respectively. In order to forecast results, a value for the growth rate of the added value of primary industry was estimated. In the past seven years, the contribution of China's primary industry to GDP has been 0.3%. Handan City is a fourth-tier city with a large population that relies mainly on agriculture, forestry, animal husbandry and fishery. For the period 2013-2018, the calculated growth rates of the value added of the primary industry were 4.76%, 3.45%, −4.66%, −8.46% and −8.08%, respectively. Consequently, the assumed value-added growth rate of the primary industry was taken to be between 5% and −20%. From the estimated growth rate of the primary industry added value, the primary industry's added value for the period 2019-2022 was estimated, and the DGMC(1,2) model was used to predict the annual average COD value for 2019-2022. When the growth rate is 5%, x̂1^(0) = {11.69, 17.81, 28.38, 45.06}. When the growth rate is −20%, x̂1^(0) = {9.53, 8.65, 6.18, 1.95}. As shown in Figure 3, the predicted values of COD rise when the value-added growth rate of the primary industry is 5%. Likewise, the predicted values of COD fall when the rate of the added value of the primary industry falls, and the water quality improves.
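As a concrete companion to the accuracy comparison above, the sketch below fits a textbook GM(1,1) model by least squares and reports its MAPE over the fitting period. It is a generic implementation of the standard formulation, not the authors' code, and the sample COD series is hypothetical.

```python
import numpy as np

def gm11_fit(x0):
    """Fit a standard GM(1,1): x0(k) + a*z1(k) = m, where z1(k) is the mean
    of consecutive 1-AGO values (the background values)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])
    B = np.column_stack((-z1, np.ones_like(z1)))
    Y = x0[1:]
    a, m = np.linalg.lstsq(B, Y, rcond=None)[0]
    return a, m

def gm11_predict(x0, a, m, steps_ahead=0):
    """Time response: x1_hat(k+1) = (x0(1) - m/a)*exp(-a*k) + m/a,
    restored to the original scale by first differences."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0) + steps_ahead
    k = np.arange(n)
    x1_hat = (x0[0] - m / a) * np.exp(-a * k) + m / a
    return np.concatenate(([x1_hat[0]], np.diff(x1_hat)))

def mape(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(predicted - actual) / actual) * 100.0)

# Hypothetical annual COD series (mg/L), for illustration only.
cod = [9.1, 8.4, 7.9, 7.6, 7.8, 8.2]
a, m = gm11_fit(cod)
fit = gm11_predict(cod, a, m)
print(f"a = {a:.4f}, m = {m:.4f}, MAPE = {mape(cod, fit):.2f}%")
```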
When the growth rate was 1%, 3% and 10%, respectively, the COD predicted by the model showed an increasing trend. If the growth rate is 10%, the COD levels would reach 65 mg/L by 2022 (in the absence of any pollution reduction strategies). In order to further analyze this phenomenon, it is necessary to understand the added value of primary industry in each of the six counties across the upper reaches of the Zhanghe River. The added value of the primary industry in the six counties (Shexian, Cixian, Weixian, Daming, Linzhang and Cheng'an) from 2013 to 2018 and the resulting COD levels are shown in Table 3. The COD values for the six counties predicted by the new DGMC(1,2) model are given in Table 4. It can be seen from Table 4 that the MAPE values for the six counties are all less than 10%. Assuming that the growth rate of primary industry in the six counties is the same as the overall growth rate, i.e., between 5% and −20%, the predicted impact on COD in the six counties is shown in Figure 4. Handan is an underdeveloped city, where the proportion of the primary industry is large. Therefore, the development of primary industry will not be reduced simply in order to reduce the pollution of the rivers. Consequently, there will need to be a focus on optimizing the agricultural industrial structure, adjusting the agricultural structure along the river, strengthening publicity and supervision, and eliminating water pollution at its source, possibly through reducing the area of cultivated land along the river and the use of pesticides and fertilizers. Predicting COD under secondary industry The added value of the secondary industry from 2013 to 2018 is shown in Table 5. Following the calculation procedure set out in Section 4.1, the results are shown in Table 6. The MAPE of the DGMC(1,2) model is 7.35%. The growth rates of the added value of the secondary industry from 2013 to 2018 were 0.17%, −1.09%, 0.15%, 19.94% and −19.96%. The results predicted for −5.0%, 5.0%, 10.0% and 15.0% growth rates are shown in Figure 5. When the growth rate was 5%, the predicted COD value was 10.5 mg/L in 2022, and the overall trend was increasing. When the growth rate is 10%, the COD value is 13.33 mg/L by 2022, which does not exceed the national standard of 20 mg/L. When the growth rate is 15%, the COD value will be 16.46 mg/L by 2022. However, when the growth rate is −5%, COD shows a downward trend and will drop to 5.68 mg/L by 2022. Given this potential impact on COD, it is meaningful to study the structure of secondary industry. Secondary industry includes industry and construction, which are heavily polluting sectors. The industrial value added and construction value added from 2013 to 2018 (data source: Hebei Economic Yearbook) are given in Table 7. The COD values predicted by the DGMC(1,2) models are shown in Table 8. The MAPE of COD is less than 10% for both industry and construction. Applying the adopted secondary industry growth rates to industrial growth, it can be seen from Figure 6 that when the industrial growth rate is 5%, COD would reach 14.30 mg/L in 2022, while if the construction growth rate is 5% then COD would reach 8.87 mg/L in 2022. Assuming growth rates of 10% for industry and construction, the COD would reach 19.51 mg/L and 9.80 mg/L in 2022, respectively. Assuming that the growth rates of the industrial and construction industries are both 15%, the COD would reach 25.28 mg/L and 10.84 mg/L in 2022, respectively.
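The scenario analysis described above can be reproduced schematically: a driver series (for example, industrial added value) is extrapolated at an assumed compound growth rate and fed to a fitted two-variable model to obtain future COD. The sketch below only illustrates the mechanics; `fitted_model_predict` is a stand-in for the calibrated DGMC(1,2) predictor, and all numbers are hypothetical.

```python
import numpy as np

def project_driver(last_value, growth_rate, years):
    """Extrapolate a driver series at a constant compound growth rate."""
    return np.array([last_value * (1.0 + growth_rate) ** t
                     for t in range(1, years + 1)])

def fitted_model_predict(driver_future):
    """Placeholder for a calibrated DGMC(1,2)-style predictor that maps the
    projected driver series to COD (mg/L). A toy linear response is used
    purely so the scenario loop runs end to end."""
    return 5.0 + 0.02 * driver_future

# Hypothetical 2018 added value (100 million yuan) and growth-rate scenarios.
last_added_value = 180.0
for rate in (-0.20, -0.05, 0.05, 0.10):
    driver_2019_2022 = project_driver(last_added_value, rate, years=4)
    cod_2019_2022 = fitted_model_predict(driver_2019_2022)
    print(f"growth {rate:+.0%}: predicted COD 2022 = {cod_2019_2022[-1]:.2f} mg/L")
```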
Handan City attaches great importance to the adjustment of industrial structure, vigorously implements an innovation-driven development strategy, and advances supply-side structural reforms in depth, focusing on the transformation of traditional industries, the cultivation of strategic emerging industries, and the development of modern service industries. These results indicate that the impact of the construction industry on water quality in the upper reaches of the Zhanghe River is modest and not as great as the impact of industrial growth. From the perspective of water quality, this indicates that it would be beneficial to reduce the proportion of industrial development and that investment in the construction industry could be increased. Predicted COD under tertiary industry Table 8 shows the added value of tertiary industry from 2013 to 2018. Following the calculation procedure set out in Section 4.1, the results are shown in Table 9. The MAPE of the DGMC(1,2) model is 6.66%. The growth rates of the added value of tertiary industry from 2013 to 2018 were 1.41%, 7.72%, −5.54%, 17.92% and −1.03%. The results predicted for −10%, −5%, 5%, 10% and 15% growth rates are shown in Figure 7. Under a growth rate of −5%, the COD would be 6.24 mg/L by 2022. Under a growth rate of −10%, the COD would be 4.23 mg/L by 2022. The trend in COD is consistent with the growth rate. Under a growth rate of 15%, the predicted COD by 2022 would be 20.86 mg/L, which exceeds the national standard of 20 mg/L. This indicates, from a water quality perspective, that tertiary industry growth rates of up to 15% could be sustained up to 2023, but that adverse impacts would arise in subsequent years depending on the growth rate. Table 10 shows the annual average population from 2013 to 2018. It can be seen from the data that the population first increases and then decreases, and is basically stable. Following the calculation procedure set out in Section 4.1, the results are shown in Table 11. The MAPE of the DGMC(1,2) model is 2.24%. The growth rates of the population from 2013 to 2018 were 1.21%, 1.67%, −4.36%, −0.19% and 0.69%. From the data from 2013 to 2018, it can be concluded that the population has a trend of slow decline. Because the population of China is influenced by national policies, growth rates of −5% and 5% were assessed. The results are shown in Figure 8. It can be seen from Figure 8 that when the population growth rate is −5%, the COD experiences a stable decline. When the growth rate is 5%, the COD rises continuously, reaching 53.42 mg/L by 2022. This indicates that if the population growth rate of the six counties of Handan City is 5%, then this population growth would have a significant adverse impact on water quality in the Zhanghe River in the absence of any additional pollution reduction strategies. CONCLUSIONS Most research on water quality issues involves multivariate models. Through comparative analysis of the DGMC(1,2), GM(1,2) and GM(1,1) models, it was concluded that the DGMC(1,2) model was able to analyze and predict the water quality of the Zhanghe River from 2013 to 2022, using COD as the indicator of water quality, to a high level of accuracy. The DGMC(1,2) model was used to analyze the relationship between COD in the upper reaches of the Zhanghe River and the added value of the primary, secondary and tertiary industries as well as population.
It was found that growth of the primary, secondary and tertiary industries, as well as of the population, would all adversely impact water quality in the absence of any additional pollution reduction strategies. Under an assumed growth rate of 5%, the ranking of the adverse impact on COD in 2022 (highest to lowest) would be population (53.42 mg/L), primary industry (45.06 mg/L), industrial development (secondary) (14.30 mg/L), tertiary industry (11.09 mg/L), and the construction industry (secondary) (8.87 mg/L). While the model can also be used to inform decision makers in other cities of the primary sources of water quality problems in rivers and to help local governments focus broadly on pollution reduction strategies, the uncertainties of the social economy and the limitations of the model mean that more detailed models calibrated to local conditions should be used to develop pollution reduction strategies which have the greatest potential to deliver environmental benefits while sustaining the economy.
v3-fos-license
2018-09-05T17:18:21.982Z
2018-08-23T00:00:00.000
52208677
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.preprints.org/manuscript/201808.0248/v1/download", "pdf_hash": "e441d873ab23ca87c42560531e6615792cfcc45c", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2800", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Physics" ], "sha1": "e441d873ab23ca87c42560531e6615792cfcc45c", "year": 2018 }
pes2o/s2orc
A novel explicit model for photovoltaic I-V characteristic prediction based on different splitting spectra 1Optics and Optical Engineering Department, USTC (University of Science and Technology of China), Hefei, China 2Institute of Advanced Technology, Hefei, China Abstract: Under different operating climatic conditions, predicting the electrical behavior of photovoltaic modules becomes very important. This is an essential and basic aspect of estimating the output power of photovoltaic (PV) plants. In this paper, the relationship between the I-V curve and the irradiation spectrum is discussed in combination with the single diode model. An explicit elementary analytical model with two defined shape parameters is discussed and improved with three approximations and a second-order Taylor expansion. Then, the explicit elementary analytical model is investigated under varying conditions leveraging the four parameters Iph, I0, Rs and Rsh from the single diode model. The relationships between the physical parameters and the condition parameters are discussed and applied to extract the shape parameters in different scenarios. Considering the aging effect, the calculation process to predict the I-V curve under different splitting spectra is simplified as follows: (1) the two shape parameters are obtained from the I-V data at measurement reference conditions (MRC); (2) the short circuit current, open circuit voltage and shape parameters under any splitting spectrum can be calculated based on the relationships provided in this article; (3) the performance of the PV panel can be predicted with these parameters. The validity of this model was proven experimentally using a monocrystalline silicon photovoltaic module with different splitting films. Results showed that the model accurately predicts the I-V characteristics of the examined PV modules at different irradiance spectra and cell temperatures. Moreover, the presented model performs better than the other investigated models in terms of accuracy and simplicity.
Introduction Photovoltaic power generation technology has greatly improved and been widely used all over the world since it was invented [1]. Because the photovoltaic cell cannot utilize all the wavelengths of sunlight effectively, there are many applications of systems combining several kinds of photovoltaic cells to achieve a higher system efficiency. The idea of utilizing specific ranges of the sunlight spectrum for various kinds of photovoltaic cells was first presented in 1955 by Jackson [2] and first experimentally demonstrated in 1978 by Moon et al. [3]. Many studies were carried out in this field to improve the efficiency of photovoltaic systems [4][5][6][7]. There are also many papers about Hybrid Photovoltaic (PV)-Thermoelectric (TE) systems, which are another kind of splitting technology that has emerged in recent years. The PV-TE systems make part of the solar radiation available to the PV generating system [8]. The rest of the solar radiation is concentrated on the TE system for producing electricity by the thermoelectric effect. Thus the PV-TE systems further reduce the heat at the solar cells and improve the efficiency of the whole system [9,10]. Photovoltaic power generating systems are also combined with concepts other than power generation. For example, a photovoltaic-greenhouse system has been proposed by Sonneveld et al. [11,12]. In this case, photosynthetically active radiation is transmitted through the film that is coated on the glass roof of greenhouses for plant growth. The film has a total reflection in the near infrared (NIR) region. Therefore, solar panels can leverage the NIR that is reflected and concentrated for power generation. For systems with different combinations, the spectra of beam splitting are quite diverse. This situation requires an effective and precise way to predict the I-V curve of the PV panel under different irradiation spectra. To get the I-V curve for a PV panel, a circuital equivalent model is needed. Among the models used in the literature for PV panel simulation, the one-diode equivalent representation model is more common than other models such as two/three-diode equivalent representation models. This single diode model is also well known as the five-parameter model, since the current-voltage (I-V) curve in this model is determined by five parameters [13][14][15]. The equation received directly from the single diode model is a transcendental equation that is implicit. Therefore, the exact analytic solution of the I-V curve cannot be obtained directly. Many approaches have been carried out to extract the parameters in the single diode model [16,17]. Because of the complexity of the implicit model, the calculation of the I-V curve normally requires the parameters from the manufacturer's data sheet, and it takes more time to approach the output current with the voltage of PV cells or panels. Many methods for the analytical explicit model of the I-V characteristic have been investigated in the last two decades [18][19][20][21][22][23][24][25]. One kind of these methods expresses the I-V characteristic based on the Lambert W-function. This method is an exact expression derived from the physical model [15,26]. The other methods follow an elementary approach. The models based on these methods are more widely used in practical applications because of their simplified form. Karmalkar et al.
presented an explicit model with defined shape parameters. They combined the explicit models and the single diode model to identify the relationship between the five physical parameters and the defined shape parameters [18,21]. With this relationship, the five physical parameters can be calculated or numerically approximated with just a few measurements. Furthermore, the whole I-V curve and the maximum operating point of the PV panel can be determined without tracking the output current and voltage from short circuit to open circuit. PV systems are used under different conditions, considering mainly changing temperature and irradiation. It is well known that the efficiency of PV cells decreases with increasing temperature and decreasing light intensity. There are many studies discussing the mathematical model of this phenomenon. In PV systems with spectral separation, the spectral response of the solar cells is typically used only to determine the photocurrent [5]. The changing irradiation spectrum is converted into a changing light intensity so that the photocurrent can be obtained under different spectral conditions [27][28][29]. The relationship between the five parameters and the operating conditions can be used to develop a model to predict the I-V curve. Yunpeng Zhang et al. put forward a flexible and reliable method leveraging a few measurements at measurement reference conditions. This method is convenient for practical application with the physical parameters changing over the photovoltaic panel's working life [30]. In this paper, we propose a simplified elementary method for a PV panel which can predict the I-V characteristic under varying spectral conditions with two defined shape parameters. This explicit expression is directly derived from a physical model and reduces errors compared to previous methods. The two defined shape parameters are expressed directly by the values at standard reference conditions (SRC) or measurement reference conditions (MRC) and the solar spectrum after separation. The relationship between the shunt resistance and the irradiation spectrum is discussed in combination with the spectral response. Considering the aging effect, the calculation process to predict the I-V curve under different splitting spectra is simplified as follows: (1) the two shape parameters are obtained from the I-V data at measurement reference conditions (MRC); (2) the short circuit current, open circuit voltage and shape parameters under any splitting spectrum can be calculated based on the relationships provided in this article; (3) the performance of the PV panel can be predicted with these parameters. At the end of the paper, the model is validated through experiments with seven kinds of films. The reliability of the model is proved by the results of the validation experiments. The single diode model is discussed and simplified in parts 2 and 3. Then the method is discussed in part 4 with varying conditions, especially with different spectrum splitting. The method is validated and the results of the experiment are discussed in part 5. Single diode model The physical model of a solar cell in a solar panel can be described by the single diode model shown in Fig. 1.
The model contains a light-induced current source, an ideal diode, a series resistance and a shunt resistance. The light-induced current source provides photocurrent when the irradiation reaches the surface of the solar cells, based on photoelectric conversion. As is well known, this model has five parameters: photocurrent (Iph), shunt resistance (Rsh), series resistance (Rs), saturation current under reverse bias (I0) and the ideality factor of the diode (n). With those five parameters, the equation of the I-V curve for a solar cell is I = Iph − I0·[exp((Vcell + I·Rs,cell)/(n·VT)) − 1] − (Vcell + I·Rs,cell)/Rsh,cell (1). In equation (1), VT is equal to kT/q, in which k is the Boltzmann constant (1.381 × 10^-23 J/K) and q is the electronic charge (1.602 × 10^-19 C). T is the absolute temperature in Kelvin, which is 298.15 K at SRC. The parameters in the equation for a solar panel with Ns identical solar cells in series can be modified as in equations (2) and (3): the voltage and the resistances scale with Ns, i.e., V = Ns·Vcell, Rs = Ns·Rs,cell and Rsh = Ns·Rsh,cell. Equations (2) and (3) are suitable for various kinds of solar cells, including monocrystalline silicon and polycrystalline silicon. In a series circuit, the total current is equal to the current of each component. The equation of the I-V curve for a solar panel with Ns identical solar cells in series therefore takes the form of equation (5): I = Iph − I0·[exp((V + I·Rs)/(Ns·n·VT)) − 1] − (V + I·Rs)/Rsh. Explicit elementary analytical model with two defined shape parameters Defining the normalized voltage v and the normalized current i via the short circuit current Isc and the open circuit voltage Voc gives v = V/Voc (6) and i = I/Isc (7). Putting equations (5) and (6) into equation (7), equation (7) can be expressed in the form of equation (8). To simplify equation (8), three approximations are possible. Ⅰ. Approximation 1: because the series resistance Rs is much smaller than the shunt resistance in solar cells (normally Rs/Rsh < 10^-3), the term containing Rs/Rsh is far less than 1 and approximately equal to 0, so it can be ignored. Ⅱ. Approximation 2 concerns the term containing I0/Isc. Considering the short circuit condition, with V = 0 placed into equation (5), this part can be expressed as equation (10). For most solar cells, the value of the saturation current is much lower than the output current, so that I0/Isc << 1 (normally I0/Isc < 10^-4). Furthermore, the value of Iph is normally approximately equal to the value of the output current Isc, so Iph/Isc ≈ 1. Thus, this term can be approximately ignored. Ⅲ. Approximation 3 concerns the remaining exponential term. After transforming into equation (11), this term is treated as a voltage-independent constant. The error caused by approximation 3 is the main error in this model; however, the error can be reduced by adjusting the expression of this term. After these three approximations, equation (8) can be changed into equation (12), and it is easy to see that equation (12) can be simplified with two defined parameters (the linear parameter γ and the exponent parameter m) as equation (13). The exponential term of equation (13) can be expanded in a Taylor series, giving equation (14). If equation (14) is approximated to first order, it can be written as equation (15): i = 1 − (1 − γ)·v − γ·v^m. This form of the explicit I-V model is provided and discussed by Karmalkar et al. [18,31].
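Because the single diode equation is implicit in I, obtaining a point on the I-V curve normally requires a numerical root search. The snippet below solves the panel-level equation (5) for the current at a given voltage with a bracketing root finder; the parameter values are hypothetical placeholders, not those of the module studied here, and the bracket is an assumption that holds for physically reasonable parameters.

```python
import numpy as np
from scipy.optimize import brentq

k_B = 1.380649e-23      # Boltzmann constant, J/K
q_e = 1.602176634e-19   # elementary charge, C

def panel_current(V, Iph, I0, Rs, Rsh, n, Ns, T=298.15):
    """Solve the implicit single diode equation for a panel of Ns
    series-connected cells:
    I = Iph - I0*(exp((V + I*Rs)/(Ns*n*VT)) - 1) - (V + I*Rs)/Rsh."""
    VT = k_B * T / q_e

    def residual(I):
        return (Iph
                - I0 * (np.exp((V + I * Rs) / (Ns * n * VT)) - 1.0)
                - (V + I * Rs) / Rsh
                - I)

    # The physical current lies between a slightly negative value and Iph.
    return brentq(residual, -0.5, Iph + 0.5)

# Hypothetical parameters for an 8-cell module (illustration only).
params = dict(Iph=5.6, I0=1e-9, Rs=0.05, Rsh=300.0, n=1.2, Ns=8)
for V in (0.0, 2.0, 4.0, 4.8):
    print(f"V = {V:.1f} V  ->  I = {panel_current(V, **params):.3f} A")
```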
It allows an easy closed-form estimation of the entire I-V curve. The predictability of this method was verified, and its scope was expanded to a wide range of solar cells made out of various materials [32]. The two defined shape parameters can be derived from a few measured I-V points as well as from the five physical parameters in the single diode model. In this analytical method there is an explicit elementary expression with two shape parameters for the fill factor and the maximum power point. This method has been proved to fit better than other methods for describing the performance of a PV panel [33]. The method can be widely used in practical applications because of the easy calculation, avoiding the difficulty of measurements and numerical approximation in parameter extraction. This method can be improved by adding the second-order term in the Taylor expansion. In this way, equation (13) can be changed into equation (16). In this explicit I-V form, the normalized current imp and the normalized voltage vmp at the maximum power point can be obtained from the condition d(i·v)/dv = 0, and imp and vmp are then given by equations (19) and (20). Equations (16), (19) and (20) can express the I-V characteristic and the maximum power point with Isc and Voc, while Isc and Voc can be measured directly. Some manufacturers provide the five physical parameters of the solar cell at SRC. Of course, the two shape parameters m and γ can be extracted from the five physical parameters containing all the information of the I-V curve. The linear parameter γ is defined by equations (12) and (13) as equation (21). The exponent parameter m is defined in approximation 3 (see above), and its value can be determined by the derivative at the open-circuit point of the I-V curve to reduce the error, as mentioned in Karmalkar (2008). Putting equations (5) and (16) into equation (22), the expression of m follows as equation (23). Furthermore, the value of m is adjusted by a calibration parameter θ in the explicit model of Karmalkar (2008, 2009) [18,31] in order to reduce the error which cannot be avoided between the implicit model and the explicit model. The expression for the parameter m with θ was changed into equation (24). θ is an empirical value without any physical meaning, and its value can be approximately represented by θ ≈ 0.77·imp·γ. This expression contains the current ratio at the maximum power point, imp, so it cannot be used for the prediction. Considering both the short circuit condition and the open circuit condition in equation (5), Isc and Voc are expressed by the five physical parameters in equations (25) and (26). Explicit elementary analytical model under varying conditions In the single diode model, four parameters, Iph, I0, Rs and Rsh, are considered to be related to temperature and irradiation intensity. Defining the ratios of these four parameters at different conditions with varying temperature and irradiation, Kph = Iph/Iph^MRC, K0 = I0/I0^MRC, and the analogous ratios for Rs and Rsh (equations (27)-(30)), the expressions of the short-circuit current Isc, the open-circuit voltage Voc and the two shape parameters m and γ can be changed into equations (31)-(34), derived from equations (21), (23), (25) and (26) with the values of the parameters at MRC and equations (27)-(30). The short-circuit current Isc^MRC and the open-circuit voltage Voc^MRC at MRC can be measured directly.
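The closed-form character of the two-parameter model can be illustrated directly. The sketch below evaluates the first-order explicit form i = 1 − (1 − γ)·v − γ·v^m and locates the normalized maximum power point by a simple grid search; the values of γ, m, Isc and Voc are hypothetical, and the grid search stands in for the closed-form expressions (19) and (20), which are not reproduced here.

```python
import numpy as np

def explicit_iv(v, gamma, m):
    """First-order explicit model: i = 1 - (1 - gamma)*v - gamma*v**m,
    with i = I/Isc and v = V/Voc."""
    return 1.0 - (1.0 - gamma) * v - gamma * v ** m

def normalized_mpp(gamma, m, num=20001):
    """Locate the normalized maximum power point by maximizing p = i*v."""
    v = np.linspace(0.0, 1.0, num)
    p = explicit_iv(v, gamma, m) * v
    k = int(np.argmax(p))
    return v[k], explicit_iv(v[k], gamma, m), p[k]

# Hypothetical shape parameters and panel ratings (illustration only).
gamma, m = 0.88, 12.0
Isc, Voc = 5.6, 5.4          # A, V
v_mp, i_mp, p_mp = normalized_mpp(gamma, m)
print(f"v_mp = {v_mp:.3f}, i_mp = {i_mp:.3f}, Pmax ≈ {p_mp * Isc * Voc:.2f} W")
```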
Considering that the physical parameters can change during the working life [33], the shape parameters m^MRC and γ^MRC at MRC are extracted from two simple measurements, i at v = 0.4 and v at i = 0.4, which are chosen differently from Karmalkar (2008) because (v² + 1)/2 > v; the corresponding expressions are given in equations (35) and (36). Therefore, by using equations (35) and (36), there is only one parameter, n, left to identify the I-V curve under varying conditions. The parameter n is independent of the temperature and irradiation, so it can be obtained from the datasheet provided by the manufacturer as well as from the I-V curve at MRC [32]. The maximum power point (vmp, imp) at MRC can be extracted from formulas (19) and (20) and the measurements at MRC. Therefore, the identification of the I-V characteristic at varying conditions turns into the question of how to obtain the relationship of the physical parameters between MRC and varying conditions. When the spectra of the irradiation are different because of the spectral separation Spl(λ) in power generating systems, the change must be considered together with the spectral response of the solar panels Rs(λ) and the solar irradiation spectrum S(λ). The spectral ratio parameter k1 is defined from these three spectra; the ratio of the photocurrent (Kph) and the ratio of the saturation current (K0) can then be extracted together with the temperature [34]. In equation (39), the slope of the short-circuit current with temperature, αph, can be extracted from the slope of the voltage with temperature, βoc, both of which are provided by some manufacturers. For the shunt resistance of the solar cells, the ratio Ksh is usually considered to be S^MRC/S [22,24]. However, some papers show that most of the shunts are process-induced, such as edge shunts, cracks, holes, scratches or aluminum particles, rather than material-induced shunts [37,38]. These effects depend on the carriers in the solar cells, which are related to the irradiation spectrum and the spectral response. In Ruschel C S (2016) [38], the relationships between shunt resistance and irradiation intensity for solar cells made from various materials are presented in different ways. Under the monocrystalline fitting conditions, the ratio expression for the shunt resistance Ksh with different irradiation spectra can be expressed as a fitted function of the effective irradiation. In many papers, the series resistance Rs is considered a constant which is independent of temperature and irradiation. However, some studies show that the series resistance decreases with irradiation intensity [39,40]. The expression of the relationship between Rs and S has not been provided in these papers, so it should be discussed and simulated before any prediction is done. 5. Validation The validation experiments were performed with a monocrystalline silicon photovoltaic module. The m^MRC and γ^MRC can be extracted from equations (35) and (36) with the corresponding measured points. The shape parameters of the measurement data at MRC conditions are listed in Table 1. It can be verified in Figure 2 that the improved model with the second-order approximation is more accurate than the model with the first-order approximation discussed in Karmalkar (2008).
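To make the identification step concrete, the sketch below extracts γ and m from the two measured points (i at v = 0.4 and v at i = 0.4) by numerically solving the first-order explicit model for the two unknowns, and computes a spectral ratio k1 as a response-weighted transmission of the splitting film. Both the extraction route and the k1 definition are our illustrative assumptions standing in for the paper's equations, and all input data are hypothetical.

```python
import numpy as np
from scipy.optimize import fsolve

def explicit_iv(v, gamma, m):
    # First-order explicit model: i = 1 - (1 - gamma)*v - gamma*v**m
    return 1.0 - (1.0 - gamma) * v - gamma * v ** m

def extract_shape_params(i_at_v04, v_at_i04, guess=(0.85, 10.0)):
    """Solve the two explicit-model equations for (gamma, m) from the two
    measured points; an assumed stand-in for the paper's Eqs. (35)-(36)."""
    def equations(p):
        gamma, m = p
        return (explicit_iv(0.4, gamma, m) - i_at_v04,
                explicit_iv(v_at_i04, gamma, m) - 0.4)
    return fsolve(equations, guess)

def spectral_ratio_k1(S, Spl, R):
    """Assumed definition: filtered, response-weighted irradiance integral
    divided by the unfiltered one (uniform wavelength grid)."""
    return float(np.sum(S * Spl * R) / np.sum(S * R))

# Hypothetical measured normalized points at MRC.
gamma, m = extract_shape_params(i_at_v04=0.952, v_at_i04=0.952)
print(f"gamma = {gamma:.3f}, m = {m:.2f}")

# Hypothetical spectra on a uniform 400-1100 nm grid (illustration only).
wl = np.linspace(400.0, 1100.0, 200)
S = np.exp(-((wl - 650.0) / 300.0) ** 2)        # toy irradiance spectrum
Spl = np.clip((wl - 450.0) / 400.0, 0.0, 1.0)   # toy film transmission
R = np.clip((wl - 400.0) / 700.0, 0.0, 1.0)     # toy spectral response
print(f"k1 = {spectral_ratio_k1(S, Spl, R):.3f}")
```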
The proportional parameter k1, the irradiation S, the temperature T and the series resistance Rs were determined for each film. The predictions of the I-V curve and the P-V curve for each film calculated by our improved model are compared with the predictions of the model in Yunpeng (2017), which used the explicit expression proposed in Karmalkar (2008, 2009) (see above) with the corresponding definition of the shape parameter m and the first-order approximation of equation (13) after the Taylor expansion. The error caused by the exponential term when v approaches 1 or 0 is not as high as that around the maximum power point. In the improved model, the exponential term in the explicit expression is approximated to second order instead of only first order, as used in the explicit model of Karmalkar (2008, 2009). Furthermore, in Yunpeng (2017) the error is also caused by the value of Rs, which is considered a constant independent of the irradiation intensity. In the improved explicit model, the value of Rs is well fitted, so that the value of the parameter m is more accurate, as shown in Table 4. Conclusion In this paper we present an explicit I-V model for a PV panel based on the single diode model under different irradiation spectra. The power law of I-V curves presented in Karmalkar (2008, 2009) is discussed and derived by combining the single diode model with three approximations. The explicit analytical model is improved into a simple elementary form with a second-order approximation. To apply the model to photovoltaic systems with spectral separation, the spectrum of the irradiation is taken into account in the model as well. A spectral ratio parameter k1, extracted from the irradiation spectrum, the transmission spectrum of the film and the spectral response of the photovoltaic cells themselves, is set as a condition parameter alongside temperature and irradiation intensity. The relationships between the physical parameters and the condition parameters are discussed and applied to extract the shape parameters in different scenarios. The relationship between Rs and the irradiation intensity, as well as the spectrum, is discussed and simulated. Furthermore, the condition parameters are used directly in the explicit analytical model to avoid the complex calculation and numerical approximation of the physical parameters required by the single diode model. To avoid the aging effect, the measured I-V parameters from MRC are leveraged instead of the data from SRC, which are provided by the manufacturer. The process of calculation to predict the I-V curve under different splitting spectra is simplified as follows: (1) the two shape parameters are obtained from the I-V data at measurement reference conditions (MRC); (2) the short circuit current, open circuit voltage and shape parameters under any splitting spectrum can be calculated based on the relationships provided in this article; (3) the performance of the PV panel can be predicted with these parameters. In the validation experiments, the photovoltaic panel was tested with seven kinds of films. The experimental results showed that there is good agreement between the calculated and measured I-V curves. The simple elementary term with the second-order approximation is proved to be better than the term in Karmalkar (2008) and Karmalkar (2009). Furthermore, the improved model has a better prediction of the maximum power compared to the model in Yunpeng (2017). Because of the advantages mentioned above, this model can be widely used for the prediction of the I-V characteristic of a PV panel. This model is especially useful for spectrum splitting systems, such as systems with various kinds of photovoltaic cells, some kinds of Hybrid Photovoltaic (PV)-Thermoelectric (TE) systems, and solar cells used in agriculture and architecture. For its simplicity and validated predictability, this model can be used to design monitoring software to forecast the I-V characteristic of a photovoltaic panel used in a PV system over a long time.
Figure 1. The equivalent single diode model for a PV panel or a solar cell. The validation module was composed of 8 photovoltaic cells connected in series. The size of the photovoltaic cells was 12.5 cm × 12.5 cm, and they were provided by the company SunPower. The I-V curve data were measured with Prova-210 measurement equipment, which can scan the I-V curve automatically in one minute. The uncertainty of the current is 0.01 A and the uncertainty of the voltage is 0.01 V. The solar irradiation and the temperature were monitored with a TES-1333 pyranometer and an infrared thermometer. The measurement error of the TES-1333 pyranometer is 10 W/m². The measurement error of the infrared thermometer is 1 K. The experiments were carried out under sun irradiation at 10 a.m. on November 23, 2017. The I-V curve for the photovoltaic module without a film was measured with G = 880 W/m² and T = 285 K. The parameters at the MRC condition, Voc^MRC, Isc^MRC, V(i=0.4) and I(v=0.4), can be derived from the I-V curve directly. Figure 2. The I-V curve with measurement data, simulated data of the improved model and simulated data of the model in Karmalkar (2008). Figure 3. The transmission spectra of the films used in the experiment. Figure 5(a). The I-V curves of films A-G with measurement data. Figure 6. The curves of film A with measurement data, fitted by the improved model and the model in Yunpeng (2017) (a); the current error curves of the two models (b). Figure 13. Maximum power of films A-G with measurement data, simulated data of the improved model and simulated data of the model in Yunpeng (2017). In Fig. 13, the calculated parameter Pm derived from the improved model and from the model in Yunpeng (2017) is compared with the measurement data; the error is primarily caused by approximation 3. Table 3. Parameter m and maximum power of films A-G with measurement data, the improved model and the model in Yunpeng (2017). Author Contributions: Investigation, Luqing Liu and Wen Liu; Simulation, Luqing Liu; Validation experiments, Luqing Liu and Xinyu Zhang; Writing, Luqing Liu and Jan Ingenhoff. Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2023-01-27T16:14:53.455Z
2023-01-24T00:00:00.000
256294742
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1648-9144/59/2/222/pdf?version=1674559430", "pdf_hash": "3b3bbc6bb6396dfa53815c2f9b0211bf419fb023", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2801", "s2fieldsofstudy": [ "Medicine" ], "sha1": "d2216f3a679848f3745b68bbe2458da1851d717a", "year": 2023 }
pes2o/s2orc
Investigation of The Effects of Oxytocin Administration Timing on Postpartum Hemorrhage during Cesarean Section Background and Objectives: To determine and compare the effects of the timing of oxytocin administration (routinely used for intraoperative uterotonic purposes in cesarean section (CS) deliveries in our clinic) on the severity of postpartum hemorrhage following CS. Materials and Methods: All study participants (n = 216) had previous cesarean deliveries, were 38–40 weeks pregnant, and had CS planned under elective conditions. The cases were randomly divided into two groups: one group (n = 108) receiving oxytocin administration before the removal of the placenta (AOBRP) and another group (n = 108) receiving oxytocin administration after the removal of the placenta (AOARP). In all cases, the placenta was removed using the manual traction method. The standard dose of oxytocin is administered as an intravenous (IV) push of 3 international units (IU); simultaneously, 10 IU of oxytocin is added to 1000 cc isotonic fluid and given as an IV infusion at a rate of 250 cc/h. All methods and procedures applied to both groups were identical, except for the timing of administration of the standard oxytocin dose. Age, body mass index (BMI), parity, gestational week, preoperative hemoglobin (HB) and hematocrit (HTC), postoperative 6th and 24th hour HB-HTC, intraoperative hemorrhage, additional uterotonic need during cesarean section, postoperative hemorrhage (number of pads), need for blood transfusion during or after cesarean section, cesarean section time, and postpartum newborn baby weight were evaluated. Results: Age (year), BMI (kg/m2), parity, gestational week, surgical time, and newborn weight (g) did not differ between the groups (p > 0.05). The AOBRP group had significantly higher postoperative 6th hour HB and HTC and postoperative 24th hour HB and HTC values (p < 0.05). The intraoperative hemorrhage level was higher in the AOARP group (p = 0.000). Conclusions: The administration of oxytocin before placenta removal did not change the volume of bleeding in the postoperative period but significantly reduced the volume of bleeding in the intraoperative period. Therefore, in the postoperative period, the HB and HTC values of the AOBRP group were higher than those of the AOARP group. Introduction Childbirth can be divided into vaginal births and cesarean (CS) births. Recent studies have shown that cesarean delivery rates have increased worldwide [1]. According to data published by Turkey's Ministry of Health in 2020, the percentage of the total number of cesarean sections to live births was 54.4% in 2019, and the percentage of the number of primary cesarean sections to live births was 26.5% [2]. Postpartum hemorrhage (PPH) during childbirth is a feared medical emergency [3]. The American College of Obstetricians and Gynecologists defines PPH as cumulative loss of at least 1000 mL of blood (or any volume if there are signs or symptoms of hypovolemia) within 24 h after birth [4]. When it is difficult to calculate the volume of the hemorrhage, it can also be defined as a decrease of more than 10% between pre-and postpartum hematocrit (HTC) levels [5,6]. The estimated incidence of PPH in women who delivered with CS is 3-15%, and 2-4% in women who delivered vaginally [7,8]. Worldwide, approximately 140,000 women die yearly due to PPH [9][10][11]. Because PPH is a life-threatening condition, obstetricians should be aware of the risk factors, prevention strategies, and actions required. 
Risk factors include fibroids, macrosomia, maternal obesity, polyhydramnios, inherited coagulation disorders (von Willebrand disease, hemophilia A, hemophilia B, and hemophilia C), multiple pregnancies, repeated cesarean sections, prolonged labor, prolonged third stage, grand multiparity, chorioamnionitis, PPH history, antepartum hemorrhage, and operative delivery for PPH [12][13][14]. Uterine atony is responsible for 50-80% of all PPH cases [15,16]. According to the World Health Organization (WHO), intravenous (IV) or intramuscular oxytocin is recommended for the prevention of PPH in all deliveries in settings in which multiple uterotonic options are available [16]. However, there is no clear recommendation as to the best time to administer oxytocin to prevent PPH in women who deliver by cesarean section. Current guidelines have various recommendations on doses, routes, and regimens for the administration of prophylactic oxytocin in CS, but most do not provide specific guidance on the timing of administration [6,[17][18][19]. Some obstetricians administer prophylactic oxytocin at various times before fetal birth in CS, others administer oxytocin immediately after the baby is born and the umbilical cord is clamped, and another group delays oxytocin administration until the placenta separates from the uterus [20,21]. PPH, one of the most common causes of maternal morbidity and mortality, increases in parallel with an increase in cesarean section rates [22,23]. In this study, we aimed to determine the effects of the timing of standard-dose oxytocin administration (which we routinely apply during cesarean sections in our clinic) on the amount of intrapartum and postpartum bleeding. Therefore, we administered oxytocin at two different times: before and after placenta removal. In this way, it was planned to investigate the effects of oxytocin administration in equal doses on intraoperative and postoperative bleeding by changing only the time of administration. We hope that our results will contribute to the few studies conducted in this area. In addition, we believe that our study will have positive effects on maternal and newborn health. Study Design and Patients This research was carried out at Pamukkale University Hospital Gynecology and Obstetrics Clinic (Denizli, Turkey) between 15 December 2021 and 15 October 2022. The research protocol was approved by the Faculty of Medicine of Pamukkale University Ethics Committee (14 December 2021;22). All the cases participating in the study had previous cesarean deliveries, were 38-40 weeks pregnant, and planned for cesarean section under elective conditions. Pregnant women with systemic diseases, such as hypertension, diabetes, thyroid dysfunction, and major depression, as well as women with conditions that increase the risk of hemorrhage (such as multiple pregnancies, coagulation defects, fibroids, and abnormal placental adhesions, such as placenta previa) were excluded from the study (Figure 1). The cases (n = 216) were divided into two groups in sequential order according to the order of cesarean section: Group 1 (n = 108): administration of oxytocin before removal of the placenta (AOBRP). Group 2 (n = 108): administration of oxytocin after removal of the placenta (AOARP). Intervention Cesarean sections were performed under spinal anesthesia by the same investigator in all cases in both groups. 
In all deliveries, the abdomen was entered with a Pfannenstiel incision, the fetus was removed from the uterus with a transverse incision of the lower segment of the uterus, and then the umbilical cord was clamped and cut immediately. The placenta was removed using the manual traction method in all cases. The standard dose of oxytocin is delivered as an intravenous (IV) push of 3 international units (IU); simultaneously, 10 IU of oxytocin is added to 1000 cc isotonic fluid and given as an IV infusion at a rate of 250 cc/h [24]. All methods and procedures applied to both groups were identical, except for the timing of administration of the standard dose of oxytocin. Adjustment and administration of the same dose of oxytocin in all cases were performed by anesthesiologists. The timing of administration for the groups is given below. • AOBRP group: In all cases, the umbilical cord was clamped after the removal of the fetus. Immediately after clamping, 3 IU of oxytocin was given as an IV push before the placenta was removed. The placenta was removed by manual traction, while the fluid containing 10 IU of oxytocin in 1000 cc isotonic fluid was administered IV at a rate of 250 cc/h. • AOARP group: In all cases, the umbilical cord was clamped after the fetus was removed. After clamping, the placenta was removed by manual traction. After removal of the placenta, 3 IU of oxytocin was given as an IV push. Simultaneously, 10 IU of oxytocin was added to 1000 cc of isotonic fluid and administered at a rate of 250 cc/h IV.
Measurements As an additional uterotonic agent, an IV push of 5 IU of oxytocin and 0.2 mg methylergonovine maleate administered intramuscularly (IM) were given to cases with excessive hemorrhage during cesarean section. The decision regarding excessive hemorrhage during a cesarean was made by the responsible obstetrician who performed the surgery. Visible excessive bleeding and a rapid increase in aspiration fluid were defined as excessive hemorrhage. Two methods were used to measure the amount of intraoperative bleeding during cesarean section in all cases: (1) the compresses and sponges used to absorb intraoperative bleeding were weighed preoperatively and immediately postoperatively, and each 1 g of weight difference was converted to 1 mL of blood; and (2) the intraoperatively aspirated blood and aspiration fluid, including amniotic fluid, were measured, and the estimated amniotic fluid volume (measured by ultrasonography just before cesarean section) was subtracted from this fluid. The sum of both measurements was evaluated as the intraoperative bleeding. All cases were evaluated with obstetric USG before cesarean section. The fetal position, estimated fetal weight, placental localization, and estimated amount of amniotic fluid were measured. The height and weight of all cases were measured before cesarean section, and the body mass index (BMI) was calculated using these values (kg/m²). The hemoglobin (HB) and hematocrit (HTC) values were measured preoperatively, at the 6th hour and at the 24th hour postoperatively in all cases. The starting and ending times of the cesarean section were recorded for all cases, and the duration of the operation was calculated. All cases were followed up at the obstetrics clinic for approximately 48 h after cesarean section. During the first 48 h of hospitalization in the obstetrics clinic, 3 × 5 IU/day oxytocin was administered intramuscularly to all cases routinely, and the amount of bleeding was measured by counting the number of pads used. Sample Size and Statistical Analysis According to the results of the reference study [25], the effect size was large (d = 2.425). A power analysis was performed before the study assuming that only a small effect size (d = 0.4) could be achieved. Accordingly, including at least 200 participants (100 per group) in the study would provide 80% power at a 95% confidence level (5% type 1 error rate). Statistical analysis was performed using SPSS version 25.0 software (IBM Corp., Armonk, NY, USA). Numerical variables are expressed as mean ± standard deviation and categorical variables as numbers and percentages (%). Numerical data were analyzed for normal distribution by skewness. An independent-sample t-test and a post hoc test were used to analyze the differences between the groups; p < 0.05 was set as the threshold for significance. Ethical Approval This study was conducted in accordance with the principles of the Declaration of Helsinki. All participants gave their written and informed consent prior to their participation in this study. Ethical approval for the study was obtained from the Pamukkale University Clinical Research Ethics Committee (14 December 2021; 22). Table 1 compares the descriptive parameters of the two groups. Age (year), BMI (kg/m²), parity, gestational week, surgical time, and newborn weight (g) did not differ between the groups (p = 0.406, 0.238, 0.704, 0.390, 0.399, and 0.141, respectively). Table 2 compares the hemorrhage parameters of the two groups.
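The sample-size statement above can be checked with a standard power calculation for an independent-samples t-test. The sketch below is an independent illustration using the statsmodels package, not the authors' original computation; the effect size, alpha and power values are those quoted in the text, and the per-group n of 108 reflects the number of cases actually analysed.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Required sample size per group for an independent-samples t-test with a
# small effect size (Cohen's d = 0.4), two-sided alpha = 0.05 and 80% power.
n_per_group = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(f"required n per group ≈ {n_per_group:.1f}")   # about 100 per group

# Achieved power if 108 participants per group are analysed, as in this study.
achieved = analysis.power(effect_size=0.4, nobs1=108, ratio=1.0, alpha=0.05,
                          alternative='two-sided')
print(f"achieved power with 108 per group ≈ {achieved:.2f}")
```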
Preoperative HB and HTC levels did not differ between the groups (p = 0.665 and 0.755, respectively). The estimated amount of intraoperative blood loss was higher in the AOARP group than in the AOBRP group (537.77 ± 113 mL vs. 425.83 ± 90 mL, respectively). Postoperative 6th hour HB and HTC and postoperative 24th hour HB and HTC levels were significantly higher in the AOBRP group than in the AOARP group (p = 0.002, 0.002, 0.002, and 0.001, respectively). Postoperative hemorrhage (3.33 ± 0.86 pads vs. 3.52 ± 0.89 pads, respectively), uterotonic requirement during cesarean section (5 (4.6%) cases vs. 12 (11.1%) cases, respectively), and the need for blood transfusion (1 (0.9%) case vs. 4 (3.7%) cases, respectively) did not differ between the AOBRP group and the AOARP group (p > 0.05). Table 3 compares the hemoglobin levels at different times for each group. The mean HB difference between the preoperative and postoperative 6th hour was 0.524 g/dL in the AOBRP group, whereas it was 1.061 g/dL in the AOARP group (p = 0.026 and 0.000, respectively). Similarly, the mean hemoglobin difference between the preoperative and postoperative 24th hour was lower in the AOBRP group than in the AOARP group (0.626 g/dL vs. 1.153 g/dL; p = 0.004 and 0.000, respectively). The mean hemoglobin difference between the postoperative 6th hour and the postoperative 24th hour was 0.102 g/dL in the AOBRP group and 0.092 g/dL in the AOARP group; these differences were not statistically significant (p = 0.936 and 0.954, respectively). Discussion In the current study, we aimed to determine the effects of the timing of standard-dose oxytocin administration (which we routinely apply during cesarean sections in our clinic) on the amount of intrapartum and postpartum bleeding. Therefore, we administered oxytocin at two different times: before and after placenta removal. The main finding was that there was less intraoperative bleeding and less decrease in postoperative HB and HTC levels with oxytocin administration before placenta removal. Oxytocin is still the drug of choice for the prevention and treatment of PPH. According to WHO recommendations, oxytocin (10 IU, IM, or IV) is the recommended uterotonic agent for the prevention of PPH in all deliveries in all settings [16]. If PPH occurs, additional units (up to a total of 40 IU) should be administered intravenously until the hemorrhage stops [16]. There are many studies in the literature on the dose of oxytocin administration [16,24,[26][27][28][29]. However, studies on the timing of oxytocin administration are limited [30]. In this study, the effects of the timing of oxytocin administration on PPH were investigated. We found that the descriptive features of the research groups did not differ statistically. This result shows that the groups are homogeneous and that the results of the comparisons are consistent. In addition, since the preoperative hemoglobin and hematocrit levels did not differ between the groups, the validity and reliability of the postoperative results are enhanced. Ahmadi [31] stated that the use of 80 units of oxytocin in the prevention of uterine atony after cesarean section resulted in a decrease in uterine atony and a decrease in the need for an additional uterotonic drug compared with a dose of 30 units of oxytocin. However, it was stated that the dose did not have a significant effect on the rate of decrease in hemoglobin at 6 and 24 h after surgery. 
In the current study, we administered an equal dose of oxytocin (an IV push of 3 IU and an IV infusion of 250 cc/h with a fluid containing 10 IU of oxytocin in 1000 cc) in both groups to compare the effects of oxytocin with respect to the timing of administration. Mangla et al. [32] administered oxytocin (5 IU/10 mL of saline) directly to the myometrium after the fetus was born (n = 50) or without placental separation (n = 50). According to their results, intramyometrial oxytocin injection before placental separation was effective in increasing uterine contraction and reducing the incidence of PPH. In our study, oxytocin was administered intravenously only. Takmaz et al. [33] evaluated the effect of early IV oxytocin infusion, started before the uterine incision, on intraoperative blood loss. They concluded that early initiation of the IV oxytocin infusion, from the time of the uterine incision, is more effective in reducing intraoperative blood loss than late infusion of IV oxytocin after umbilical cord clamping or delivery of the placenta. If oxytocin is started before the delivery of the baby, fetal well-being should be closely monitored during the delivery; however, Takmaz et al. recorded only the postnatal APGAR scores of the babies in their study and reported that there was no difference between the two groups in terms of APGAR scores. In our study, we administered an equal dose of oxytocin at one of two time points immediately after the birth of the baby: either before or after the placenta was removed. Postoperative 6th hour HB and HTC and postoperative 24th hour HB and HTC values decreased more in the AOARP group than in the AOBRP group relative to the preoperative values. However, postoperative 24th hour hemorrhage (number of pads), uterotonic requirement during CS, and the need for blood transfusion did not change according to the time of oxytocin administration. Cecilia et al. [29] compared one such protocol using single-dose IV oxytocin over 2-4 h (total = 10 units) with an oxytocin maintenance infusion for 8-12 h (total = 30 units) in postoperative CS women to prevent PPH. When their results were analyzed, it was found that both regimens were equally effective in the prevention of PPH in postoperative CS women. Both treatment regimens were associated with a similar amount of blood loss during the operative and postoperative periods. Thus, they reported that the low-dose oxytocin regimen was as effective as a high-dose oxytocin regimen in the prevention of PPH in postoperative CS women. In our current study, we only changed the timing of oxytocin administration without making any changes in oxytocin doses. Thus, we found that oxytocin administered before removal of the placenta, after clamping of the umbilical cord, had a significant effect on intraoperative and postoperative bleeding in CS. Torloni et al. [30] stated that the earlier prophylactic administration of oxytocin in CS may be slightly more beneficial than subsequent administration (i.e., after fetal delivery) without an increase in side effects. In addition, oxytocin given before fetal delivery significantly reduced intraoperative blood loss, but did not change the incidence of blood transfusion. Similarly, in our study, intraoperative blood loss was lower in the group that received oxytocin before placenta removal. In a study by Tharwat et al.
[34], 300 elective cesarean section patients were divided into two groups, and oxytocin was administered either during anesthesia induction or after delivery. The study stated that oxytocin given as an IV infusion during anesthesia induction, before the skin incision in CS, is more effective in reducing blood loss and preventing PPH compared to oxytocin administration after delivery of the fetus. However, in both of these studies, in which oxytocin administration was initiated before the birth of the baby, there were no detailed data on how fetal well-being was monitored during delivery or whether postpartum umbilical cord blood gases and APGAR scores of the babies were recorded. There is a need for additional, well-conducted and well-reported trials on the timing of prophylactic oxytocin in women giving birth by CS, to increase the overall certainty of the evidence on this important clinical question. Ideally, future RCTs should be placebo-controlled and double-blinded, involve other obstetric populations (women with previous CS and those at high risk for PPH) as well as other types of CS (in the 1st and 2nd stages of spontaneous and induced labor previously exposed to oxytocin), and measure all PPH prevention core outcomes, including adverse effects and women's views. The following study limitations were identified: (1) it was a single-center study; (2) a group of spontaneous vaginal deliveries was not formed; (3) a group with emergency cesarean sections other than elective cesarean sections was not formed; and (4) only cesarean sections at 38-40 weeks of gestation were evaluated. Conclusions In conclusion, avoiding PPH is vital for both maternal and newborn health. The timing of standard-dose prophylactic oxytocin administration significantly altered the amount of intraoperative bleeding and the postoperative 6th and 24th hour HB and HTC values in the current study. Administration of oxytocin before removal of the placenta resulted in less intraoperative bleeding and a smaller decrease in postoperative HB and HTC values. We believe that the main reason for these changes may be that oxytocin administered before placenta removal leads to uterine contraction whose effect sets in earlier after placental removal. The main strengths of our study were a uniform treatment protocol, as the study was conducted in a single center, and an appropriate sample size. Further studies are needed to clarify this issue.
v3-fos-license
2018-04-03T06:04:16.659Z
2017-01-13T00:00:00.000
17989933
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ro-journal.biomedcentral.com/track/pdf/10.1186/s13014-016-0744-1", "pdf_hash": "6a7eff1a1196178eab9e6a7b3946cea41d823444", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2804", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "1204886dd92815e115d97a0a455b694e9e51eee9", "year": 2017 }
pes2o/s2orc
Correlation between the Ki-67 proliferation index and response to radiation therapy in small cell lung cancer Background In the breast cancer, the decision whether to administer adjuvant therapy is increasingly influenced by the Ki-67 proliferation index. In the present retrospective study, we investigated if this index could predict the therapeutic response to radiation therapy in small cell lung cancer (SCLC). Methods Data from 19 SCLC patients who received thoracic radiation therapy were included. Clinical staging was assessed using the TNM classification system (UICC, 2009; cstage IIA/IIB/IIIA/IIIB = 3/1/7/8). Ki-67 was detected using immunostained tumour sections and the Ki-67 proliferation index was determined using e-Count software. Radiation therapy was administered at total doses of 45–60 Gy. A total of 16 of the 19 patients received chemotherapy. Results Patients were divided into two groups, one with a Ki-67 proliferation index ≥79.77% (group 1, 8 cases) and the other with a Ki-67 proliferation index <79.77% (group 2, 11 cases). Following radiation therapy, a complete response (CR) was observed in six cases from group 1 (75.0%) and three cases from group 2 (27.3%). The Ki-67 proliferation index was significantly correlated with the CR rate (P = 0.05), which was significantly higher in group 1 than in group 2 (P = 0.04). The median survival time was 516 days for all patients, and the survival rates did not differ significantly between groups 1 and 2. Conclusions Our study is the first to evaluate the correlation between the Ki-67 proliferation index and SCLC tumour response to radiation therapy. Our findings suggest that a high Ki-67 proliferation index might represent a predictive factor for increased tumour radiosensitivity. Introduction Ki-67 is a nuclear protein associated with cell proliferation and is expressed in the G1, S, G2 and M phases of the cell cycle but not in the G0 phase [1]. Thus, this protein is used as a marker for the proliferation of various tumour cells. Particularly in breast cancer, Ki-67 positivity is a marker for a high risk of recurrence and poor survival [2], and immunostaining with Ki-67 antibody is routinely used as a proliferation index. In the treatment of breast cancer, Ki-67 is regarded as a predictive marker for the efficacy of chemotherapy, and the decision to administer adjuvant chemotherapy is frequently determined on the basis of the Ki-67 proliferation index [3]. In lung cancer, several studies have reported that high Ki-67 expression was an indicator of poor prognosis in patients with non-small cell lung cancer (NSCLC) [4,5]. However, few reports have evaluated Ki-67 expression in patients with small cell lung cancer (SCLC). Moreover, the most recent World Health Organization (WHO) classification has adopted the Ki-67 proliferation index for the diagnosis of SCLC, with numerical values of cell proliferation used to diagnose this disease [6]. In the present study, we investigated the association between the Ki-67 proliferation index and the therapeutic effects of radiation therapy in SCLC. Table 1 lists the patient characteristics and the treatment regimens administered. Patients This retrospective study included data from 19 patients (15 males and 4 females) who were diagnosed as having SCLC and received thoracic radiation therapy at our hospital between February 2011 and August 2015. Ki-67 proliferation index SCLC tumour samples were collected prior to chemotherapy or radiation therapy. 
Among the 19 patients, bronchoscopic biopsy specimens were collected from 16 patients and percutaneous lung biopsy specimens were collected from 3 patients. Samples were stained with haematoxylin and eosin and the Ki-67 antibody MIB-1 clone (DAKO, Glostrup, Denmark) was used to detect Ki-67 expression. The Ki-67 proliferation index was defined as the percentage of cells with positive nuclear Ki-67 immunostaining in a section of confirmed carcinoma using e-Count cell counting software (e-Path, Kanagawa, Japan). Images of tumour sections mounted on glass slides were converted to JPEG (Joint Photographic Experts Group) format, and cells with positive nuclear immunostaining for Ki-67 were counted based on pixel colour intensity. Images were automatically segmented into Ki-67-positive and Ki-67-negative areas according to the pixel colour intensity cut-off point (Fig. 1). The cell counting software automatically determines the cut-off point from a histogram of brown density (MIB-1 clone, visualized with DAB labelling) and blue density (haematoxylin) in the nucleus. Tumour samples were also microscopically reviewed by two pathologists to verify the Ki-67-positive and Ki-67-negative scores obtained by the software. The median number of tumour cells was 420 (range, 91-1001) in each sample. Thoracic radiation therapy A linear accelerator was used for 10 MV X-ray irradiation, and some lesions were irradiated with 4 MV Xrays. In principle, multiportal irradiation was applied to the anterior-posterior opposed fields to include at least the primary tumour and metastatic lymph nodes, while regional lymph nodes were included if necessary. All patients who received concurrent chemotherapy were irradiated at 1.5 Gy per fraction twice daily, to a total dose of 45 Gy. Patients irradiated with 2 Gy per fraction to a total dose of 50 or 60 Gy were administered chemotherapy as a neoadjuvant therapy or received radiation therapy alone. In the single patient irradiated with 3 Gy per fraction to a total dose of 45 Gy, the primary tumour was accompanied by an additional non-contiguous tumour nodule in the same pulmonary lobe, and both the tumour and the nodule were irradiated. Chemotherapy A total of 16 of the 19 patients received chemotherapy, which was administered as neoadjuvant therapy in seven patients and as concurrent therapy in nine patients. While 13 patients received a regimen consisting of etoposide and cisplatin, three patients with renal dysfunction received a regimen consisting of etoposide and carboplatin. Three patients who received radiation therapy alone were at an advanced age or had dementia. Response to radiation therapy Tumour responses were assessed using computed tomography (CT), performed after the last day of radiation therapy or chemotherapy (median, 27 [range, 4 − 225] days). Clinical responses were categorized as complete or partial according to the Response Evaluation Criteria in Solid Tumours (RECIST), version 1.1. A complete response (CR) was defined as the disappearance of both the primary tumour and metastatic lymph nodes. Statistical methods SPSS version 21.0 (IBM, Armonk NY, USA) was used for statistical analysis. The therapeutic effects of radiation therapy were analysed using stepwise logistic regression with the following variables: Ki-67 proliferation index (≥mean vs. <mean); age (<median vs. 
≥median 70 years); period from the first day of chemotherapy (the first day of radiation therapy in patients who received radiation therapy alone) to the last day of radiation therapy (<median vs. ≥median 39 days); frequency of radiation doses (twice daily vs. once daily); and clinical staging (<IIIB vs. IIIB). The frequency tables for the therapeutic effects of radiation therapy and the Ki-67 proliferation index were analysed using the χ2 test. The Kaplan-Meier method was used to estimate the probability of overall survival from the first day of chemotherapy or radiation therapy, whichever came first. Mantel's log-rank test was performed to compare the differences in survival between the subgroups of patients according to the indicated variables. Ki-67 proliferation index The Ki-67 proliferation index ranged from 45.55% to 99.21%, with a mean value of 79.77% (Table 1). Patients were classified into two groups, one with a Ki-67 proliferation index ≥79.77% (group 1, 8 cases) and the other with a Ki-67 proliferation index <79.77% (group 2, 11 cases). Following radiation therapy, a CR was observed in six cases from group 1 (75.0%) and three cases from group 2 (27.3%) (Table 2). Stepwise logistic regression analysis revealed that the Ki-67 proliferation index was significantly correlated only with the CR rate (P = 0.05). The χ2 test showed that the CR rate was significantly higher in group 1 than in group 2 (P = 0.04). The median survival was 516 days for all patients, and the survival rates in groups 1 and 2 did not differ significantly (Fig. 2). (Fig. 2 caption: Overall survival curves plotted using the Kaplan-Meier method for patients assigned to the two groups. There was no significant difference observed between the survival rates in groups 1 and 2.) Significant differences were not observed in the other variables. Discussion The WHO Classification of Tumours of the Lung, Pleura, Thymus and Heart (fourth edition; 2015) was the first classification system to adopt the Ki-67 proliferation index for the differentiation of neuroendocrine tumours. This classification indicates that, based on previous studies involving biopsy samples or surgical specimens [7,8], the Ki-67 proliferation index in SCLC is typically >50%, ranging from 50% to 100%; in addition, cell proliferation is prominent in SCLC [6]. However, in the diagnosis of SCLC, the differentiation of this tumour from carcinoid tumours remains a challenge. Because the standard treatment for SCLC is chemoradiotherapy, large tumour samples are rarely obtained during surgery and the diagnosis is instead routinely made based on small tumour samples obtained using bronchoscopy or other procedures. As a result of limitations such as the presence of crush artefacts and poor tissue preservation, the use of cytoplasmic markers may lead to inaccurate diagnoses. The nuclear marker Ki-67, which is well preserved in samples with an extensive crush artefact, can effectively differentiate SCLC from carcinoid tumours [7,8]. A further challenge associated with the diagnosis of lung tumours is the potential tumour heterogeneity observed between biopsy samples and surgical specimens. However, in NSCLC, a high correlation has been reported between Ki-67 expression in biopsy samples and surgical specimens [9]. An additional limitation associated with the Ki-67 proliferation index is the lack of consistent counting methods, which has caused variations among pathologists in the calculated index [10].
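As an illustrative aside, the χ2 comparison of CR rates reported above can be recomputed from the stated counts (6 of 8 in group 1 vs. 3 of 11 in group 2). Since the text does not state whether a continuity correction was applied, correction=False is an assumption of this sketch.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Complete-response counts reported above:
# group 1 (Ki-67 >= mean): 6 CR of 8; group 2 (Ki-67 < mean): 3 CR of 11.
table = np.array([[6, 8 - 6],
                  [3, 11 - 3]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p comes out close to the reported 0.04
```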
Thus, to reduce these variations as much as possible in the present study, we used e-Count, a cell counting software program that counts cells with a nucleus that is positive for Ki-67 immunostaining on the basis of pixel colour intensity. Our study is the first to use e-Count (e-Path, Kanagawa, Japan). A recent study has reported the use of digital image analysis in Ki-67 immunostaining; in this study, digital image processing software was used to reduce variations in the Ki-67 proliferation index [11]. Regarding the association between the Ki-67 proliferation index and lung cancer, many studies on NSCLC (including meta-analyses) have indicated that Ki-67 expression is a poor prognostic factor for survival [4,5]. In contrast, only two studies to date have evaluated the association between SCLC and the Ki-67 proliferation index, one of which reported that the outcome was poor in patients with biopsy samples yielding a Ki-67 proliferation index lower than the median [12]. Another study has reported that there was no association between survival and the Ki-67 proliferation index in SCLC [13]. Thus, the significance of the Ki-67 proliferation index in SCLC has been controversial. In NSCLC, the excision repair cross-complementation group 1 (ERCC1) protein is reported to be associated with resistance to platinumbased chemotherapy. In SCLC, however, ERCC1 was not related to survival or to chemoradiation therapy response, and no association was observed between Ki-67 and ERCC1 [13]. In vitro, SCLC cell lines are sensitive to radiation, and the dose-response curves for these cells lack a shoulder. Furthermore, relatively low radiation doses per fraction have been shown to be lethal to SCLC cell lines [14]. On the basis of these radiobiological features, it is known that twice-daily thoracic radiation therapy improves overall survival when this therapy is initiated with the first cycle of chemotherapy or less than 30 days after the start of the first cycle of chemotherapy [15,16]. Moreover, according to the "law of Bergonié and Tribondeau" (1906), which established a link between cell proliferation and cellular radiosensitivity, cells that more frequently undergo cell division are more sensitive to radiation. Proliferating cells have been reported to be more sensitive to radiation than quiescent cells in vitro [17]. Ki-67 was originally obtained from monoclonal antibodies to the nuclear antigen in Hodgkin and Sternberg-Reed cells. The nuclear antigen detected by Ki-67 is expressed in almost all human cell lines, but is not expressed in normal human cells in the resting stage. Ki-67 thus recognizes a nuclear antigen associated with cell proliferation [1]. Thus, we expected that tumour cells would have a higher rate of proliferation, more frequently undergo cell division, and be more sensitive to radiation in patients with a higher Ki-67 proliferation index before treatment; when we compared the mean Ki-67 proliferation index of the two groups (≥79.77% vs. <79.77%), tumour responses in terms of the CR rate were greater in the group with a Ki-67 proliferation index equal to or higher than the mean, as we expected. Few previous reports have evaluated the association between the Ki-67 proliferation index and tumour responses to radiation therapy. A study on uterine cervical cancer demonstrated that patients with a higher Ki-67 proliferation index at the time of diagnosis showed a significantly better histological response to radiation therapy at a total dose of 30 Gy [18]. 
Another study on oral squamous cell carcinoma (OSCC) reported that the Ki-67 proliferation index at the time of diagnosis had no significant correlation with the response to radiation therapy; in contrast, the reduction in the growth fraction (decrease in proliferation index) after radiation therapy at a total dose of 10 Gy was significantly correlated with the CR rate [19]. A study of OSCC after curative resection and postoperative radiation therapy reported that low Ki-67 proliferation index tumours had a significantly shorter time to recurrence than high proliferation index tumours [20]. This study concluded that tumours with a high Ki-67 proliferation index might respond better to radiation therapy as a result of increased radiosensitivity. Two studies on rectal cancer have reported results that were contradictory: one showed that there was no correlation between the Ki-67 proliferation index and the rate of response to radiation therapy, while the other reported that there was a good correlation between high Ki-67 proliferation index and improved response to radiation therapy [21,22]. Our study is the first to evaluate the correlation between the Ki-67 proliferation index and the rate of response to radiation therapy in SCLC. Our findings suggest that a higher Ki-67 proliferation index might represent a predictive factor for higher radiosensitivity. Although we also examined whether the Ki-67 proliferation index was a prognostic factor in SCLC, as observed for NSCLC in previous studies, no difference in the survival rate was observed between groups 1 and 2. This may have been attributable to the fact that all three patients who were unable to receive chemotherapy were included in group 2 and had a Ki-67 proliferation index that was lower than the mean; as mentioned, a higher Ki-67 proliferation index might be associated with a higher survival rate. A challenge for future studies is the low number of tumour cells in each sample. In the present study, the mean number of tumour cells was 420, which is lower than the number required (≥500 cells) for the assessment of Ki-67 in breast cancer, for example, according to the Ki-67 proliferation index guidelines [23]. Given that large SCLC samples are rarely obtained from surgery, there is a need to develop a technique that consistently obtains an adequate number of tumour cells, even from small samples acquired using bronchoscopy or other procedures. Further studies among SCLC patients are needed to assess the significance of the Ki-67 proliferation index in the treatment of SCLC. Conclusions To the best of our knowledge, our study is the first to evaluate the correlation between Ki-67 proliferation index and SCLC tumour response to radiation therapy. Our findings suggest that a higher Ki-67 proliferation index might represent a predictive factor for increased tumour radiosensitivity.
v3-fos-license
2016-01-13T18:10:52.408Z
2015-07-06T00:00:00.000
263950858
{ "extfieldsofstudy": [ "Medicine", "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsif.2015.0233", "pdf_hash": "daeb3ee357a1f35804a9a0406d508885b7833c02", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2805", "s2fieldsofstudy": [ "Computer Science", "Mathematics", "Biology" ], "sha1": "daeb3ee357a1f35804a9a0406d508885b7833c02", "year": 2015 }
pes2o/s2orc
Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org. Introduction Many cellular processes are influenced by stochastic fluctuations at the molecular level, which are often modelled using stochastic simulation algorithms (SSAs) for chemical reaction networks [1,2]. For example, cell metabolism, signal transduction and cell cycle can be described by network structures of functionally separated modules of gene expression [3], the so-called gene regulatory networks (GRNs). Typical GRN models can have tens of variables and parameters. Traditionally, GRNs have been described using continuous deterministic models written as systems of ordinary differential equations (ODEs). Several methodologies for studying parametric properties of ODE systems, such as identifiability and bifurcation, have been developed in the literature [4][5][6][7][8]. Recently, experimental evidence has highlighted the significance of intrinsic randomness in GRNs, and stochastic models have been increasingly used [1,9]. They are usually simulated using the Gillespie SSA [10], or its equivalent formulations [11,12]. However, methods for parametric analysis of ODEs cannot be directly applied to stochastic models. In this paper, we present a tensor-structured parametric analysis (TPA) which can be used to understand how molecular-level fluctuations influence the system-level behaviour of GRNs and its dependence on model parameters. We illustrate major application areas of the TPA by studying several biological models with increasing level of complexity. The parametric analysis of GRN models is computationally intensive because both state space and parameter space are high-dimensional. The dimension of the state space, V x , is equal to the number of reacting molecular species, denoted by N. When an algorithm, previously working with deterministic steady states, is extended to stochastic setting, its computational complexity is typically taken to the power N. Moreover, the exploration of the parameter space, V k , introduces another multiplicative exponential complexity. Given a system that involves K parameters, the 'amount' of parameter combinations to be characterized scales equally with the volume of V k , i.e. it is taken to the power K [13]. The TPA framework avoids the high computational cost of working in high-dimensional V x and V k . 
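As a rough illustration of this scaling argument, the sketch below compares the number of values a direct tabulation over the state and parameter grids would require with the linear storage of the separated representation introduced in the next section. The grid size n and the rank R are illustrative assumptions, not values reported in the paper.

```python
# A direct tabulation over N state variables and K parameters, each discretised
# into n points, needs n**(N + K) values, whereas the separated (low-rank)
# representation used by the TPA stores only about (N + K) * R * n numbers.
def full_grid_entries(N, K, n):
    return n ** (N + K)

def separated_entries(N, K, n, R):
    return (N + K) * R * n

for N, K in [(1, 4), (6, 7)]:          # e.g. a Schlogl-like and a larger system
    full = full_grid_entries(N, K, n=64)
    low_rank = separated_entries(N, K, n=64, R=30)
    print(f"N={N}, K={K}: full grid {full:.2e} values vs separated {low_rank:,} values")
```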
The central idea is based on generalizing the concept of separation of variables to parametric probability distributions [14]. The TPA framework can be divided into two main steps: a tensor-structured computation and a tensor-based analysis. First, the steady-state distributions of stochastic models are simultaneously computed for all possible parameter combinations within a parameter space and stored in a tensor format, with smaller computational and memory requirements than in traditional approaches. The resulting tensor data are then analysed using algebraic operations with computational complexity which scales linearly with dimension (i.e. linearly with N and K). The rest of this paper is organized as follows. In §2, we discuss how the parametric steady-state probability distribution can be presented and computed in tensor formats. We illustrate the data storage savings using tensor-structured simulations of four biological systems. The stored tensor data are then used as the input for the tensor-based analysis presented in the subsequent sections. In §3, we show that the existing procedures for parameter inference for deterministic models can be directly extended to the stochastic models using the computed tensor data. In §4, a direct visualization of stochastic bifurcations in a high-dimensional state space is presented. The TPA of the robustness of the network to extrinsic noise is illustrated in §5. We conclude with a brief discussion in §6. Tensor-structured computations Considering a well-mixed chemically reacting system of N distinct molecular species X i , i = 1, 2, . . . , N, inside a reactor (e.g. cell) of volume V, we denote its state vector by x = (x 1 , x 2 , . . . , x N ) T , where x i is the number of molecules of the ith chemical species X i . In general, the volume V can be time dependent (for example, in cell cycle models which explicitly take into account cell growth), but we will focus in this paper on models with constant values of V. We assume that molecules interact through M reaction channels n−_{j,1} X 1 + n−_{j,2} X 2 + · · · + n−_{j,N} X N → n+_{j,1} X 1 + n+_{j,2} X 2 + · · · + n+_{j,N} X N , j = 1, 2, . . . , M, (2.1) where n+_{j,i} and n−_{j,i} are the stoichiometric coefficients. The kinetic rate parameters, k = (k 1 , k 2 , . . . , k M ) T , characterize the rate of the corresponding chemical reactions. We will treat k as auxiliary variables; in other words, the parametric problem of (2.1) involves considering both x ∈ V x and k ∈ V k . In this paper, we study problems where the dimension of the parameter space K is equal to M. We also consider cases where some rate constants are not varied in the parameter analysis, i.e. K < M. In this case, notation k will be used to denote the K-dimensional vector of rate constants, k = (k 1 , k 2 , . . . , k K ) T , which are considered during the TPA. The values of the remaining (M − K) rate constants are fixed. In principle, the TPA could also be used to study models where K > M, i.e. when we consider additional parameters (e.g. system volume V). Let p(x|k) be the steady-state probability distribution that the state vector is x (if the system is observed for a sufficiently long time) given the parameter values k. The main idea of the TPA is to split p(x|k) in terms of coordinates as p(x|k) ≈ Σ_{l=1}^{R} f^1_l(x 1 ) f^2_l(x 2 ) · · · f^N_l(x N ) g^1_l(k 1 ) g^2_l(k 2 ) · · · g^K_l(k K ), (2.2) where {f^i_l(x i )} i=1,...,N and {g^j_l(k j )} j=1,...,K are univariate functions that vary solely with a single state variable and parameter, respectively. The number of summands R, the so-called separation rank, controls the accuracy of the decomposition (2.2). By increasing R, the separated expansion could theoretically achieve arbitrary accuracy.
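To make the separated format in (2.2) concrete, the sketch below stores R sets of univariate factors on a grid and evaluates the rank-R expansion at a given point. The random factors and the sizes N, K, R, n are placeholder assumptions, not an actual CFPE solution.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, R, n = 2, 3, 4, 16                      # illustrative sizes
f = [[rng.random(n) for _ in range(N)] for _ in range(R)]   # f[l][i] ~ f_l^i(x_i) on a grid
g = [[rng.random(n) for _ in range(K)] for _ in range(R)]   # g[l][j] ~ g_l^j(k_j) on a grid

def p_separated(x_idx, k_idx):
    """Evaluate the rank-R expansion (2.2) at grid indices x_idx (length N) and k_idx (length K)."""
    total = 0.0
    for l in range(R):
        term = 1.0
        for i, xi in enumerate(x_idx):
            term *= f[l][i][xi]
        for j, kj in enumerate(k_idx):
            term *= g[l][j][kj]
        total += term
    return total

print(p_separated([3, 5], [0, 7, 2]))
# Storage is (N + K) * R * n numbers instead of n**(N + K) for a full grid.
```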
The value of the separation rank R can be analytically computed for simple systems. For example, there are analytical formulae for the stationary distributions of first-order stochastic reaction networks [15]. They are given in the form (2.2) with R = 1. Considering second-order stochastic reaction networks, there are no general analytical formulae for steady-state distributions. They have to be approximated using computational methods. The main assumption of the TPA approach is that the parametric steady-state distribution has a sufficiently accurate low-rank representation (2.2). In this paper, we show that this assumption is satisfied for realistic biological systems by applying the TPA to them and presenting computed (converged) results. The main consequence of low-rank representation (2.2) is that mathematical operations on the probability distribution p(x|k) in N + K dimensions can be performed using combinations of one-dimensional operations, and the storage cost is bounded by (N + K)R. The rank R may also depend on N + K and the size of the univariate functions in (2.2). Numerical experiments have shown a linear growth of R with respect to N + K and a logarithmic growth with respect to the size of the univariate functions in the representation (2.2) [16,17]. To find the representation (2.2), we solve the chemical Fokker-Planck equation (CFPE), as a (fully) continuous approximation to a (continuous time) discrete space Markov chain described by the corresponding chemical master equation (CME) [18,19]. Specifically, we keep all the objects in the separated form of (2.2) during the computations, such that exponential scaling in complexity does not apply during any step of the TPA. We refer to the representation (2.2) as tensor-structured, because computations are performed on p(x|k) as multidimensional arrays of real numbers, which we call tensors [20]. The (canonical) tensor decomposition [21], as a discrete counterpart of (2.2), then allows a multi-dimensional array to be approximated as a sum of tensor products of one-dimensional vectors. Within such a format, we can define standard algebraic operations similar to standard matrix operations such that the resulting tensor calculus enables efficient computation. The tensor-structured parametric steady-state distribution (2.2) is approximated as the eigenfunction corresponding to the smallest eigenvalue of the parametric Fokker-Planck operator. The operator is constructed in a tensor separated representation as a sum of tensor products of one-dimensional operators. The eigenfunction is computed by the adaptive shifted inverse power method, using the minimum alternating energy method as the linear solver. We leave further discussion of technical computational details of the underlying methods to electronic supplementary material, appendix S1. The TPA has been implemented in Matlab and is part of the Stochastic Bifurcation Analyzer toolbox available at http://www.stobifan.org. The source code relies on the Tensor Train Toolbox [22].
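The smallest-eigenvalue computation mentioned above can be illustrated on a small dense problem. The sketch below applies a shifted inverse power iteration to a random rate-matrix stand-in; the TPA performs the same iteration with tensor-structured operators and solvers, so the matrix, shift and iteration count here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.random((50, 50))
A = M - np.diag(M.sum(axis=0))          # columns sum to zero, like a CTMC rate matrix

def inverse_power(A, shift=1e-8, iters=50):
    """Approximate the eigenvector of A for the eigenvalue closest to `shift`."""
    n = A.shape[0]
    v = np.ones(n) / n
    B = A - shift * np.eye(n)
    for _ in range(iters):
        v = np.linalg.solve(B, v)       # one inverse-power step
        v /= np.linalg.norm(v)
    return v / v.sum()                  # normalise as a probability vector

pi = inverse_power(A)                   # stationary distribution of the toy operator
print(np.linalg.norm(A @ pi))           # residual should be close to zero
```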
Applications of the tensor-structured parametric analysis to biological systems We demonstrate the capabilities of the TPA framework by investigating four examples of stochastic reaction networks: a bistable switch in the five-dimensional Schlögl model [23], oscillations in the seven-dimensional cell cycle model [24], neuronal excitability in the six-dimensional FitzHugh-Nagumo system [4] and a 20-dimensional reaction chain [25] (see electronic supplementary material, appendix S2, for more details of these models). Table 1 compares the computational performance of the TPA with the traditional matrix-based methods for the computation of the parametric steady-state distribution p(x|k). The minimum memory requirements of solving the CME and the CFPE using matrix-based methods, Mem CME and Mem CFPE, are estimated as products of the numbers of discrete states times the total number of parameter combinations. They vary in the ranges 10^13–10^44 and 10^11–10^54, respectively, which are beyond the limits of the available hardware. In contrast, the TPA maintains affordable computational and memory requirements for all four problems considered, as we show in table 1. The major memory requirements of the TPA are Mem A and Mem p to store the discretized Fokker-Planck operator and the steady-state distribution p(x|k), respectively (see electronic supplementary material, appendix S1, for detailed definitions). Similarly, T A is the computational time to assemble the operator and T tot is the total computational time. Table 1 shows that the TPA can outperform standard matrix-based methods. It can also be less computationally intensive than stochastic simulations in some cases. For example, the total computational time is around 30 min for the TPA to simulate 64^4 different parameter combinations within the four-dimensional parameter space of the Schlögl chemical system (table 1). If we wanted to compute the same result using the Gillespie SSA, we would have to run 64^4 different stochastic simulations. If they all had to be performed on one processor in 30 min, then we would only have 1.07 × 10^−4 s per stochastic simulation and it would not be possible to estimate the results with the same level of accuracy. In addition, the TPA directly provides the steady-state distribution p(x|k), which would be computationally intensive to obtain by stochastic simulations (with the same level of accuracy) for larger values of N + K. Parameter estimation Small uncertainties in the reaction rate values of stochastic reaction networks (2.1) are common in applications. Some model parameters are difficult to measure directly, and instead are estimated by fitting to time-course data. If GRNs are modelled using deterministic ODEs, there is a wide variety of tools available for parameter estimation. Many simple approaches are non-statistical [26], and the procedure usually, although not necessarily [27], follows the algorithm presented in table 2. This approach seeks the set of those parameters that minimize the distance measure d(x̂, x*), while the rules to generate candidate parameters k* in step (a1) and the definition of the distance function along with the stopping criteria in step (a3) may vary in different methods. In optimization-based methods, k* may follow the gradient on the surface of the distance function [26]. In statistical methods, such a distance measure is provided in the concept of likelihood, L(k*|x̂) = p(x̂|k*) [28].
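A minimal sketch of the deterministic fitting loop in table 2 is given below: candidates are proposed (a1), the model is evaluated (a2), and a distance to the data is compared with a tolerance (a3-a4). The one-parameter decay model, the uniform proposal and the tolerance are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 20)
k_true = 0.8
x_hat = np.exp(-k_true * t)                       # stands in for measured time-course data

def model(k):                                     # closed-form "ODE solution" used as the model
    return np.exp(-k * t)

def distance(x_sim, x_obs):                       # root-mean-square distance d(x_hat, x*)
    return np.sqrt(np.mean((x_sim - x_obs) ** 2))

eps = 1e-2
for _ in range(10_000):
    k_star = rng.uniform(0.0, 2.0)                # (a1) candidate parameter
    x_star = model(k_star)                        # (a2) model prediction
    if distance(x_star, x_hat) < eps:             # (a3) compare with data
        break                                     # (a4) accept, otherwise try again
print(k_star)                                     # should land near k_true
```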
In Bayesian methods, the candidate parameters k* are generated from some prior information regarding uncertain parameters, p(k), and form a posterior distribution rather than a single point estimate [29]. Extending the algorithm in table 2 from deterministic ODEs to stochastic models requires substantial modifications [29]. One main obstacle is the step (a2), which requires repeatedly generating the likelihood function L(k*|x̂) as the outcome of stochastic models. (Note: Mem A is the storage requirement for the discrete Fokker-Planck operator in tensor structure; it can be avoided by computing all matrix-vector products on the fly.) (Table 2. Parameter estimation for ODEs: (a1) generate a candidate parameter vector k* ∈ V k ; (a2) compute the model prediction x* using the parameter vector k*; (a3) compare the simulated data x* with the experimental data x̂; (a4) return to step (a1). The tolerance ε > 0 is the desired level of agreement between x̂ and x*.) In this case, a modeller must either apply statistical analysis to approximate the likelihood [30], or use the Gillespie SSA to estimate it [31]. Consequently, the algorithms are computationally intensive and do not scale well to problems of realistic size and complexity. To avoid this problem, the TPA uses the tensor formalism to separate the simulation part from the parameter inference. The parameter estimation is performed on the tensor data obtained by the methods described above (table 1). The algorithm used for the TPA parameter estimation is given in table 3. The distance function d(x̂, x*) is replaced with a distance between summary statistics, Ŝ and S*, which describe the average behaviour and the characteristics of the system noise. The steps (b1), (b3) and (b4) are similar to steps (a1), (a3) and (a4) under the ODE settings, and a variety of existing methods can be extended directly to stochastic settings. The newly introduced step (b0) is executed only once during the parameter estimation. Steps (b1)-(b4) are then repeated until convergence. Step (b2) only requires manipulation of tensor data, of which the computational overhead is comparable to solving an ODE. An example of parameter estimation We consider that the distance measure J(Ŝ, S*) in table 3 is defined using a moment-matching procedure [32,33]: J(Ŝ, S*) = Σ_{i1+···+iN ≤ L} b_{i1,...,iN} ( m̂_{[i1,...,iN]} − m_{[i1,...,iN]}(k*) )², (3.1) where m̂_{[i1,...,iN]} is the empirical moment estimated from the data, m_{[i1,...,iN]}(k*) is the corresponding moment derived from p(x|k*) and L denotes the upper bound for the moment order. The weights, b_{i1,...,iN}, can be chosen by modellers to attribute different relative importances to moments. Empirical moments are estimated from samples x̂ d as m̂_{[i1,...,iN]} = (1/n m) Σ_{d=1}^{n m} (x̂ 1,d)^{i1} · · · (x̂ N,d)^{iN}, (3.2) where n m is the number of samples. Moments of the model output are computed as m_{[i1,...,iN]}(k*) = ∫_{V x} x 1^{i1} · · · x N^{iN} p(x|k*) dx. (3.3) We show, in electronic supplementary material, appendix S1.4, that it is possible to directly compute different orders of moments, m_{[i1,...,iN]}(k*), using the representation (2.2) with O(N) complexity. We illustrate the tensor-structured parameter estimation using the Schlögl chemical system [23], which is written for N = 1 molecular species and has M = 4 reaction rate constants k i , i = 1, 2, 3, 4. A detailed description of this system is provided in electronic supplementary material, appendix S2.1. We prescribe true parameter values as k 1 = 2.5 × 10^−4, k 2 = 0.18, k 3 = 2250 and k 4 = 37.5, and use a long-time stochastic simulation to generate a time series as pseudo-experimental data (for a short segment, see figure 1a). These pseudo-experimental data are then used for estimating the first three empirical moments m̂ i , i = 1, 2, 3, using (3.2).
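For a single-species model such as the Schlögl system, the moment-matching distance (3.1)-(3.3) and the splitting probability (3.4) can be illustrated in a few lines. The synthetic bimodal samples, the placeholder stationary distribution and the weights b_i below are illustrative assumptions, not the actual Schlögl solution or the weights used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
samples = np.concatenate([rng.normal(100, 15, 5000),      # pseudo "low" state samples
                          rng.normal(400, 30, 5000)])     # pseudo "high" state samples

states = np.arange(0, 801)                                # copy-number grid
p_model = np.exp(-0.5 * ((states - 110) / 20.0) ** 2) \
        + np.exp(-0.5 * ((states - 390) / 35.0) ** 2)
p_model /= p_model.sum()                                  # placeholder for p(x | k*)

def empirical_moment(x, i):                # m_hat_i = mean of x**i, as in (3.2)
    return np.mean(x ** i)

def model_moment(p, grid, i):              # m_i(k*) = sum_x x**i p(x | k*), as in (3.3)
    return np.sum(grid ** i * p)

L, b = 3, [1.0, 1e-2, 1e-4]                # illustrative weights b_i
J = sum(b[i - 1] * (empirical_moment(samples, i) - model_moment(p_model, states, i)) ** 2
        for i in range(1, L + 1))
print("moment-matching distance J =", J)

# Splitting probability of the low state, as in (3.4): P(X <= 230).
print("P(X <= 230) =", p_model[states <= 230].sum())
```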
The moments of the model output, m i (k), i = 1, 2, 3, are derived from the tensor-structured data p(x|k) computed using (2.2). Moment matching is sensitive to the choice of weights [33]. However, for the sake of simplicity, we choose the weights b i , i = 1, 2, 3, in a way that the contributions of the different orders of moments are of similar magnitude within the parameter space. Having the stationary distribution stored in the tensor format (2.2), the summary statistics Ŝ are not restricted to lower-order moments. (Table 3. An algorithm for the tensor-structured parameter estimation: (b0) compute the stationary distribution p(x|k) for all considered combinations of x ∈ V x and k ∈ V k , and store p(x|k) in tensor data; (b1) generate a candidate parameter vector k* ∈ V k ; (b2) extract the stationary distribution p(x|k*) from the tensor-structured data p(x|k) and compute the summary statistics S* = S*(p); (b3) compare the model prediction S* with the statistics Ŝ obtained from experimental data, using the distance function J(Ŝ, S*) and tolerance ε.) The TPA can efficiently evaluate different choices of the summary statistics, because of the simplicity and generality of the separable representation (2.2). For example, if one can experimentally measure the probability that the system stays in each of the two states of the bistable system, then the distance measure J(Ŝ, S*) can be based on the probability of finding the system within a particular part of the state space V x . We show, in electronic supplementary material, appendix S1.4, that such a quantity can also be estimated in the tensor format efficiently with O(N) complexity. Considering the Schlögl model, we estimate the probability that the system stays in the state with fewer molecules by Ŝ = P(X ≤ 230), (3.4) where P denotes the probability and the threshold 230 separates the two macroscopic states of the Schlögl system, see the dashed line in figure 1a. The splitting probability (3.4) can be estimated using a long-time simulation of the Schlögl system as the fraction of states which are less than or equal to 230 and is equal to Ŝ = 47.61% for our true parameter values. Figure 1b shows the set of admissible parameters within the parameter space V k whose values provide the desired agreement on the splitting probability (3.4) with tolerance ε = 5%, i.e. we require that Ŝ and S* agree to within ε in the algorithm given in table 3, where S* is computed using (2.2) and (3.4). Identifiability One challenge of mathematical modelling of GRNs is whether unique parameter values can be determined from available data. This is known as the problem of identifiability. Inappropriate choice of the distance measure may yield ranges of parameter values with equally good fit, i.e. the parameters being not identifiable [35]. Here, we illustrate the tensor-structured identifiability analysis of the deterministic and stochastic models of the Schlögl chemical system. We plot the distance function against two parameter pairs, rate constants k 1 - k 3 and k 2 - k 4 , in figure 3. From the colour map, we see that the distance function (3.1) possesses a well distinguishable global minimum at the true values (k 1 = 2.5 × 10^−4, k 2 = 0.18, k 3 = 2250 and k 4 = 37.5). This indicates that the stochastic model is identifiable in both cases. In the deterministic scenario, the Schlögl system loses its identifiability. When the distance function (3.1) only fits the mean concentration, the minimal values are attained on a curve in the two-dimensional parameter space.
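The identifiability check described above amounts to scanning a distance function over a grid of parameter pairs and inspecting the shape of its near-minimal set. The sketch below does this for two toy landscapes, a well-identified minimum versus a non-identifiable ridge; the quadratic forms are assumptions standing in for J evaluated from tensor data.

```python
import numpy as np

k1_grid = np.linspace(0.5, 1.5, 201)
k3_grid = np.linspace(0.5, 1.5, 201)
K1, K3 = np.meshgrid(k1_grid, k3_grid, indexing="ij")

J_identifiable = (K1 - 1.0) ** 2 + (K3 - 1.0) ** 2        # single sharp minimum at (1, 1)
J_ridge        = (K1 * K3 - 1.0) ** 2                     # minimum along a curve k1*k3 = 1

for name, J in [("identifiable", J_identifiable), ("ridge", J_ridge)]:
    admissible = J <= J.min() + 1e-4                      # near-minimal parameter combinations
    print(name, "number of near-minimal grid points:", int(admissible.sum()))
```

A single point (or a small cluster) of near-minimal combinations indicates an identifiable pair, while a long curve of equally good fits indicates non-identifiability.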
(Figure 2 caption: Circular representation [34] of estimated parameter combinations for the Schlögl model. Each spoke represents the corresponding parameter range listed in electronic supplementary material, table S3. The true parameter values are specified by the intersection points between the spokes and the dashed circle. Each triangle (or polygon in general) of a fixed colour corresponds to one admissible parameter set with ε = 0.25%. Each panel (a-d) shows the situation with one parameter fixed at its true value, namely (a) k 1 is fixed; (b) k 2 is fixed; (c) k 3 is fixed; and (d) k 4 is fixed.) Stochastic models are advantageous in model identifiability, because they can be parametrized using a wider class of statistical properties (typically, K quantities are needed to estimate K reaction rate constants for mass-action reaction systems). The TPA enables efficient and direct evaluation of J(Ŝ, S*) all over the parameter space in a single computation by using the representation (2.2). Figure 3 also reveals the differences between the model responses to parameter perturbations. The green contour lines show the landscape of J(Ŝ, S*) for the stochastic model using only the mean values, i.e. L = 1 in (3.1). The minimum is attained on a straight line, representing another non-identifiable situation. This line (green) has a different direction than the line obtained for the deterministic model (blue). In particular, this example illustrates that the parameter values estimated from deterministic models do not give a good approximation of both the average behaviour and the noise level when they are used in stochastic models [36]. Bifurcation analysis Bifurcation is defined as a qualitative transformation in the behaviour of the system as a result of a continuous change in model parameters. Bifurcation analysis of ODE systems has been used to understand the properties of deterministic models of biological systems, including models of cell cycle [37] and circadian rhythms [38]. Software packages implementing numerical bifurcation methods for ODE systems have also been presented in the literature [39,40], but computational methods for bifurcation analysis of corresponding stochastic models are still in development [19]. Here, we use the tensor-structured data p(x|k) given by (2.2) for a model of fission yeast cell cycle control developed by Tyson [24], and perform the tensor-structured bifurcation analysis on the tensor data. The interaction of cyclin-cdc2 in the Tyson model is illustrated in figure 4a. Reactions and parameter values are given in electronic supplementary material, appendix S2.2. The parameter k 1 , indicating the breakdown of the active M-phase-promoting factor (MPF), is chosen as the bifurcation parameter. The analysis of the corresponding ODE model reveals that the system displays a stable steady state when k 1 is at its low values, which describes the metaphase arrest of unfertilized eggs [41]. On the other hand, the ODE model is driven into rapid cell cycling exhibiting oscillations when k 1 increases [24]. The ODE cell cycle model has a bifurcation point at k 1 = 0.2694, where a limit cycle appears [24]. In our TPA computations, we study the behaviour of the stochastic model for the values of k 1 which are close to the deterministic bifurcation point. We observe that the steady-state distribution changes from a unimodal shape (figure 4b) to a probability distribution with a 'doughnut-shaped' region of high probability (figure 4c) at k 1 = 0.3032. In particular, the stochastic bifurcation appears for higher values of k 1 than the deterministic bifurcation.
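A crude indicator of the qualitative change described above is to count the local maxima of a one-dimensional marginal distribution as the bifurcation parameter is swept. The family of densities in the sketch below is a synthetic placeholder (a single peak splitting into two), not the cell-cycle model itself, and the sweep values are illustrative assumptions.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 400)

def marginal(k1):
    sep = max(k1 - 0.27, 0.0)                    # peaks start to separate past 0.27 by construction
    w1 = np.exp(-0.5 * ((x - 0.5 + sep) / 0.05) ** 2)
    w2 = np.exp(-0.5 * ((x - 0.5 - sep) / 0.05) ** 2)
    p = w1 + w2
    return p / p.sum()

for k1 in [0.20, 0.27, 0.30, 0.35]:
    p = marginal(k1)
    modes = np.sum((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))   # count strict local maxima
    print(f"k1 = {k1:.2f}: {modes} mode(s)")
```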
In figure 5, we use the computed tensor-structured parametric probability distribution to visualize the stochastic bifurcation structure of the cell cycle model. As the bifurcation parameter k 1 increases, the expected oscillation tube is formed and amplified in the marginalized YP-pM-M state space (figure 5a-d). In figure 5e-h, the marginal distribution in the Y-CP-pM subspace is plotted. We see that it changes from a unimodal (figure 5e) to a bimodal distribution (figure 5f). Cell cycle models have been studied in the deterministic context either as oscillatory [24] or bistable [42,43] systems. In figure 5, we see that the presented stochastic cell cycle model can appear to have both oscillations and bimodality, when different subsets of observables are considered. Robustness analysis GRNs are subject to extrinsic noise which is manifested by fluctuations of parameter values [44]. This extrinsic noise originates from interactions of the modelled system with other stochastic processes in the cell or its surrounding environment. We can naturally include extrinsic fluctuations under the tensor-structured framework. For a GRN as in (2.1), we consider the copy numbers X 1 , X 2 , . . . , X N as intrinsic variables and the reaction rates k 1 , k 2 , . . . , k M as extrinsic variables. Total stochasticity is quantified by the stationary distribution of the intrinsic variables, p(x). We assume that the invariant probability density of extrinsic variables, q(k), does not depend on the values of intrinsic variables x. Then the law of total probability implies that the stationary probability distribution of intrinsic variables is given by p(x) = ∫_{V k} p(x|k) q(k) dk, (3.5) where V k is the parameter space and p(x|k) represents the invariant density of intrinsic variables conditioned on constant values of kinetic parameters, see the definition below equation (2.1). If the distributions q(k) of extrinsic variables can be determined from high-quality experimental data, then the stationary density can be computed directly by (3.5). If not, the TPA framework makes it possible to test the behaviour of GRNs under different hypotheses about the distribution of the extrinsic variables. The advantage of the TPA is that it efficiently computes the high-dimensional integrals in (3.5) (see electronic supplementary material, appendix S1.4). Extrinsic noise in FitzHugh-Nagumo model We consider the effect of extrinsic fluctuations on an activator-inhibitor oscillator with a simple negative feedback loop: the FitzHugh-Nagumo neuron model which is presented in figure 6a. A self-autocatalytic positive feedback loop activates the X 1 molecules, which are further triggered by the external signal. The species X 2 is enhanced by the feedforward connection and it acts as an inhibitor that turns off the signalling [4]. We perform the robustness analysis based on the simulated tensor data in §2.1 (summarized on the third line of table 1). In our computational examples, we assume that q(k) = q 1 (k 1 ) q 2 (k 2 ) . . . q M (k M ), i.e. the invariant distributions of rate constants k 1 , k 2 , . . . , k M are independent. Then (3.5) reads as follows: p(x) = ∫ · · · ∫ p(x|k) q 1 (k 1 ) q 2 (k 2 ) . . . q M (k M ) dk 1 dk 2 . . . dk M . (3.6) Extrinsic variability in the FitzHugh-Nagumo system is studied in four prototypical cases of q i , i = 1, 2, . . . , M: (i) Dirac delta, (ii) normal, (iii) uniform, and (iv) bimodal distributions, as shown in figure 6b. As these distributions have zero mean, the extrinsic noise is not biased. We can then use this information about extrinsic noise to simulate the stationary probability distribution of intrinsic variables by (3.6).
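A discrete sketch of the averaging in (3.5)-(3.6) is given below for a single extrinsic parameter: the stationary distribution of the intrinsic variable is obtained as a weighted sum of conditional distributions over the extrinsic-noise distribution q(k). The Gaussian-shaped placeholders for p(x|k) and q(k) are assumptions; in the TPA the same weighted sum is carried out in the tensor format.

```python
import numpy as np

x = np.arange(0, 201)                       # copy-number grid for the intrinsic variable
k = np.linspace(0.5, 1.5, 41)               # grid for one extrinsic parameter

def p_x_given_k(ki):                        # placeholder conditional density p(x | k)
    mean = 100.0 * ki
    w = np.exp(-0.5 * ((x - mean) / 10.0) ** 2)
    return w / w.sum()

q = np.exp(-0.5 * ((k - 1.0) / 0.1) ** 2)   # extrinsic noise, e.g. a normal case
q /= q.sum()

p_x = sum(qi * p_x_given_k(ki) for ki, qi in zip(k, q))   # law of total probability
print(p_x.sum())                                          # ~1.0, a valid distribution
```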
When the extrinsic noise is omitted, the inhibited and excited states are linked by a volcano-shaped oscillatory probability distribution (figure 6c). At the inhibited state, X 1 molecules first get activated from the positive feedback loop, and then excite X 2 molecules by feed-forward control. The delay between the excitability of the two molecular species gives rise to the path (solid line) describing switching from the inhibited state to the excited state (figure 6c). If the normal or uniform noise are introduced to the extrinsic variables, then the path becomes straighter (figure 6d,e). This suggests that, once X 1 molecules get excited or inhibited, X 2 molecules require less time to response. GRNs with stronger negative feedback regulation gain higher potential to reduce the stochasticity. This argument has been both theoretically analysed [45,46], and experimentally tested for a plasmid-borne system [47]. We have shown that the extrinsic noise reduces the delay caused by the feedback loop (figure 6d). If we further increase the variability of the extrinsic noise, then the delay caused by the feedback loop is further reduced (figure 6e). In the case of the bimodal distribution of extrinsic fluctuations, the most-likely path linking the inhibited and excited states even shrinks into an almost straight line ( figure 6f ). This means that, for the same level of the inhibitor X 2 , the number of the activator X 1 is lower, i.e. the presented robustness analysis shows that the behaviour of stochastic GRNs with negative feedback regulation can benefit from the extrinsic noise. Discussion We have presented the TPA of stochastic reaction networks and illustrated that the TPA can (i) calculate and store the parametric steady-state distributions; (ii) infer and analyse stochastic models of GRNs. To explore high-dimensional state space V x and parameter space V k , the TPA uses a recently proposed low-parametric tensor-structured data format, as presented in equation (2.2). Tensor methods have been recently used to address the computational intensity of solving the CME [16,48]. In this paper, we have extended these tensor-based approaches from solving the underlying equations to automated parametric analysis of the stochastic reaction networks. One notable advantage of the tensor approach lies in its ability to capture all probabilistic information of stochastic models all over the parameter space into one single tensor-formatted solution, in a way that allows linear scaling of basic operations with respect to the number of dimensions. Consequently, the existing algorithms commonly used in the deterministic framework can be directly used in stochastic models via the TPA. In this way, we can improve our understanding of parameters in stochastic models. To overcome technical (numerical) challenges, we have introduced two main approaches for successful computation of the steady-state distribution. First, we compute it using the CFPE approximation which provides additional flexibility in discretizing the state space V x . The CFPE admits larger grid sizes for numerical simulations than the unit grid size of the CME. In this way, the resulting discrete operator is better conditioned. We illustrate this using a 20-dimensional problem introduced in the last line of table 1 and in electronic supplementary material, appendix S2.4. 
To compute the stationary distribution, a multi-level approach is implemented, where the steady-state distribution is first approximated on a coarse grid, and then interpolated to a finer grid as the initial guess (see electronic supplementary material, appendix S1.3, for more details). The results are plotted in figure 7. Second, we introduce the adaptive inverse power iteration scheme tailored to current tensor solvers of linear systems, see electronic supplementary material, appendix S1.3, for technical details. As tensor linear solvers are less robust, especially for ill-conditioned problems, it is necessary to carefully adapt the shift value during the inverse power iterations in order to balance the conditioning and sufficient speed of the convergence. We would like to emphasize the importance of these improvements, because the TPA is mainly limited by the efficiency of computing steady-state distributions, rather than by the problem dimension, N + K. Both the computational efficiency and the separation rank R are negatively correlated with the relaxation time of the reaction network. Reaction networks exhibiting bistable or oscillating behaviours usually have larger relaxation times. This explains some counterintuitive results in table 1, namely the smaller memory requirements and shorter computational times of the 20-dimensional reaction chain in comparison with the seven-dimensional cell cycle model. In particular, the TPA can be applied to systems with dimensionality N + K greater than 20, provided that they have small relaxation times. Techniques for the parameter inference and bifurcation analysis of stochastic models have been less studied in the literature than the corresponding methods for the ODE models. One of the reasons for this is that the solution of the CME is more difficult to obtain than solutions of mean-field ODEs. This has been partially solved by the widely used Monte Carlo methods, such as the Gillespie SSA, which can be used to estimate the required quantities [29]. Advantages of Monte Carlo methods are especially their relative simplicity and easy parallelization. The TPA provides an alternative approach. The TPA uses more complex data structures and algorithms than the Gillespie SSA, but it makes it possible to compute the whole probability distribution for all combinations of parameter values at once. The TPA stores this information in the tensor format. If the state and parameter spaces have a higher number of dimensions, then the Monte Carlo methods would have problems with storing computed stationary distributions. Another advantage of the TPA is that it produces smooth data, see e.g. figure 3 for the data over the parameter space and figure 5 for the data in the state space. This is important for stable convergence of gradient-based optimization algorithms [49] and for reliable analysis of stochastic bifurcations. Monte Carlo methods provide necessarily noisy and hence non-smooth data that may cause problems for these methods. Parameter inference of stochastic models can make use of various statistical measures, such as the variance and correlations. Monte Carlo approaches are widely used to compute these quantities, but they may be computationally expensive. The TPA provides an alternative approach.
Once we compute the stationary distribution for the desired ranges of parameter values and store it in the tensor format, we can use the tensor operation techniques (see electronic supplementary material, appendix S1.4) to efficiently compute many different statistical measures from the same stationary distribution. If the results obtained with the chosen statistical measure and method are not satisfactory, we can modify or completely change both and try to infer the parameters again. As the stationary distribution is stored, these modifications and changes can be made at low computational cost; in particular, no stochastic simulations are needed. In addition, as the stationary distribution contains complete information about the stochastic steady state, it can be used to compute practically any quantity for comparison with experimental data. We have illustrated several different parametric studies in figures 1b, 2 and 3. All these results are based on a single tensor solution presented in §2.1 (table 1). We would like to note that the presented inference is based solely on the steady-state distributions, and not on the time-dependent trajectories. Consequently, parameter estimation of the Schlögl system needs to be performed with at least one model parameter fixed at its true value. Nevertheless, the time evolution can be incorporated into the TPA framework. We can consider the time t as an additional dimension in the tensor data [50], i.e. we can compute p(x|k; t), where t = (t_1, t_2, ..., t_L)^T is a vector of temporal samples. Adding a temporal dimension to the separated tensor data increases the storage requirements and computational complexity from order O(N + K) to order O(N + K + 1). Then, the existing trajectory-based inference methods [51] can be applied to the computed tensor data p(x|k; t). Let us also note that it is relatively straightforward to use the TPA framework to study the parameter sensitivity of stochastic systems (i.e. to quantify the dependence of certain quantities of interest on continuous changes in model parameters). A systematic way of conducting the sensitivity analysis is illustrated in electronic supplementary material, appendix S1.5, using the fission yeast cell cycle model.
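The following sketch illustrates the workflow described above: once a stationary distribution p(x | k) is stored over a grid of parameter values, means, variances, correlations and a simple finite-difference parameter sensitivity can all be extracted without further stochastic simulations. The dense array, grid sizes and parameter values are placeholders for the paper's tensor-structured storage, not a reproduction of it.

```python
# Sketch: reading several statistical measures off a stored stationary
# distribution p(x1, x2 | k) defined over a parameter grid. Shapes, grids and
# the random "distribution" are purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
K, n1, n2 = 10, 40, 40                       # parameter grid points, state grid
p = rng.random((K, n1, n2))
p /= p.sum(axis=(1, 2), keepdims=True)       # normalize p(x1, x2 | k) for each k

x1 = np.arange(n1)[:, None]                  # copy-number grids
x2 = np.arange(n2)[None, :]

mean_x1 = (p * x1).sum(axis=(1, 2))          # E[X1 | k] for every parameter value
mean_x2 = (p * x2).sum(axis=(1, 2))
var_x1  = (p * x1**2).sum(axis=(1, 2)) - mean_x1**2
var_x2  = (p * x2**2).sum(axis=(1, 2)) - mean_x2**2
cov     = (p * x1 * x2).sum(axis=(1, 2)) - mean_x1 * mean_x2
corr    = cov / np.sqrt(var_x1 * var_x2)

# A crude finite-difference sensitivity of E[X1] with respect to the parameter:
k_grid = np.linspace(0.1, 1.0, K)            # hypothetical parameter values
sensitivity = np.gradient(mean_x1, k_grid)
```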
v3-fos-license
2021-05-15T06:16:53.970Z
2021-05-13T00:00:00.000
234497002
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0251320&type=printable", "pdf_hash": "eca251453f2bac8320b6796b80f52a16e1105fbb", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2806", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "6c5582ff74c4133a0c53bb73252439c464f4a735", "year": 2021 }
pes2o/s2orc
Mental-physical multimorbidity treatment adherence challenges in Brazilian primary care: A qualitative study with patients and their healthcare providers Improved understanding of multimorbidity (MM) treatment adherence in primary health care (PHC) in Brazil is needed to achieve better healthcare and service outcomes. This study explored experiences of healthcare providers (HCP) and primary care patients (PCP) with mental-physical MM treatment adherence. Adult PCP with mental-physical MM and their primary care and community mental health care providers were recruited through maximum variation sampling from nine cities in São Paulo State, Southeast Brazil. Experiences across quality domains of the Primary Care Assessment Tool-Brazil were explored through semi-structured in-depth interviews with 19 PCP and 62 HCP, conducted between April 2016 and April 2017. Through thematic content analysis, ten meta-themes concerning treatment adherence were developed: 1) variability and accessibility of treatment options available through PHC; 2) importance of coming to terms with a disease for treatment initiation; 3) importance of person-centred communication for treatment initiation and maintenance; 4) information sources about received medication; 5) monitoring medication adherence; 6) taking medication unsafely; 7) perceived reasons for medication non-adherence; 8) most challenging health behavior change goals; 9) main motives for initiation or maintenance of treatment; 10) methods deployed to improve treatment adherence. Our analysis has advanced the understanding of the complexity inherent in treatment adherence in mental-physical MM and revealed opportunities for improvement and specific solutions to effect adherence in Brazil. Our findings can inform research efforts to transform MM care through optimization. S1 Appendix Exemplary Interview Topic Guides (English & Portuguese) A conversational style of interviewing was adopted to encourage comfortable and fluent dialogue rich in detail, while using a semi-structured interview topic guide as a reference to ensure that all key topics were covered. We attempted to cover each key topic with each interviewee. Consistent with good qualitative research practice, main questions and prompts [text in square brackets in grey font] varied in each interview, the former dependent on whether the participant had already spontaneously covered that topic and the latter dependent on the participant's experience and their opening narrative. Interviews with Physicians [Topic guide questions were tailored to the specific healthcare professional group, according to relevance] • Opening question: Could you describe your role/involvement in care for people with diabetes, hypertension, heart disease or arthritis/arthrosis in the primary care unit where you work? • There are people who have two or more of these chronic illnesses at the same time (for example, diabetes, hypertension, heart disease or arthritis). Is your involvement in care for these people any different compared to your involvement in caring for people who have only one of these chronic diseases? • Some health professionals say that they have observed that some patients do not feel comfortable when they tell them their problems and difficulties. Has • How do you know if every patient you are seeing who has diabetes, hypertension, heart disease or arthritis is getting help for any of these chronic illnesses at health care services other than the one where you work?
[If so, can you give me more details on how you got to know? If not, can you give me more details on why you can't find out?] • How do you know which medications have been prescribed for each of your patient who has a chronic illness, including those medications that have been prescribed in health services other than the health service here, where you work? [If so, can you give me more details on how you got to know? If not, can you give more details on why you can't find out?] • How do you know if the patients you care for are taking the medications they were prescribed correctly? • How do you find out which of the patients you care for have diabetes, hypertension, heart disease or arthritis? [In your experience, what are the best diagnostic methods for hypertension, diabetes, heart disease, arthritis to use in Primary Care?] • When you have to decide how you decide which treatment is best for a patient with one or more of these chronic diseases? [For example, how do you decide to advise a patient to take specific medications, or to do some kind of physical activity, or to eat certain types of foods? How is your use of the "Primary Care Notebooks", from the Ministry of Health, to help you plan and decide what is the best treatment for your patients?] • What treatments or ways to help treat are offered for hypertension, diabetes, heart disease, arthritis / arthrosis in your care unit? • Of the treatments that are offered for these chronic illnesses in this health service, which ones do you think help the patients a lot and which ones do you think do not help that much? [Why do you think that? Do you think there is a combination of treatments that is especially good for helping patients with these chronic illnesses?] • If you had a choice, what treatment -or treatments -would you like to offer to patients with diabetes, hypertension, heart disease or arthritis / arthrosis? • In you experience, people with hypertension, diabetes, heart disease or arthritis, also have co-exiting, emotional problems such as depression or anxiety? [It is common?] • Do you think that hypertension, diabetes, heart disease or arthritis can cause depression and anxiety? [If so, give more details on how you think chronic illnesses can trigger emotional problems.] • Do you think that depression and anxiety can help cause hypertension, diabetes, heart disease, arthritis or osteoarthritis? [If so, give more details on how you think these emotional problems can help to cause chronic illness.] • What else do you think can help cause depression and anxiety? • What is your involvement in the care of people who have hypertension / diabetes / heart disease / arthritis or osteoarthritis and also have depression and / or anxiety at the same • How do you know if each patient is taking the medication prescribed to treat anxiety and / or depression correctly? • How do you know, for each patient you attend, if he received any help, other than medication, to treat emotional problems such as depression and / or anxiety, right here in this health service? [Can you give me more details about how you get to know (or why you don't get to know)?] • How do you know, for each patient you see, if he/she received any help, other than medication, to treat depression and/or anxiety, in another health service? [Can you give me more details about how you get to know (or why you don't get to know)?] • What ways of helping people with depression and/or anxiety are offered at the health service where you work? 
• In your experience, which forms of treatment work and which forms of treatment do not work well, to help people improve from depression and/or anxiety? • If you had a choice, what kind of help would you like to offer to your patients to treat depression and anxiety that you would? [Why, this one?] • In your experience, what ways do your patients prefer to treat depression and/or anxiety? [For example, do some patients prefer to take medication rather than receive some other type of help to treat depression and/or anxiety? Do some patients prefer to receive help of any kind that does not include taking medication to treat depression and /or anxiety?] • When a patient of yours, needs a referral for treatment of chronic illness, can you or someone in your primary care team talk to them about the specialised services in which they could be seen? [If so, how do you or someone on your team manage to make this conversation? If not, why can't you or someone on your team make this conversation?] • When your patients need referrals to treat emotional problems, can you or someone on your primary care team talk to them about specialised mental health services that they could be seen to? [If so, how do you or someone on your team manage to make this conversation? If not, why can't you or someone on your team make this conversation?] • Can you or another professional in the primary care service where you work help the patient to make an appointment or be admitted to other health services, if necessary? [If so, how can you or someone on your team help the patient with this? If not, why can't you or someone on your team help the patient with this?] • Have you ever noticed that any of your patients experienced difficulties with getting care in other health services to treat hypertension, diabetes, heart disease, arthritis / arthrosis, even after you or another professional in this health service made a referral to these other services? [If so, what difficulties have you noticed? How did you deal with these difficulties?] • Have you ever noticed that any of your patients experienced difficulties with getting care in a mental health service, even after you or another professional at this health service made a referral to these other services? [If so, what difficulties have you noticed? How did you deal with these difficulties?] • When your patients are referred, can you or someone from this primary care service provide them with written information (a report, a reference form) to take to the specialist or specialised service? [If so, how do you or someone on your team do this? If not, why can't you or someone on your team do this?] • After your patient has been consulted or admitted to a specialised service, do you or anyone in this primary care service receive a report, a counter-reference form, from that specialised service? [If so, what is the quality of this counter-reference form? If not, why do you think they are unable to provide the counter-reference form?] • Do you or someone in this primary care service talk to your patient about the results of this consultation or admission to the specialised service after it occurred? [If yes, talk more, give details, about these conversations? If not, why doesn't this conversation with the patient happen?] • Thinking about the aspects of care we discussed, how would you describe your satisfaction with communication with other health professionals, including colleagues from this healthcare service and other health facilities on the treatment of their patients? 
[What works? What aspects need improvement?] • Do you have any suggestion (s) about how communication with other health professionals who look after your patients could be improved? • Can you recommend some strategies or ways to integrate treatments for chronic illnesses with treatments for depression and / or anxiety, which work here in your health service? • Do you have any suggestion (s) on how the integration of these treatments for these two types of problems could be improved here in your health service? [For example, ideas about training health professionals or ideas about how to operate a system or equipment that facilitates communication within your health service and/or with other health services?] • Closing question: In the context of our conversation, is there a topic that is important to you, but you haven't had a chance to speak? Interviews with Patients [Diabetes and depression are used for illustration purposes, but specific condition would vary from a patient to another] • Have you ever found it difficult to make an appointment or get treatment for diabetes? [What difficulties did you have? E.g: cost, distance to a service unit, opening hours, waiting time for a consultation at a specialized clinic or others] • How did you managed to resolve those difficulties? • Some people think that people with darker skin colour or a complicated financial situation may find it more difficult to access health services for diabetes, and others do not think that. What are you experiences with it? • Among these service locations in which you were attended for diabetes, to which one you come back more often? • When you come back in this service is it always the same doctor or nurse who takes care of your diabetes? • In your opinion, is there a health professional or health service that knows you better as a • How have you dealt with these difficulties? • Some people think that people with darker skin colour or a complicated financial situation may find it more difficult to get help from health services to treat emotional problems [like depression or anxiety], and other people do not. What are your experiences with it? • Among healthcare services that you attend for depression, which one you most frequently return to? • When you return to this healthcare service, are you always attended by the same person? • • Does the health professional who takes care of your depression give you enough time for you to talk about their concerns and their problems with depression? • Do healthcare professionals that care for your depression, ask about how you are coping with your diabetes? • What do you do to not feel worse or feel better when you are down or sad? • What do you do to not feel worse or feel better when you are nervous or worried? • How satisfied are you with the treatment of depression that you received? • of questions 1a, 1b, 1c and 1d, if the answer is positive, you should also ask questions 2a, 2b, 2c, 2d and 2e, before asking the next question in group 1) • 1a -In any of diabetes consultation, has any health professional told you to see an expert or specialist service for diabetes? [Type of health professional / health service?] • 1b -In any of the diabetes consultations, did any health professional tell you to see a specialist or specialised service for depression? [Type of health professional / health service?] • 1c -In any of the depression consultations, did any health professional tell you to consult with a specialist or specialised service for depression? 
[Type of health professional / health service?] • 1d -In any of the consultations for depression, did any health professional tell you to consult with a specialist or specialized service for diabetes? [Type of health professional / health service?] • 2a -Did this health professional / health service helped to arrange this appointment with the specialist or specialised service? • 2b -Did this health professional / health service know that you made these consultations with this specialist or specialised service? • 2c -This health professional / health service gave you some information (report or form) for you to take to the expert? • 2d -This health professional/health service asked to you about what happened during the consultation with the specialist or specialised service? • 2e -Did this health professional/health service seem interested in knowing what you thought about the care given to you by the specialist or specialised service to which he referred you? • In the context of our conversation, is there any other topic that is important to you, but you have not had a chance to speak about yet?
v3-fos-license
2014-10-01T00:00:00.000Z
2011-11-11T00:00:00.000
17613734
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0027108&type=printable", "pdf_hash": "3034af618f11eec7750f8c534bb7d67ba1fbaed6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2807", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "sha1": "3034af618f11eec7750f8c534bb7d67ba1fbaed6", "year": 2011 }
pes2o/s2orc
Cost-Effectiveness of Internet-Based Self-Management Compared with Usual Care in Asthma Background Effectiveness of Internet-based self-management in patients with asthma has been shown, but its cost-effectiveness is unknown. We conducted a cost-effectiveness analysis of Internet-based asthma self-management compared with usual care. Methodology and Principal Findings Cost-effectiveness analysis alongside a randomized controlled trial, with 12 months follow-up. Patients were aged 18 to 50 year and had physician diagnosed asthma. The Internet-based self-management program involved weekly on-line monitoring of asthma control with self-treatment advice, remote Web communications, and Internet-based information. We determined quality adjusted life years (QALYs) as measured by the EuroQol-5D and costs for health care use and absenteeism. We performed a detailed cost price analysis for the primary intervention. QALYs did not statistically significantly differ between the Internet group and usual care: difference 0.024 (95% CI, −0.016 to 0.065). Costs of the Internet-based intervention were $254 (95% CI, $243 to $265) during the period of 1 year. From a societal perspective, the cost difference was $641 (95% CI, $−1957 to $3240). From a health care perspective, the cost difference was $37 (95% CI, $−874 to $950). At a willingness-to-pay of $50000 per QALY, the probability that Internet-based self-management was cost-effective compared to usual care was 62% and 82% from a societal and health care perspective, respectively. Conclusions Internet-based self-management of asthma can be as effective as current asthma care and costs are similar. Trial Registration Current Controlled Trials ISRCTN79864465 Introduction Asthma is a chronic, inflammatory disorder of the airways clinically characterized by respiratory symptoms such as wheeze, cough, dyspnoea, chest tightness and impaired lung function [1,2]. Treatment for asthma is aimed at improving asthma control, i.e. reducing current symptoms and need for short-acting bronchodilation, improving lung function and preventing future exacerbations [1][2][3]. In the past decade, the care for asthma patients has shifted from physician-managed care to guided self-management. Guided selfmanagement includes asthma education, self-monitoring of symptoms and/or lung function and adjustment of treatment according to an action plan guided by a health care professional (not necessarily a physician). Self-management has been shown to improve asthma control and quality of life and reduce health care utilization and sometimes improve lung function [4]. Besides clinical effectiveness, the implementation of new disease management strategies requires an economic evaluation to determine whether the clinical benefits are gained at reasonable costs. Several cost evaluations have compared paper-and-pencil self-management plans to usual care in asthma [5][6][7][8][9][10][11], but only a few compared costs to quality of life [10][11]. Most of these economic evaluations found that written self-management plans for asthma were likely to be cost-effective compared to usual physician provided care. However, the implementation of paperand-pencil self-management plans is hampered by patients' and doctors' reluctance to use written diaries [12]. Implementation of guided self-management programs may be enhanced by the use of Internet-based technologies, particularly in remote and underserved areas. 
In a recently conducted randomized controlled trial we have shown that Internet-based selfmanagement is feasible and provides better clinical outcomes compared to usual physician provided care with regard to asthma related quality of life, asthma control, symptom-free days and lung function [13]. Although previous trials have also evaluated the clinical effects of Internet-based self-management in adults [14] and children [15,16], so far, no economic evaluations have been conducted. We therefore carried out a cost-utility analysis, comparing quality of life with societal and health care costs during one year, to determine whether the clinical benefits gained with Internet-based self-management are attained at reasonable costs. Methods The protocol for this trial and supporting CONSORT checklist are available as supporting information; see Protocol S1, Checklist S1, and Flowchart S1. Ethics statement The study was approved by the Medical Ethics Committee of the Leiden University Medical Center. All participants gave their written consent. Setting and Participants Two hundred patients participated in a 12-month multicenter, non-blinded, randomized controlled trial. Patients were recruited from 37 general practices (69 General Practitioners) in the Leiden and The Hague area and the Outpatient Clinic of the Department of Pulmonology at the Leiden University Medical Center, The Netherlands over the period from September 2005 to September 2006 [13]. We included patients with physician diagnosed asthma as coded according to the International Classification of Primary Care in the electronic medical record [17], aged 18-50 years, with a prescription of inhaled corticosteroids for at least three months in the previous year, access to Internet at home, mastery of the Dutch language and without serious comorbid conditions that interfered with asthma treatment. Patients on maintenance oral glucocorticosteroid treatment were excluded. All participants gave their written consent. Details of the randomization and intervention have been described previously [13]. Briefly, the 200 patients were randomly assigned to Internet-based self-management as an adjunct to usual care (Internet group: 101 patients) or to usual physician-provided care alone (usual care group: 99 patients). Allocation took place by computer after collection of the baseline data, ensuring concealment of allocation. The Internet-based self-management program included weekly monitoring of asthma control and lung function, immediate treatment advice according to a computerized personal action plan after completing the validated Asthma Control Questionnaire on the Internet [18], on-line education and group-based education, and remote Web communication with a specialized asthma nurse. Utilities and QALYs Utilities express the valuation of health-related quality of life on a scale from zero (death) to one (perfect health). Patients described their health-related quality of life using the EuroQol classification system (EQ-5D) [19], from which we calculated their utilities over time using the British tariff [20]. The area under the utility curve is known as quality-adjusted life years (QALY) and was used as the primary outcome measure for the cost-effectiveness analysis. Patients additionally valued their own health status on a visual analogue scale (VAS). 
This scale from the patient perspective is potentially more responsive to change than other generic quality of life instruments, but is not the best choice for economic evaluations from a societal perspective [21]. The VAS scale was transformed to a utility scale using the power transformation 1 − (1 − VAS/100)^1.61 [22]. We obtained utility measurements at baseline, 3 and 12 months. For the EQ-5D, 6.5%, 10% and 8.5% of measurements were missing and for the visual analogue scale 7%, 10% and 9% were missing at 0, 3 and 12 months, respectively. To correct for possibly selective non-response, missing measurements were replaced by 5 imputed values based on switching regression [23,24] with regression variables randomisation group, age, sex, asthma control at baseline and available utility measures at all time points. Costs We distinguished three major cost categories: intervention costs, other health care costs and productivity costs [10,11]. Intervention costs consisted of materials (software support, electronic spirometer), personnel and patient costs (travel, time, Internet and text messaging costs). Other health care costs included contacts (including face-to-face, telephone and home contacts) with health care professionals (general practitioners, chest physicians, other specialists, physiotherapists, psychologists, complementary care and other paramedical professionals), emergency room visits, hospital admissions and both asthma and non-asthma medication. Productivity costs consisted of hours of absence from work. Patients reported their use of health care resources and the hours of absence from work using a quarterly cost-questionnaire. We used Dutch standard prices for units of resource use (contacts with health care professionals, hospital admissions and drug prescriptions) and hours of absenteeism, designed to represent societal costs and to standardize economic evaluations [25,26]. Hours of absenteeism were converted to costs by multiplying them by the average hourly wage for age and gender [25]. Details of the drugs used were derived from pharmacy records. All prices were converted to the price level of 2007 according to the general Dutch consumer price index [27] and converted to US dollars using the purchasing power parity index (€1 = $1.131) [28]. Because of the one-year time horizon, costs were not discounted. Cost-questionnaires were scheduled to be handed in at 3, 6, 9 and 12 months. Of these quarterly questionnaires, 10%, 14%, 19% and 9% were missing, respectively. Pharmacy records were available for 182 patients (91%). Missing cost-questionnaires and pharmacy records were imputed using multiple imputation, as previously described under 'Utilities and QALYs'. Statistical Analysis Differences and statistical uncertainty of QALYs and costs were calculated using non-parametric bootstrap estimation with 5000 random samples (1000 from each of the 5 imputations). Differences in costs resulted from differences in volumes rather than differences in unit costs, since we used standard prices for units of resource use and hours of absenteeism. We estimated the intervention effect by a linear regression model with randomisation group as the only independent variable, combining the 5 multiple imputation sets using Rubin's rules [29]. Analyses were carried out with Stata 9.0 (StataCorp, College Station, TX). Cost-Effectiveness Analysis The base case cost-effectiveness analysis compared societal costs with QALYs gained based on the British EQ-5D over the period of one year.
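A minimal sketch of the two utility calculations described above, assuming Python with NumPy: the VAS power transformation 1 − (1 − VAS/100)^1.61 and the QALY computed as the area under the utility curve across the 0-, 3- and 12-month measurements. The British EQ-5D tariff and the multiple imputation step are not reproduced, and the example VAS scores are hypothetical rather than taken from the study.

```python
# Sketch of the utility-to-QALY workflow: VAS power transformation plus the
# area under the utility curve over one year (trapezoidal rule).
import numpy as np

def vas_to_utility(vas):
    """Transform a 0-100 visual analogue score to a 0-1 utility."""
    return 1.0 - (1.0 - np.asarray(vas, dtype=float) / 100.0) ** 1.61

def qaly(utilities, months=(0.0, 3.0, 12.0)):
    """Area under the utility curve, expressed in years (trapezoidal rule)."""
    t = np.asarray(months, dtype=float) / 12.0
    u = np.asarray(utilities, dtype=float)
    return float(np.sum((u[1:] + u[:-1]) / 2.0 * np.diff(t)))

# Hypothetical patient: VAS scores at baseline, 3 and 12 months.
u = vas_to_utility([70, 78, 82])
print(round(qaly(u), 3))     # QALYs accrued over the 1-year follow-up
```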
Because of the limited degree of modeling in this cost-utility analysis, we carried out sensitivity analyses only on the use of different utility measures (British EQ-5D or Visual Analogue Scale) and on the included cost categories (societal or healthcare perspective). Statistical uncertainty of the cost-effectiveness was analyzed using the net benefit approach [30]. The net benefit is defined as λ × ΔQALY − Δcosts, where λ is the willingness to pay for a gain of one quality-adjusted life year. In this way, the observed QALY difference is reformulated as a monetary difference. The uncertainty about cost-effectiveness was graphically shown by plotting the bootstrapped incremental costs and QALY estimates in the cost-effectiveness plane (200 estimated pairs for each of the 5 imputed datasets) (figure 1). In a cost-effectiveness acceptability curve we graphed the probability (1 − [one-sided] P value) that the Internet-based self-management program was cost-effective (i.e. had higher net benefit) compared with usual care, as a function of λ for a range of λ between 0 and 200000 (figure 2). We highlighted this probability at commonly cited willingness-to-pay values of $50000 and $100000 per QALY [31]. Results The Internet group and usual care group consisted of 101 and 99 participants, respectively. Mean age of the sample was 37 years and 70% of the participants were women (table 1). At baseline, asthma related quality of life, asthma control and medication use were similar for the two randomization groups. Utilities and QALYs At baseline, the utilities according to the EQ-5D did not statistically significantly differ between the Internet group and the usual care group. EQ-5D utilities did not reach a statistically significant difference throughout the study. At 3 months and 12 months the difference in EQ-5D utility was 0.037 (95% CI, −0.007 to 0.081) and 0.006 (95% CI, −0.042 to 0.054), respectively. Similarly, the difference in quality adjusted life years was not statistically significant: 0.024 (95% CI, −0.016 to 0.065) (table 2). Visual analogue scale utilities were not statistically significantly different throughout the study. At 3 and 12 months the difference in visual analogue scale utility was 0.012 (95% CI, −0.026 to 0.050) and 0.013 (95% CI, −0.015 to 0.040), respectively. The difference in quality adjusted life years based on the visual analogue scale was estimated to be 0.007 (95% CI, −0.017 to 0.032) (table 2). Costs The total intervention costs were estimated at $25675, which is $254 (95% CI, $243 to $265) per patient (table 3). The highest cost components of the Internet-based intervention were software support ($7917) and the patients' time costs ($5380 for monitoring time and $5106 for attending the education sessions). Patients in the Internet group reported 114 hours of absence from work compared to 98 hours for patients in the usual care group. The 16-hour difference in absenteeism was estimated to be equivalent to $604 (95% CI, $−1430 to $2637) in monetary terms. The difference in societal costs (i.e. health care costs plus costs due to absenteeism) was therefore estimated at $641 (95% CI, $−1957 to $3240) in favor of usual care. Cost-utility analysis The estimates of the cost differences and QALY differences were both not statistically significant. The cost-utility ratio, based on these point estimates, was $26700 per QALY. The probability that Internet-based self-management was both more effective and less costly than usual care (dominant) was 30%.
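The net benefit calculation and acceptability curve described above can be sketched as follows (Python with NumPy). The bootstrapped increments below are synthetic draws loosely centred on the point estimates reported in the text, purely to illustrate how the probabilities at $50000 and $100000 per QALY are obtained; they are not the study's actual bootstrap samples.

```python
# Sketch of the net benefit approach: for each bootstrapped pair of incremental
# QALYs and costs, net benefit = wtp * dQALY - dCost; the acceptability curve is
# the share of bootstrap samples with positive net benefit at each
# willingness-to-pay (wtp) value. Draws are synthetic and illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n_boot = 5000
d_qaly = rng.normal(0.024, 0.021, n_boot)     # incremental QALYs (illustrative spread)
d_cost = rng.normal(641.0, 1300.0, n_boot)    # incremental societal costs in $

def prob_cost_effective(wtp, d_qaly, d_cost):
    net_benefit = wtp * d_qaly - d_cost
    return float((net_benefit > 0).mean())

for wtp in (50_000, 100_000):
    print(wtp, prob_cost_effective(wtp, d_qaly, d_cost))
```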
The probability that it was less effective, but more costly (dominated) was 10% (figure 1). Due to statistical uncertainty of both costs and QALYs, the probability that Internet-based self-management is cost-effective compared to usual care depends on the willingness-to-pay per QALY. This probability was 62% at $50000 per QALY and 74% at $100000 per QALY (figures 1 and 2). From a health care perspective, the lower health care costs result in a cost-utility ratio of $1500 per QALY. The probability that Internet-based self-management is cost-effective from a health care perspective was 82% at $50000 per QALY and 86% at $100000 per QALY (figures 1 and 2). QALYs gained, based on the visual analogue scale, were less than those based on the EQ-5D. The probability that Internet-based self-management is cost-effective based on visual analogue scale QALYs was 49% and 60% at $50000 and $100000 per QALY from a societal perspective and was 71% and 75% at $50000 and $100000 per QALY from a health care perspective, respectively. Discussion In this study we evaluated the cost-effectiveness of a new disease management strategy, Internet-based self-management, for patients with asthma. The QALY and cost differences, 0.024 and $641 respectively, between Internet-based self-management and usual care were not statistically significant during a follow-up period of 1 year. Both the estimation of QALYs gained and the calculated expenses showed considerable uncertainty, which is displayed by the cost-effectiveness planes. The estimated cost-utility ratio was $26700 per QALY, which is generally considered acceptable [32]. At a commonly cited willingness-to-pay threshold of $50000 per QALY [31] the Internet-based self-management intervention had a probability of 62% and 82% of being cost-effective compared to usual care from a societal perspective and health care perspective, respectively. We have previously shown substantial and statistically significant clinical effects in favor of Internet-based self-management with regard to asthma related quality of life, asthma control and lung function [13,33]. Although the utility outcomes presented in the current study point in the same direction (i.e. in favor of Internet-based self-management) as the clinical outcomes, their statistical significance is less evident. There are two main reasons that may explain this finding. First, generic quality of life measures, such as the EQ-5D, must be distinguished from disease-specific quality of life measures, such as the Asthma Quality of Life Questionnaire [33]. The latter is well known to be responsive to change [21]. However, generic preference-based instruments may differentiate between the highest and lowest levels of asthma control, but are less able to discriminate between moderate levels [34,35]. The baseline asthma control scores found in our primary care study population can be classified as moderately or partly controlled asthma, and substantial improvements in disease-specific quality of life may have been missed by the generic instruments. Second, the absence of a statistically significant difference in our primary utility measure may reflect a lack of statistical power, since our trial was powered to detect a statistically significant difference in the primary outcome measure, asthma related quality of life, and not explicitly to detect differences in generic preference-based utility measures [13,36].
The intervention costs of $254 per patient were similar to the intervention costs of a paper-and-pencil asthma self-management program [10], but were half the costs of intensive nurse-led telemonitoring in asthma reported by others [11]. The costs of the technological innovation (software support, electronic spirometer, Internet and mobile phone costs) were only about 40% of the total intervention costs. The fixed technological costs of software support constituted about one third of the intervention costs, so a considerable increase in the number of users could reduce the cost per user by one third. Moreover, the calculations were based on costs during the one-year randomized controlled trial. [Figure 1 caption: Uncertainty about cost-effectiveness of the asthma Internet-based self-management program compared with usual asthma care (showing the 1000 bootstrapped estimates). Circles and triangles represent the incremental societal and health care costs, respectively, plotted against the incremental quality adjusted life years (QALY) (intervention minus usual care). The south-east quadrant indicates that the Internet-based self-management intervention dominates usual care (i.e. effectiveness is higher and costs are lower); the north-west quadrant indicates that usual care dominates the intervention. The points below the dashed diagonal lines are cost-effective at willingness-to-pay thresholds of $50000 and $100000 per QALY, respectively.] Asthma self-management cost-effectiveness studies with a longer time horizon have shown that intervention costs decrease after the first year [10,37]. In our study, costs for education sessions only apply to the first year, thus reducing costs in later years by about a quarter. Differences in other health care costs should be interpreted with caution, since almost all components showed statistically nonsignificant differences. Only the reduction in contacts with physiotherapists was statistically significant, suggesting that patients in the Internet group with better asthma control are less in need of physiotherapy. The costs of drugs for asthma show small decreases in short-acting β2-agonists and inhaled corticosteroids alone, but increases in combination therapy (inhaled corticosteroids plus long-acting β2-agonists) and leukotriene antagonists in the self-management group. The increase in volumes and costs of asthma controller medication accompanied by a decrease in reliever medication might have contributed to improved clinical outcomes in favor of Internet-based self-management. Our study had several limitations. First, quality adjusted life year estimates were calculated from only two follow-up measurements. More measurements would possibly have resulted in more accurate QALY estimates, but we limited the number of follow-up measures in order to minimize the awareness of participating in a clinical trial among patients in the usual care group. Second, patients were inevitably aware of the allocated group, which may have influenced their utility ratings. Therefore, the effects observed may be due to unblinding. On the other hand, the influence of unblinded groups in pragmatic trials might be regarded as part of the intervention, since interventions implemented in daily clinical practice are never blinded.
Third, our economic evaluation was limited to one year. As pointed out above a longer duration would probably have resulted in reduced intervention cost estimates after one year. It is, however, unknown how EQ-5D utility scores will progress after one year. New cost-effective disease management strategies for asthma are required to face up to the global burden of asthma. Internet-based self-management is an innovative and effective management strategy in adults with asthma that improves clinical outcomes [13]. This Internet-based strategy can be as effective as current asthma care with regard to quality of life and costs are similar. Future implementation studies ought to add other quality of life measures in order to reveal potentially more subtle differences.
v3-fos-license
2019-03-28T13:33:33.884Z
2018-11-05T00:00:00.000
86408046
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.intechopen.com/citation-pdf-url/63460", "pdf_hash": "bad36c309a4fe221a50dbcab77162c4ba3320d87", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2809", "s2fieldsofstudy": [ "Medicine" ], "sha1": "2443311078359caa55e728762e4acbb67c76e59f", "year": 2019 }
pes2o/s2orc
The Subcutaneous Implantable Cardioverter-Defibrillator The subcutaneous ICD (S-ICD) represents an important advancement in defibrillation therapy that obviates the need for a transvenous lead, the most frequent source of complications with transvenous devices. The S-ICD has been shown to be as safe and effective as transvenous ICD therapy, but the two devices are not interchangeable. The S-ICD is only suitable for patients who do not require bradycardia or antitachycardia pacing functionality. In patients with underlying diseases associated with polymorphic ventricular tachycardia and a long life expectancy, an S-ICD may be the preferred choice. Moreover, it is advantageous in situations of increased risk of endocarditis, i.e., previous device system infection and immunosuppression, including hemodialysis. In patients with abnormal vascular access and/or right-sided heart structural abnormalities, it may be the only option. The S-ICD is bulkier, the battery longevity is shorter, and the device cost is higher, although remote follow-up is possible. A two- or three-incision implant procedure has been described, with a lateral placement of the device and a single subcutaneous lead. The rate of inappropriate therapy for both S-ICD and transvenous systems is similar, but S-ICD inappropriate shocks are more frequently attributable to oversensing, which can often be resolved with sensing adjustments. Introduction The subcutaneous implantable cardioverter defibrillator (S-ICD) offers an alternative rescue option for sudden cardiac death: an implantable device that can deliver defibrillation therapy without the need for a transvenous lead. Lead failure is the most frequent source of complication requiring surgical revision. Approximately 20% of transvenous leads fail within 10 years and extraction may lead to devastating complications, including death [1][2][3][4][5]. The S-ICD differs from conventional transvenous ICD systems in other important ways: an S-ICD requires no transvenous leads (the most frequent source of device complications), but S-ICDs do not offer bradycardia pacing, antitachycardia pacing or cardiac resynchronization, and they have limited programmability. Approved in Europe in 2009, the S-ICD system (SQ-RX 1010, Boston Scientific, Natick, Massachusetts, USA) consists of a pulse generator and a tripolar defibrillation lead, both of which are implanted subcutaneously. In terms of size, weight, and footprint, the S-ICD device is larger and heavier than a conventional transvenous ICD (approximately 130 vs. 60 g, respectively). S-ICDs are indicated for primary and secondary prevention but are seen as particularly useful for primary-prevention patients with a long life expectancy. The selection of an S-ICD system over a transvenous ICD may be based on a variety of factors. Transvenous ICD patients who experience device-related complications, such as lead problems, may be revised to an S-ICD device. In a German multicenter study, 25% of S-ICD patients had a previous transvenous system explanted because of device complications [6]. Implant techniques and considerations The S-ICD system is composed of a tripolar parasternal lead, positioned to the left (about 1-2 cm) of and parallel to the sternal midline; this lead plugs into the pulse generator, which is implanted over the fifth to sixth rib and positioned submuscularly between the midaxillary and anterior axillary lines. The lead has three electrodes, two of which sense only.
The defibrillation electrode is positioned between the two sensing electrodes. The sensing vector is created from the sensing electrode to the can, with the device automatically selecting the better electrode for the vector to assure optimal sensing. Device implantation may require minimal (to verify final position) to no fluoroscopy, as much of the technique relies on anatomical landmarks [7]. See Figure 1. A three-incision technique (plus pocket formation) was originally pioneered for S-ICD implantation, and a newer two-incision approach has been described in the literature [8]. The two-incision approach creates an intermuscular pocket for the pulse generator rather than a subcutaneous pocket by incising the inframammary crease at the anterior border of the latissimus dorsi, allowing the generator to fit between the two muscles. Then a small incision at the xiphoid process (in the same direction as pocket incision) allows an electrode insertion device to tunnel the lead in place [8,9]. In a study of 36 patients, the two-incision approach was found to be safe and effective and it may produce superior cosmetic results compared to the three-incision approach [9]. See Figure 2. The time required for device implantation has been recently reported as an average of 68 ± 20 minutes which includes intraoperative defibrillation threshold (DT) testing [10]. DT testing is of decreasing importance with transvenous ICDs but remains a much-discussed topic for S-ICD systems. Guidelines still recommend DT testing during S-ICD implantation, even though it is often used without intraoperative testing based on generalized findings from transvenous systems [11][12][13]. In a study of 98 S-ICD patients, 25% of patients failed to convert their induced arrhythmia with the first intraoperative 65 joule shock, necessitating further therapy delivery and/or external defibrillation. In this study, 24/25 patients could be successfully defibrillated following either reversal of shocking polarity or lead reposition although the desired 10 joules safety margin could not be achieved in 4/24 of these patients [14]. This suggests the importance of perioperative DT testing. However, 100% of patients could be converted from defibrillation with an internal 80 joule shock [14]. In a subsequent study of 110 consecutive S-ICD patients, 50% (n = 55) did not undergo defibrillation testing at implant for any of several reasons (including patient condition, age, and physician preference). In this group, 11% had episodes of sustained ventricular tachycardia (VT) or ventricular fibrillation (VF) necessitating therapy delivery and all of them were effectively converted with the first 80 joule shock [15]. Ventricular tachycardia is a rhythm disorder originating in the heart's lower chambers that has a rate of at least 100 beats per minute; ventricular fibrillation is a much faster, chaotic heart rhythm that causes the heart to quiver rather than pump effectively. Thus, the notion that DT testing at implant is necessary for S-ICD patients has been challenged. S-ICD implantation may be carried out under local anesthesia [16], conscious sedation, or general anesthesia (64.1% of U.S. implants of S-ICD systems [17]. The rate of complications at implant is low and the most commonly reported complication is infection (1.8%) [18]. By dispensing with the transvenous leads, the S-ICD system avoids periprocedural and complications associated with conventional transvenous defibrillation leads, i.e. 
pericardial effusion, pneumothorax, accidental arterial puncture, nerve plexus injury, and tricuspid valve damage [19]. Safety and efficacy of S-ICDs S-ICDs appear to have similar rates of infection and other complications as transvenous systems and to be similarly effective in rescuing patients from sudden cardiac death, but there are important distinctions between the two systems. Safety In a retrospective study of 1160 patients who received an implantable defibrillator (either transvenous system or S-ICD) at two centers in the Netherlands, patients were analyzed using propensity matching to yield 140 matched patient pairs. The rates of complications, infection, and inappropriate therapy were statistically similar between groups, but S-ICD patients had significantly fewer lead-related complications than the transvenous group (0.8 vs. 11.5%, p = 0.030) and more non-lead-related complications (9.9 vs. 2.2%, p = 0.047) [20]. The most frequently reported S-ICD complication involved device sensing. (20) Pooled data from the Investigational Device Exemption (IDE) and postmarket registry EFFORTLESS (n = 882) found S-ICD-related complications occurred at a rate of 11.1% at 3 years, but with no lead failures, S-ICD-related endocarditis, or bacteremia [21]. An IDE allows a device that is the subject of a clinical study to be used to collect data about safety and effectiveness that may be later used to submit to the U.S. Food and Drug Administration (FDA). Device-related complications were more frequent with transvenous systems when compared to S-ICD devices in a propensity-matched case-control study of 69 S-ICD and 69 transvenous ICD patients followed for a mean of 31 ± 19 or 32 ± 21 months, respectively. About 29% of transvenous ICD patients experienced a device-related complication compared to 6% of S-ICD patients, reducing the risk of complications for S-ICD patients by 70%; transvenous lead problems were the most frequently reported complication in the former group [22]. In the largest study of S-ICD patients (n = 3717) to date, complications were low at 1.2% overall. The most frequently reported complications were cardiac arrest (0.4%), hematoma (0.3%), death (0.3%), lead dislodgement (0.1%), myocardial infarction (0.1%), and hemothorax (<0.1%) [23]. Device revision during index hospitalization was infrequent (0.1%) [23]. Infections occur at roughly similar rates with S-ICD and transvenous systems but with the important distinction that S-ICD infections may sometimes be resolved with conservative therapy (course of antibiotics with device left in place), whereas most transvenous ICD infections necessitate the extraction of the device and the transvenous leads. In a survey from the U.K. reporting on data from 111 S-ICD patients, 11/111 (10%) of patients experienced infection, of whom 6 could be successfully treated conservatively without device extraction [24]. The EFFORTLESS registry (n = 472) reported a 4% rate of documented or suspected infections and complication-free rates at 30 and 360 days were 97 and 94%, respectively [25]. Once implanted, the S-ICD device delivers a nonprogrammable, high-energy rescue shock (80 joules) to the thorax compared to shocks of 45 joules to the heart administered by conventional transvenous systems. Notably the S-ICD delivers a 65 joule shock during implant testing. 
Therapy delivery differs markedly between S-ICD and transvenous systems in terms of the amount of energy delivered, location of shocking vectors, and potential for damage to surrounding tissue or the heart. In a porcine study, the mean time to therapy delivery was significantly longer with an S-ICD than a transvenous system (19 vs. 9 seconds, p = 0.001) but the S-ICD shocks were associated with less elevation of cardiac biomarkers. The longer time to therapy may be advantageous in that device patients often experience short runs of nonsustained VT. On the other hand, S-ICD shocks were associated with more skeletal muscle injuries than transvenous device shocks owing to the energy patterns resulting from the device placement but the clinical relevance of this is likely negligible [26]. Efficacy Effective shock therapy is often defined as conversion of an episode of VT/ VF within five shocks, differing from effective first-shock therapy which occurs when the initial shock converts the arrhythmia. In a study of 79 S-ICD patients at a tertiary center, 7.6% of patients experienced at least one appropriate shock for a ventricular tachyarrhythmia during the follow-up period (mean 12.8 ± 13.7 months) [27]. In a multicenter study from Germany (n = 40), shock efficacy was 96.4% [95% confidence interval (CI), 12.8-100%] and first-shock efficacy was 57.9% (95% CI, 35.6-77.4%) [6]. In an effort to analyze S-ICD efficacy in a large group of diverse patients, data from the Investigation Device Exemption (IDE) clinical study and the EFFORTLESS post-market registry were pooled to provide information about 882 patients followed for 651 ± 345 days. About 59 patients experienced therapy delivery for 111 spontaneous VT/VF episodes with first-shock efficacy in 90.1% of events and shock efficacy (termination with five or fewer shocks) in 98.2% of patients [21]. In the EFFORTLESS registry (n = 472), first-shock efficacy in discrete episodes of VT/VF was 88% and shock efficacy within five shocks was 100% [25]. Inappropriate shocks with S-ICDs Inappropriate shock describes therapy delivery to treat an episode which the device inappropriately detects as a ventricular tachyarrhythmia. Inappropriate shocks have been recognized as a significant clinical challenge with transvenous systems as well as S-ICDs. In a tertiary care center study of 79 S-ICD patients, inappropriate shock occurred in 8.9% (n = 7) of patients, attributable to T-wave oversensing, atrial tachyarrhythmia with rapid atrioventricular conduction, external interference and/or baseline oversensing due to lead movement [27]. T-wave oversensing occurs when the device inappropriately senses ventricular repolarizations (the T-waves on the electrocardiograph) counting them as ventricular events, leading to double counting of the intrinsic ventricular rate. In a multicenter German study (n = 40) with a median follow-up of 229 days, four patients (10%) experienced 21 arrhythmic episodes resulting in 28 therapy deliveries. Four of these episodes were inappropriately identified by the device as ventricular tachyarrhythmias, with the result that two patients received inappropriate shocks. This results in a rate of 10% inappropriately detected ventricular tachycardia and 5% delivery of inappropriate therapy [6]. In a study using pooled data from the IDE and EFFORTLESS post-market registry (n = 882), the three-year rate for inappropriate therapy delivery was 13.1% [21]. 
It does not appear there are statistically more cases of inappropriate therapy in S-ICD patients compared to transvenous ICD patients. A propensity-matched study (69 patients with a transvenous ICD and 69 with an S-ICD) found the rate of inappropriate shocks was 9% in the transvenous and 3% in the S-ICD groups but this was not statistically significant (p = 0.49) [22]. In a study of 54 S-ICD patients in a real-world prospective registry, the one-year rate for inappropriate therapy delivery was 17%, most of whom had single-zone programming [10]. Inappropriate shocks with S-ICDs may be minimized. Most of them are caused by T-wave oversensing. In a survey from the U.K. (n = 111 implanted patients covered), 24 appropriate shocks were delivered in 12% of the patients (n = 13) and 51 inappropriate shocks were delivered in 15% of the patients (n = 17), of which 80% could be traced to T-wave oversensing [24]. In the EFFORTLESS registry (n = 472), there was a 7% rate of inappropriate therapy delivery in 360 days, mainly due to oversensing [25]. The main causes of inappropriate therapy delivery have been reported to be supraventricular tachycardia (SVT) at a rate above the discrimination zone, T-wave oversensing, other types of oversensing (e.g. interference), SVT discrimination errors, and low-amplitude signals [21]. Inappropriate therapy delivery due to T-wave oversensing can often be remedied by adjusting the sensing vector or adding another discrimination zone (dual-zone programming) [10]. Certain patients may be at elevated risk for inappropriate shock. A single-center study of 18 hypertrophic cardiomyopathy (HCM) patients implanted with an S-ICD system and followed for a mean 31.7 ± 15.4 months concluded that HCM patients may be at elevated risk for T-wave oversensing which could lead to inappropriate therapy delivery. In this study, 39% of these HCM patients had T-wave oversensing and 22% of the study population (n = 4) experienced inappropriate therapy delivery [28]. An evaluation of 581 S-ICD patients found that inappropriate shocks caused by oversensing occurred in 8.3% of S-ICD patients and patients with HCM and/or a history of atrial fibrillation were at elevated risk for inappropriate therapy [29]. There is a paucity of data on the use of S-ICD devices in HCM patients, but a small study of 27 HCM patients screened for possible S-ICD therapy found 85% (n = 23) were deemed appropriate candidates and 15 had the device implanted [30]. At implant testing, all patients were successfully defibrillated with a 65 joules shock and most induced arrhythmias were terminated with a 50 joules shock (12/15). After the median follow-up period of 17.5 months (range 3-35 months), there were no appropriate shocks and one inappropriate shock, attributed to oversensing caused when the QRS amplitude was reduced while the patient bent forward. In this particular high-risk patient group of HCM patients without a pacing indication, the S-ICD was effective at detecting and terminating tachyarrhythmias [30]. Mortality The mortality risk with S-ICD implantation is low, but merits scrutiny. On the one hand, S-ICD implantation is generally associated with fewer risks than transvenous ICD implantation in that no transvenous leads are required. On the other hand, patient selection for S-ICD may favor more high-risk patients (such as those with a prior infection, renal failure, comorbid conditions such as diabetes) but also includes many younger and generally fitter patients. 
Overall, mortality data from S-ICD studies appears favorable. In a pooled analysis combining IDE data and EFFORTLESS registry information, the one-year and two-year mortality rates were 1.6 and 3.2%, respectively [21]. In a study of real-world use of S-ICDs in 54 primary-and secondary-prevention patients, mortality at the mean follow-up duration of 2.6 ± 1.9 years was 11% but no patient died of sudden cardiac arrest [10]. In a six-month study comparing 91 S-ICD and 182 single-chamber transvenous ICD patients, mortality rates were similar although the S-ICD patients had more severe pre-existing illness at implant [31]. It may be that the similar mortality rates between transvenous and S-ICD populations reflects the patient populations rather than the implantation procedure or device characteristics [23]. Troubleshooting S-ICDs The S-ICD device was designed to be a streamlined system with fewer than 10 programmable features (transvenous ICDs have over 100 programmable features) and to perform in a largely automated fashion in terms of device function. The recent introduction of dual-zone programming to S-ICDs added a degree of programmability and reduced inappropriate shock [32]. Arrhythmia detection in the S-ICD relies on a system of template matching, based on waveform morphology of the subcutaneous ECG obtained at implant [33]. Oversensing and sensing-related problems are the most frequently reported problems but are being addressed in terms of device design and programmability. T-wave oversensing occurs when the device incorrectly identifies a T-wave as a QRS complex and counts it as a native ventricular beat, which leads to double-counting the rate. The use of dual-zone device programming has reduced the incidence of inappropriate therapy as a result of double-counting caused by T-wave oversensing [34]. T-wave inversions and QRS complexes that are overly large or very small may be particularly vulnerable to sensing anomalies. Reprogramming the sensing vector or therapy zones may be helpful in such instances [35,36]. In a propensity-matched study comparing transvenous ICDs to S-ICDs, there were three inappropriate shocks in the S-ICD group, all of which were due to T-wave oversensing in sinus rhythm and all of which could be eliminated with adjustment of the sensing vector [22]. Furthermore, it has been observed with increasing operator experience and better programming techniques, sensing problems have been reduced [21]. In a study using pooled data from the IDE and EFFORTLESS registry, the rate of inappropriate therapy associated with oversensing was <1% [21]. When inappropriate shock occurs, the stored electrograms will likely help identify the cause. If lead malposition is suspected, a chest X-ray may be appropriate. In case of oversensing, the sensing vector may be optimized, device programming may be revised to add a second detection zone, or pharmacological therapy may be added [32]. SVT discrimination likewise relies on template-matching (which is similar to transvenous systems) but the S-ICD may be able to accomplish this with a higher degree of resolution than transvenous ICDs [33]. The use of dual-zone programming appears advantageous. Primary and secondary prevention Primary-and secondary-prevention patients represent two distinct patient populations who may be treated with S-ICD therapy, although S-ICDs seem particularly well suited for primary-prevention patients. 
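The template-matching discrimination described in the troubleshooting discussion above can be illustrated with a normalized correlation between a stored baseline QRS template and a newly sensed beat: rhythms whose morphology correlates poorly with the template are flagged as potentially ventricular. This is a conceptual sketch only; the actual S-ICD discrimination algorithm is proprietary, and the match threshold used here is an arbitrary assumption.

```python
import numpy as np

def morphology_match(template: np.ndarray, beat: np.ndarray) -> float:
    """Normalized correlation between a stored template and a sensed beat."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    b = (beat - beat.mean()) / (beat.std() + 1e-12)
    n = min(len(t), len(b))
    return float(np.dot(t[:n], b[:n]) / n)

def classify_beat(template, beat, match_threshold=0.8):
    """Crude discriminator: a good morphology match is treated as supraventricular-like."""
    score = morphology_match(np.asarray(template, float), np.asarray(beat, float))
    label = "supraventricular-like" if score >= match_threshold else "possible ventricular"
    return label, score

# Toy example: a beat nearly identical to the template scores close to 1.
rng = np.random.default_rng(0)
qrs_template = np.sin(np.linspace(0, np.pi, 40))
label, score = classify_beat(qrs_template, qrs_template + 0.05 * rng.normal(size=40))
print(label, round(score, 2))
```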
Secondary-prevention patients have a lower rate of comorbid conditions and significantly higher left-ventricular ejection fractions (LVEF) than primary-prevention patients (48 vs. 36%, p < 0.0001), while primary-prevention patients had a higher incidence of heart failure and were more likely to have had a transvenous ICD implanted before the S-ICD. Primary-prevention patients also have a higher rate of ischemic cardiomyopathy (41 vs. 33%) and nonischemic cardiomyopathy (28 vs. 12%) [18]. S-ICDs have been shown to be effective for both primary-and secondary-prevention patients. In a study of 856 S-ICD patients (mean follow-up 644 days), there were no significant differences between primary-and secondary-prevention populations in the rates of effective arrhythmia conversions, inappropriate therapy, mortality or complications although appropriate therapy delivery was delivered to significantly more secondary-prevention than primary prevention patients (11.9 vs. 5.0%, p = 0.0004) [18]. The freedom from any appropriate therapy delivery was 88.4% among primaryprevention patients with an LVEF ≤35 and 96.2% among primary-prevention patients with an LVEF >35%. The freedom from any appropriate therapy delivery among secondary-prevention patients was 92.1% [18]. Spontaneous conversion to sinus rhythm was more frequent among primary-prevention patients (about 48% of all ventricular tachyarrhythmias) compared to secondary-prevention patients (31%) [18]. However, the rates of inappropriate therapy delivery and complications were similar for both primary-and secondary-prevention patients [18]. The optimal candidates for S-ICD S-ICD systems are indicated for patients who require rescue defibrillation but do not need bradycardia pacing support and would not benefit from antitachycardia pacing or cardiac resynchronization therapy. This includes primary-and secondaryprevention patients. By avoiding transvenous leads, the S-ICD is particularly appropriate for patients with occluded veins or limited venous access (who are not suitable candidates for transvenous systems) and the S-ICD may be beneficial for younger, fitter, and active patients. The generator position of the S-ICD patient may make it easier and safer for strong, fit patients to resume active lifestyles without jeopardizing lead position. Despite the fact that S-ICD devices are larger than transvenous systems, their lateral placement may result in more pleasing esthetic results than a conventional transvenous ICD. Young device patients likely will have a lifetime of device therapy, resulting over time in much hardware in their vasculature; the S-ICD thus presents an advantage in that regard. It appears that S-ICDs are implanted in a younger patient population; a survey of multiple U.K. hospitals (n = 111 patients) found the median patient age was 33 (range 10-87 years) [24]. The mean age of patients in the EFFORTLESS registry was 49 ± 18 years (range 9-88 years) [25]. Younger patients with cardiomyopathy or channelopathy often have a high rate of complications with conventional transvenous ICDs [37] and it has been thought they may be better served with an S-ICD device [9]. In a multicenter case-control study, it was found that 59.4% of S-ICD patients were primary-prevention and the main underlying cardiac conditions were dilated cardiomyopathy (36.2%), ischemic cardiomyopathy (15.9%), and HCM (14.5%) [38]. 
In particular, these patients have been considered challenging to treat with a conventional transvenous ICD in that they may have an erratic electrical substrate in the heart and increased left-ventricular mass, which could contribute to an elevated DT. First-shock efficacy rates of up to 88% are promising in light of these challenges [25]. In a study of 50 hypertrophic cardiomyopathy patients implanted with S-ICDs, 96% of patients could be induced into an arrhythmia at implant, and of the 73 episodes of VF induced, 98% were successfully converted with a 65-joule shock from the S-ICD during DT testing. One patient in this study (2%) required rescue external defibrillation [39]. The patient who failed internal defibrillation had a body mass index of 36 and was successfully converted by an 80-joule shock with reversed polarity from the S-ICD [39].
Indications
The most recent guidelines to address the S-ICD were published by the American Heart Association, the American College of Cardiology, and the Heart Rhythm Society in 2017 [40]. An S-ICD is indicated (Class of Recommendation I, level of evidence B) for patients who meet indication criteria for a transvenous ICD but who have inadequate vascular access or are at high risk of infection, and for whom there is no anticipated need for bradycardia or antitachycardia pacing. Further, implantation of an S-ICD is deemed reasonable for patients with an ICD indication for whom there is no anticipated need for bradycardia or antitachycardia pacing (Class of Recommendation IIa, level of evidence B). An S-ICD is contraindicated in a patient who is indicated for bradycardia pacing, antitachycardia pacing for termination of ventricular tachyarrhythmias, and/or cardiac resynchronization therapy (Class of Recommendation III, level of evidence B) [40]. The European Society of Cardiology guidelines from 2015 report that S-ICDs are effective in preventing sudden cardiac death, and the device is recommended as an alternative to transvenous ICDs in patients who are indicated for defibrillation but not pacing support, cardiac resynchronization therapy, or antitachycardia pacing (Class IIa, Level C). Moreover, the S-ICD was considered a useful alternative for patients in whom venous access is difficult, for patients who have had a transvenous system explanted because of infection, or for young patients expected to need long-term ICD therapy [41].
Pre-implant testing
Patients considered for S-ICD therapy should be screened with a modified three-channel surface electrocardiogram (ECG) set up to represent the sensing vectors of the S-ICD. With the patient both standing and supine, the ratio of R-wave to T-wave should be established and signal quality evaluated. If any of the three vectors does not result in satisfactory sensing, the S-ICD should not be implanted. Once the device is implanted, the system automatically selects the optimal sensing vector [11].
Programming
The S-ICD may be programmed to detect arrhythmias using a single- or dual-zone configuration. In the dual-zone configuration, a lower cutoff rate defines what might be called a "conditional shock zone," to which a discrimination algorithm is applied so that therapy is withheld if the rhythm is deemed supraventricular in origin or attributable to non-arrhythmic oversensing. This discrimination zone relies on a form of template matching.
Above that rate, a cutoff establishes the "shock zone" which delivers a shock based on the rate criterion alone. When the capacitors charge in anticipation of shock delivery, a confirmation algorithm assures the persistence of the arrhythmia prior to sending the shock. Shocks are delivered at the nonprogrammable 80 joules of energy [11]. Future directions The evolution of the S-ICD adds an important new device into the armamentarium for rescuing patients from sudden cardiac death. To further improve S-ICD technology, size reduction, increased battery longevity, and improved T-wave rejection will be needed. In the near future, improvement in sensing function might eliminate the need for a separate screening ECG prior to implant, which could optimize clinical workflow. Improved battery technology is particularly important as the S-ICD is often used in patients with a relatively long life expectancy. Leadless pacemaker systems that might work together with an S-ICD are in development which would allow for bradycardia pacing support, antitachycardia pacing and a subcutaneous defibrillator without transvenous leads [32]. The development of a leadless epicardial pacemaker might allow for left-atrial and left-ventricular pacing function to be integrated to the S-ICD. Taken altogether, these improvements could make the S-ICD the preferred device in the vast majority of cases for rescue from sudden cardiac death. Conclusion The subcutaneous implantable cardioverter defibrillator (S-ICD) offers an alternative to transvenous ICDs but the two systems should not be considered interchangeable. The S-ICD is appropriate for patients who require only rescue defibrillation (primary or secondary prevention) but does not offer bradycardia pacing, antitachycardia pacing, overdrive pacing, or cardiac resynchronization therapy. S-ICD devices may be appropriate in patients who have occluded vasculature or device infection with a transvenous system. Effectiveness, rate of infections, and survival rates are similar for both devices although, in general, S-ICDs may be implanted in patients with more serious underlying conditions such as end-stage renal disease or advanced diabetes. Infections with S-ICDs are more likely to be effectively treated with a conservative course of antibiotic therapy and no device extraction. Inappropriate shocks occur at similar rates with both systems but are more likely caused by oversensing in the S-ICD. A main advantage of S-ICDs over transvenous systems is the elimination of the transvenous defibrillation lead which may be considered the Achilles heel of the transvenous system, having a 10-year complication rate of 25%. It is likely that considerable advances in ICD therapy will occur in the next decade as the S-ICD systems are further refined. © 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Synthesis and Supramolecular Assembly of a Terrylene Diimide Derivative Decorated With Long Branched Alkyl Chains Terrylene diimide derivatives are pigments for dyes and optoelectric devices. A terrylene diimide derivative N,N'-di(1-undecyldodecyl)terrylene-3,4:11,12-tetracarboxdiimide (DUO-TDI) decorated with long branched alkyl chains on both imide nitrogen atoms was designed and synthesized. The supramolecular assembly behaviors of DUO-TDI in solution and at the liquid-solid interface were both investigated. The assembled nanostructures and photophysical properties of TDI in solution were explored by varying solvent polarity with spectral methods (UV-Vis, FL and FT-IR) and morphological characterization (AFM). Depending on the solution polarities, fibers, disk structures and wires could be observed and they showed diverse photophysical properties. In addition, the interfacial assembly of DUO-TDI was further investigated at the liquid-Highly Oriented Pyrolytic Graphite (HOPG) interface probed by scanning tunneling microscope (STM). Long range ordered monolayers composed of lamellar structures were obtained. The assembly mechanisms were studied for DUO-TDI both in solution and at the interface. Our investigation provides alternative strategy for designing and manipulation of supramolecular nanostructures and corresponding properties of TDI based materials. INTRODUCTION Rylene diimide dyes derivatives are famous for their outstanding photophysical and photochemical stability and their high fluorescence quantum yield (Zhao et al., 2016;Feng et al., 2017;Frankaer et al., 2019). They have not only shown importance as vat dyes in industrial olorants, but also have been proven to be excellent organic semiconductor candidate for opto-electronic applications (Geerts et al., 1998;Wolf-Klein et al., 2002;Jung et al., 2006). Terrylene diimides (TDIs) are a class of rylene diimide dye consisting of terrylene core, a large aromatic core along the long molecular axis. It shows brilliant blue color and emits fluorescence at long wavelengths with long fluorescence lifetime. Moreover, TDIs also show good thermal, chemical, and photochemical stabilities. TDIs are potential candidates as excellent probes for bio-labeling, energy convertors for light concentrators, and functional materials in electronic devices (Peneva et al., 2008;Bai et al., 2011;Berberich and Würthner, 2012;Chen et al., 2014;Stappert et al., 2016). Then, the design and synthesis of TDI based molecules have been attracting increasing attention recently, although less research had been done compared to perylene diimides (PDIs) (Chen et al., 2015;Würthner et al., 2016;Guo et al., 2019), another kind of rylene diimides. TDI derivatives were mostly designed via the decoration of the parent TDI molecule with various functional groups on the imide positions or on the periphery of the terrylene core (Heek et al., 2013). Actually, the competition and cooperation of the π-π stacking from the terrylene cores and other weak interactions of added functional groups play a great role in the modulation of molecular packing and their nanostructures and properties. One of the strategies is by using flexible chains, especially the alkyl chains (Davies et al., 2011). The affiliation of alkyl chains could vary the solubility, processing ability, molecular arrangement way, and the corresponding properties of TDIs. The topological structures of the alkyl chains could affect the assembly of rylene imides as well (Balakrishnan et al., 2006). 
It was proved that branched alkyl chains were capable of promoting distinguished assembly than induced by normal alkyl chains (Liao et al., 2013). For TDIs, branched alkyl chains were indeed fixed on the aromatic core to study the assembly therefore . It should be known that the length of the alkyl chains could greatly affect the molecular assembly even with only one methylene difference (Chesneau et al., 2010;Xu et al., 2013;Li et al., 2017). Here, we report on the synthesis and self-assembly of one terrylene imide derivative modified with alkyl chains. For this subjective, it has two branched long alkyl chains on both imide positions. The presence of long alkyl chains significantly enhances the solubility and inhibits the intermolecular interaction and aggregation. The modulation of the solution assembly of TDI was realized by changing the solvent polarity and monitored by spectral and morphological methods. It was found that different kinds of assembled nanostructures with various properties could be formed. In addition, the surface/interfacial assembly behaviors could provide insights into the design, select and optimizing of semiconductors for using in opto-electronic devices. Then the assembly of TDI at the liquid-HOPG interface was also explored by STM (Lee et al., 2014a,b). RESULTS AND DISCUSSIONS Synthesis and Properties of DUO-TDI DUO-TDI was synthesized based on reported methods (Mayo et al., 1990;Nolde et al., 2006) (Scheme 1). Blue powder was obtained for DUO-TDI. The molecular structure was confirmed by 1 HNMR, 13 CNMR and MALDI TOF MS. As indicated in the Scheme 1, a large aromatic core exists between two branched alkyl chains. From the optimized molecular structure of TDI, the aromatic core is almost planar from the side view of the molecular structure. The distance between the two N atoms in the molecular skeleton is about 1.58 nm and such large π-conjugated core could provide strong π-π interactions with neighboring SCHEME 1 | (A) Molecular structure of DUO-TDI. (B) The top and side view of DUO-TDI molecule. The optimization was performed using the Forcite module of Materials Studio 7.0. The DREIDING force field was implemented for the geometry optimizations (Mayo et al., 1990). conjugated systems. The four undecyl chains with length about 1.39 nm in the periphery of the core structure not only change the solubility in usual organic solvents, but also offer van der Waals interactions among adjacent chains. It could be expected that the synergistic effect between π-π interactions from the core and the van der Waals interactions from such long alkyl chains could be effectively modulated by varying the conditions, thus leading to diverse assembly process, assembled nanostructures, and properties (Chen et al., 2015;Zhang et al., 2016). Tetrahydrofuran (THF) is a good solvent for DUO-TDI. The absorption and emission properties of DUO-TDI in molecular state were investigated by UV-Vis and FL emission spectra. Firstly, the concentration-dependent UV-Vis absorption spectra of DUO-TDI in THF solutions were studied ( Figure 1A). It can be seen that all the UV-Vis absorption spectra exhibited well-resolved vibronic structures when the concentration was varied from 4.3 × 10 −6 to 3.5×10 −5 M. With the concentration increasing, the absorbance intensity increased accordingly but without band shift. The relationship of absorbance intensity at 644 nm as a function of the concentration was shown in Figure 1B. 
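The concentration series described here is essentially a Beer-Lambert linearity check: if the absorbance at 644 nm scales linearly with concentration, the dye is molecularly dissolved rather than aggregating. A small sketch of that fit is given below; the data points are invented placeholders spanning the stated concentration range, not the values of Figure 1B.

```python
import numpy as np

# Hypothetical (concentration [M], absorbance at 644 nm) pairs covering the
# studied range (4.3e-6 to 3.5e-5 M); replace with the measured values.
conc = np.array([4.3e-6, 8.7e-6, 1.7e-5, 2.6e-5, 3.5e-5])
abs_644 = np.array([0.10, 0.21, 0.41, 0.62, 0.83])

slope, intercept = np.polyfit(conc, abs_644, 1)
pred = slope * conc + intercept
r_squared = 1 - np.sum((abs_644 - pred) ** 2) / np.sum((abs_644 - abs_644.mean()) ** 2)

# For a 1 cm path length, the slope approximates the molar absorptivity (L mol^-1 cm^-1).
print(f"epsilon ~ {slope:.3e} L mol^-1 cm^-1, R^2 = {r_squared:.4f}")
# R^2 close to 1 (with no band shift) supports a molecularly dissolved, non-aggregated state.
```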
From the fitted linear line, it was clear that DUO-TDI did not assemble into aggregates. In another words, DUO-TDI exists in molecularly state in THF within the above concentration range. In solution, the absorbance band from 450 to 700 nm was ascribed to the π-π * electronic transition of the chromophores in the monomeric state along with vibrational transitions ( Figure 1C). Four characteristic vibration absorption bands centered at 644, 591, 546, and 505 nm were observed, and attributed to the 0-0, 0-1, 0-2, and 0-3 vibrational transitions, respectively (Nagao et al., 2002). The cast film from THF solution of DUO-TDI was also studied to compare with the molecularly DUO-TDI. Two structureless bands at 670 and 597 nm were obtained for the cast film. The band at 597 nm should shift from that at 644 nm from diluted THF solution. Such blue shift indicated the formation of H-aggregates in the film (Davies et al., 2011). At the same time, the appeared shoulder band at 670 nm suggested the existence of J-aggregates in the cast film (Jung et al., 2006). From FL spectra, one emission peak at 672 nm and a shoulder band at 725 nm showed up for DUO-TDI in diluted solution. In contrast, the emission of DUO-TDI was completely quenched in the cast film, suggesting the main formation of H-type molecular packing in accordance with the results from UV-vis data (Jung et al., 2006). Solvent Induced Assembly of DUO-TDI Apart from the above discussions on the molecular behaviors in diluted solutions and in films, the self-assembly of TDI was further investigated in mixed solvent with varied polarity. In the present contribution, the mixed solvents were prepared by adding water into THF solutions. The volume percentage of water (Vw, v %) in the mixed solvent was altered to adjust the solvent polarity. Vw was changed from 0 to 75v% to study the solvent-dependent self-assembly of DUO-TDI, which was monitored by UV-vis and FL spectra firstly (Figure 2). With Vw = 25 v%, the absorption spectral lineshape was almost the same to that from monomeric DUO-TDI in THF (0 v%), while the absorbance intensity was slightly enhanced and the bands shifted to red. It can be seen that the three monomeric absorption bands at 546, 591, and 644 nm, which belong to the 0-2, 0-1, and 0-0 electronic transitions from the terrylene diimide cores red-shifted to 549, 597, and 648 nm. FL spectra were recorded to shed light on the self-assembly of DUO-TDI. It was shown that the emission was quenched greatly (about 50%) and the main emission band at around 662 nm red-shifted to 670 nm. It could be concluded that J-aggregates were formed with Vw = 25 v% (Jung et al., 2006). When Vw was raised to 50 v%, drastic changes of absorption band were observed. The absorption bands were broadened and turned into unresolved structures. Two main bands appeared at 600 and 690 nm, accompanied by incremental absorption at a wavelength longer than 700 nm. Clearly, the absorption at 612 nm was blue-shifted from the band at 644 nm, and the absorption at 690 nm was a newly appeared band. In addition, the emission was quenched as well. It was demonstrated that H-aggregates were mainly formed with J-aggregates in a minority in solution with Vw = 50 v%. With Vw = 75 v%, it showed similar spectral lineshape to that of 50 v%, however the two main peaks were centered at 600 and 680 nm (Figure 2A). It was obvious that the blue shift was enlarged compared to that from solution with Vw = 50 v%, indicating the increased π-π stacking. 
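A convenient way to summarize these spectral changes is to compare each aggregate band with the monomer 0-0 band: a red shift signals J-type character and a blue shift H-type character. The helper below simply encodes that rule of thumb using the band positions quoted above; it is a bookkeeping aid, not a substitute for a full exciton-coupling analysis.

```python
MONOMER_0_0_NM = 644  # 0-0 band of molecularly dissolved DUO-TDI in THF

def aggregate_type(band_nm: float, monomer_nm: float = MONOMER_0_0_NM) -> str:
    """Classify an aggregate band by its shift relative to the monomer 0-0 band."""
    shift = band_nm - monomer_nm
    if shift > 0:
        return f"red-shifted by {shift:.0f} nm -> J-type character"
    if shift < 0:
        return f"blue-shifted by {-shift:.0f} nm -> H-type character"
    return "no shift -> monomer-like"

# Band positions reported for the water/THF mixtures.
observations = {
    "25 v%": [648],        # slight red shift of the 0-0 band
    "50 v%": [612, 690],   # blue-shifted main band plus a new red-shifted band
    "75 v%": [600, 680],
}
for vw, bands in observations.items():
    for band in bands:
        print(f"Vw = {vw}: {band} nm -> {aggregate_type(band)}")
```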
Apart from that, the relative intensity at about 600 and 690 nm was increased from 1.89 to 3.14, suggesting the increased relative amount of H-to J-aggregates. Besides, the fluorescence emission was completely quenched in accordance with the results from UV-vis spectra. Based on the above results, DUO-TDI could assemble into J-or H-type of aggregates relying on the solvent condition. Slightly increasing polarity of solvent, J-aggregates would be formed, and the elevation of polarity could facilatate the formation of H-aggregates. The molecular arrangement of DUO-TDI within the aggregates could be altered by changing solvent polarity. FT-IR spectral method was used to detect alkyl chain packing in the molecular assemblies (Figure 3). It was reported that the alkyl chains with all trans-cis zigzag conformation could show an asymmetric stretching vibration of methylene group (CH 2 ) at 2,916-2,918 cm −1 (Wang et al., 1996;Zhang et al., 1997). For DUO-TDI powder, the asymmetric stretching vibration was at 2,918 cm −1 , which indicated all trans-cis zigzag conformation of alkyl chains. It was found that this peak shifted on varying V W . The absorption of CH 2 from the branched undecyl groups were at 2,919 cm −1 from THF, indicating the relative disordered packing of alkyl chains. With the increase of polarity by the addition of water, the asymmetric vibrations of CH 2 shifted to longer wavenumbers, from 2,918 to 2,922 cm −1 , implying that gauche conformation or disordered packing of alkyl chains increased gradually. From the FT-IR and UV-Vis spectral data, both the π-π stacking and the alkyl chain packing were both varied on changing the solvent conditions. Atomic force microscopy (AFM) measurements were carried out to investigate the polarity effect on the self-assembled nanostructures of DUO-TDI. Figure 4 shows the AFM images of DUO-TDI nanostructures formed in different mixed solutions. It was evident that in the cast film from pure THF solution, no uniform structures could be observed. Amounts of amorphous structures existed on the surface with few thin fibers. The width of the fibers was around 18 nm. The observed fiber structures might be obtained due to the evaporation of THF on mica surface, since there was no obvious aggregation behaviors were found based on the solution spectral data. For V W = 25 v%, helical fibers with left-handedness were mainly obtained. It means that chiral nanostructures were assembled, although DUO-TDI is an achiral building block (Shen et al., 2014). It was suggested that the J-aggregation manner facilitates DUO-TDI to hierarchically assemble into structures with handedness. The height of the helical fibers was around 13 nm. The width of the fibers was about 70 nm. It can be easily seen that thick fibers were entangled by thin fibers. With V W = 50 v%, helical fibers almost disappeared, and there were sphere structures with diameters ∼1 µm and the height about 100 nm. Considering the high ratio (∼ 10) of width to height, it could be deduced that actually disk structures were formed. So, the emergence of both H-and J-aggregates could prevent the hierarchical growth of one dimensional fibers and the formation of chiral sense for the Frontiers in Chemistry | www.frontiersin.org nanostructures. On increasing the V W to 75 v%, lots of wires with an average width of 50 nm showed up without obvious helical sense. 
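The inference that the micrometre-wide, ~100 nm tall objects at Vw = 50 v% are disks rather than spheres rests on a width-to-height comparison: a sphere imaged by AFM would have a height comparable to its lateral size. The few lines below make that arithmetic explicit for the dimensions quoted above; the cutoff of 5 used to call a structure "flattened" is an arbitrary assumption.

```python
def aspect_ratio(width_nm: float, height_nm: float) -> float:
    return width_nm / height_nm

structures = {
    "Vw = 25 v% helical fibers": (70, 13),
    "Vw = 50 v% objects":        (1000, 100),   # ~1 um wide, ~100 nm tall
}
for name, (w, h) in structures.items():
    r = aspect_ratio(w, h)
    shape = "flattened (disk-like)" if r >= 5 else "roughly isotropic"
    print(f"{name}: width/height = {r:.1f} -> {shape}")
```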
Since H-aggregation manner was the major way of molecular packing with V W = 75 v%, then it seems that the H-aggregates could promote the growth of one dimensional structures, but inhibit both chiral packing of molecules and hierarchical growth with chirality. It was clear that the solvent polarity indeed affected the assembled structures of DUO-TDI. The formation of helical fibers, disk structures and wires could be manipulated by controlling the solvent polarity and this further confirmed the different molecular packing modes and hierarchical ways within the assembled nanostructures. The AFM data was in accordance with that from the spectral results. Assembly of DUO-TDI at the Liquid-HOPG Interface We also reported the self-assembly of this n-type semiconductor at the liquid-HOPG interface. For DUO-TDI, the large πconjugated core provides the π-π staking interactions with the substrate; and the four long alkyl chains offer potential good affinity with the HOPG surface and Van der Walls interactions among neighboring alkyl chains (Chen et al., 2014;Liu et al., 2015Liu et al., , 2016. Different solvents were tried, since the solvent may affect the self-assembly behaviors of molecules at the interface between the liquid and HOPG (Shen et al., 2010;Li et al., 2016). Here, 1-Octanoic acid and 1-Phenyloctane were used for detect the solvent effect. 1-Octanoic acid is a polar and protic solvent, and 1-Phenyloctane is an apolar and aprotic solvent. In the first stage, the 2D crystallization behaviors of DUO-TDI at 1-Octanoic acid-HOPG interface were investigated. It was found that DUO-TDI could form ordered stable monolayers composed of lamellar structures (Figure 5). Figure 5A showed a large range of monolayers of DUO-TDI molecules. The relative bright dots were attributed to the π-conjugated core of DUO-TDI ( Figure S1). Obviously, only two out of four alkyl chains of DUO-TDI adsorbed at the interface. One DUO-TDI core was enclosed by a white oval and enlarged in the inset of Figure 5A for clarity. The one well-ordered lamellar structure was indicated by a yellow arrow. Similar ordered stable monolayers were obtained for DUO-TDI at the 1-Phenyloctane-HOPG interface (Figure 5B). To inspect the 2D molecular packing of this semiconductor in more detail, we recorded the high-resolution STM images. Figure 6 showed a high-resolution image of DUO-TDI at the 1-Octanoic acid-HOPG interface. The unit cell parameters of the mirror patterns were the same within experimental error: a = 1.53 ± 0.01 nm, b = 1.96 ± 0.02 nm, and γ = 86 ± 2 • for the packing in Figure 6A; a = 1.55 ± 0.02 nm, b = 1.97 ± 0.05 nm, and γ = 86 ± 1 • for the packing in Figure 6B. In addition, the orientation angles of vector a with respect to the main symmetry axes of the underneath HOPG for the enantiomeric patterns were −9 • and + 9 • (Figures 6A,B), respectively. Thus, 2D chirality was not only expressed within the monolayer plane, but also at the level of the monolayer orientation with respect to the HOPG substrate (Elemans et al., 2009;Guo et al., 2017). The tentative models for DUO-TDI are shown in Figures 6C,D, where the mirrorrelated patterns were clearly demonstrated. In rows, two DUO-TDI molecules were aligned in a shoulder-to-shoulder manner to form a dimer, indicated in the zoomed-in images of Figure 6D. Such dimers were connected with each other through two pairs of H-bonds (C-H· · ·O) within the same row. 
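From the unit-cell parameters measured by STM, the footprint of the adsorbed molecules follows directly from the oblique-cell area A = a * b * sin(gamma). The sketch below uses the Figure 6A parameters; the number of molecules per unit cell is an assumption for illustration, since the text does not state it explicitly.

```python
import math

def unit_cell_area(a_nm: float, b_nm: float, gamma_deg: float) -> float:
    """Area of an oblique 2D unit cell: A = a * b * sin(gamma)."""
    return a_nm * b_nm * math.sin(math.radians(gamma_deg))

a, b, gamma = 1.53, 1.96, 86.0           # Figure 6A parameters
area = unit_cell_area(a, b, gamma)

molecules_per_cell = 1                    # assumption; not stated explicitly in the text
print(f"Unit cell area  = {area:.2f} nm^2")
print(f"Area/molecule   = {area / molecules_per_cell:.2f} nm^2 "
      f"(assuming {molecules_per_cell} molecule per cell)")
print(f"Surface density = {molecules_per_cell / area:.2f} molecules nm^-2")
```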
In STM images, the bright rows in the STM image corresponds to the molecular benzene ring skeleton in the model, and the dark rows corresponds to the alkyl chain in the model. The high resolution STM images at the 1-Phenyloctane-HOPG interface were also recorded and the monolayer composed of same nanopatterns with same unit cell parameters was obtained ( Figure S2). However, it should be noted that the orientation angle of vector a with respect to the main symmetry axes of the underneath HOPG was 0 • for both enantiomeric patterns. It can be seen that DUO-TDI molecules could form same longterm ordered nanostructures at both liquid-HOPG interface, but that the monolayer chirality was changed. Form above discussions, the formation of stable monolayers were attributed by the synergistic effect of H-bonds, π-π stacking, and Van der Walls interactions. And the solvent played an important role in the expression of supramolecular chirality, especially at the level of monolayers. Actually, the bulk assemblies for DUO-TDI were examined by using the TGA and DSC ( Figure S3). TGA was used to characterize the thermal stability, and it was showed that DUO-TDI could be stable at the temperature lower than 360 • C. DSC was performed to detect the phase transition of the DUO-TDI molecules. It was found that DUO-TDI showed two peaks at 145 and 137 • C in the first cooling curve. It was obvious a thermotropic liquid crystal behavior was observed (reference). The systematic investigation of thermotropic liquid crystal behavior of DUO-TDI is undergoing. CONCLUSIONS The synthesis and the investigation of supramolecular assembly of a terylene diimide derivative DUO-TDI have been reported. DUO-TDI has a large π-conjugated core decorated with long branched 1-undecyldodecyl at both N positions. It was found that the supramolecular assembly could be manipulated by changing polarities of solutions consisting good solvent THF and poor solvent water. When varying the volume percentage of water (Vw) from 0 to 75v%, monomeric DUO-TDI, J-aggregates, Haggregates with minor J-aggregates were obtained. Moreover, J-aggregates benefited for the formation of helical fibers, Haggregates facilitated the fabrication of achiral nanostructures, such as nanodisks and wires. UV-vis, FL and FT-IR spectra confirmed that π-π stacking and alkyl chain packing were both altered within different nanostructures resulted from the difference in solvent polarity. The assembly of DUO-TDI at the liquid-HOPG interface was also studied. Stable monolayers composed of lamellar structures were observed. Chirality at the pattern level and monolayer level showed up for DUO-TDI at the 1-Octanoic acid-HOPG interface. While the monolayer level chirality disappeared at the 1-Phenyloctane-HOPG interface. The synergetic effect of π-π stacking from the large aromatic core and Van der Walls interactions from alkyl chains was proposed to contribute to the assembly of DUO-TDI in solutions and at the interface. The present investigation provides insight into the design of TDI based semiconductors for both academic research and potential opto-electronic devices or materials. DATA AVAILABILITY The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher. AUTHOR CONTRIBUTIONS ZG and ZL contributed to the idea of the work and the preparation of the manuscript. XZ, LZ, and YW have done all experiments on the assembly and corresponding analysis. 
YW, WF, and YY did the simulation work. KS synthesized the molecule DUO-TDI.
FUNDING
The work has been supported by the National Natural Science Foundation of China (21573118, 21434008) and the Shandong Provincial Natural Science Foundation, China (ZR2016JL014, 2018GGX102026).
RETRACTED ARTICLE: Taurine ameliorates thioacetamide induced liver fibrosis in rats via modulation of toll like receptor 4/nuclear factor kappa B signaling pathway Liver fibrosis is a significant health problem that can cause serious illness and death. Unfortunately, a standard treatment for liver fibrosis has not been approved yet due to its complicated pathogenesis. The current study aimed at assessing the anti-fibrotic effect of taurine against thioacetamide induced liver fibrosis in rats through the modulation of toll like receptor 4/nuclear factor kappa B signaling pathway. Both concomitant and late taurine treatment (100 mg/kg, IP, daily) significantly reduced the rise in serum ALT and AST activities and significantly reversed the decrease in serum albumin and total protein. These results were confirmed by histopathological examinations and immunehistochemical inspection of α-SMA, caspase-3 and NF-κB. The antioxidant potential of taurine was verified by a marked increase of GSH content and a reduction of MDA level in liver tissue. The anti-fibrotic effects of taurine were evaluated by investigating the expression of TLR4, NF-κB. The protein levels of IL-6, LPS, MyD88, MD2, CD14, TGF-β1 and TNF-α were determined. Docking studies were carried out to understand how taurine interacts inside TLR4-MD2 complex and it showed good binding with the hydrophobic binding site of MD2. We concluded that the anti-fibrotic effect of taurine was attributable to the modulation of the TLR4/NF-κB signaling. Liver fibrosis is a major health condition that can cause serious disease and death 1 . Liver fibrosis is initiated by the activation of immune cells that secrete cytokines and growth factors, leading to the activation of hepatic stellate cells (HSCs) and subsequent collagen production. Consequently, ECM accumulates in the liver and collagenolysis process continues [2][3][4][5] , eventually leading to cirrhosis with its complications of cancer and death 6,7 . There is no FDA approved drug for liver fibrosis, although considerable efforts have been exerted to defeat liver fibrosis through the inhibition of common crucial pathways of the fibrogenesis process 8 . Many trials have been made with the aim of inhibiting the activation of hepatic stellate and Kupffer cells, because they act to propagate oxidative and inflammatory responses, and subsequently to stimulate many fibrogenic mediators 9 . When remedies are used for long periods, it is essential to protect against the development of fibrotic complications that are associated with hepatic injury. Many previous reports have shown that liver fibrosis can be reversed under certain conditions, with a restoration of near normal architecture 10,11 . It is hoped that therapeutic approaches to liver fibrosis and the management of cirrhosis could be developed by understanding the etiology of liver fibrosis and developing improved diagnostic tools. Taurine, 2-aminoethane sulfonic acid, is the most abundant free amino acid in most animal tissues. Taurine is present in our daily foods, and also in anti-fatigue energy soft drinks and energizers for athletes [12][13][14] . It has a crucial role in many biological processes 15 ; stabilizing biological membranes and regulating calcium flux. It also has antioxidant and anti-inflammatory properties achieved by regulating the release of pro-inflammatory Results Biochemical assays. 
Leakage of the hepatic enzymes alanine transaminase (ALT) and aspartate aminotransferase (AST) from liver cells into the blood after the administration of thioacetamide (TAA) was identified by a significant elevation of their activities. There was a decrease in albumin and total protein concentrations in the TAA group compared to the control group. Concomitant (Con TAA + Tau) and late taurine (late TAA + Tau) treatment significantly decreased TAA induced elevation in the activity of AST and ALT, and reversed the decrease in total protein and albumin concentrations in the TAA group. The activity of AST was significantly decreased in the Con TAA + Tau groups compared to the late TAA + Tau groups, while total protein levels increased significantly in the Con TAA + Tau groups compared to the late TAA + Tau groups, as shown in Table 1. A significant rise in malondialdehyde (MDA) levels was observed in the TAA group compared to the control group. This rise was accompanied by a significant depletion in hepatic GSH content, compared with the control group. Concomitant and late taurine treatment produced significant reductions in MDA level and a marked increase in hepatic GSH content compared to the TAA group. The Con TAA + Tau group had significantly decreased MDA content, and significantly increased GSH content compared to the late TAA + Tau group, as shown in Fig. 1. Protein level and gene expression. In the present work, the levels of IL-6, LPS, MyD88, MD2, CD14, TGF-β1 and TNFα were measured in liver homogenate of rats treated with TAA, some of which were treated with taurine either concomitantly or later groups. The relative expressions of hepatic TLR-4 and NFκB were evaluated using qRT-PCR. Significant increases in the levels of IL-6, LPS, MyD88, MD2, CD14, TGF-β1, and TNF-α were observed in the TAA group compared to the control group. Both taurine treated groups showed decreases in IL-6, LPS, MyD88, MD2, CD14, TGF-β1, and TNF-α levels compared to the TAA group. IL-6 and TGF-β1 levels decreased significantly in the Con TAA + Tau group compared to the late TAA + Tau group, as shown in Fig. 2. The levels of TLR-4 and NFκB were significantly upregulated in the TAA group compared to the control group. A significant downregulation of TLR-4 and NFκB levels was observed in both taurine treated groups in Table 1. Liver function analysis. ALT, alanine aminotransferase; AST, aspartate aminotransferase; Tau, taurine; TAA, thioacetamide. *Significance against control group (P < 0.05), # Significance against TAA group (P < 0.05), $ Significance against Con TAA + Tau group (P < 0.05). A R T I C L E Histological and immunohistochemical examination. Microscope images of hematoxylin and eosin (H&E)-stained liver sections showed normal architecture of the hepatic lobules in the control and Tau groups (score 0). Liver sections of the TAA group showed disarrangement of the hepatic cords, central veins, and portal areas, and ballooning degeneration (short arrows), focal necrosis (red arrow), fibrous expansion of portal areas with infiltration of leukocytic cells (long arrows), and marked portal bridging, as well as portal-to-central bridging (score 5). Liver sections from the Con TAA + Tau and late TAA + Tau groups showed hepatic fibrosis (score 3) (long arrows), as shown in Fig. 4. 
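Relative qRT-PCR expression values such as the TLR-4 and NF-kB fold changes reported above are commonly calculated with the 2^-ddCt (Livak) method: normalization to a housekeeping gene, then to the control group. The paper does not spell out its calculation, so the sketch below is a generic illustration with invented Ct values.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # dCt in the treated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt in the control group
    dd_ct = d_ct_sample - d_ct_control                  # ddCt
    return 2 ** (-dd_ct)

# Hypothetical Ct values (target = TLR-4, reference = a housekeeping gene).
tlr4_taa  = fold_change_ddct(22.0, 18.0, 25.0, 18.2)   # TAA group vs. control
tlr4_tau  = fold_change_ddct(24.1, 18.1, 25.0, 18.2)   # taurine-treated vs. control
print(f"TLR-4 fold change, TAA group:     {tlr4_taa:.1f}x")
print(f"TLR-4 fold change, taurine group: {tlr4_tau:.1f}x")
```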
Microscope images of Sirius red stained liver sections showed significant collagens deposition in TAA induced liver fibrosis, which was significantly reduced by taurine treatment, as evidenced by the decrease of the positively stained area (long arrows point to fibrous tissue), as shown in Fig. 5. Microscope images of liver sections immunostained against α-SMA showed positive expression only in the smooth muscle layers surrounding the blood vessels in the control and Tau groups. The expression of α-SMA protein increased, as indicated by brown staining in the fibrotic areas in the TAA group. The positive staining was decreased in the Con TAA + Tau and late TAA + Tau groups (long arrows point to positively stained fibrous tissue). Immunohistochemistry (IHC) was counterstained with Mayer's hematoxylin, as shown in Fig. 6. Microscope images of liver sections immunostained against caspase-3 showed very mild staining in the control and Tau groups, strong positive brown staining in the TAA group, and mild positive staining in the Con TAA + Tau and late TAA + Tau groups (long arrows point to positively stained hepatocytes). IHC was counterstained with Mayer's hematoxylin, as shown in Fig. 7. Microscope images of hepatic sections immunostained against NF-κB showed negative staining in the control and Tau groups and strong positive brown staining (nuclear reaction) in the TAA group. The positive brown reaction was markedly decreased in the Con TAA + Tau and late TAA + Tau groups (black arrows point to positive cells). IHC was counterstained with Mayer's hematoxylin, as shown in Fig. 8. www.nature.com/scientificreports/ www.nature.com/scientificreports/ Molecular docking study. Analysis of the docking results showed that taurine could be fitted into the hydrophobic binding pocket of MD2 within less than 0.3 Å RMSD where the docking score of best docked pose was 7.63 kcal mol −1 , as shown in Fig. 9A. In order to have good results we performed a series of docking simulations using TLR4/MD2 complex (pdb code: 4G8A). We selected a large docking search area that contained the entire LPS binding pocket in MD2 which has a β-cup fold structure composed of two antiparallel β-sheets separated from each other forming a hydrophobic pocket. Looking closely at amino acid interactions between taurine and the pocket site, we found that the ligand was in contact with the phenyl rings of Tyr131 and Phe126, which move into this position upon binding. The polar interactions with the two amino acids in the binding site Tyr131 and Phe126 were at bond lengths of 2.82 and 3.53 A°, respectively. We also found hydrogen bond interaction with the basic amino acid Lys125 and another polar interaction with Ser127 of the binding site of MD2, as shown in Fig. 9B,C. Discussion The secretion of inflammatory cytokines from Kupffer cells was accompanied by the progression of liver fibrosis, which has an essential role in the pathogenesis of several liver diseases 48 . Liver fibrosis is characterized by alterations of the hepatic ECM. After the activation of quiescent HSCs they are differentiated into myofibroblast-like cells with increased proliferation, accumulation of ECM, and expression of α-SMA. Therefore, collagen accumulation in the liver is considered to be a mark of fibrosis 49 . α-SMA is the most widely used marker of activation of HSCs 50 . 
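The docking results summarized above start from a 3D conformer of the taurine ligand. The study itself used MOE (builder, force-field minimization, Triangle Matcher placement, London dG scoring); the hedged sketch below shows an analogous ligand-preparation step with the open-source RDKit toolkit, not a reproduction of the published protocol, and the output file name is arbitrary.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Taurine: 2-aminoethanesulfonic acid.
taurine = Chem.MolFromSmiles("NCCS(=O)(=O)O")
taurine = Chem.AddHs(taurine)

# Generate and energy-minimize a 3D conformer (MMFF94 here; MOE used its own force field).
AllChem.EmbedMolecule(taurine, randomSeed=42)
AllChem.MMFFOptimizeMolecule(taurine)

# Write the ligand out for whichever docking engine is used downstream.
Chem.MolToMolFile(taurine, "taurine_3d.sdf")
print("Prepared taurine conformer with", taurine.GetNumAtoms(), "atoms (including H).")
```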
Thioacetamide metabolism produces a hepatotoxin metabolite that triggers the overproduction of reactive oxygen species (ROS), leading to liver fibrosis and cirrhosis, and ending with HCC 51,52 . This metabolite is produced by cytochrome2E1 (CYP2E1) 53 , the principal P450 for the metabolism of many xenobiotics, such as TAA [54][55][56][57][58][59][60][61] . Excess formation of superoxide free radicals leads to an increase in lipid peroxidation, and consequent formation of MDA. MDA targets DNA and causes mutations. GSH usually counteracts the deleterious effects of oxidative stress. GSH also detoxifies many toxic compounds 52 . Our study found that concomitant and late treatment with 100 mg/kg taurine significantly conserved hepatocyte integrity, as indicated by the reduced serum activities of ALT, AST, and hepatic MDA. Taurine reversed the reduction of serum albumin, total protein www.nature.com/scientificreports/ concentration, and GSH. Histological results confirmed the protective effect of taurine against hepatic injury and fibrosis induced by TAA. In this study, there was a significant increase in the levels of IL-6, LPS, MyD88, MD2, CD14, TGF-β1, and TNF-α in the TAA group compared to the control group. Previous reports have shown that collagen accumulation and the activation of fibroblasts are closely related to epidural fibrosis 62 , and that alpha-smooth muscle actin (α-SMA) plays an important role in fibrotic pathogenesis scars 63,64 . The overproduction of TGF-β is considered to be one of the underlying mechanisms by which fibrosis occurs 65,66 . TGF-β activates specific receptors, TGF-βRI and TGF-βRII, which leads to the activation of Smad2 and Smad3 phosphorylation, and then the formation of a complex with Smad4. The SMAD complex translocates into the nucleus and activates the transcription of collagens 67 , as confirmed by histological examination with H&E and Sirius red, and α-SMA immune-expression. A R T I C L E Elevated collagen expression stimulates the transdifferentiation of myofibroblasts, which secrete ECM that hinders the cellular capacity for ECM degradation, with the net result being fibrosis 68 . The sustained signaling by the TGF-β1 cascade promotes the proliferation of HSCs, which also produce ECM, resulting in fibrous scars 69 . TGF-β1 induces the differentiation of myofibroblasts through the PI3K-Akt pathway, resulting in liver fibrosis 70 . The hepato-protective role of taurine was clarified by the reduction of either CYP2E1 metabolic activity or oxidative stress caused by hepatotoxin. The antioxidant and anti-inflammatory effects of taurine have previously been explained by the diminution of lipid peroxidation and neutrophil adhesion 12-14 a suggestion that was confirmed, in the present study, by the significant decrease observed in IL-6, LPS, MyD88, MD2, CD14, TGF-β1 and TNF-α levels in the taurine treated groups compared to the TAA group. TLRs include a highly conserved family of receptors that recognize pathogens and facilitate the host detection of microbial infection. Recent studies have indicated that TLR4 may be linked to inflammatory and fibrogenic response [71][72][73] . Previous studies indicated an important role for TLR4 in several signaling pathways involved in liver fibrogenesis. TLR4 polymorphism is strongly associated with fibrosis insult 74 . 
Signaling of TLRs is initiated when their ectodomains engage and complex with their respective ligands, and consequently enhances the recruitment of TLR adaptors, mainly through interaction of the TLRs with these adaptors. Known TLR signaling adaptors are MyD88, TRIF, TRAM, TIRAP, SARM, and BCAP 75 . Inflammatory diseases, liver diseases and subsequent ROS overproduction can be the result of LPS induced hepatic injury by the resuscitation of dormant organisms which shed inflammatory molecules 76,77 . This process occurs by disruption of intestinal barrier and the release of a large amount of cell wall components, such as LPS, by intestinal flora passing the systemic and portal circulation and activating the release of inflammatory cytokines which further injure the intestinal mucosa 78 . The reduction of ROS signaling by the administration of www.nature.com/scientificreports/ antioxidant and anti-inflammatory agents such as taurine is beneficial, since it relieves such damage as indicated by previous studies 79 . TLR4 is expressed in liver cells which are constantly confronted with gut-derived LPS. Normally, liver has relatively low expression of TLR4 and its adaptor molecules, MD2 and MyD88, and negatively regulates TLR4 signaling, a process known as "liver tolerance". A breakdown of liver tolerance due to increased exposure and/ or sensitivity of TLR4 to LPS may induce an inappropriate immune response 72 . Initially, the intestinal barrier is disrupted and a large amount of LPS is released by intestinal flora passing into the circulation 80 . R E T R A C T E D A R T I C L E LPS induces excessive release of pro-inflammatory cytokines, including TNF-α and IL-6, and the production of ROS, by binding with toll like receptor 4 on the surface of Kupffer cells 75 . According to previous studies, the response of hepatocytes to LPS is complex, and requires cell-cell interaction between hepatocytes, Kupffer cells, sinusoidal endothelial cells, and stellate cells. The hepatocytes were assumed to have a direct response to LPS, similar to that of monocytes and macrophages. Hepatocytes have a rapid response, and therefore bypass the time needed for Kupffer cells, which are also considered to be highly responsive to LPS, to synthesize cytokines such as TNF-α, IL-1β, IL-6, IL-12, IL-18, IL-10, in addition to nitric oxide and oxygen radicals 81 . Because of the unique anatomical link between the liver and intestines, Kupffer cells are the first cells to encounter LPS and accordingly, Kupffer cells express TLR4 72 . The complexity of Kupffer cell participation in hepatic toxicity is becoming more and more apparent, as some hepatic injury has been attributed to the deleterious effects of activated Kupffer cells 82 . Although there is no specific marker to identify hepatic Kupffer cells, Kupffer cells can be identified by their expression of CD14, CD16, CD68, CD68, and CD16 83 . The expression of CD14 is considered to be a marker of activation of Kuppfer cells in the liver, which is thought to cause inflammation and fibrosis 84 . Upregulation of CD14 in Kupffer cells has been implicated in the pathogenesis of several forms of liver injury. TNF-α production by Kupffer cells, a marker for Kupffer cell activation, increases in a dose-dependent manner with increasing concentrations of LPS. CD14 knockout mice and CD14 antibodies show significantly decreased production of TNF-α from Kupffer cells in response to LPS 85 . 
www.nature.com/scientificreports/ The degree of liver injury correlates with the level of LPS, and with the level of Kupffer cell CD14 expression. Also, CD14 expression on Kupffer cells is low in normal human liver, but increases in different inflammatory liver diseases 86 . A R T I C L E Therefore, the secretion of chemokines is upregulated, and HSCs are sensitized to the action of TGF-β 73 . The TLR/NF-κB signaling pathway is the main pathway involved in the synthesis and secretion of inflammatory mediators during inflammation, in which TNF-α is the principal factor 87 . TNF-α regulates other inflammatory mediators, such as IL-1β, IL-6, and IL-8 16,88 . It has been reported that taurine decreases TNF-α, IL-6, and peroxide levels and, thus fibrogenic mediators and collagen accumulation were reduced during fibrogenesis 36 . Taurine reduces the elevation of TNF-α and modulates the inflammatory response through the TLRs/NF-κB signaling pathway 89 . We used RT-PCR to assess the expression of mRNA for some factors involved in the LPS induced signaling pathway, such as TLR4 and NF-κB, because they are considered to be the main factors involved in these signaling pathways 80 . Taurine reduced the level of mRNA for TLR4 and NF-κB, consequently blocking the activity of the pathway, and decreasing the synthesis and release of inflammatory cytokines 80 . Previous studies have shown that TLRs can regulate immune receptors and modulate the inflammatory process 90 . The stimulation of TLR4 by LPS is a complex process, which involves LBP, CD14, and MD2. LBP, a soluble protein, extracts LPS from the bacterial membrane and shuttles it to CD14 72 . CD14 is considered to be a receptor of LPS on the membrane surface of KC, which mediate LPS signal transduction 80 . The CD14-LBP-LPS complex stimulates TLR4, a specific receptor of LPS, to trigger a KC signaling pathway involving the phosphorylation of IκB proteins, and subsequently activates the translocation of NF-κB. According to Wu et al., taurine can inhibit the LPS-KC signal pathway by downregulating the expression of CD14 and its combination with LPS 80 . This conclusion was supported by the results of the present study, as there was a significant decrease in TLR-4 and NFκB gene expression and IL-6, LPS, MyD88, MD2, CD14, TGF-β1, and TNF-α levels in both taurine treated groups compared to the TAA group. There was also a significant decrease in the NF-κB immune-positive brown staining in the taurine treated groups compared to the TAA group. CD14 transfers LPS to MD2 91 which is considered to be a secondary associated protein which forms a complex with CD14-LPS. It is also considered as a co-receptor which physically associates with TLR4, and binds the 72 , which in turn may trigger the KC signaling pathway. Consequently, NF-κB translocation is activated, leading to overexpression of pro-inflammatory cytokines 80 , including TNF-α, which provoke the release of ROS 91 . Therefore, docking studies were conducted to predict the binding mode of a ligand to its receptor, and also to explore the binding mode of taurine, to investigate the possible interactions, and consequently to understand the binding mode and key active site interactions. The damaging effects lipopolysaccharides take place via interactions with LBP and binding to TLR4 through CD14 and MD2 93 . TLR4 activates both the MyD88-and TRIF-dependent (MyD88-independent pathway) signaling pathways. 
Early activation of NFκB is strongly related to the MyD88-dependent pathway, while the TRIFdependent pathway is involved in late phase activation of NFκB. Activation of both pathways is important for the induction of downstream pro-inflammatory cytokine secretion in response to TLR4 stimulation 75 . MyD88 recruits IRAK4, TRAF6, and TAK1. Subsequently, degradation of the IκB kinase complex subunits occurs, and NF-κB is released. NF-κB is then translocated into the nucleus 94 , and as a result the transcription of IL-6, IL-12, and TNFα is induced 95 . Taurine administration could inhibit the binding and the expression of TLR4 to the CD14-LBP-LPS complex, thus decreasing the synthesis and release of cytokines that injure the hepatocytes 80 . It appears that taurine can alleviate liver injury and inhibit KC activation resulting from overproduction of LPS, TLR4, and NF-κB. The suggested mechanism for the effect of taurine on the TLR4/NF-κB signaling pathway is summarized in Fig. 10. R E T R A C T E D A R T I C L E NF-κB, a key regulator of transcription of inflammatory genes 96 , functions as a transcription factor after its translocation to the nucleus 42,97 . The activation of NF-κB stimulates inflammatory responses, cell growth, and survival, during carcinogenesis 98,99 . Therefore, targeting NF-κB and its related pathways is considered to be a promising approach for the management of liver fibrosis 100 . A previous study showed that taurine may modulate inflammatory injury induced by S. uberis in mammary tissue, through TLR-2 and TLR-4. Taurine treatment also markedly repressed NF-κB DNA binding activity 89 . These findings were supported by the results of the present study, as there was a significant decrease in the o NF-κB immune-positive brown staining and relative gene expression in the taurine treated groups compared to the TAA group. Another study, conducted in rheumatoid arthritis patients, found NF-κB activity and DNA binding were reduced 16 . TLR4 deficient mice resist hepatic fibrosis in multiple models 73 , TNF-α stimulates the proliferation of HSCs, and consequently inflammatory signaling 101 . www.nature.com/scientificreports/ The exploration of the mechanisms of apoptosis has highlighted the involvement of NF-κB signaling in the regulation of apoptosis. However, apoptosis is dependent on caspase activation and the cleavage of specific death substrates within the cell, and therefore apoptosis may be viewed as a caspase-mediated form of cell death. Activated HSCs showed high levels of NF-κB and NF-κB-regulated anti-apoptotic proteins, such as IL-6 102 . There are two major pathways that link apoptosis: intrinsic (mitochondrial) and extrinsic. The extrinsic pathway of apoptosis is primarily initiated through caspase-8, which activates the downstream effector caspases-3, leading to apoptosis 103 . The intrinsic apoptotic pathway involves the disruption of the mitochondrial membrane and the release of apoptotic factors. Caspase-9 activation promotes the production of caspase-3, and consequently the morphological and biochemical changes associated with apoptosis 104 . In the present study, there was a significant decrease in caspases-3 immunostaining in the Con TAA + Tau and late TAA + Tau groups compared to the TAA group. The effect of taurine is based on its anti-oxidative and anti-apoptotic activities, which are consistent with previous murine models with respect to reduction in oxidation, apoptosis, and necrosis of liver cells 105 . A R T I C L E According to Marshall, et al. 
106, the TLR4/MD2 binding site is a known target for small-molecule agonists activating TLR4. A recent study 107 reported that the synthetic peptidomimetic ligand Neoseptin-3 causes dimerization of TLR4/MD2 and activation of TLR4 signaling, with a ligand-binding mode distinct from that of the native LPS molecule. The pocket containing Phe126 and stretching from residue 120 to residue 129 exhibits a backbone conformational change when bound to a ligand. Upon binding, the side chain of Phe126 flips and is directed inside the binding pocket. Molecular docking studies showed good binding of taurine to the hydrophobic MD2 binding site, forming four polar interactions with the conserved amino acids Lys125, Phe126, Ser127, and Tyr131, which may be a reasonable interpretation of the results obtained from the biological experiments. Technology (Approval Number: FPDV 9/2019). All experiments were carried out in accordance with relevant guidelines and regulations and in compliance with the ARRIVE guidelines 108. Animals were maintained under controlled environmental conditions (22 ± 2 °C, 50 ± 10% humidity, and a 12 h light/dark cycle), were fed standard pellets, and were allowed free access to water. Experimental design. After a one-week adaptation period, 40 rats were assigned to five groups (n = 8) as follows: Control group (Control), Taurine group (Tau), TAA group, Concomitant Taurine (Con TAA + Tau), and Late Taurine (late TAA + Tau). The different groups received the doses shown in Table 2. Experimental design rationale. We aimed to investigate the indirect and direct effects of taurine on liver fibrosis induced by TAA, by measuring its antioxidant, anti-inflammatory, and anti-apoptotic effects, and also by performing docking studies. Collection of samples. All rats were fasted for at least eight hours at the end of the experiment. Blood samples were collected from the retro-orbital vein and were centrifuged for fifteen minutes to obtain serum, which was stored at − 80 °C. Rat livers from all groups were collected, weighed, and divided into three portions. For histological and immunohistochemical examination, the first portion was fixed in 10% formalin saline (El-Nasr Chemicals Co, Cairo, Egypt). To prepare liver homogenate, the second portion was homogenized in a tenfold volume of sodium potassium phosphate buffer (0.01 M, pH 7.4) containing 1.15% KCl. To prevent protein hydrolysis, PMSF (a protease inhibitor), EDTA (a chelating agent), and DTT (a reducing agent) were added to the homogenizing solution, and the homogenates were then centrifuged (5000 × g) for five minutes at 4 °C. The clear supernatant was stored at − 80 °C for further biochemical tests. For gene expression assessment by qRT-PCR, the last portion was immediately frozen in liquid nitrogen and stored at − 80 °C; the primer sequences used are listed in Table 3. Histological and immunohistochemical techniques. Liver samples from the different experimental groups were immediately fixed in 10% formalin saline, paraffin blocks were prepared, and 5-μm-thick sections were sliced and stained with hematoxylin-eosin (HE) for histopathological analysis to reveal hepatic structural variations. The degree of liver fibrosis was evaluated and scored blindly as previously reported 111.
Portal tract expansion and fibrosis were graded according to the following scoring system: (0) no fibrosis detected; (1) fibrous expansion of some portal areas, with or without short fibrous septa; (2) fibrous expansion of most portal areas, with or without short fibrous septa; (3) fibrous expansion of most portal areas with sporadic portal-to-portal (P-P) bridging; (4) fibrous expansion of portal areas with obvious P-P as well as portal-to-central (P-C) bridging; (5) marked bridging (P-P and/or P-C) with infrequent nodules (incomplete cirrhosis); (6) probable or definite cirrhosis. In addition, Sirius red staining was performed using standard protocols, and morphometric analysis of collagen content, indicating liver structural changes, was carried out using the ImageJ program. Immunohistochemical analysis for caspase-3, nuclear factor kappa B (NF-κB), and alpha-smooth muscle actin (α-SMA) (Cat. No. 54-0017; Genemed Biotechnologies, CA, USA) was performed. Antigen retrieval (EDTA solution, pH 8) was applied to the liver sections, followed by 0.3% hydrogen peroxide and protein block; the sections were then incubated with rat anti-caspase-3, anti-NF-κB, or anti-α-SMA antibody (1:100 dilution). Incubation with HRP-conjugated anti-rat IgG secondary antibodies followed, and staining was visualized with the Liquid DAB+ Substrate Chromogen System. Mayer's hematoxylin was used as a counterstain. Positive staining of liver tissue was quantified using ImageJ analysis software (National Institutes of Health, MD, USA), and the percentage of stained area was expressed as mean ± SEM 112. Molecular docking study. TLR4 has a characteristic horseshoe-like shape. MD2 is bound to the side of the horseshoe ring and smoothly interfaces with the ligand. MD2 has a β-cup fold arrangement consisting of two antiparallel β-sheets, forming a large hydrophobic pocket for ligand binding. Molecular docking is an effective method for predicting the binding mode of a ligand to its receptor. To explore the binding mode of taurine, we simulated docking of the compound into the large hydrophobic binding pocket of MD2, to examine the possible interactions and, consequently, to understand the binding mode and key active-site interactions. We performed the molecular docking study using the Molecular Operating Environment (MOE) software 113 on the crystal structure of human TLR4 in complex with MD2 and LPS (PDB code: 4G8A) 114,115. The compound was built using the MOE builder and energy-minimized using the force field algorithm, and the protein was prepared by adding missing hydrogens. Fifty docking runs were performed with the Triangle Matcher placement technique, and the binding energy was scored according to the London dG scoring function. In the present study, we explored the direct effect of taurine on TLR4 through the docking study, to understand how taurine interacts within the TLR4-MD2 complex. Table 3. Primer sequences used for the RT-PCR step. TLR-4, toll-like receptor-4; NF-κB, nuclear factor-κB. Statistical analysis. The mean ± standard error was used for descriptive statistics of quantitative variables. One-way analysis of variance was used for comparisons among groups. Student t-tests were used to compare differences between two groups. Non-parametric data (fibrosis scores) were examined using Kruskal-Wallis tests and Dunn's post hoc test.
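As a rough illustration of the comparison workflow just described (the analyses in the study itself were run in GraphPad Prism, as noted below), the following Python sketch applies the same sequence of tests to hypothetical group values. The numbers, and the use of SciPy together with the scikit-posthocs package for Dunn's test, are assumptions made purely for illustration, not the study's data or software.

```python
# Illustrative sketch only: hypothetical values, not the study's data.
# Assumes SciPy and scikit-posthocs are installed (pip install scipy scikit-posthocs).
import numpy as np
import pandas as pd
from scipy import stats
import scikit_posthocs as sp

rng = np.random.default_rng(0)

# Hypothetical serum marker values for the five groups (n = 8 each).
groups = {
    "Control":        rng.normal(40, 5, 8),
    "Tau":            rng.normal(42, 5, 8),
    "TAA":            rng.normal(95, 10, 8),
    "Con TAA + Tau":  rng.normal(60, 8, 8),
    "Late TAA + Tau": rng.normal(70, 8, 8),
}

# One-way ANOVA across all groups (parametric variables).
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Student t-test for a single pairwise comparison.
t_stat, p_t = stats.ttest_ind(groups["TAA"], groups["Con TAA + Tau"])
print(f"t-test TAA vs Con TAA+Tau: t = {t_stat:.2f}, p = {p_t:.4f}")

# Kruskal-Wallis test for non-parametric data (e.g. ordinal fibrosis scores, 0-6),
# followed by Dunn's post hoc test with Bonferroni adjustment.
scores = pd.DataFrame({
    "score": np.concatenate([rng.integers(0, 2, 8),    # Control
                             rng.integers(0, 2, 8),    # Tau
                             rng.integers(4, 7, 8),    # TAA
                             rng.integers(1, 4, 8),    # Con TAA + Tau
                             rng.integers(2, 5, 8)]),  # Late TAA + Tau
    "group": np.repeat(list(groups.keys()), 8),
})
h_stat, p_kw = stats.kruskal(*[g["score"].values for _, g in scores.groupby("group")])
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
print(sp.posthoc_dunn(scores, val_col="score", group_col="group", p_adjust="bonferroni"))
```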
GraphPad Prism 7 (GraphPad Software, San Diego, California, USA, www.graphpad.com) was used for statistical analysis. Statistical significance was predefined as P < 0.05. Conclusion Both concomitant and late taurine treatment showed significant activity against TAA-induced liver fibrosis in rats, with the concomitant treatment showing much more promising results. The anti-fibrotic effect of taurine was attributable to its antioxidant (MDA, GSH), anti-inflammatory (LPS, MyD88, MD2, CD14, TLR4, NF-κB, IL-6, TGF-β1, TNF-α), and anti-apoptotic (caspase-3) effects, mainly through modulation of the TLR4/NF-κB signaling pathway, indirectly by downregulation of LPS. The docking studies demonstrated good, direct binding of taurine inside the binding site of the TLR4-MD2 complex, with a binding energy of 7.63 kcal mol −1.
v3-fos-license
2019-01-02T19:31:06.503Z
2016-04-09T00:00:00.000
89607852
{ "extfieldsofstudy": [ "Sociology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/09669760.2016.1165652?needAccess=true", "pdf_hash": "43179fd05d3737321cbc77f4b2b46fcb9efee0d3", "pdf_src": "TaylorAndFrancis", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2820", "s2fieldsofstudy": [ "Education", "History" ], "sha1": "3b7da770aec82523a38560e94e45fcc9c4bdae29", "year": 2016 }
pes2o/s2orc
A historical reflection on literacy, gender and opportunity: implications for the teaching of literacy in early childhood education ABSTRACT This paper presents a historical reflection on gender and literacy, with a view to informing the present teaching of literacy in early childhood. The relationship between gender, literacy and opportunity in the labour market is examined, given that despite girls’ achievement in literacy, in comparison with boys’, women continue to earn substantially less than men. In order to understand this relationship, this paper reflects on literacy as a socio-historical construct as well as examining the ways in which the past is constitutive in forming enduring notions of gender that penetrate all elements of society, including the literacy classroom. This critical analysis of what is learned about and through the medium of literacy in the early childhood classroom has major implications for the teaching of literacy today. It is argued that in order to address this issue, early childhood educators need to value and nurture children’s digital literacies as well as create learning environments that allow all children genuine opportunities to question, challenge and explore dominant discourses that are embedded in text. Introduction Few would argue against the claim that children need to gain skills in literacy in order to improve their life chances. For example, the National Literacy Trust Report 'Literacy Changes Lives' (Morrisroe 2014, 5) argues that literacy 'influences individual capability' and that individuals and communities that have low levels of literacy are 'more vulnerable to inequality, increasing the risk of social exclusion and undermining social mobility' (Morrisroe 2014). Similarly, the 1970 British Cohort Study showed a strong link between poor basic skills and disadvantaged life courses when participants were aged 34 years (Parsons and Bynner 2005). More recently, a number of studies by the National Research and Development Centre provide indicators as to the place of reading and writing in relation to social mobility (e.g. Bynner and Parsons 2006;Parsons and Bynner 2008;Hodge, Barton, and Pearce 2010) suggesting that literacy skill is positively associated with such factors as health, well-being and family structure. Given that girls outperform boys in literacy, yet women are substantially over-represented in low-paid work, this raises some very important questions about the relationship between attainment in literacy in school and outcome in the labour market. Moreover, as formal literacy education begins in early childhood, it is crucial that this issue is discussed from the perspective of early childhood education. Much of this paper focuses on the teaching of literacy within contexts where English is a first language; however, the issues addressed in this paper are global therefore it is suggested that this paper has implications for many different international contexts. The purpose of this paper is to consider how the teaching of early childhood literacy connects with gendered opportunities in the labour market in order to inform presentday teaching practice. More specifically, this paper analyses what young children learn about literacy, as well as what children learn through the medium of schooled literacy, that impacts upon opportunity in the work force on the basis of gender. 
In order to do this, this paper takes a historical perspective, reflecting back on the ways in which changing constructions of literacy connect with issues of opportunity. It must be stressed that it is not the purpose of this paper to analyse the past as such, rather it makes a historical reflection in order to inform an aspect of the present and future teaching of literacy. The rationale for doing this can be summarised in the words of Green (2006, 8) who states that he remains 'thoroughly convinced that any inquiry into the future of English teaching, into the shape of things to come, must be historically informed'. Having traced movements in the curriculum and cultural politics of English teaching, Green argues strongly that reflection on the past has a substantial role in informing future teaching, because the past is 'constitutive', meaning that it 'is never really past: but continuously constitutive of the present' (Bryant 1994, 1, cited in Green 2006. With this in mind, this paper reflects on the past in order to understand some of the factors that have influenced women's opportunities in the workforce and consider how the present teaching of literacy in early childhood education may be constitutive in maintaining these influences. This paper begins with a historical look at the notion of literacy, examining how social change and economic structures have influenced how definitions of literacy are created and developed. This provides a foundation for the next section which draws on historical data to explore why women have struggled to compete with men in the workforce despite sustained achievement in literacy. In doing so, this paper critically examines the relationship between literacy, gender and achievement beyond the school system. The implications of this for the teaching of literacy in schools today are then made explicit. Learning about literacy Given that girls do well in literacy, yet this fails to translate into financial success in the labour market in comparison with men, it seems prudent to begin with an exploration of the perceived values attached to literacy and, more specifically, attainment in literacy. Indeed there is much to suggest that literacy skills are highly valued in our society. This is evident in the fact that attitudes towards ownership of literacy skill are quite different from attitudes towards skills in numeracy. This is summed up in the words of Jennifer Ouellette (2010), in her exploration of calculus, when she said: I think scientists have a valid point when they bemoan the fact that it's socially acceptable in our culture to be utterly ignorant of math, whereas it is a shameful thing to be illiterate. (2010,13) We live in a society that carries an expectation that it is unacceptable to have poor, or no, skills in literacy. In other words, it is expected that everyone should reach a basic level of skill in reading and writing, with the term 'illiteracy' being used over the years to describe individuals who have poor, or indeed no skills in reading and writing printed text. However, Ramsey-Kurz (2007) argues that it is important to recognise that 'illiteracy' is not an autonomous category, but rather it is part of a binary constructan 'opposite' of literacy. She goes on to explain that constructions of 'illiteracy' can only ever exist in relation to literate cultures. 
She states: Individuals or cultures without a script are not comprehended as illiterate purely on account of their orality, but only when they come into contact with a writing system or its users. It is only by virtue of their particular relationship to a literate civilization, then, they qualify as 'il-', 'non-', or 'pre-literate'. (2007, 19) Harvey Graff, who closely examined the historical development of literacy, takes this point further when he asserts his growing belief that literacy is 'profoundly misunderstood' (italics in original) (1987, 17). He argues that many discussions about literacy flounder because 'they slight any effort to formulate consistent and realistic definitions of literacy, have little appreciation of the conceptual complications that the subject of literacy presents, and ignore, often grossly, the vital role of socio-historical context' (1987, 17). A brief glance back into history soon reveals that present-day definitions of 'literacy', and indeed 'illiteracy', are not as fixed as we would often like to believe. For example, Eric Havelock (1976) argues forcibly that as human beings have used oral speech for far longer than the comparatively late invention of alphabetic literacy, then this should take precedence within a definition. He states: The biological-historical fact is that homo sapiens is a species which uses oral speech manufactured by the mouth, to communicate. This is his definition. He is not, by definition, a reader or a writer … The habit of using written symbols to represent such speech is just a useful trick which has existed over too short a time to have been built into our genes. (Havelock 1976, 12) As Ramsey-Kurz argues, this explains why Western societies did not begin to perceive or discuss concepts of 'illiteracy' as a concern much before the nineteenth century, because up to this point, an absence of ability to read and write printed text was regarded as a 'cultural norm' while literacy skill was an 'exception to this norm'. Even into the early twentieth century, attitudes towards 'illiteracy' were less condemnatory than became apparent a few decades later, and indeed exist strongly today. This raises two vital issues for this discussion. First, the necessity to be 'literate' is a relatively recent phenomenon, and only exists because an ability to read and write printed text has now become seen as 'the norm'. However, there is now an expectation that all those living in Western society should not only be literate but have achieved a degree of mastery in literacy skill (Jones and Marriott 1995; Street 1997). But rather than suggesting that high achievement in literacy is therefore valued, this merely shows that poor performance in literacy is condemned. It is therefore important to reflect on the extent to which attainment in literacy in school is valued in comparison with attainment in other school subjects. This leads to some intriguing questions about the perceived status of literacy and literacy skill acquisition, when it is compared with other academic skills. Indeed there is substantial literature to suggest that the Science, Technology, Engineering and Mathematics (STEM) subjects are regarded as 'prestigious' and of 'high status' (Watts 2014) as well as being 'difficult' and labour intensive (Brea et al. 2012). This is exemplified in a research report by Coe et al. (2008, 1) who reported that 'at A Level, the STEM subjects are not just more difficult than non-sciences, they are without exception the hardest of all A levels'.
They go on to conclude that 'to say that one subject is harder than another means that the same grade in it indicates a higher level of general ability' (2). This clearly indicates that for many, STEM subjects are not only regarded as being more difficult that non-science subjects, but achievement in these subjects indicates a higher level of general ability. This may begin to explain why literacy skill is perceived quite differently from skills in maths and science, in relation to prestige and perceived value. A degree of literacy skill is expected of everyone, but this is not the same for skills in maths and science. The fact that societal attitudes towards a lack of literacy are condemnatory suggests that literacy skill is regarded as unchallenging and attainable, whereas skill acquisition in maths and science is regarded as being substantially more demanding and requires greater academic prowess. To put it another way, mastery in literacy skill does not carry the same status as mastery in STEM subjects. This has serious implications for all teachers, but especially those working with young children given that concepts and ideas about learning, and what it means to be a learner, are generated during children's earliest years in school (Aubrey et al. 2000). Second, reflecting on the ways in which constructions of literacy have operated throughout history reminds us that the current definition of literacy is a product of social convention and is not a 'natural' state. This means that we must acknowledge that literacy is a fluid construct that adapts to accommodate time, place and context. This raises serious questions about the value of the literacy that children are 'attaining' when they achieve in standardised literacy assessment in school. Concepts of 'being literate' and indeed being 'good at literacy' are defined by narrow constructions of literacy, situated within a school discourse (Levy 2011;Lankshear and Knobel 2011). There is a wide body of literature to suggest that the school discourse uses accepted definitions of literacy that pertain largely to the reading and writing of print in paper-based text (Levy 2009;Wohlwend 2009), yet as we travel deeper into the twenty-first century we need to ask whether achievement in school-based literacy really serves to support individuals who are entering the labour market. Moreover, as many studies have already documented, the culture surrounding computer and other technologies reveals a 'masculinisation of both tools and expertise' (Jenson and Brushwood Rose 2003, 169) situating technology within a paradigm that is traditionally male (Schofield 1995;Volman and Ten Dam 1998;Littleton and Hoyle 2002). Certainly much of the research into attitudes towards technology has focused on the skills of accessing technology, and studies have indeed shown that girls feel less competent than boys in this domain (Charles and Bradley 2006). However, further research into the ways in which technology use connects with identities has concluded that an overemphasis on technological skills is unhelpful. Indeed in their study of teachers' working identities, Jenson and Brushwood Rose (2003, 179) argued that 'the extent of teachers' use of new technologies in schools is not only socioculturally mediated, but at times has very little to do with how technologically skilled or unskilled teachers actually are'. 
Similarly, in their evaluation of 'Computer Clubs for Girls' (a high-profile publicly funded initiative in England, introduced to help increase female participation in IT courses and careers), Fuller, Turbin, and Johnston (2013, 501) concluded that strategies such as this were ineffective because they did not address the fact that 'the IT paradigm is … culturally and historically male'. By reflecting on the ways in which constructions of literacy are modified by time and place in history, we are forced to recognise that a disparity exists between the construct of literacy that is taught and assessed in schools today and the construct that is potentially needed for success in the labour market. Girls do well in school-based literacy assessment in comparison with boys; however, if schooled literacy is not aligned to the technological demands of twenty-first-century life, it is clear to see that 'attainment' in literacy may actually fall short of providing children with the literacy skills that are needed to achieve highly in the modern labour market. Through a critical reflection on the ways in which literacy has been defined, perceived and valued over the years, we can see that what we teach children about literacy in schools today is related to opportunity for success in the labour market. It is clearly the case that the acquisition of literacy skill is regarded as essential for success in society; however, high attainment in literacy does not seem to carry the same status as high attainment in other subjects such as maths and science, a message that children receive from their earliest years in school. In addition, history also teaches us that literacy is a mobile construct, shaped by the socio-historical context within which it occurs. As we progress further into the twenty-first century, it is evident that technology is having a powerful and enduring impact on the ways in which literacy is defined and utilised today, yet this remains largely unrecognised within the school discourse. It therefore stands to reason that if we want to help equip children for success in the labour market then we must ensure that they develop literacy skills that accommodate technology, regardless of whether or not they attain highly in school-based literacy assessment. Bringing this together, it becomes increasingly apparent that early childhood educators have a particular responsibility to recognise that what they teach children about literacy can have a major impact on opportunity in the labour market and can particularly disadvantage girls in the workplace. The role of the early childhood educator will be returned to later in this paper, but before this is considered, it is also important to explore what children learn through the medium of schooled literacy, and how this impacts upon opportunity in the workforce on the basis of gender. The next section maintains a focus on historical reflection in order to understand how gendered stereotypes are reinforced through the teaching of literacy. This means that the literacy classroom may in itself carry responsibility for promoting views that prevent girls from going on to achieve success in the labour market. This paper now reflects on the ways in which unhelpful gendered stereotypes penetrate the literacy classroom, and discusses ways in which this knowledge can be used to inform the teaching of early literacy.
Learning through literacy It is no secret that a main reason why women do not achieve as highly as men in the workplace is due to established social norms that dictate that home and children remain primarily the woman's responsibility. In her book Delusions of Gender, Cordelia Fine (2010, 83) describes the 'psychological scrambles' of well-educated couples who could not resist the 'strength of the push to maintain gendered roles'. For example, Fine references the work of Tichenor (2005) who reported that husbands and their high-earning wives engaged in significant 'psychological work' to maintain gendered conventionality. Tichenor concluded that for many women, 'the cultural expectations of what it means to be a good wife shape the domestic negotiations of unconventional earners and produce arrangements that privilege husbands and further burden wives' (Fine 2010, 82). This is not to suggest that intensions towards gender equity do not exist. As Selmi (2005) points out, the vast majority of people born between 1965 and 1981 support the concept of equal caregiving for example, yet progress towards this has remained, in her words, 'glacial'. Gender stereotypes relating to work, child-rearing and the home are both deliberately and unwittingly reinforced in almost every aspect of our lives, through the context of the media, the school system, the entertainment industry and so on. Moreover, these influences have an effect on children from the moment they are born, if not before (Eliot 2009), hence reinforcing the importance of addressing the issue as early as possible. One particularly powerful authority is the texts that children come into contact with. Margaret Meek spoke specifically about this in her seminal publication of 1988 entitled How Texts Teach What Readers Learn. Meek explains how text teaches children to not only make sense of print, but also how to decode image, understand context, read 'between the lines' and learn about culture and so on. Since Meek published this book, we have come to acknowledge that constructions of 'text' are changing rapidly and now include digital and screen texts as well as paper-based texts, as discussed earlier in this paper. However what is clear is that text, in all its forms, contributes greatly to the ways in which children make sense of the world they live in. Part of the function of education must be to help children to learn to read text, however as we know, reading is not just the decoding of print and image but includes a capacity to extract information, engage with concepts, understand ideas and form opinions. For this reason, early childhood education has a particularly important role in helping children to acknowledge the ways in which harmful stereotypes are introduced and reinforced in text. However, this is not a straightforward issue. Davies and Saltmarsh (2007,12) argue strongly that while the learning of literacy is positioned as being 'desirable and innocent', literacy discourses 'are intricately entangled in the ways in which becoming masculine and feminine are accomplished'. They go on to describe the ways in which literacy learning feeds directly into the reproduction of gendered neo-liberal discourse when they state: What is of interest here is the extent to which the gender orderwhich inevitably shapes the social and economic landscape out of which education policy emergesis in turn shaped in literacy classrooms in ways that both reflect and reinscribe the hidden gender dimensions of neo-liberal discourse. 
Davies and Saltmarsh (2007) are here arguing that the literacy classroom is part of a whole system that continues to reinforce gendered stereotypes, even though teachers are concerned about promoting equity. Davies and Saltmarsh (2007, 6) go on to explain that these constraints 'lie in the very practices of teaching reading, writing, speaking and listening', therefore making it very difficult for teachers to really step outside of the existing discourse, no matter how well intentioned. This raises further difficult yet important questions about what 'achievement' in literacy really is, and the extent to which this is reflected in student 'attainment'. As already discussed in the previous section, notions of 'doing well' in literacy are generally marked by success in standardised tests, yet it is clearly the case that achievement is more than attainment (Francis and Skelton 2001). Attainment in literacy may well result in further reproducing gendered constraints that are at best unhelpful and at worst harmful. As Davies and Saltmarsh (2007, 8) point out, a system that is based on standardised testing claims to produce 'generic students for whom equity issues are no longer relevant'; however, this fails to 'get to the heart of the ways in which literacy, gender and social power are mutually constitutive'. In order to understand this, it is again helpful to reflect back on how such stereotypical views have developed and permeated belief structures throughout history. Galbraith's (1997) account of the autobiographies of British men and women, born between 1860 and 1914, provides a particularly interesting insight into attitudes towards women and work. Galbraith documents the words of a number of middle-class women who all spoke regretfully about the years that their brothers went to boarding school, while they were left at home. For example, Katherine Chorley (born in 1897) talked about the 'separate spheres' marked by gender, which allowed men access to the 'big world' while women stayed at home. Galbraith notes that 'she remembered that after the 9:18 train had taken all the men off to work, a town of women was left behind' (1997, 15). Many of these women continued to receive an education at home; however, this was often met with resentment. To illustrate, Galbraith refers to Helena Swanwick, born 1864, who wrote of 'the intense desire … for more opportunities for concentration and continuity' and her anger against 'the assumption that whereas education was important for my brothers, it was of no account for me' (Galbraith 1997, 15-16). These women clearly articulated their frustration that opportunities for participation in the workforce were denied to them. However, major world events did have an impact on women's opportunities and this has been particularly well documented in relation to the Second World War. Founded in 1937, the Mass Observation Archive hosts a detailed and authentic record of the everyday lives of ordinary people, which spanned the Second World War. Sheridan's (2000) anthology of mass-observation records (1937-1945) provides a rare insight into the lives of women in wartime, many of whom responded to monthly open-ended 'directives' or themed questionnaires, as well as those who kept full personal diaries throughout the war.
The opening pages of Sheridan's book present a quote from Miss K, a young Jewish woman working as a journalist in London, who writes, 'my horror of all this war business is qualified by an eagerness to be a unit of it'. This somewhat dichotomous view is a recurrent theme in Sheridan's anthology. While there is no doubt that these women were marked by the horror, uncertainty and disruption caused by the war, Sheridan also concluded that they 'recognised something that has now gained wider currency; that active participation in war might be advantageous for women, even, in a limited way, emancipatory' (2000, 1). Twentieth-century war catalysed opportunities for many women in a way that had never occurred before, however what is especially interestingand indeed important for the present-day discussion, are the attitudes towards women and work that prevailed after the war ended. In the January 1944 'directive' to members of the Mass Observation panel, respondents were asked the question, 'Should married women be able to go out to work after the war? ' Sheridan (2000, 215) presents a range of responses to this question, but what is clear is that the entries suggest a tangible tension between the increasing realisation that being a full-time homemaker lacked mental stimulation and satisfaction for many, while there was also a belief that paid work for many women was an unnecessary indulgence and something that would damage families, and children in particular. Responses from women included; 'going out to work is incompatible with children', and a married woman can work 'provided she doesn't neglect the home too much and that her husband really feels happy about it'. Comments such as these were plentiful, however, so were concerns about the mundaneness of being at home all day. For example, a 45-year-old woman from Wembley wrote that she would be sorry to leave her job and worried that she would 'have not enough to do to occupy [her] intelligently in the home'. Another respondent wrote, 'I admit that very many women are bored by their homes and long to get back to work', while another stated 'that domestic work is on the whole so unpopular that men will do their damnest to push women back into it and keep them from "outside" jobsand women must fight hard to hold their present positions'. One particularly revealing comment came from a 53-year-old married woman from Reading, who claimed that 'a lot of women will want to have their cake and eat it (i.e. have a husband, home and children, and a job)'. The view that a working women is somewhat selfishly trying to 'have it all' may have proliferated in the post-war era but what is truly remarkable about this statement is that the same sentiment still exists 70 years later. For example, in her longitudinal study of the ways in which young women who had gone to school, during an era of 'equal opportunity', made decisions about career and life-paths, Aveling (2002, 265) reported that: More than a decade later, the problem of 'having it all' had begun to surface for some of them. Those women who had already become mothers increasingly found that instead of effortlessly being able to combine the demands of small children with the pressures of a challenging job, a more workable option was to put their careers 'on hold'. 
While these women have demonstrated that they can succeed on male terms, a number of competing discourses, coupled with a workplace culture that enshrined male patterns of participation as the norm, ensured that their work patterns essentially replicated the employment patterns of women of an earlier generation. By including a reflection on the past within this discussion, we can see that opportunities for women continue to be impeded by the maintenance of a social norm that dictates how roles are perceived inside and outside of the home on the basis of gender. As Aveling points out, we live in a climate that is supposedly committed to equality of opportunity, yet evidence indicates that 'automatic gender associations' (Fine 2010), which influence how we think and act, can actually undermine these conscious beliefs (Hochschild 1990). This creates quite a challenge for those working with young children. Given the enduring and insidious nature of gendered norms, how can practitioners ensure that the early literacy classroom offers young children what they need in order to promote equality of opportunity for their future? In particular, what are the implications for the ways in which text is utilised? We know that children are exposed to a wide variety of texts, many of which present stereotyped views about gender and work. Much of the children's literature available today continues to present themes related to traditional heteronormative notions of gendered roles (Collins 2009, 2010; Taber and Woloshyn 2011), even though there is an evident attempt in some books to portray females as active participants in events (Jackson 2007); for example, children now have access to books such as Rosie Revere, Engineer by Andrea Beaty (2013), I Can Do It Too by Karen Baicker (2003) and The Kite Princess by Juliet Clare Bell (2012). However, it would be a mistake to think that educationalists can 'solve' the problem by introducing and using these kinds of texts with children. As already discussed, it is important to recognise that text comes in many different forms including digital and screen versions, yet digital technology in itself is far from being regarded as gender-neutral. Indeed in their article entitled 'New media, Old images', Mendick and Moreau (2013, 325) found that online representations of women and men in science, engineering and technology 'largely re/produce(d) dominant gender discourses'. Moreover, these themes are not just present in children's literature, but penetrate all aspects of daily life. Mums still 'go to Iceland' in order to produce satisfying meals that are compatible with a family budget (or more recently to swoon over Peter Andre), and the purchase of Kentucky Fried Chicken still allows mums to have 'a night off'. This suggests that children from their earliest years need to be taught how to respond to stereotypic ideas that are embedded within all texts, including media, screen and popular culture. So what does this mean for teachers and practitioners working with young children? It is not the job of teachers to censor children's exposure to text on the grounds of gendered stereotypes. This is partly because we would be doing children a considerable disservice to assume that they are passive recipients of text and have no active engagement with these issues.
Jackson (2007, 75) discovered in her analysis of early school reader illustrations that young children were 'active in making sense of gender rather than being social blotters, simply absorbing stereotypical notions of gender'. In particular she found that children were drawing on their understandings of gender from other contexts in order to bring meaning to text. This supports the suggestion that it is a futile exercise to attempt to eliminate texts with gendered constructions from children's reading diets or even to try and mitigate the damage by introducing texts that actively promote non-stereotypical roles. Rather there is a need to consider how we can support children in their everyday reading of text. As Wharton (2005, 249) concludes, 'the way that gender is portrayed in school books may be less important than the ways in which teachers and parents use these books with children'. The final section of this paper now turns to the role of the early childhood educator in ensuring that the teaching of literacy includes a concern for achievement in the labour market as well as in the school environment. The role of the early childhood educator This paper has demonstrated that the early childhood educator has an opportunity to take strides towards tackling the issue of gender and inequality in the labour market, through the context of what is taught about and through the medium of literacy. In respect to the former, the educator has an obligation to teach literacy, as defined by the curriculum; however, there is no obligation to teach that this is literacy. Rather, the early years classroom can be a place where children are taught that literacy is a broad and dynamic concept. The first step towards achieving this is for teachers and educators to show children that their own constructions of literacy are valued in the classroom. There is a vast and growing body of literature on technology and literacy which supports the argument that young children use technology in ways that are innate and natural (Prensky 2001a, 2001b; Bearne et al. 2007); as a consequence, many young children enter the early years setting with the ability to handle digital texts with confidence and skill (Levy 2009). Given that it is becoming more and more evident that success in the twenty-first-century labour market will demand a proficiency in literacy that accommodates technology, these are skills that teachers need to value and nurture in all children. In addition, this paper has suggested that early childhood educators must also consider what is taught through the context of literacy study, focusing specifically on the role of text. Text, in all its forms, offers powerful constructions of stereotypical femininity and masculinity, yet Gilbert (1992, 191) argues that these can only ever be understood as 'plausible' if 'readers begin with particular cultural expectations of gender'. Gilbert goes on to argue that in order to challenge stereotypical constructions of gender (or any other social convention for that matter), it is necessary to become a 'resistant reader to what has come to pass as the socially conventional 'reading' of a story' (189). However, she makes the further point that this can only be achieved if you have access to different discourses that challenge the text in question.
She concludes: It is less possible to be a resistant reader if you see nothing to challenge in the dominant reading position offered: if you cannot denaturalise the apparent naturalness and opacity of the language; or if you cannot conceive of other ways to construct a plausible narrative sequence of events; or if you are unable to reconstruct what counts as a narrative 'event' differently. (Gilbert 1992, 189) This suggests that promoting skills of critical reflection need to be embedded within the teaching of literacy from children's earliest years in school. Helping children to consciously reflect on the sociocultural implications of a text will help them to not only develop their own awareness of stereotypical constructions of gender in text but also actively challenge a dominant discourse. However, this is not straightforward. Children are rarely given opportunities to challenge anything and this may be a particular issue for children in their early years of schooling. To illustrate this point, it is worth remembering that questions asked to children within a school context tend to be those that the teacher already knows the answer to, rather than a genuine invitation for children to offer views and opinions (Levy 2011, 128). This has implications for the ways in which teachers position themselves in relation to the child as well as the learning experience in itself. Hare (1992) discusses this in relation to the acquisition of knowledge when he argues that teachers have a responsibility to ensure that children grow up understanding that knowledge is tentative and that the teachers' own answers are not necessarily 'the best'. Hare describes this as cultivating 'humility' within the teacher-pupil relationship. In other words, he is arguing that young children need to learn from adults who are positioned as engaging in the learning experience alongside them, rather than being an absolute authority over them. Similarly, in his iconic publication Children's Thinking, Bonnett (1994) claims that this has particular salience for teachers working with young children who are developing their own relationship with the learning process. Bonnet argues that in order for teachers to encourage children to become effective learners they must share the sense of curiosity with them and respond to them 'in a manner that offers mutual interest in what is being learned, is non-evaluative and non-judgemental and that children need to know that their responses are being taken seriously' (Levy 2011, 127). In other words, encouraging critical reflection means that teachers need to carefully consider their role in the learning process, and in relation to the children themselves, and actively create an environment where children feel confident and safe enough to challenge accepted discourses. As well as creating a learning environment that offers young children genuine opportunities for questioning, challenging and consolidating, it is also important to remember that as Gilbert stated above, it is not possible to challenge an accepted discourse if you do not see anything in the text to challenge. This means that teachers and educators have a further responsibility to offer children a wide variety of texts that offer various views, ideas, opinions and perspectives. However as already stated, it is not enough to simply include these texts in school, but children need to be taught how to read with criticality and resistance. 
Literacy is a broad and changing construct and as a result children are exposed to a wide variety of texts that exist in paper and screen forms. However, it has been further argued that picture books, which are strongly associated with the early childhood literacy classroom, offer unique opportunities to help young children engage with challenging concepts. Haynes and Murris (2012) argue that picture books are 'philosophical sources' that provide teachers with opportunities to engage with 'transformative pedagogy' (218). These authors speak of their 'delight in the philosophical thinking and dialogue' (2012, 1) that picture books provoke, and call for teachers to be 'intrigued by the controversy' that they cause. Haynes and Murris are arguing that pictures provide rich and fertile territory for adults to engage children in exactly the kind of philosophical discussions that are necessary if they are to learn how to think critically about issues. Similarly, Roche (2015, 3) argues that picture books can help children to become 'real readers' who are able to not only read for enjoyment and understanding but 'who can look beneath the surface and challenge any assumptions and premises that may be hidden there'. Roche and Haynes and Murris agree that there is no manual on how teachers should use pictures and picture books to promote critical thinking and encourage children to challenge accepted discourses. While there is value in selecting texts for children that offer positive role models, this is more about encouraging children to respond to all texts with criticality and resistance. This is not about instructing children on 'correct' beliefs or courses of action, but rather it is about creating an environment that encourages thought and facilitates discussion. Central to this is the view that children must be really listened to, thus establishing what Haynes and Murris call a 'critical practice of philosophical listening ' (218-229). In summary, this suggests that early childhood educators have a unique and vital role in helping young children to recognise and challenge accepted discourses that may inhibit their opportunity for success in the labour market on the grounds of gender. It is argued that in order to help children to challenge these discourses, educators need to focus on creating learning environments that allow all children genuine opportunities to question, challenge and explore ideas and concepts that are embedded in text. Conclusion This paper has critically reflected on the past in order to show how the teaching of early childhood literacy can help to address the issue of gender and inequality in the labour market, given that girls' attainment in literacy at school does not appear to translate into achievement in the labour market, in comparison with men. This historical reflection has focused on two main avenues for consideration. Firstly, history shows us that definitions of literacy are not only fluid and mobile but are embedded within a socio-historical context. This reminds us that current 'schooled' definitions of literacy are not as fixed as we have come believe. As we move deeper into the twenty-first century, we see the impact of technology on constructions of literacy and indeed conceptualisations of what it means to be 'literate'. We know that girls currently achieve more highly than boys in literacy, but if these skills do not match the literacy demands of the twenty-first-century labour market, then this 'achievement' will have little consequence in this domain. 
This paper has argued that early childhood educators must value and nurture the digital literacy skills that children bring into formal education, as well as actively teach all children how to be literate in a digital society. Second, this historical reflection showed that societal constructions of gender are resilient and enduring, and continue to prevail despite genuine attempts to challenge harmful stereotypes. This paper has shown that the literacy classroom may in itself perpetuate stereotypical constructions of gender, as they appear in all manner of texts. This paper has argued that the early childhood educator has the unique opportunity to help all children to become critical and resistant readers of text from their earliest years, and thus begin the process of challenging a dominant discourse. The first step must include a commitment to creating an environment where children feel able to question, challenge and discuss ideas, safe in the knowledge that their voices will be heard and that their views will be taken seriously. Only then will teachers be able to offer genuine opportunities for children to challenge accepted discourses that prevent equality of opportunity for children and the women and men that they grow into.
v3-fos-license
2019-05-20T13:03:13.873Z
2017-03-30T00:00:00.000
158809510
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://juniperpublishers.com/artoaj/pdf/ARTOAJ.MS.ID.555663.pdf", "pdf_hash": "a8cb5db8abe080d68c036d024ebfe133983d4d7a", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2821", "s2fieldsofstudy": [ "Environmental Science", "Economics" ], "sha1": "e5990521987026bcd1397f36613c6cb9a5d276b5", "year": 2017 }
pes2o/s2orc
Biofuels and Sustainable Economy

Mini Review

Shortage of fossil energetic resources, parallel to harmful effects on the natural environment, has become a challenge to contemporary economies. Both factors constitute real problems, especially because they push in contradictory directions: expected shortages of fossil fuels may lead to an energy crisis, while further use of those fuels leads to almost catastrophic environmental problems. In this situation biofuels are frequently recommended [1][2][3][4] and considered as a replacement for fossil fuels. It is believed that the use of biofuels may contribute to mitigating the danger of an energy crisis, as well as to reducing environmental threats. Implementation of biofuels is also considered an important contribution towards achieving sustainability of agriculture [5][6][7]. Recent publications by Wasiak & Orynycz [8-10] present an analysis of the energy efficiency of biofuel production systems, and offer an attempt to redefine the energetic aspects of sustainability (Wasiak [11]). It was shown that the (EROEI-type) indicator of energetic efficiency, ε, of an energy production system built of i subsystems can be expressed by the law of additivity of reciprocals of the partial efficiencies, ε_i, describing each of the subsystems: 1/ε = Σ_i (1/ε_i). The partial efficiency of a subsystem, in turn, is defined as the ratio of the energy, E_tot, obtained (during a chosen period of time) from the whole system to the sum of the energy inputs, E_k, needed to maintain the functioning of that subsystem, i.e.: ε_i = E_tot / Σ_k E_k. Application of the above approach to the analysis of the agricultural part of biofuel production systems […] has shown that the energetic efficiency of a so-called "energetic" plantation varies from about 10 to about 200, depending upon the choice of plant species being cultivated, the production technology, the productivity of the machinery used, as well as some aspects of production organization. On the other hand, transportation of goods between fields, and transportation of crops from the plantation to the factory converting biomass into biofuel, require additional inputs of energy. For the case of rapeseed grain transported over a distance of 100 km, it was estimated that the energetic efficiency of this part of the production system varies between 100 and 150, depending on the type of transportation means used. Results of various combinations of the values mentioned above are given in Table 1. It is seen that in some combinations the effects are quite substantial. The effects are most pronounced when both values of effectiveness being combined are large. In these cases the aggregate value is reduced to almost half of each of the contributing ones. It has to be mentioned that a further decrease of energetic effectiveness should be expected due to the energy inputs necessary to achieve conversion of biomass into biofuel in the industrial part of the production system.
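To make the reciprocal-addition rule above concrete, the short Python sketch below combines illustrative subsystem efficiencies drawn from the ranges quoted for the plantation (about 10 to 200) and for transport (about 100 to 150). The specific pairings are examples chosen for illustration, not values taken from Table 1.

```python
# Aggregate EROEI-type efficiency from subsystem efficiencies via
# additivity of reciprocals: 1/eps = sum_i 1/eps_i.
def aggregate_efficiency(*subsystem_efficiencies: float) -> float:
    return 1.0 / sum(1.0 / e for e in subsystem_efficiencies)

# Illustrative pairings of plantation and transport efficiencies.
for plantation, transport in [(10, 100), (50, 150), (200, 100), (200, 150)]:
    eps = aggregate_efficiency(plantation, transport)
    print(f"plantation {plantation:>3}, transport {transport:>3} -> aggregate {eps:6.1f}")

# When both values are large and comparable (e.g. 200 and 150), the aggregate
# drops to roughly half of either one (about 85.7), which illustrates why every
# additional energy-consuming stage erodes the overall efficiency of the system.
```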
The conclusion that can be drawn from these computations is that, in order to achieve maximum efficiency, a production system dedicated to biofuel production should contain as few energy-consuming processes as possible, and should be built around high-yield plants and crops with the highest energy content. A further question is how the energy efficiency of a biofuel production system relates to the sustainability of agriculture and, in a wider sense, to the sustainability of the whole economy, at least at the level of a particular country. It has to be pointed out that agriculture is not the only resource providing biomass for mankind. Biomass is also harvested from other sources such as forestry, fisheries, etc. Some of those resources are cultivated; some still belong to wild nature. Any of those resources, however, requires inputs of energy in order to obtain some amount of useful biomass. Consequently, the considerations given above are applicable to all of these resources. Considering biomass as a replacement for fossil fuels, one has to keep in mind that such use competes with other applications such as food production or construction materials. Evidently, although with slightly different results, one may assume that a unit of land cultivated for energetic purposes may cover the energy needs of other, i.e. "non-energetic", plantations in the proportion given by the numbers taken from Table 1, depending upon the technological and agricultural factors corresponding to the particular case. Consequently, these numbers give an idea of the necessary reduction of other crops in favor of "energetic" ones to achieve sustainability, or at least self-sufficiency, of agriculture itself. This consideration does not account for the energy needed to supply agriculture with industrial products such as machinery, fertilizers, etc. The development of technology and biotechnology may improve the situation to some extent through more efficient tools and conversion methods, as well as an increase in the productivity of biomass (e.g. through genetic modification of organisms). Also, in order to reduce the danger caused by competitive use of bio-resources, it seems advisable to develop technologies for energy production from biomass wastes arising in, e.g., food production, so that edible parts of biomass would not undergo conversion to energy. The above consideration could also be applied to discuss what part of the land should be converted to energetic production to replace fossil fuels to a degree covering the whole needs of the economy. This task extends beyond the scope of this paper. It seems, however, that the assumption of such a possibility is rather unrealistic.
v3-fos-license
2019-04-26T14:24:49.047Z
2017-01-27T00:00:00.000
31087473
{ "extfieldsofstudy": [ "Geology", "Environmental Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1002/2016jd025514", "pdf_hash": "31c93e19b3bd4b16b127273de46ec0e5c4a980ce", "pdf_src": "Wiley", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2822", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "0579b4473e279b81b2ccaf4cc6836906fa173cf3", "year": 2017 }
pes2o/s2orc
Determination of global Earth outgoing radiation at high temporal resolution using a theoretical constellation of satellites New, viable, and sustainable observation strategies from a constellation of satellites have attracted great attention across many scientific communities. Yet the potential for monitoring global Earth outgoing radiation using such a strategy has not been explored. To evaluate the potential of such a constellation concept and to investigate the configuration requirement for measuring radiation at a time resolution sufficient to resolve the diurnal cycle for weather and climate studies, we have developed a new recovery method and conducted a series of simulation experiments. Using idealized wide field‐of‐view broadband radiometers as an example, we find that a baseline constellation of 36 satellites can monitor global Earth outgoing radiation reliably to a spatial resolution of 1000 km at an hourly time scale. The error in recovered daily global mean irradiance is 0.16 W m−2 and −0.13 W m−2, and the estimated uncertainty in recovered hourly global mean irradiance from this day is 0.45 W m−2 and 0.15 W m−2, in the shortwave and longwave spectral regions, respectively. Sensitivity tests show that addressing instrument‐related issues that lead to systematic measurement error remains of central importance to achieving similar accuracies in reality. The presented error statistics therefore likely represent the lower bounds of what could currently be achieved with the constellation approach, but this study demonstrates the promise of an unprecedented sampling capability for better observing the Earth's radiation budget. Introduction The Earth reflects part of the incoming solar radiation back to space and responds to the remaining absorbed energy by adjusting its temperature and emitting thermal infrared radiation out to space. Observing these energy flows exiting the top-of-the-atmosphere (TOA), referred to as Earth outgoing radiation (EOR), has advanced our understanding of fundamental climate parameters such as the planetary brightness [Vonder Haar and Suomi, 1971], the greenhouse effect [Dickinson, 1985], and the zonal and meridional heat transports required by the atmosphere and ocean to redistribute regional energy imbalances [Rasool and Prabhakara, 1966;Charney, 1975]. EOR observations also play a crucial role in studying climate forcing and feedbacks [Futyan et al., 2005;Loeb et al., 2007;Brindley and Russell, 2009;Ansell et al., 2014], global energy imbalance and its implication in the hydrological cycle [Trenberth and Fasullo, 2011;Stephens et al., 2012;Allan et al., 2014;Hegerl et al., 2015], and in the development, improvement, and validation of weather and climate models [Forster and Gregory, 2006]. [Smith et al., 1977;Barkstrom, 1984;Kyle et al., 1993], the current Clouds and the Earth's Radiant Energy System (CERES) [Wielicki et al., 1996], and the Geostationary Earth Radiation Budget (GERB) experiment . CERES has proven invaluable for studying global energy balance; the uncertainty on the monthly net TOA irradiance determined from CERES ranges from À2.1 to 6.7 W m À2 [Loeb et al., 2009]. Improved absolute accuracy is attainted by incorporating ocean heat content observations [Lyman et al., 2010;Church et al., 2011]. Unlike CERES, operated mainly in Sun-synchronous orbits, the geostationary capability of GERB provides outgoing radiation products at a 15 min temporal resolution, but over the European and African continents and surrounding areas only. 
interpolated data set [Doelling et al., 2013[Doelling et al., , 2016] that has been used in studies of cloud and aerosol radiative forcing [Taylor, 2012;Su et al., 2013], explaining TOA diurnal cycle variability [Taylor, 2014;Dodson and Taylor, 2016], and to make comparisons with models and reanalyses [Itterly and Taylor, 2014;Hill et al., 2016]. However, incorporating different geostationary observations with unique sensor characteristics and varying degrees of quality can produce significant artifacts resulting in unnatural spatial patterns in radiation fields [Doelling et al., 2013]. It is clear that current missions are not designed for these high temporal resolution applications, despite the apparent need for global EOR at high temporal resolution, preferably sufficient for resolving the diurnal cycle. Measurements of EOR have been made from dedicated missions since 1975; examples include the early Earth Radiation Budget (ERB) experiments Such EOR measurements require a new observation strategy. Recently, because of a technology revolution in small satellites and sensor miniaturization [Sandau et al., 2010], a constellation approach has drawn considerable attention and has been applied in monitoring surface wind speeds over oceans and within tropical cyclones [Komjathy et al., 2000;Ruf et al., 2012], surface albedo [Nag et al., 2015], and providing surface imagery for natural disaster monitoring [Underwood et al., 2005]. Its potential for providing high temporal resolution EOR observations, however, has not been fully explored. This paper aims to evaluate whether the new constellation concept can potentially provide an EOR observational data set with high temporal resolution and accuracy for weather and climate studies and to identify the key factors affecting the performance of the constellation. While various advanced sensors like scanning narrow field-of-view (FOV) radiometers can be potential candidates for a constellation, we focus on applications using wide FOV (WFOV) broadband radiometers. This type of sensor is low cost and, more importantly, has features preferable for any Earth radiation budget mission, such as mature instrumentation technology and no moving parts. WFOV measurements have proven invaluable for monitoring planetary EOR [Smith et al., 1977;Barkstrom, 1984;Kyle et al., 1993], but their large footprint size makes it less straightforward to obtain EOR at a synoptic scale. To enhance the spatial resolution of WFOV observations, Hucek et al. [1987Hucek et al. [ , 1990 applied spherical harmonic analysis to 6 day measurements of a single WFOV radiometer and showed significant improvement not only in spatial resolution but also in the root-mean-square error (RMSE) of the global mean albedo. Their recovery analysis greatly enhanced the EOR product from monthly mean to weekly mean; however, the enhancement reached a limit, because a time interval of 6 days was necessary to accumulate sufficient instantaneous measurements from a single radiometer. Clearly, an increase of available satellites using a constellation can help push this time limit further and enhance the temporal resolution, but it is unclear what the required configuration of the constellation will be if we aim to observe EOR hourly to monitor fastevolving systems such as dust storms and cyclones in the tropics and extratropics. 
An enhanced ability to track these systems allows us to capture their evolution and understand the underlying radiative processes, further constraining future warming of our climate system. We will perform extensive simulation experiments to explore the capability of the constellation concept for observing EOR. Similarly to Hucek et al. [1987Hucek et al. [ , 1990, Han et al. [2008], and Salby [1988a, 1988b, we use spherical harmonic analysis for detailed EOR recovery from WFOV measurements, but necessarily, different constraints are applied to make the recovery work for a much higher temporal resolution. In this paper, section 2 details the recovery method and the design of simulation experiments, while section 3 presents the results from a baseline constellation and related sensitivity tests. Finally, section 4 summarizes our key findings and highlights the upcoming opportunities that this work can be directly applied to. Simulation of Measurements To develop and evaluate our recovery method, synthetic data sets were generated using output from the Met Office global numerical weather prediction (NWP) model [Walters et al., 2014]. The model was run from an operational 00 Z analysis using 1024 longitudinal and 769 latitudinal grid points, with a coarsest spatial resolution of approximately 39 km × 26 km along the equator. Normally, the time step used for global NWP with this configuration is 12 min and for reasons of computational expense, full radiation calculations are only done every hour [Manners et al., 2009]. In this simulation, however, the time step was reduced to 5 min and to represent evolution of the radiation fields better, the radiation scheme [Edwards and Slingo, Journal of Geophysical Research: Atmospheres 10.1002/2016JD025514 1996 was called on every one of those time steps. Arbitrarily, we chose a 1 day model output from 12 Z on 28 August 2010 (T + 12 to T + 36) to simulate satellite WFOV measurements for our experiment. Figure 1 shows an example of the modeled outgoing shortwave (SW) and longwave (LW) irradiance fields at the TOA of 80 km, for 00 Z on 29 August 2010. These irradiance fields contain clear structures associated with meteorological phenomena including large regions of tropical convection in the central and western Pacific and a midlatitude frontal system in the northern Pacific. Meteorological phenomena of this type are large in spatial scale (~1000 km), evolve quickly (approximately hourly) and appear to dominate the variability of the outgoing irradiance fields, and will therefore be of interest to observe when recovering outgoing irradiance fields. Using these modeled irradiance fields, the instantaneous irradiance measured by each satellite, F sat , at colatitude θ and longitude λ, is then the integration of radiation received from all points within the WFOV, given as where I TOA (θ ', λ ') is the radiance reaching the satellite from TOA location (θ ', λ ') within the satellite FOV Ω, A(θ, λ, θ 0 , λ 0 ) is a factor to account for the laboratory-measured angular response of the instrument, β is the angle between the satellite nadir and the line of satellite to location (θ ', λ '), and e represents systematic and random instrument error, which is investigated in section 3.2.2. For the sake of simplicity, we assume that the instrument has a flat spectral response that is insensitive to the spectral composition of the observed scene and a flat angular response to omit A(θ, λ, θ 0 , λ 0 ) from hereon in. 
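The measurement geometry of equation (1) can be sketched numerically as follows. This is only an illustration of the idea, not the authors' implementation: it assumes isotropic radiance (I = F/π, the assumption introduced for equation (2) immediately below), a flat angular response, a spherical Earth of radius 6371 km, a uniform latitude-longitude discretization of the irradiance field, and the 780 km altitude and 126° field of view quoted for the baseline constellation. All function and variable names are our own.

```python
import numpy as np

R_E = 6371.0e3      # Earth radius [m] (assumed value)
H_SAT = 780.0e3     # satellite altitude [m], as in the baseline constellation

def wfov_measurement(F_toa, lat, lon, sat_lat, sat_lon, fov_half_angle_deg=63.0):
    """Simulate one wide-FOV measurement from a gridded TOA irradiance field.

    F_toa    : 2-D array (nlat, nlon) of TOA outgoing irradiance [W m-2]
    lat, lon : 1-D arrays of grid-cell centre latitudes/longitudes [deg], uniform spacing
    Returns the irradiance at the satellite aperture [W m-2], i.e. the integral
    of (F/pi) * cos(beta) over the solid angle subtended by the footprint.
    """
    lat2d, lon2d = np.meshgrid(np.radians(lat), np.radians(lon), indexing="ij")
    slat, slon = np.radians(sat_lat), np.radians(sat_lon)

    # Earth-central angle gamma between each grid cell and the sub-satellite point
    cos_gamma = (np.sin(lat2d) * np.sin(slat)
                 + np.cos(lat2d) * np.cos(slat) * np.cos(lon2d - slon))
    cos_gamma = np.clip(cos_gamma, -1.0, 1.0)

    r_sat = R_E + H_SAT
    d = np.sqrt(R_E**2 + r_sat**2 - 2.0 * R_E * r_sat * cos_gamma)  # slant range
    cos_beta = (r_sat - R_E * cos_gamma) / d       # nadir angle at the satellite
    cos_theta_v = (r_sat * cos_gamma - R_E) / d    # viewing zenith at the surface

    # grid-cell area on the sphere and the solid angle it subtends at the satellite
    dlat = np.radians(abs(lat[1] - lat[0]))
    dlon = np.radians(abs(lon[1] - lon[0]))
    dA = R_E**2 * np.cos(lat2d) * dlat * dlon
    d_omega = dA * np.clip(cos_theta_v, 0.0, None) / d**2

    # keep only cells inside the instrument field of view and above the horizon
    visible = (cos_beta >= np.cos(np.radians(fov_half_angle_deg))) & (cos_theta_v > 0.0)

    radiance = F_toa / np.pi                       # isotropic assumption
    return float(np.sum(radiance * cos_beta * d_omega * visible))
```

Evaluating such a function every 5 s along each simulated orbit, for each of the 36 satellites, would produce the kind of dense but nonuniform sampling described for Figure 2b.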
Since the model output provides irradiance rather than radiance, I TOA were generated using an isotropic assumption. In other words, equation (1) can now be rewritten as where F TOA is the instantaneous outgoing irradiance at the TOA. This assumption inevitably introduces uncertainty in the recovery, which will be evaluated in section 3.2.1 using angular distribution models from the CERES [Loeb et al., 2003]. Using equation (2), synthetic measurements are generated at the sampling rate of the satellite instruments. At each time step, a satellite will have progressed through space, giving a different value of θ and λ and thus a different FOV and F sat . This forms nonuniformly distributed yet dense satellite measurements, which rely on spherical harmonic analysis to recover the global distribution of the outgoing irradiance field, as explained next. Recovery Method Spherical harmonic analysis serves as a homogenization of multiplatform observations to provide the complete distribution of the irradiance field on the entire Earth. This distribution can then be used to interpolate the irradiance field to smaller scales by optimally exploiting the dense measurements. At a given location (θ, λ), F TOA can be represented exactly by a spherical harmonic series as or be approximated by a truncated series as where L is the truncation limit, leading to (L + 1) 2 terms on the right-hand side. The spherical harmonic functions Y C lm and Y S lm represent the spatial distribution, defined as where P lm is the associated Legendre function. The coefficients S l0 are always zero, while the first term containing the harmonic coefficient C 00 finds the global mean outgoing irradiance. In contrast, the higher degree harmonics, C lm and S lm , represent the amplitude of structures at finer resolutions. Denoting R E as the Earth radius, the subscript l represents structures at a wavelength of 2πR E /l in spherical harmonic analysis and thus a resolution (half of the wavelength) of approximately 20,000/l km. In other words, the spatial resolution of the recovered irradiance field is determined by the truncation limit L; a value of 20 leads to a resolution of 1000 km, recovered from 441 (i.e., (L + 1) 2 ) spherical harmonic coefficients. Substituting equation (4) into equation (2) gives where the truncation inequality has been dropped; Y C lm and Y S lm represent the spatially integrated spherical harmonic functions within the WFOV and respectively relate to Y C lm and Y S lm in equation (5) by For M satellite measurements, equation (6) becomes a system of equations and can be written in matrix form as where F sat is a M × 1 matrix comprising all satellite measurements, Y is a M × N matrix containing N number of the spatially integrated functions (i.e., Y C lm and Y S lm ) for each simulated satellite measurement, c is a N × 1 matrix comprising the unknown coefficients C lm and S lm , and e is a M × 1 matrix containing the measurement errors. Using a constrained least squares approach, an estimate of c in equation (8), ĉ, is given by where ε 2 is empirically chosen to be 10 À4 and I is an N × N identity matrix. The ε 2 factor was chosen to be large enough to improve the condition number of the normal matrix and therefore stabilize the solution, while being 6 orders of magnitude smaller than the mean of the simulated satellite measurements and therefore not introduce significant bias in the recovery. 
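Equation (9) is a ridge-regularized (constrained) least-squares solve, and the uncertainty estimate that follows treats the measurement errors as uncorrelated with constant variance. The sketch below shows those two steps in isolation, assuming the matrix of FOV-integrated spherical harmonic functions Y has already been assembled (one row per satellite measurement, one column per coefficient). The ε² = 1e-4 value is the one quoted in the text; the residual-based plug-in estimate of σ² is a standard choice and is an assumption on our part, not necessarily the paper's exact estimator.

```python
import numpy as np

def recover_coefficients(Y, F_sat, eps2=1e-4):
    """Constrained least-squares estimate of the spherical harmonic coefficients.

    Y     : (M, N) matrix of FOV-integrated harmonics, one row per measurement
    F_sat : (M,) vector of WFOV measurements
    eps2  : regularization factor stabilizing the normal equations
    Returns c_hat, the (N,) vector of recovered coefficients C_lm and S_lm.
    """
    M, N = Y.shape
    normal = Y.T @ Y + eps2 * np.eye(N)    # (Y^T Y + eps^2 I)
    c_hat = np.linalg.solve(normal, Y.T @ F_sat)   # equivalent to eq. (9),
    return c_hat                                   # without an explicit inverse

def coefficient_covariance(Y, c_hat, F_sat, eps2=1e-4):
    """Variance-covariance of c_hat for uncorrelated, constant-variance errors."""
    M, N = Y.shape
    residual = F_sat - Y @ c_hat
    sigma2 = residual @ residual / (M - N)          # plug-in estimate of sigma^2
    A = np.linalg.inv(Y.T @ Y + eps2 * np.eye(N))
    return sigma2 * A @ (Y.T @ Y) @ A               # sandwich form for ridge estimates
```

The recovered field at any location then follows by summing the harmonics with these coefficients as in equation (4), and the local recovery uncertainty by propagating the covariance through the row vector of harmonics at that location, as in equation (12).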
Once equation (9) is solved, we place the coefficients, ĉ, back into equation (4) to recover the TOA outgoing SW or LW irradiance F TOA (θ, λ) for all global locations. 10.1002/2016JD025514 Assuming that the errors in F sat are uncorrelated and have constant variance σ 2 , the variance-covariance matrix of ĉ can be given as [Hastie et al., 2009, p. 47] var where Then, the variance in the recovered outgoing irradiance field at each location (θ, λ) can be estimated by where Y(θ, λ) ia a 1 × N matrix comprising the spherical harmonic functions Y C lm and Y S lm in equation (5), and the square root of it (i.e., the standard deviation representing about 68% of the normal distribution) is used as the recovery uncertainty estimate. Note that the spherical harmonic analysis above uses one set of matrices (i.e., equation (8)) to recover one irradiance field (i.e., equation (4)), which implicitly assumes that the irradiance field does not change during the analysis. This assumption is a particular concern for the SW spectral region, because SW irradiance fields evolve fast for two reasons. One is that the illuminated portion of the Earth changes rapidly due to Earth's rotation and solar geometry; the other is that albedo varies with changes in the radiative properties of the atmosphere and surface. For a WFOV, the former is found to be the dominant factor for SW radiation evolution and needs to be incorporated into the recovery method. Therefore, instead of instantaneous satellite measurements, we used time-averaged values in equation (9) through the following process. For each WFOV satellite measurement, the instantaneous albedo, A INS , is calculated as the ratio of this measurement and the instantaneous incoming solar irradiance over the WFOV. The average incoming solar irradiance within the measurement collection period (e.g., 1 h), S, can be also calculated. The time-averaged satellite measurement in the SW, F ' sat;SW , can then be estimated by and the set of measurements generated in this way replaces F sat in equation (9) when performing the recovery. The LW measurements remain unchanged. Experiment Design and Setup The configuration of a satellite constellation is defined by several parameters including the inclination angle, altitude, number of satellites, number of orbit planes, phasing of the satellites between planes, instrument sampling rate, and collection time period used to recover the irradiance field. While possible configurations can be unlimited, they unavoidably depend on launch opportunities, available finances, and, importantly, the required accuracy of the recovered irradiance. To provide a general guideline, we focus on a baseline constellation configuration that consists of 36 satellites spread evenly across six orbital planes ( Figure 2a), with no phase difference between satellites in adjacent planes. Each orbital plane resides in an 86.4°non-Sun-synchronous orbit, allowing every satellite to precess throughout 24 h and provide coverage all the way to the poles, while maintaining plenty of separation to avoid any possibility of collision. The altitude of the satellites is chosen to be 780 km, the upper end of typical small satellite altitudes [Maessen, 2007;Lücking et al., 2011]. This altitude enables a FOV as large as possible, corresponding to a horizon-horizon footprint of approximately 6000 km in diameter for an instrument FOV of 126°. 
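The shortwave time-averaging step described above amounts to scaling each measurement by the ratio of the hour-mean to the instantaneous incoming solar irradiance over the field of view. A minimal sketch, assuming those two solar quantities have already been computed for each measurement (the function name is our own):

```python
import numpy as np

def time_average_sw(F_sat_sw, S_inst, S_mean):
    """Convert instantaneous SW WFOV measurements to collection-period equivalents.

    F_sat_sw : instantaneous SW measurements [W m-2]
    S_inst   : instantaneous incoming solar irradiance over each WFOV [W m-2]
    S_mean   : incoming solar irradiance over each WFOV averaged over the
               collection period (e.g. 1 h) [W m-2]
    """
    F_sat_sw = np.asarray(F_sat_sw, dtype=float)
    S_inst = np.asarray(S_inst, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        albedo_inst = np.where(S_inst > 0.0, F_sat_sw / S_inst, 0.0)  # A_INS
    return albedo_inst * np.asarray(S_mean, dtype=float)              # F'_sat,SW
```

As stated in the text, only the shortwave measurements are transformed in this way before entering equation (9); the longwave measurements are used unchanged.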
For simplicity, we use circular orbits with all satellites at the same altitude, but the performance for varied satellite altitudes and other combinations of orbits is consistent given that the same sampling density shown in Figure 2b is provided. Note that this baseline configuration typically requires six launches, but it is possible to use fewer launches and let satellites drift and spread out, although it may take several months for satellites to reach the specified planes [Bingham et al., 2008]. Additionally, we use a 5 s sampling rate to be consistent with past WFOV radiometer response times [Hucek et al., 1987]. We also focus on measurements over a 1 h collection period because of the following trade-off. For scientific purposes, it is crucial to enhance our ability to observe the diurnal cycle of the global irradiance field; therefore, the collection time period must be as short as possible. However, in practice, it is necessary to accumulate sufficient measurements to provide dense global coverage that is required for the success of the recovery. To optimize this collection time period, we used the model output to examine how many times each grid point needs to fall within satellite's FOVs (i.e., to be seen by the satellites) to produce a stable recovery. As expected, the most challenging region is at lower latitudes. As shown in Figure 2b, while it is not difficult to sample high-latitude regions densely, the sample size at low latitudes is at least 5 times smaller than that at high latitudes during a 1 h time period. Interestingly, several regions along the equator with small sample sizes somehow coincide with satellite ground tracks (see Figure 2a), which is counterintuitive. The reason is that the "gaps" between satellite tracks have a better chance of being observed from both sides; in other words, the overlapping FOVs from satellites in adjacent orbital planes make the regions between tracks better sampled than those along the tracks. In general, we found that a minimum density of 500 samples is required for stable recovery, which can be achieved for the entire globe by a 1 h collection time period as shown in Figure 2b. A much longer duration is inappropriate because it would result in time aliasing, strongly violating the assumption of a static irradiance field as discussed in section 2.2, and introduce a source of recovery instability. The performance of the recovery method is measured by three metrics. In addition to commonly used global mean bias and RMSE, we also use the signal-to-noise ratio (SNR) from power spectra to quantify the recovery errors dependent on spatial resolution, investigate how fast the recovery errors increase with spatial resolution, and when the recovery noise becomes dominant and thus the recovery outcome is no longer reliable. For each spatial resolution, the SNR is defined as the ratio of the recovery amplitudes and error amplitudes. The average recovery amplitude associated with a spatial resolution of 20,000/l km, A r,l , is given by where Ĉ lm and Ŝ lm are the recovered coefficients in ĉ lm and the corresponding error amplitude, A e,l , is calculated by We used a value of SNR of unity to identify the spatial scale at which fast-evolving meteorological phenomena can be reliably recovered. 
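The signal-to-noise diagnostic can be sketched as follows. Because the exact normalization of the per-degree amplitudes A_r,l and A_e,l is not legible in the extracted text, the code below assumes a root-mean-square over the 2l+1 orders of each degree; the error amplitude is taken from the difference between recovered and truth coefficients, which is available in the simulation setting. The SNR at each degree l is then the ratio of the two, with resolution approximately 20,000/l km.

```python
import numpy as np

def degree_amplitude(C, S):
    """RMS amplitude per spherical harmonic degree (assumed convention).

    C, S : (L+1, L+1) arrays with C[l, m] and S[l, m] valid for m <= l
           (S[l, 0] is always zero).
    """
    L = C.shape[0] - 1
    amp = np.zeros(L + 1)
    for l in range(L + 1):
        power = sum(C[l, m] ** 2 + S[l, m] ** 2 for m in range(l + 1))
        amp[l] = np.sqrt(power / (2 * l + 1))
    return amp

def snr_by_resolution(C_rec, S_rec, C_true, S_true):
    """Signal-to-noise ratio per degree and the corresponding resolution in km."""
    A_r = degree_amplitude(C_rec, S_rec)
    A_e = degree_amplitude(C_rec - C_true, S_rec - S_true)
    L = C_rec.shape[0] - 1
    resolution_km = np.array([np.inf] + [20000.0 / l for l in range(1, L + 1)])
    with np.errstate(divide="ignore"):
        snr = np.where(A_e > 0, A_r / A_e, np.inf)
    return resolution_km, snr
```

The degree at which the SNR drops below unity marks the finest resolution at which the recovery is still considered reliable, mirroring the SNR = 1 criterion used in the text.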
The performance of the baseline configuration is also used to compare and contrast recovery errors introduced by other factors, such as the number of satellites, the configuration of the satellites, instrument performance/calibration, and the isotropic assumption. We conducted a series of sensitivity tests to identify which of these factors play the most crucial roles and to provide guidelines about what needs to be considered when designing an optimal configuration. Recovered Outgoing Irradiance From the Baseline Constellation The performance of the recovery method depends on the required spatial resolution. The SNR of the recovered irradiance field, calculated from the power spectra analysis in Figure 3, generally decreases as Journal of Geophysical Research: Atmospheres 10.1002/2016JD025514 the spatial resolution becomes finer. These variations are determined by the spatial structures in the radiation field and are therefore nonmonotonic in nature. For example, the error at the largest planetary scales in the SW is relatively large due to the need to recover the position of the terminator. At a 1000 km spatial resolution, the SNR remains greater than unity in both the SW and the LW, suggesting that features of the atmosphere and surface are still recovered well at this resolution. Taking 1000 km resolution as an example, Figure 4 shows the truth irradiance field averaged from the finer-resolution model output (i.e., Figure 1) over the 1 h observation period, along with the synthetic WFOV satellite measurements and the recovery from the baseline constellation. As shown in Figures 4a and 4b, although the truth irradiance field at a coarse resolution loses fine structures of the atmospheric systems, synoptic-scale features such as tropical cyclones in the central Pacific and midlatitude frontal systems that were previously identified as dominant features of the outgoing irradiance field are retained. Due to the WFOV of the satellites, these features are not directly revealed in satellite measurements at the native resolution (Figures 4c and 4d). However, once the spherical harmonic analysis is applied, the recovered irradiance fields (Figures 4e and 4f) over a 1 h collection time period show a remarkable resemblance with the truth fields, which reconfirms the added value of performing the spherical harmonic analysis. The uncertainty associated with the recovery is shown in Figures 4g and 4h, which has a spatial distribution similar to that of the sample density in Figure 2b. Interestingly, the regions with large recovery uncertainty do not coincide with the gaps of the sample density but fall between the gaps. Recall that the gaps of the sample density coincide with satellite ground tracks. Compared to the samples close to the edge of the FOV, nadir samples along the satellite track have much larger radiative contribution imparted to the WFOV measurement due to the cosine factor as seen in equations (1) and (2). Therefore, even though the corresponding sample density along the satellite tracks is relatively lower than neighboring regions, nadir radiation is retained well in WFOV observations and can be recovered with small uncertainty. In contrast, although regions between ground tracks are seen more frequently in the WFOV than along the tracks, their radiative contribution is small and thus the corresponding irradiance has larger uncertainty. 
Additionally, the regions with large recovery uncertainty in the Eastern Hemisphere are generally to the south of the equator, while those in the Western Hemisphere are to the north of the equator. This is as a result of sampling. Note that the satellites are going north in the Eastern Hemisphere and going south in the Western Hemisphere. Since all satellites have an orbital period longer than our collection time of 1 h and have not completed a full orbit yet, some regions are therefore not sampled as well as others, leading to larger recovery uncertainty. We evaluate the recovery performance further via scatterplots and histograms of the recovery errors, calculated by subtracting the recovered outgoing irradiance from the truth on a 1000 km grid. Consistent with Figure 4, the scatterplots in Figure 5 show that the truth and the recovered outgoing irradiance are correlated well in the both SW and LW regions, and the majority of the data points fall onto the 1:1 line. Compared to the SW, the LW recovery errors in Figure 5d are smaller (within 10%), because the LW irradiance field is smoother and evolves more slowly, leading to less truncation error and time aliasing. As a result, the absolute errors in the LW recovery (Figure 5f) are distributed more evenly around the globe but increase generally toward the equator as the magnitude of the outgoing LW irradiance increases. In contrast, while 94% of recovery in the SW agrees with the truth within 25% (Figure 5c), Figure 5e shows that large isolated errors can exist, particularly on the daylight side of the terminator and in the regions where tropical convection is common. As shown in Figures 4 and 5, the recovery errors are significantly larger than the estimated uncertainty, which calls into question the sources of the errors. To investigate the issues involved, we first looked into two convection systems that are associated with similar SW outgoing irradiance of over 600 W m À2 but have very different recovery errors (one less than 25 W m À2 and the other~120 W m À2 ). The case with small errors in Figure 6a is mainly composed of an isolated convective system at the center of the domain. There are many surrounding cloud bands, but they are small and far from the main convection system, leading to a good recovery in both location and magnitude. Unlike the case in Figure 6a, the case with large errors in Figure 6b is composed of many complex cloud systems that surround each other closely, making it difficult to capture the fine gaps between systems and to recover the exact boundaries of convections. Consequently, although the recovery is able to capture the main features of the truth, the magnitude and location of the main system centered at the domain are off, leading to large recovery errors surrounding the main convection system. These case studies indicate that the recovery includes two types of errors: one is the omission error that is owing to a limited spherical harmonic degree (i.e., a spatial resolution issue) and the other is the commission error mainly due to producing a static field from measurements collected over some finite time interval (i.e., a temporal resolution issue). To diagnose their relative impact on the recovery, the following experiments were performed. We started with a low resolution of 1000 km, static irradiance field (averaged over 1 h as shown in Figures 4a and 4b) as input to simulate synthetic satellite measurements. 
As expected, this leads to an excellent recovery with negligible errors of less than 0.01 W m À2 for all grid points. We then put spatial complexity back into the input irradiance field by replacing the static, low-resolution field with a field averaged over 1 h at the original high resolution. The corresponding recovery (Figure 7a) reveals similar errors to those found in Figure 5e in the western and central Pacific region, suggesting that the spatial resolution is the primary issue for the large errors found in the daylight side of the terminator (except the edges). For the temporal resolution issue, we estimate the largest possible errors by taking the difference between 00 Z and 01 Z irradiance fields. As shown in Figure 7c, the rapid change of AE100 W m À2 in SW occurs not only on the edges of the terminator but also in the regions approximately 45°east and west extended from the edges, making the recovery prone to large commission errors. Note that we take an extreme estimate in Figures 7c and 7d and that the actual errors are much less, as shown in Figures 5e and 5f. Based on these analyses, adding low-inclination satellites to improve samples in the tropics and shortening the collection time period will be effective ways to reduce both the omission and commission errors. The recovery errors over all grid points in Figures 4e and 4f are 0.5 AE 16.4 W m À2 and À0.02 AE 5.94 W m À2 (mean with one standard deviation) in the SW and LW, respectively. To examine whether the reported bias and errors for this particular time period are representative, Figure 8 shows a 24 h time series of the hourly recovered SW and LW global mean biases. Recall that the previous results and discussions are based on the 1 h observations from 00 Z to 01 Z, around the middle of the time series in Figure 8. Overall, on this day, the hourly global biases in the SW have a mean 0.16 W m À2 with a standard deviation of 0.45 W m À2 . Stephens et al. [2012]), demonstrating that a constellation can potentially greatly enhance our capability to observe global EOR with much smaller uncertainty even at the hourly and daily time scale. However, the results from this experiment, which we will refer to as the control experiment, are clearly based on theoretical simulations containing some simplifying factors when compared with the real observational problem. To identify the factors that are most important in limiting the performance of the recovery method, including the assumption of isotropic radiation (section 3.2.1), instrument performance (section 3.2.2), and the number of satellites (section 3.2.3), we next present results from a series of sensitivity tests. Sensitivity Tests 3.2.1. Assumption of Isotropic Radiation As described in section 2.1, an isotropic assumption was made to convert modeled irradiance output to radiance during the generation of the WFOV satellite measurements. To evaluate the magnitude of the recovery errors introduced by the isotropic assumption, we capitalize on the Angular Distribution Models (ADMs) outlined by Loeb et al. [2003] to account for anisotropy in our synthetic satellite measurements. These ADMs consist of a set of anisotropic factors that empirically relate the isotropic radiance to that observed under certain conditions. 
In the SW, the anisotropic factors depend on the solar-viewing geometry, the surface type, and the meteorology of the scene (i.e., near-surface wind speed, cloud phase, cloud optical depth, and cloud fraction) which we also obtain from the Met Office NWP model output. In the LW, anisotropic factors span a smaller range and are typically much closer to 1 [Gupta et al., 1985], so we focus our attention on the SW here where the most serious consequences of making an isotropic assumption are likely to exist. To include a more realistic radiance distribution, we simulate the 1 h set of measurements from the baseline constellation again but this time apply an appropriate anisotropic factor to each and every grid point within each WFOV measurement (i.e., insert this factor within the integral in equation (2)). We found that producing this new set of simulated measurements introduces a systematic bias to the measurements of 1.61 W m À2 which will of course hamper the recovery accuracy. To determine the origin of this bias, we simulated another set of measurements by randomly including the meteorology in the anisotropic factors to isolate the influence of meteorology and the viewing geometry. Those two sets of measurements are only offset by 0.10 W m À2 (Figure 9a), suggesting the bias is largely related to the viewing geometry. We call the set of anisotropic factors that fully account for the scene meteorology "Full-ADM," and the set that randomly include the meteorology as "Random-ADM." To alleviate the anisotropic issue, we perform the recovery on the simulated measurements using the Full-ADM (i.e., the most realistic estimate of what the measurements would be) but use the Random-ADM in the recovery process. This is a reasonable step to take since at any instant in time we will always know the solar-viewing geometry and surface type, even if simultaneous observations of the scene meteorology are not available. As shown in Figures 9b and 9c, the SW anisotropic factors used in this experiment can change the radiance by more than a factor of 10 in extreme cases. However, the resulting bias in the hourly recovered global mean outgoing SW irradiance is just 0.56 W m À2 larger than the control experiment and is 0.93 W m À2 smaller than the experiment that uses Full-ADM measurements but isotropic radiation in the recovery process. Interestingly, this suggests that the instrument's WFOV of 6000 km in diameter encompasses such Journal of Geophysical Research: Atmospheres 10.1002/2016JD025514 a large area that contains a range of meteorological regimes and makes the meteorological dependence of the ADM less critical for obtaining the hourly global mean. If one wishes to consider regional information, however, the errors increase significantly using the Random-ADM in the recovery process. In this case we recommend performing the recovery assuming that the radiation is isotropic, which only increases the RMSE by 6% compared to the control experiment. In practice, the regional error could be reduced by using climatological distributions of the meteorological variables or better still by incorporating the simultaneous meteorology using geostationary imagery or observations similar to the Moderate Resolution Imaging Spectroradiometer. Note that Su et al. [2015] have recently provided a new generation of ADMs. The new ADMs incorporate additional anisotropic effects, such as those of aerosols and sastrugi. 
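In code terms, accounting for anisotropy changes only one line of the measurement-simulation sketch given earlier: the isotropic radiance F/π is multiplied by a scene- and geometry-dependent anisotropic factor. The function below is a placeholder illustration; a real ADM lookup (such as the CERES models of Loeb et al. [2003]) depends on solar-viewing geometry, surface type, and scene meteorology, none of which are reproduced here.

```python
import numpy as np

def anisotropic_radiance(F_toa, anisotropic_factor):
    """Radiance toward the satellite with an ADM-style correction.

    anisotropic_factor : array matching F_toa; equal to 1 everywhere in the
                         isotropic case, and far from 1 for e.g. sun glint or
                         oblique views of bright cloud.
    """
    return np.asarray(anisotropic_factor) * np.asarray(F_toa) / np.pi

# Inside the WFOV integration sketch, the line
#     radiance = F_toa / np.pi
# would become
#     radiance = anisotropic_radiance(F_toa, adm_factor_for_each_cell)
# where adm_factor_for_each_cell is obtained from the chosen ADM lookup.
```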
Although the change in regional monthly mean instantaneous irradiances over some regions can be large (up to 5 W m À2 ), the global mean irradiances do not change dramatically (less than 0.5 W m À2 ). Again, since our 5 s instantaneous measurements are generated from a WFOV that samples a large area and a wide range of viewing angles, regional differences would have a limited effect on our results and, therefore, the difference between the old and new ADMs is unlikely to affect our estimate of the anisotropic impact significantly. Instrument Performance and Calibration Because there is no existing constellation for us to properly estimate potential instrument performance, and because development and demonstration of miniaturized broadband radiometers are ongoing research activities [Swartz et al., 2015[Swartz et al., , 2016, we test a number of possible scenarios to investigate how fast recovery errors grow with varying instrument performance characteristics to provide a level of uncertainty that engineering development should aim for. These scenarios include adding random instrument noise and systematic calibration errors to synthetic satellite measurements for the case used in Figure 4. If these additional uncertainties are applied based on certain probability density functions, we repeat each test 10 times to ensure that the mean result is robust. A summary of these sensitivity results is listed in Table 1, which applies to both the SW and LW. First, a random instrument noise is included by adding 0.1 W m À2 of white noise to the simulated measurements, more than an order of magnitude larger than that for similar early instruments [Kyle et al., 1985]. This could account for unknown variations in the angular and spectral response across the WFOV, wobbles in instrument pointing, blurring of scenes due to insufficient instrument response time, or other imperfections in the instrument or satellite performance. Since we applied white noise, the resulting difference in average global mean irradiance from the control experiment is essentially zero with a standard deviation of approximately 0.002 W m À2 (i.e.,~0.002% in SW and less than 0.001% in LW). The reason that this difference is almost negligible is that there are a huge number of individual simulated measurements that are fed to the recovery inversion such that any random noise quickly averages to zero, one of the key advantages of a large constellation approach. When this white noise is increased to 0.25 W m À2 and 0.5 W m À2 , the average global mean difference remains essentially zero, but the associated standard deviation increases to approximately 0.01 W m À2 . This change is still an order of magnitude smaller than the absolute bias derived from the recovery performance (0.16 W m À2 in SW and 0.13 W m À2 in LW; see Figure 8) and therefore is tolerable. Second, we consider the influence of a systematic calibration bias on the recovered global mean outgoing irradiance. A flat systematic bias of 0.5 W m À2 across the constellation is found to result in a change in the recovered global mean of 0.63 W m À2 . Including 0.1 W m À2 of instrument noise on top of this systematic bias results in the same mean with a standard deviation of 0.003 W m À2 , consistent with that from instrument noise alone. 
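The instrument-error scenarios summarized in Table 1 can be emulated by perturbing the synthetic measurement vector before it enters the recovery. The sketch below covers the three kinds of perturbation named in this part of the text: white instrument noise, a flat constellation-wide calibration bias, and a satellite-to-satellite spread of biases (the last is discussed just below). The perturbation magnitudes follow the text; the function names, random-number handling, and placeholder in the usage comment are our own.

```python
import numpy as np

def perturb_measurements(F_sat, sat_index, noise_std=0.1, bias_mean=0.0,
                         bias_spread=0.0, n_satellites=36, rng=None):
    """Add instrument noise and calibration bias to synthetic WFOV measurements.

    F_sat       : (M,) vector of clean synthetic measurements [W m-2]
    sat_index   : (M,) integer array mapping each measurement to its satellite
    noise_std   : white instrument noise per measurement [W m-2]
    bias_mean   : systematic calibration bias shared by the constellation [W m-2]
    bias_spread : standard deviation of per-satellite biases about bias_mean [W m-2]
    """
    rng = np.random.default_rng() if rng is None else rng
    per_sat_bias = bias_mean + bias_spread * rng.standard_normal(n_satellites)
    noise = noise_std * rng.standard_normal(F_sat.shape[0])
    return F_sat + per_sat_bias[sat_index] + noise

# Example scenario from the text: 0.5 W m-2 mean bias, 0.1 W m-2 spread between
# satellites, 0.1 W m-2 instrument noise, repeated 10 times for robust statistics.
# (`recover_and_compare` is a placeholder for the recovery plus error evaluation.)
# biases = [recover_and_compare(perturb_measurements(F_sat, sat_index, 0.1, 0.5, 0.1))
#           for _ in range(10)]
```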
Finally, instead of a systematic calibration bias, we included a standard deviation of 0.1 W m À2 between individual satellites, with a mean bias of 0.5 W m À2 and 0.1 W m À2 of instrument noise; this results in an average change of 0.657 W m À2 in the recovered global mean outgoing irradiance with a standard deviation of 0.043 W m À2 . Overall, the error growths due to compounding sources of uncertainty generally follow a linear behavior. Table 1 suggest that maintaining any systematic bias to a minimum would be crucial for a satellite constellation to measure the EOR accurately, consistent with most climate monitoring missions. To achieve it, in addition to prelaunch and on-flight calibrations, several steps that are unique to the constellation approach can potentially help facilitate further calibration. The footprints of satellites in adjacent orbital planes have considerable simultaneous overlap at the poles, and the footprints of adjacent satellites in the same orbital plane are almost identical just minutes apart, providing the perfect opportunity to carry out frequent cross calibrations as suggested by Wiscombe and Chiu [2013]. In addition, the slowly evolving, bright, and largely homogeneous polar regions present an ideal natural calibration target. Absolute calibration, required to track and correct for long-term drift, would require a small subset of the constellation to have a highly accurate onboard calibration system or have absolute calibration transferred to the constellation from an external source, both of which are active research areas [Wielicki et al., 2008;Swartz et al., 2015Swartz et al., , 2016. Number of Satellites Since the number of satellites available in reality may differ from the 36-satellite baseline constellation, we also test how sensitive the recovery performance is to the number of satellites using two distinct configurations: satellite limited and orbit plane limited. The satellite-limited configuration is defined as having six Journal of Geophysical Research: Atmospheres 10.1002/2016JD025514 equally spaced orbit planes while varying the number of satellites in each plane, useful for when launch opportunities are plentiful but fewer satellites are desirable. Conversely, the orbit plane-limited configuration has six satellites in each plane while the number of orbit planes is varied, useful for when mass production of satellites is possible (i.e., CubeSats) but launch opportunities are restricted. Since these two different configurations directly lead to sampling differences, we performed recoveries every hour throughout a 24 h time period to ensure that we capture any potential worse-case scenarios within the diurnal cycle. As the number of satellites increases and the Earth becomes better sampled, Figure 10 shows that the absolute global mean bias in the recovered irradiance decreases. The improvement in absolute bias from 36 satellites to 42 satellites is small in both satellite limited and orbit plane limited, and both in the SW and LW, because the recovered irradiance from 36 satellites already achieves a satisfactory SNR for a recovery to a 1000 km spatial resolution (see Figure 3). For fewer than 36 satellites, the density criterion of 500 samples per hour is no longer met globally; mathematical instability begins to influence the recovery process, contaminating the global mean bias and resulting in a sharp increase in this bias. 
The instability becomes more widespread as the number of satellites is further reduced below 30, increasing both the average bias and the RMSE. To overcome this problem with fewer satellites would require relaxation of the 1000 km spatial resolution requirement. Additionally, while the bias and RMSE are comparable in the SW and LW for 36 and 42 satellites, the SW bias generally becomes much larger for fewer satellites. This is due to the nature of the SW irradiance field evolving more quickly in space and time, and spanning a larger range of magnitudes, resulting in more pronounced instabilities when they do occur. When comparing the global mean bias and RMSE between the satellite-limited and orbit plane-limited configurations, the orbit plane-limited configuration generally yields improved results. This improvement is most apparent for fewer satellites and suggests that if one needs to reduce the total number of satellites from the baseline constellation, limiting the total number of orbit planes would provide the most scientific value with the same number of satellites, recovering the global mean outgoing irradiance more accurately and better representing spatial patterns. Conclusions and Summary We investigated the capability of a new constellation concept to determine global EOR with sufficient temporal resolution and accuracy, crucial for understanding fundamental atmospheric processes and feedbacks (Figures 10a and 10c) and an orbit plane-limited configuration (Figures 10b and 10d). Dots and error bars represent the means and the 25th and 75th percentiles, respectively, from 24 recovered irradiance fields. Journal of Geophysical Research: Atmospheres 10.1002/2016JD025514 at both weather and climate scales. Capitalizing on technology revolutions in small satellites and sensor miniaturization, the proposed baseline constellation comprises 36 identical WFOV radiometers, evenly distributed in six non-Sun-synchronous orbit planes. The WFOV feature is desirable to provide a global coverage over a short time scale, allowing us to track fast-evolving synoptic phenomena such as cyclones and dust storms, for understanding their interactions with radiation and for model evaluations. To investigate the errors associated with the baseline constellation, and the error growth with spatial resolution and time scale, we developed a recovery method and performed extensive simulation experiments. The baseline constellation provided sufficient 5 s instantaneous WFOV measurements for a stable recovery of hourly outgoing irradiance fields at both planetary and synoptic scales. At a spatial resolution of 1000 km, assuming isotropic radiation and perfect calibrations, the mean hourly global errors averaged for 1 day are respectively 0.16 AE 0.45 W m À2 and À0.13 AE 0.15 W m À2 for SW and LW. These results are unprecedented when compared with current observational products but are based on simplified radiance fields and idealized instrument characteristics. To identify the influence of these simplifications and changes in the constellation setup, we performed a series of sensitivity tests. First, the influence of the anisotropic nature of the radiance field was investigated by generating a new set of simulated measurements using angular distribution models that take into account the directional effects of the solar-viewing geometry, the surface type, and the meteorology of the scene. 
We found that the global mean outgoing shortwave irradiance could be recovered to within 0.56 W m À2 of that in the control experiment by randomly accounting for the directional effects of the meteorology. This method significantly increases regional errors, but low regional errors can be obtained by assuming that the radiation is isotropic during the recovery process, which kept the RMSE to within 6% of the control experiment. It is also possible to reduce the error by incorporating angular distribution models directly into the core of the recovery method, but in that case, ancillary observations will be needed to help classify the scene and determine its associated angular distribution model. Second, we tested the impact of a systematic calibration error of 0.5 W m À2 , with and without random instrument noise up to 0.5 W m À2 . Overall, the recovery errors in hourly global means due to compounding sources of uncertainty grow linearly, and thus, one can adjust the total error estimate linearly if the actual calibration errors are smaller or larger than the tested value. A systematic calibration error of 0.5 W m À2 was found to have the largest individual impact of 0.63 W m À2 on the recovered SW and LW irradiance, while random instrument noises tend to have a negligible impact. Third, the recovery errors in hourly global mean irradiance increase with decreasing number of satellites. Since irradiance fields in the SW tend to be more inhomogeneous and evolve faster than that in the LW, the recovery errors grow faster in the SW when the number of satellites is reduced. When a reduction in the number of satellites is necessary in reality, our results suggest that keeping the same number of satellites per orbital plane but reducing the total number of orbit planes would be more effective to retain the observation capability in measuring global mean outgoing irradiance. Finally, we note that other constellation configurations beyond those that were tested in this study are possible, including hosted payloads on commercial satellites and a single dedicated launch with differing satellite drifts. Similarly, global mean irradiance fields using various configurations can be obtained using our recovery method, although we envisage that some modification or prior constraints may be necessary to ensure a stable recovery. Nevertheless, this study demonstrates the great potential of a new constellation concept for monitoring global Earth energy flows in the climate system at a time scale shorter than monthly, which is challenging to achieve using existing radiation budget measurements.
v3-fos-license
2016-05-12T22:15:10.714Z
2016-03-13T00:00:00.000
16784595
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/cripe/2016/3034170.pdf", "pdf_hash": "51df362d04337ab883f9a24ffd645ab243f8abd3", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2823", "s2fieldsofstudy": [ "Medicine" ], "sha1": "5cb2b248e71e5d45a1aa9e9884590abf5f664e70", "year": 2016 }
pes2o/s2orc
Transient Creatine Kinase Elevation Followed by Hypocomplementemia in a Case of Rotavirus Myositis We report an infant case of rotavirus myositis, a rare complication of rotavirus infection. Complement levels of the patient were normal when serum creatine kinase (CK) level was at its peak and then decreased when the CK level became normalized. In a previous case report of rotavirus myositis, transient decrease of serum albumin, immunoglobulin, and complement levels was reported. The authors speculated that intravascular complement activation was caused by rotavirus and resulted in the pathogenesis of myositis, although complement levels at onset were not measured by the authors. In this report, however, we demonstrate that the complement activation of our patient is a result of, rather than the cause of, skeletal muscle damage. Introduction Rotavirus infection is a common cause of acute gastroenteritis among infants. Extraintestinal complications such as encephalitis and myositis are rare [1]. So far, only two cases of rotavirus myositis have been reported [2,3]. Bonno et al. speculate that complement activation is a cause for rhabdomyolysis [3]. Previous studies using animal models, however, demonstrate that complements play an essential role for the recognition and clearance of dead muscle cells by phagocytic macrophages and are not involved in the pathogenesis of virus-induced myositis [4,5]. Recently, we experienced the case of a patient with rotavirus myositis that supports the latter case. Case Presentation A previously healthy 3-year-old boy visited a clinic after 4-day history of watery diarrhea and vomiting. Rotavirus was detected from his stool, leading to a diagnosis of rotavirus gastroenteritis. He was referred to our hospital because he was drowsy and reluctant to move even after he had received saline intravenously. Upon admission to our hospital on day 4, he could not stand or walk. Laboratory data showed markedly elevated serum creatine kinase (CK) 11637 IU/L, with mildly elevated serum enzymes including lactate dehydrogenase 691 IU/L, alanine aminotransferase 117 IU/L, aspartate aminotransferase 415 IU/L, and aldolase 118.9 U/L (2.7-7.5). Myoglobin was also elevated to 380 ng/mL (20-82). Serum complement levels were all normal: C3 83 mg/dL, C4 22 mg/dL, and CH50 33.9 U/mL. Other laboratory data were normal except for glucose 56 mg/dL, uric acid 9.4 mg/dL, C-reactive protein 1.5 mg/dL, and soluble interleukin-2 receptor (sIL-2R) 979.5 U/mL (332.9-586.7). Occult blood was not detected in the urine. Stool bacterial culture and throat bacterial and viral cultures were all negative. With a diagnosis of rotavirus gastroenteritis and hypoglycemia, the patient was treated with intravenous glucose infusion and fluid therapy. Even after the serum glucose level was corrected, the patient was still reluctant to move. On day 6, his vomiting and diarrhea stopped. He could stand and walk by himself, although his movements were still unstable. On that day, his CK level rapidly returned to 2927 IU/L (Figure 1). In contrast, his serum CH50 activity decreased to 24.6 U/mL, with C3 75 mg/dL and C4 14 mg/dL. Other laboratory data were CRP 0.7 mg/dL and sIL-2R 1458.6 U/mL. He was discharged from our hospital without sequelae on day 9. Laboratory data were all normalized on day 21 and remained within the normal range thereafter. Discussion Only two cases of rotavirus myositis have been reported in the literature [2,3]. 
One of them describes transient decrease of serum albumin, immunoglobulin, and complement levels on day 6 of the onset. The authors speculate that intravascular complement activation caused by rotavirus resulted in the pathogenesis of myositis, although complement titers were not measured by the authors at onset [3]. In our case, however, complement levels were normal when serum CK level was at its peak and decreased when CK level became normalized. Thus, we did not observe direct correlation between complement levels and CK level. Previous studies with animal models have demonstrated that complements are recruited to damaged tissues, playing a crucial role in facilitating phagocytic macrophages to recognize and clear dead cells during skeletal muscle regeneration [4,5]. We therefore argue that the complement activation of our patient is a result of, rather than the cause of, skeletal muscle damage. The mechanism of rotavirus myositis is not clear. In our case, a T cell activation marker sIL-2R increased along with complement activation but it showed negative correlation with both CK and CRP until day 6 of onset. In viral myositis, it was suggested that viruses initially trigger the disease process in muscle tissues and subsequently provoke immune responses [6]. Our result supports this hypothesis. In addition, we demonstrate that complement activation is a result of skeletal muscle damage. Further accumulation of cases will clarify the pathogenesis of rotavirus myositis.
v3-fos-license
2019-05-30T23:45:59.296Z
2018-09-28T00:00:00.000
169965024
{ "extfieldsofstudy": [ "Business" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "https://ejournal.undip.ac.id/index.php/lawreform/article/download/20866/14102", "pdf_hash": "3fb100a87cc343c991b8fe48e02354070d21931f", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2824", "s2fieldsofstudy": [ "Law", "Political Science" ], "sha1": "a901e49f4f551a0068cbd85da22b878965de4acd", "year": 2018 }
pes2o/s2orc
SEVERAL STRATEGIES TO ABOLISH THE DEATH PENALTY IN DEVELOPING COUNTRY

The practice of the death penalty has been an issue in various countries. Since the adoption of the ICCPR, many countries have successfully abolished the death penalty or placed it under a moratorium. This international regulation also affects developing countries. Among the countries of the world, several developing countries still actively use the death penalty as their capital punishment, arguing that executing people has successfully decreased the level of crime in their country. However, it is important to understand that international regulation directs countries to abolish the death penalty. This article therefore offers several strategies for developing countries to promote the abolition of the death penalty in all conditions. Keyword: Death Penalty; Abolition; Strategy.

ABSTRAK: The death penalty has become a crucial issue in various countries. Since the ICCPR came into existence, many countries around the world have abolished the practice of the death penalty or placed it under a moratorium. This international regulation then also applies to developing countries. Of the many countries around the world, several developing countries still actively use the death penalty as the highest punishment in their country. Those developing countries argue that executing perpetrators of crimes has been proven to reduce the crime rate in their country. This conflicts with the international spirit of abolishing the practice of the death penalty. In conclusion, this article will offer several steps for developing countries to abolish the practice of the death penalty. Kata Kunci: Hukuman Mati; Penghapusan; Strategi.

The issue of the death penalty has arisen not only in major powers such as the United States but has also affected developing countries.

B. RESEARCH METHOD

This research uses the normative method (Yani, 2011). Moreover, the approach used in this article is the statutory approach, followed by analysis of secondary data (Asikin, 2004). Lastly, in order to examine the problem, this article adopts deductive reasoning to move from general conditions to more specific logic and reality (Soekanto, 1984).

DEVELOPING COUNTRIES

In relation to the fulfilment of human rights in developing countries, there are several critical issues which can be considered obstacles for developing countries. Such factors include corruption and mismanagement, and personality politics (Monshipouri, 2001). Furthermore, Baral argues that many developing countries continue to struggle to find a suitable political model in order to establish a 'just, dynamic and exploitation-free society' (Baral, 1981). A number of developing countries suffer from 'soft-state syndrome' (Myrdal, 1968), which begins with the failure to enforce human rights legal instruments. This leads to inappropriately designed legal development regarding human rights (Goodpaster, 2003). Goodpaster argues that this is caused by insufficiently analytical and strategic politics, not only in the sense of common politics but also of the social and cultural transformations which require a political response from the regime (Goodpaster, 2003). It is important to promote a universal culture of human rights based on a common humanity which at the same time respects the diversity of cultures (Mahoney, 2007).
Since human rights have become an international issue, developing countries should also fulfil their obligation to enforce international human rights legal instruments. However, this effort has met with obstacles in developing countries (Monshipouri, 2001). At the same time, Risse and Ropp argue that 'human rights campaigns should be about transforming the State, not weakening or even abolishing it' (Risse & Ropp, 1999). The remaining part of this section will discuss several factors, namely economic, social and political ones, which contribute to the development of human rights in developing countries. C.1. Economic factor There is a relation between economic growth and the development of human rights in a developing country. This relation arises because of the inequality of international economic growth, which has widened among states (Heredia, 1997). Monshipouri argues that some 'critical issues such as unequal access, distribution, and uneven access to information have intensified the tensions within the developing countries' (Monshipouri, 2001). Moreover, such inequality will lead a developing country into a state of widening poverty, with sharpened inequalities, an increased number of crimes and less safety (Malley, 1999). These clearly affect the human rights situation of a developing country. In such conditions, the government of a poor country would face extreme difficulties creating a long-term policy, particularly in relation to the protection of human rights (Gershman & Erwin, 2000). So too, inequality of international economic growth will undermine the ability of a developing country to give the highest protection to human rights (McCorquodale & Fairbrother, 1999). C.2. Social and political factors Most developing countries emerged from colonial regimes and transformed from these into liberal constitutional democracies based on the recognition of the existence of every man (Emerson, 1975). For such a country, the political structure would be based on the old regime, with the educated elite taking over the running of the government (Emerson, 1975). This weakness can have an impact on the development of human rights in developing countries. According to Goodpaster, such a government will be weak and untried, which could be very dangerous (Goodpaster, 2003). The social condition of the people who live in a developing country is also affected by this particular circumstance. People cannot enjoy their own rights, such as political freedom and the liberation and preservation of freedom (Pardesi, 1976). Moreover, those kinds of rights will only work as long as there is a relation between those rights and the promotion of economic development (Pardesi, 1976). According to Konz, in some developing countries there is a need for major reform of the governmental, political and legal system in order to increase the promotion of human rights (Konz, 1969). Since developing countries are also part of a huge international community, the enforcement of international human rights law also becomes an obligation for such a country, not only as soft law but also as hard law (Baxi, 2008; Hood, 2002). Research conducted by Rankin shows that people who live in developing countries are likely to support the death penalty for some criminals (Rankin, 1979). They are also more likely to be supportive of the death penalty compared to people who live in countries which oppose harsh and invasive criminal justice tactics (Rankin, 1979).
D.2. Religion Religion is one of the reasons for judges to punish someone who has committed a crime. A state with a religious affiliation may find that its religion influences its system of law, such as Islam with its Shari'a law in Saudi Arabia and Pakistan (Miller & Hayward, 2008). At the international level, systems of law are not subordinate to universal international law but correlated with it. At the time, there were three emerging regional systems of international law: Latin-American, Asian, and European international law. Today, there are three well-established regional inter-governmental legal bodies: the Organization of American States, the Organization of African Unity, and the Council of Europe, which have led to regional customary law and multilateral agreements (Rosen & Journey, 1993; Knop, 2000). Therefore, an international law instrument can become a valid consideration for the judge who uses domestic law as the main consideration in a verdict. Another argument from retentionist states is that the human rights issue is not a matter of international law. This argument can easily be countered by the fact that, in practice, international law has become 'common law' for every country in the world, whether or not it is a state party to a convention. The customary law of international practice can easily turn into an obligation if the majority of states in the world accept the behaviour as law (Elias, 2012). Subsequently, following acceptance of the behaviour as international law, the other states would be bound by the law as it is internationally accepted (Rehman, 2002). In the context of the abolition of the death penalty, the fact that the number of states that have abolished the death penalty has risen every year can be a primary consideration for the remaining developing countries to accept the international law that aims to abolish the practice of the death penalty. the application of the death penalty and provide an alternative that respects human dignity. G. CONCLUSION This final chapter will draw a conclusion from the research that has been conducted in
v3-fos-license
2020-04-30T09:01:39.000Z
2020-04-29T00:00:00.000
216650480
{ "extfieldsofstudy": [ "Geography" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41599-020-0454-z.pdf", "pdf_hash": "0fc737550c1ae9d4241ebc12a4a3ebba606d2c08", "pdf_src": "Springer", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2825", "s2fieldsofstudy": [ "History" ], "sha1": "9cabcdb71b3f399ef985f4c57ec38530b7557660", "year": 2020 }
pes2o/s2orc
When was silcrete heat treatment invented in South Africa? Silcrete heat treatment, along with a suite of other innovations, has been used to argue for an early onset of modern or complex behaviours in Middle Stone Age hominins. This practice was confined to South Africa's southern and western Cape regions, where it was practised continuously from the Still Bay industry onwards. However, the exact moment that this technological advancement occurred still remains unclear. This is partly due to the scarcity of silcrete assemblages dating to the first half of the Middle Stone Age. To determine when silcrete heat treatment began to be well-established, we compare the silcrete assemblages from two archaeological sites situated along the south western coast of South Africa: Hoedjiespunt 1, one of the earliest Middle Stone Age silcrete assemblages, dating to 119–130 ka, and Duinefontein 2, one of the latest Early Stone Age assemblages, dating to 200–400 ka. Our results suggest that the invention of heat treatment occurred sometime between 130 ka and 200–400 ka, as it is still absent in the earlier assemblage but fully mastered and well-integrated in the more recent one. This period corresponds to the time that Homo sapiens became the major hominin species in the southern African subcontinent, and it is roughly the time that silcrete use became widespread in the second half of the Cape-coastal Middle Stone Age. This opens interesting new questions on the relation between silcrete use and heat treatment and on why early modern humans spontaneously invented heat treatment when they began using silcrete in the Cape region. Introduction Silcrete heat treatment is commonly understood as a technical process that aims at improving the quality of raw materials for knapping. It has in the past decade become one of the arguments for an early onset of modern or complex behaviours in the Middle Stone Age (MSA) (see for example: Sealy, 2009; Wadley, 2013). This is because it was argued to proxy for several archaeological and anthropological traits like abstract thinking (Wadley and Prinsloo, 2014) or high investments in resources (Brown and Marean, 2010). Although other authors (Schmidt et al., 2015; Schmidt et al., 2013) have argued against such interpretations, the early appearance of heat treatment unquestionably documents one of the first moments humans attempted to transform their material world with fire (Stolarczyk and Schmidt, 2018). Knowing the exact moment of its first invention is therefore a crucial factor for our understanding of human evolution. When heat treatment was first documented at Pinnacle Point (Brown et al., 2009), the main argument was made for the Howiesons Poort (HP; roughly dated to 50-85 ka, depending on the site at which it was found). The same paper also proposed that heat treatment may have been invented as early as 164 ka. This is even more interesting, as this date falls into a period when silcrete use was generally rare in southern Africa (Will and Mackay, 2017). In fact, except at Pinnacle Point, there are no other assemblages in the Cape from before ~130 ka that yielded more than a handful of silcrete pieces, and there is no silcrete outside of the Cape coastal zone at all. This situation is uncomfortable for MSA archaeologists. All MSA silcrete assemblages younger than the Stillbay (SB; ~70-80 ka) that were inspected for heat treatment have yielded abundant heated artefacts (see for example: Schmidt and Högberg, 2018; Delagnes et al., 2016; Schmidt et al., 2015).
With few exceptions (for one exception see: Schmidt and Mackay, 2016), the relative prevalence of heat treatment varies between~70 and >90% in these silcrete assemblages. Thus, at least from the SB onwards, heat treatment seems to have been an important step in the reduction sequences associated with silcrete. The possibility to artificially improve its knapping quality might even have governed the choice of using silcrete as a raw material. At least, there seems to be a correlation between silcrete use and heat treatment that needs to be explained. There are, however, two arguments that might change our view on MSA heat treatment. It has been argued that heat treatment might not have been practised to improve knapping quality but rather for heatfracturing raw material blocks, to reduce nodule size before knapping even began (Schmidt et al., 2015;Porraz et al., 2016). This argument was proposed because at some sites, many silcrete blocks broke from the action of fire before knapping (see for example: Schmidt et al., 2015;Delagnes et al., 2016). Improved knapping quality would in this case only be a by-product. The other argument is that natural fires might have caused what archaeologists recognise as heat treatment. It could be imagined that bushfires or fire-based site maintenance (Goldberg et al., 2009) produced accidentally heated silcrete. If this were the case, the entire MSA heat treatment signal might not reflect any human activity at all. Based on these considerations a few important questions can be posed: do all silcrete assemblages in the Cape coastal region show signs of heat treatment? If heating proxies, as they have been used to identify heat treatment in MSA assemblages so far, can be identified on all silcrete assemblages regardless of their age, it might be worthwhile to investigate the bushfire hypothesis or other natural causes. If on the other hand, we can identify silcrete assemblages without heat treatment, the bushfire hypothesis becomes unlikely. In the latter case, if intentional heat treatment were real, the time of its invention becomes important. For example, can we identify a gradual onset of heat treatment during the MSA or did it appear with the earliest silcrete use in the MSA? Was there a period in the MSA where unheated silcrete was used? If there was, can we determine at least approximately when heat treatment was invented? If there was not, we may conclude that at least in early assemblages there was an intricate, perhaps causal, relationship between silcrete use and heat treatment. One way to approach these questions is by investigating the earliest silcrete-bearing MSA assemblages and comparing them with silcrete assemblages from before the MSA. Some of the oldest known silcrete assemblages that have yielded sufficient artefacts for such a study come from sites located on the south western coast of South Africa (Will and Mackay, 2017). There, two sites from between 100 and 130 ka are potential candidates for our study: Ysterfontein 1, initially dated to between 120 and 132 ka (Avery et al., 2009) and Hoedjiespunt 1 (HDP1) initially dated to between 100 and 130 ka . As the Ysterfontein 1 assemblage appears problematic (the dates were rejected, see: Avery et al., 2009) and the HDP assemblage can confidently be attributed to MIS 5e (119-130 ka), we chose the latter. 
The Western Cape region also provides a large enough silcrete assemblage from before, but still reasonably close to, the MSA: the assemblage from Duinefontein 2 (DFT2), dating to 200-400 ka, is one of these. We included DFT2 in our study to investigate whether heating proxies are associated with all coastal silcrete assemblages (i.e., also in the Early Stone Age) or whether heat treatment was confined to the MSA. Methods Samples and sample preparation. We inspected 200 silcrete artefacts >5 mm recovered in situ from DFT2 for macroscopic indicators of heat treatment. These artefacts were randomly chosen by picking bags, one after the other, each time inspecting all silcrete artefacts from within the bags. No other selection (size except for >5 mm, weight, typology) was made prior to inspecting the artefacts for indicators of heat treatment. The site's aeolian sands were estimated to date between 400 and 200 ka based on its faunal record (Klein et al., 1999). We chose DFT2 here because of its geographical proximity to HDP (~85 km), because of the relative abundance of silcrete artefacts and because it has been mentioned as one of the latest Acheulian sites in the Western Cape region (Patterson et al., 2016). It is therefore suitable as a pre-MSA point in this study. Seventy-two of these silcrete pieces underwent a quantitative surface roughness analysis using the replica tape method (Schmidt, 2019). In parallel, we inspected 121 artefacts from HDP for macroscopic indicators of heat treatment. Forty-one of these came from the HDP1 site and the remaining 79 came from HDP3 (Parkington, 2003). The HDP1 deposit was attributed to MIS 5e (119-130 ka) based on radiometric dates and sea level correlation and, although no radiometric dates have been obtained from HDP3, it nevertheless seems likely, from a point of view of stratigraphy, that artefacts from both sites are of the same age (Parkington, 2003). There is currently a research project attempting to obtain an absolute age for the HDP3 deposit. While results have not been published yet, one of their observations relevant to our study is that the HDP3 sediments were likely deposited during the last interglacial, as revealed by paleoclimatic arguments (Hare, 2020, pers. comm.). Contemporaneity of HDP1 and 3 is, therefore, highly likely, based on stratigraphy and paleoclimate. Fifty-two of these silcrete artefacts from HDP underwent quantitative surface roughness analysis using the replica tape method. We chose not to integrate artefacts made from one silcrete type in our analysis. There is a coarse-grained silcrete with a clast size ranging up to >2 mm in the HDP assemblage (Fig. 1a). It can be difficult to distinguish heat-treated from unheated silcrete with similarly large clasts, based on fracture pattern. In total, there were 20 artefacts of this silcrete type in the HDP assemblage that we excluded from our analysis. In parallel, an experimental reference collection was produced from 30 South African west coast silcrete types. Geological samples were collected in a large area between the town of Hopefield and the Olifants river, an area measuring ~160 km north-south. Samples were chosen to represent a large variety in terms of grain-size and texture.
To produce the reference collection, a control flake was removed from each sample, the remaining samples were heat-treated at 450°C (heating ramp of 4 h, hold time at maximum temperature 2 h; for justification of these parameters see: Schmidt et al., 2017;Schmidt et al., 2016b) and a second flake was removed after the samples had cooled to room temperature. The roughness data measured on this 60-piece references collection is published in tabular form elsewhere (Schmidt, 2019, Table 2) but they are used here as comparison with our DFT2 and HDP archaeological data. Visual classification of heating proxies. As initially proposed by Schmidt et al. (2015) and subsequently applied during several other studies on heat treatment in the South African MSA and LSA (Delagnes et al., 2016;Porraz et al., 2016;Schmidt and Mackay, 2016), four proxies were used for this visual classification: [1] Pre-heating removal scars: relatively rough fracture surfaces corresponding to the removal of flakes from unheated silcrete ( Fig. 1e-g). [2] Post-heating removal scars: relatively smooth fracture surfaces that correspond to the removal of flakes from heat-treated silcrete (Fig. 1b). [3] Heat-induced non-conchoidal (HINC) fractures: surfaces produced by thermal fracturing in a fire (sometimes termed overheating (Schmidt, 2014)). HINC fracture surfaces can be recognised due to their strong surface roughness, the presence of scalar features on the surface (Schmidt et al., 2015) and concave morphologies with frequent angular features (Fig. 1c). Fracture surfaces were only identified as HINC fractures when they are cross-cut by a post-heating removal. This technological relationship indicates that the failure occurred during heat treatment, i.e., within the lithic reduction sequence, and that the reduction was continued afterwards. In the opposite case, when such a fracture surface is not cross-cut by a flake removal, it may result from fracturing at any stage, e.g., during accidental burning after discard, so that no technological information concerning heat treatment can be retrieved from it. [4] Tempering residue: a black organic tar (wood tar) produced by dry distillation of plant exudations that was deposited on the silcrete surface during its contact with glowing embers during burning (Schmidt et al., 2016a;Schmidt et al., 2015). In some previous work (Delagnes et al., 2016;Schmidt et al., 2015) these heating proxies were identified on artefacts through a piece-by-piece comparison with an experimental (external) reference collection. Here, the assignment to different heating proxy categories was solely based on an "internal calibration" (Schmidt, 2019): first, artefacts made from different silcrete types, that show a clearly distinguishable roughness contrast between adjacent pre-and post-heating removal scars on their dorsal side ( Fig. 1d), were selected. Such pieces are called 'diagnostic' artefacts because the roughness difference between two adjacent scars on one side of a single piece (provided that the smooth scar is posterior to the rough scar) cannot be explained by different silcrete types, inner sample heterogeneity or taphonomy, i.e., only one explanation of this pattern is left: rough pre-and smoother post-heating removal scars result from knapping before and after heat treatment, respectively, , meaning that these pieces document a stage of pre-heating knapping, the transformation of their fracture mechanics (heat treatment) and a second stage of post-heating knapping. 
Such pieces have consistently been used to identify heat treatment in assemblages since the beginning of archaeological research on heat treatment (see for example: Bordes, 1969; Inizan et al., 1976; Inizan and Tixier, 2001; Binder, 1984; Binder and Gassin, 1988; Léa, 2004; Léa, 2005; Terradas and Gibaja, 2001; Mandeville, 1973; Marchand, 2001; Mourre et al., 2010; Tiffagom, 1998; Wilke et al., 1991). In light of these considerations and the acceptance of diagnostic pieces in the archaeological community, it can be concluded that they unambiguously result from heat treatment and, consequently, that they can be used as a comparative reference to identify pre- and post-heating fracture scars on undiagnostic samples (provided that these are made from the same silcrete types). The known pre- and post-heating scars on diagnostic artefacts were therefore used to 'calibrate' the identification of pre- and post-heating scars on the other undiagnostic artefacts made from the respective silcrete types. Practically, this meant that a set of diagnostic artefacts was laid out on a large table and all other undiagnostic artefacts were compared with the pre- and post-heating scars on these diagnostic pieces. Artefacts that could not be clearly identified as belonging to one of the frequently occurring silcrete types (for which no diagnostic comparisons could be identified) were left indeterminate in this study. HINC surfaces were identified through the presence of concave, sometimes angular, structures and scalar features (Schmidt et al., 2015). Fig. 1 Photos of analysed lithic pieces from Hoedjiespunt (a-d and g) and Duinefontein 2 (e and f): a coarse-grained silcrete from Hoedjiespunt excluded from this study; b heat-treated artefact entirely covered by smooth post-heating scars; c heat-treated artefact with a heat-induced non-conchoidal (HINC) fracture surface (artificially darkened for better recognition in this photo), note the scalar features indicated by the black arrow; d heat-treated artefact with a remnant rougher pre-heating scar (artificially darkened for better recognition in this photo) that is cross-cut by smoother post-heating scars; e-g unheated artefacts entirely covered by rough pre-heating scars. Surface roughness measurements with replica tape. To estimate the quality of this visual classification of heat treatment proxies, quantitative fracture surface analysis was conducted using replica tape for three-dimensional (3D) surface mapping. The replica tape method is explained in detail in Schmidt (2019) and only the details absolutely necessary are repeated here. A layer of compressible foam is applied with force to the measured surface (the method is entirely non-destructive). The foam replicates the surface irreversibly by creating a negative of it. The so-produced surface negative contains thicker and thinner parts that correspond to the valleys and peaks of the original surface, respectively. These thicker and thinner areas on the replica tape, when scanned by light transmission, appear more or less transparent. Transparency values measured in this way can then be converted to a 3D map of the surface. To perform these scans, a DeFelsko PosiTector RTR-P tape reader was used in combination with optical grade Testex PRESS-O-FILM replica tape of the grades Coarse and X-Coarse. Measurements made on DFT2 artefacts (ventral surface was measured where possible) were compared with roughness data of the west coast reference collection (as in: Schmidt, 2019, Table 2).
For the HDP assemblage, an "internal reference" of surface roughness measurements was established: measurements on diagnostic artefacts, i.e., artefacts with both preand post-heating scars, were used as reference measurements. Ten pre-heating removal scars, large enough for replica tape measurements, and 13 suitable post-heating removal scars were identified on HDP diagnostic artefacts. The advantage of such an internal calibration is that, instead of using our external reference collection containing a random number of silcrete types from the greater west coast region, with this method only the silcrete types actually used at HDP are taken into account. The 3.8 × 3.8 mm wide 3D surface models resulting from replica tape measurements were processed using the Gwyddion free software package. Two statistical quantities were extracted from the 3D surface maps (no filtering applied): the mean roughness (Ra) in µm and the dimensionless differential entropy S of the height value distribution (or Shannon differential entropy, Shannon, 1948). As proposed by Schmidt (2019), we transformed Ra values to their natural logarithm, so that the data can be fitted with a linear function in a scatter plot of S over Ln(Ra). As both values are tightly correlated, the variance between samples in such a plot is one-dimensional and lies on the fitted function (the best fit of the scatter plot). Data quality can be visually estimated by evaluating the straying of data plots around the fitted function. It can be quantified by calculating the mean distance of the plots from this function. Results Visual inspection. The fracture patterns on DFT2 silcrete artefacts are rather rough looking. Only one of the four criteria described in section 'Visual classification of heating proxies' can be observed: rough pre-heating removal surfaces. None of the artefacts showed recognisable roughness contrast between adjacent fracture negatives or between different artefacts. On the other hand, three of the four proxies described in section 'Visual classification of heating proxies' can be observed on HDP artefacts. Although silcrete types from both sites are fairly similar macroscopically, only surfaces on the HDP assemblage could be assigned to distinct groups using the visual identification protocol. These groups are summarised in Table 1. Most artefacts were knapped after heat treatment. Depending on whether undetermined artefacts are included or not, 65-73% of all pieces show traces of heat treatment (smooth post-heating surfaces). On 30% of these heated artefacts, remnants of rough pre-heating surfaces are preserved along, and cross-cut by, a second generation of smoother post-heating scars (these are the diagnostic artefacts). Nine percent of the heat-treated artefacts show signs of heat-induced fracturing during heat treatment (HINC fractures) after which knapping continued. None of the artefacts show black tempering residue. Surface roughness measurements. Figure 2a is a plot of Ln(Ra) and S values measured on unheated and experimentally heated reference samples from the West Coast region (as taken from: Schmidt, 2019). Figure 2b is a plot of our DFT2 data onto the fitted function of the reference data (S = 0.826Ln(Ra)-11.86). Values measured on artefacts are summarised in Table 2. The reference scatter plot (Fig. 
2a) can be separated into three areas: a zone where only heat-treated samples plot in the lower left corner; a zone where both heated and unheated samples plot in the middle (the indeterminate zone); and a zone in the top right corner where only unheated samples plot. Only the zone in the lower left of the plot, where no unheated reference samples plot, is of importance here. Its limit is marked by a black line perpendicular to the data plots' best fit. Comparing Fig. 2a and b, none of the DFT2 artefacts plot in the zone occupied only by heat-treated west coast silcrete. Thus, there is no reason to suggest that any of the DFT2 artefacts were heat-treated. Figure 3a is a plot of Ln(Ra) and S values measured on diagnostic artefacts from HDP onto the fields of the heated, unheated and indeterminate zones of west coast reference samples (best fit and zone limits of HDP data in continuous lines; in broken lines for reference data). Comparing both, post-heating surfaces from HDP roughly plot in the same zone as heat-treated reference samples. Most HDP pre-heating surfaces plot in the reference samples' indeterminate zone. This is because the analysed HDP silcrete assemblage is finer-grained than some of the west coast reference silcretes (i.e., yielding overall lower S and Ln(Ra) values), so that comparing them is inadvisable. Our estimation of the number of heated artefacts in the HDP test group (undiagnostic artefacts in Fig. 3b) is therefore based on a comparison with known pre- and post-heating surfaces from the same assemblage (internal calibration, Schmidt, 2019). This is shown in Fig. 3b. Black lines perpendicular to the best fit of the data (S = 0.833Ln(Ra)-11.92) indicate the boundaries of the indeterminate zone, as measured on known pre- and post-heating surfaces (Fig. 3a). Unlike the DFT2 data, this HDP plot can be separated into three zones, indicating that there are heat-treated and unheated samples (one sample is indeterminate). Our HDP surface roughness analysis allows the classification of 34 (65%) pieces as heat-treated and 17 (33%) as unheated (Table 2). Thus, the number of heat-treated artefacts is in agreement with the data obtained by visual inspection; the number of unheated artefacts is 8% higher than found during visual inspection. Discussion We found no sign of heat treatment in the silcrete assemblage from the late Acheulean site of Duinefontein 2. During visual inspection, we observed neither roughness contrast nor particularly smooth fracture surfaces. Replica tape surface roughness analysis showed that the fracture patterns on DFT2 flake scars fall in the range of unheated silcrete. The quality of this roughness data is expressed by the mean distance of all test data points from their fitted function (in Euclidean distance in the scatter plot). The mean distance for our DFT2 data is 0.031. This value lies slightly above the values obtained during previous studies (0.022, as recalculated from artefact data in: Schmidt, 2019; and 0.02, as recalculated from the data in: Schmidt and Hiscock, 2019). Thus, the quality of our DFT2 artefact roughness data appears to be 30-35% worse than in previous similar studies. The reasons for this are unclear and a more precise interpretation of mean distance values must await future studies providing more scattering distance data. However, we note that the data points of our experimental west coast reference collection scatter around their fitted function with a mean of 0.028 (as recalculated from reference samples data in: Schmidt, 2019), being in agreement with our DFT2 distance data within 10%.
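To make the procedure described above easier to follow, the sketch below outlines one way the zone-based classification and the mean-distance quality measure could be implemented, assuming height values have already been extracted from the replica-tape 3D surface maps. The bin count, variable names and example conventions are assumptions of this sketch, not the authors' published code; only the general steps (mean roughness Ra, Shannon differential entropy S of the height distribution, a linear fit of S over Ln(Ra), zone limits derived from reference scars, and perpendicular distances to the fit) follow the description in the text.

import numpy as np
from scipy import stats

def roughness_features(height_map_um):
    """Return (ln(Ra), S) for one fracture-surface height map.
    Ra is the mean roughness in µm; S is the Shannon differential entropy of
    the height-value distribution, estimated from a histogram (the bin count
    is an assumption of this sketch)."""
    h = np.asarray(height_map_um, dtype=float).ravel()
    h = h - h.mean()
    ra = np.mean(np.abs(h))                      # mean roughness Ra
    density, edges = np.histogram(h, bins=64, density=True)
    widths = np.diff(edges)
    keep = density > 0
    s = -np.sum(density[keep] * np.log(density[keep]) * widths[keep])
    return np.log(ra), s                          # the text works with S over Ln(Ra)

def fit_and_classify(ln_ra_test, s_test, ln_ra_ref, s_ref, labels_ref):
    """Fit S = a*Ln(Ra) + b on reference scars, derive heated/unheated zone
    limits from the labelled reference points, classify the test points and
    return the mean perpendicular distance of the test points from the fit."""
    ln_ra_ref, s_ref = np.asarray(ln_ra_ref), np.asarray(s_ref)
    ln_ra_test, s_test = np.asarray(ln_ra_test), np.asarray(s_test)
    labels_ref = np.asarray(labels_ref)

    slope, intercept, *_ = stats.linregress(ln_ra_ref, s_ref)
    direction = np.array([1.0, slope]) / np.hypot(1.0, slope)

    def along_fit(x, y):
        # position of each (Ln(Ra), S) point along the fitted line
        return np.c_[x, y - intercept] @ direction

    t_ref = along_fit(ln_ra_ref, s_ref)
    heated_limit = t_ref[labels_ref == "unheated"].min()   # below: heated-only zone
    unheated_limit = t_ref[labels_ref == "heated"].max()   # above: unheated-only zone

    t = along_fit(ln_ra_test, s_test)
    classes = np.where(t < heated_limit, "heated",
                       np.where(t > unheated_limit, "unheated", "indeterminate"))

    residuals = s_test - (slope * ln_ra_test + intercept)
    mean_distance = np.mean(np.abs(residuals)) / np.hypot(1.0, slope)
    return classes, mean_distance

On data shaped like the published measurements, the two zone limits would play the role of the black boundary lines in Figs. 2 and 3, and mean_distance corresponds to the 0.02-0.03 scattering values discussed in the text.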
Thus, based on our visual and roughness data there is no reason to suggest that there was heat treatment in the Acheulian of DFT2. This has one important consequence: the visually observable heat treatment signal, smooth post-heating fracture surfaces, is not ubiquitous on archaeological silcretes from the South African silcrete coastal belt. While this might seem insignificant or even common-sense at first glance, we note that our study is the first to specifically investigate this question. Several studies have so far shown the abundance of heat-treated silcrete artefacts in the South African MSA (see among others: Delagnes et al., 2016; Porraz et al., 2016; Schmidt and Mackay, 2016; Schmidt et al., 2015); in fact almost all silcrete-bearing MSA sites have yielded abundant heat-treated artefacts so far. Our description of an unheated older silcrete assemblage sustains the argument that heating proxies (like smooth post-heating fractures) are not an intrinsic property of archaeological silcrete assemblages, but are specific to the MSA in this region, and they can be used to identify and quantify heat treatment. The latter of these statements is based on the study of a single pre-MSA site only. This is mainly due to the scarcity of late ESA sites that have yielded silcrete assemblages. Based on this sample number (n = 1), it cannot be entirely ruled out that the absence of ESA heat treatment in our study may have been caused by other factors like site-use, technology or settlement patterns that were only active at DFT2. We do, however, note that heat treatment has never been suggested elsewhere in the ESA and, given our first observations at DFT2, it appears highly unlikely that there was ESA heat treatment. We found a different pattern in the MSA at Hoedjiespunt (MIS 5e, 130-119 ka). There, most of the analysed silcrete artefacts were heat-treated, a prevalence comparable with the values reported for the SB (Schmidt and Högberg, 2018), the HP at Diepkloof (90-96%: Schmidt, 2019; Schmidt et al., 2015), Klipdrift (92%: Delagnes et al., 2016) and Mertenhof Shelter (37-78%: Schmidt and Mackay, 2016), the post-HP at Mertenhof (85-89%: Schmidt and Mackay, 2016) and even the Later Stone Age at Elands Bay Cave (92%: Porraz et al., 2016). Thus, by ~130 ka, heat treatment was already fully mastered with no significant change occurring after that in terms of its prevalence. Two potential sources of error should, however, be taken into account: the precision of our estimation of heated artefacts and the dating of our HDP assemblage. Concerning the first source of error, measurement precision, we note that the number of heat-treated artefacts estimated visually and by roughness analysis is in good agreement. The major disagreement resides in the number of unheated artefacts. It was 8% higher when identified by surface roughness analysis than by visual inspection. The reason for this may be that visually some of the unheated artefacts were classified as indeterminate, while surface analysis allowed us to assign them to the Not-heated group. Scattering of data points around the fitted function is slightly better than for the DFT2 data with 0.0294, being similar to the mean distance obtained from our west coast reference collection data. Thus, measurement precision appears to be sufficient for comparing our HDP assemblage with more recent MSA ones. The second source of uncertainty is the dating of the analysed HDP assemblage. Only HDP1 was physically dated, and we tentatively extended this date to HDP3 for our analysis, based on stratigraphy (following Parkington, 2003).
While this is, in our opinion, likely to be correct, it might be wrong. If so, the relative prevalence of heat-treated silcrete artefacts securely attributed to 130-119 ka (those from HDP1) would be 65.7% (this percentage would be statistically less solid because it is calculated from 35 determinable pieces only). The relative prevalence of heat-treated silcrete artefacts from HDP3 would be 75.7% (as determined from 75 determinable pieces). Thus, our conclusion that close to 70% of all silcrete from between 130 and 120 ka at HDP was heat-treated still holds, even if we have wrongly assumed the age of our HDP3 assemblage. Conclusion What are the implications of our results for understanding the antiquity of silcrete heat treatment in southern Africa? The invention of heat treatment must predate 130 ka but postdate the DFT2 assemblage (dating somewhere between 400-200 ka). This is the time that Homo sapiens began to play a major role in the subcontinent (Dusseldorp et al., 2013) and it is the time that silcrete use appeared in the Cape coastal region (Will and Mackay, 2017). Early modern humans must have either transferred the idea from another similar technique or spontaneously invented heat treatment when they began using silcrete in the Cape region. A supplementary argument in the quest to identify the antiquity of heat treatment comes from Pinnacle Point, where Brown et al. (2009) suggested that silcrete might have been heattreated as early as 164 ka. Unfortunately, the assemblage they published only contained 22 silcrete artefacts, 6 of which were reported to show stronger surface gloss than unheated reference samples. No photos of these pieces were shown in the publication and no information on the most unambiguous heating proxy, roughness contrast, was given. The data provided by Brown et al. (2009) are therefore insufficient to pronounce on whether the 165 ka Pinnacle Point assemblage documents an early stage of heat treatment. As it stands, the most likely scenario is that the invention of heat treatment occurred somewhere in the Cape coastal zone (where silcrete can be naturally found) between the appearance of H. sapiens in the region and 130 ka. Only future discoveries of new silcrete assemblages from before 130 ka will allow us to narrow down this gap of uncertainty. What do our observations imply in terms of the reasons for inventing heat treatment? After 130 ka (and perhaps even in the one assemblage predating 130 ka), silcrete use seems to correlate with frequent heat treatment. However, it has been shown that silcrete can be, and sometimes was, also knapped without heat treatment (Schmidt and Mackay, 2016). It can therefore not easily be argued that early MSA knappers absolutely needed the improvement in knapping quality to be able to use silcrete. Instead, the heat-fracturing hypothesis (Porraz et al., 2016;Schmidt et al., 2015) provides an interesting alternative explanation of why heat treatment might have been invented. Most types of rock react to rapid heating in fires by fracturing. Silica rocks like silcrete present the additional advantage of also improving in knapping quality. If early Homo sapiens regularly heat-fractured all types of stone raw materials before knapping, then they would also have done so with silcrete when they first encountered it. In such an operational scheme, early knappers can be expected to rapidly discover that silcrete is also knapped more easily when heat-fractured. 
At least the earliest improvements in knapping quality would in this case be no more than an unexpected by-product. This theory has important implications for understanding heat treatment as a proxy for archaeological and anthropological concepts like modernity or planning depth. Its test conditions are clear: it would become likely if raw materials other than silcrete from between~300 and 130 ka would show signs of systematic heat-induced nonconchoidal fracturing after which knapping was continued. Examples of such rocks, commonly used to make tools in the southern African subcontinent, that do not improve in knapping quality but can potentially be heat-fractured are quartzite, dolerite and other igneous rocks. Investigating these test conditions, on the other hand, should prove to be more difficult, or at least labour intensive, as it is not easy to recognise heatinduced fracture surfaces on these types of stone raw materials without extensive experimental work. The stigmata produced on quartzite and different igneous rocks by heat-fracturing must first be analysed in terms of the mechanics that cause fracturing, then in terms of their roughness and surface structure, so that they can eventually be identified on artefacts. Data availability The data set generated for this study is available in Tables 1 and 2. Received: 20 December 2019; Accepted: 31 March 2020;
v3-fos-license
2023-12-06T06:17:50.100Z
2023-12-04T00:00:00.000
265657702
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-023-48812-z.pdf", "pdf_hash": "946032b83f2c4563391cb171b29460ced328eec8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2826", "s2fieldsofstudy": [ "Medicine" ], "sha1": "aee812aa63e0ac86f1d2b34cbb2ffecefb540b7c", "year": 2023 }
pes2o/s2orc
Hepatoprotective effects of aspirin on diethylnitrosamine-induced hepatocellular carcinoma in rats by reducing inflammation levels and PD-L1 expression Aspirin, as a widely used anti-inflammatory drug, has been shown to exert anti-cancer effects in a variety of cancers. PD-L1 is widely expressed in tumor cells and inhibits anti-tumor immunity. This study aims to clarify whether aspirin exerts its anti-hepatocellular carcinoma (HCC) effect by inhibiting PD-L1 expression. The rat model of HCC was established by drinking 0.01% diethylnitrosamine (DEN), and aspirin was given by gavage. The gross and blood biochemical indexes of rats were analyzed. CD4 and CD8 expression in liver tissues were investigated by immunohistochemistry. CCK8 assay was used to detect the inhibitory effect of aspirin on the proliferation of HCC cells. The regulatory effect of aspirin on PD-L1 expression was analyzed by western blot. As a result, the tumor number and liver weight ratio in the DEN + ASA group were lower than those in the DEN group (P = 0.006, P = 0.046). Compared with the DEN group, the expression of CD4 in the DEN + ASA group was significantly increased, while CD8 was decreased (all P < 0.01). Biochemical indexes showed that there were differences in all indexes between the DEN and control group (P < 0.05). The levels of DBIL, ALP, and TT in the DEN + ASA group were lower than those in the DEN group (P = 0.038, P = 0.042, P = 0.031). In the DEN group, there was an obvious fibrous capsule around the tumor, and the portal vein was dilated. The pathological changes were mild in the DEN + ASA group. Compared with the DEN group, the expression of PD-L1 in liver tissue of the DEN + ASA group was decreased (P = 0.0495). Cytological experiments further showed that aspirin could inhibit the proliferation and PD-L1 expression in Hep G2 and Hep 3B cells. In conclusion, aspirin can inhibit the proliferation of HCC cells and reduce tumor burden by reducing inflammation and targeting PD-L1. seems a plausible explanation for its anticancer effect 11 .Therefore, exploring the anti-cancer mechanism of aspirin is of great significance for the clinical treatment of cancer in the future. In recent years, immune checkpoint regulators such as Programmed Death Receptor 1 (PD-1)/Programmed cell death 1 ligand 1 (PD-L1) have emerged as effective targets for cancer therapy, garnering increasing attention 12 .PD-1, an immune checkpoint inhibitory receptor, is commonly expressed on immune cells and plays a crucial role in activating immunosuppressive signaling by binding to its ligand PD-L1 13 .PD-L1 is frequently expressed in tumor cells and contributes to their spread within the body.By binding to PD-1 on T cells, highly expressed PD-L1 allows cancer cells to evade immune cell recognition, facilitating their metastasis 14 .Research has indicated that aspirin can hinder the progression of ovarian 15 , lung 16 , and colorectal 17 cancer by inhibiting PD-L1 expression.However, the role of PD-L1 in aspirin inhibition of HCC remains unclear. 
Liver function indicators, such as bilirubin, ALT, AST, albumin (ALB), and related markers, crucially reflect liver health and are closely associated with HCC development.Bilirubin, categorized into direct and indirect forms, undergoes breakdown and elimination within the liver.HCC frequently leads to bilirubin accumulation and subsequent jaundice due to liver cell damage.Research indicates that bilirubin, particularly in combination with PIVKA-II and AFP, serves as a diagnostic marker for HCC 18 .Transaminases are closely related to liver inflammation, and high levels of ALT and AST increase the risk of HCC, especially in males and patients with viral hepatitis 19,20 .Total protein (TP) and its constituent, ALB, significantly impact HCC prognosis, with studies indicating that low TP and ALB levels correlate with shorter survival in HCC patients 21,22 .Elevated alkaline phosphatase (ALP) levels, commonly observed in cholestasis and hepatocyte damage, serve as an independent risk factor for HCC patient prognosis 23 .Furthermore, cholinesterase (CHE), a primary marker of hepatic protein synthesis, has been reported as a crucial predictor of prognosis in HCC patients undergoing sorafenib treatment 24 .Cholesterol (TCHO) and triglyceride (TG) levels similarly reflect the inflammatory state of the liver, and elevated levels are associated with increased HCC risk 25,26 . Platelet (PLT) counts and coagulation indicators, including fibrinogen (FIB), prothrombin time (PT), activated partial thromboplastin time (APTT), among others, frequently exhibit abnormalities in HCC.Previous studies have shown that preoperative low levels of PLT indicate poor prognosis in HCC 27 .Elevated levels of FIB and PT among coagulation markers have been linked to a poorer prognosis 28,29 .Moreover, APTT serves as an independent prognostic risk factor for early HCC recurrence within 1 year following curative hepatectomy 30 .Hence, early intervention and detection of PLT and coagulation indicators are particularly important for predicting the progression and prognosis of HCC. Therefore, this study aims to establish a rat model of liver cancer and intervene with aspirin, analyze the changes in gross indexes, blood biochemical indexes, coagulation indexes, and T cell count in rats, and explore the effect of aspirin on the progression of liver cancer.Furthermore, the expression of PD-L1 in liver tissues and liver cancer cell lines under different intervention conditions was compared to explore the potential role of PD-L1 as a target in the inhibition of liver cancer by aspirin. Aspirin significantly inhibited the development and progression of DEN-induced liver cancer in rats The observation of the gross liver specimens showed that: all 10 rats in the DEN group had several gray-white tumor nodules of varying sizes on the surface of the liver, some of which had hemorrhage and necrosis; the liver outside the tumor nodules was rough, and tough (Fig. 1a1); the DEN + ASA group there were several gray-white tumor nodules of varying sizes on the liver surface of 8 rats, and the other 2 had no tumor nodules visible to the naked eye, and the liver surface outside the tumor nodules was smooth and tough (Fig. 1b1).The livers of the ASA group and the control group were smooth, ruddy in color, and soft (Fig. 1c1,d1). 
The cancer nodule specimens of 18 rats were observed by HE staining, all of which were HCC.Most of the cancer cells were arranged in a mass, and infiltration into the surrounding tissues could be observed, and some areas had hemorrhage and necrosis.In the DEN group, the peritumoral fibrous capsule was significantly thickened, and the tissues adjacent to the cancer nodules formed obvious pseudolobules; dilated portal veins and hyperplastic bile ducts were seen in the liver tissue, and a large number of fibrous connective tissue hyperplasia extended into the lobules (Fig. 1a2).Compared with the DEN group, the fibrous capsule around the cancer was significantly reduced in the DEN + ASA group; the morphology of the hepatocytes was normal, the lobular structure was clearer, and there was less inflammatory cell infiltration (Fig. 1b2).In the control and ASA groups, the morphology of hepatocytes was normal, the structure of the hepatic lobule was clear, and no inflammatory cell infiltration was observed (Fig. 1c2,d2). To further explore the effect of aspirin use on immune cells, we used IHC staining to analyze the expression of CD4 and CD8 in the DEN group and DEN + ASA group.Semi-quantitative analysis revealed that the AOD of CD4 was significantly higher in the DEN + ASA group than in the DEN group (P = 0.0012) (Fig. 2a,c).Interestingly, however, the AOD of CD8 was significantly lower in the DEN + ASA group than in the DEN group (P < 0.001) (Fig. 2b,d). Aspirin significantly reduces the tumor burden of DEN-induced liver cancer in rats By measuring the weight, liver weight, and spleen weight of rats, it was found that the weight of the DEN group was significantly decreased compared with the control group and ASA group (all P < 0.001) (Fig. 3a); the liver body weight of the DEN group was significantly increased compared with that of the control group (P = 0.016), ASA group (P = 0.002) and DEN + ASA group (P = 0.046) (Fig. 3c); the spleen body weight ratio was significantly increased compared with that of the control group (P = 0.004) and ASA group (P = 0.004) (Fig. 3e).In addition, we also found that compared with the control group (P = 0.003) and ASA group (P = 0.007), the body weight of rats in the DEN + ASA group was significantly decreased, while the liver weight/body weight ratio was significantly increased (P = 0.032; P = 0.007) (Fig. 3a,c).Similarly, the spleen/body weight ratio of the DEN + ASA group was significantly higher than that of the control group (P = 0.028) (Fig. 
3e). There were no significant differences in body weight, liver weight, spleen weight, liver-to-body weight ratio, and spleen-to-body weight ratio between the ASA group and the control group (all P > 0.05) (Fig. 3). Combined with the tumor situation, this showed that the DEN group had a higher tumor burden, and that the use of aspirin could effectively reduce the tumor burden. Figure 1. Gross observation and HE staining of rat livers in each group: (a1,a2) Gross liver specimens and HE staining pictures of rats in the DEN group. There were gray-white tumor nodules on the liver surface, and HE staining showed cancer cell infiltration and pseudolobular expansion. (b1,b2) Gross liver specimens and HE staining pictures of rats in the DEN + ASA group. There were a few gray-white tumor nodules on the surface of the liver. HE staining showed a small amount of fibrous cysts around the cancer, and the structure of the liver lobules was clear. (c1,c2) Gross liver specimens and HE-stained pictures of rats in the control group. The surface of the liver was smooth, and HE staining showed that the morphology of hepatocytes was normal and the structure of hepatic lobules was clear. (d1,d2) Gross liver specimens and HE staining pictures of rats in the ASA group. The surface of the liver was smooth, and HE staining showed clear hepatic lobules without inflammatory cell infiltration. Further, we counted tumor nodules with a diameter of more than 2 mm and analyzed the tumor formation, tumor number, and tumor length and diameter in each group. The analysis found that aspirin intervention significantly reduced the number of tumors (P = 0.006). In addition, the sum of tumor long diameters in the intervention group was also smaller than that in the model group (9.1 vs. 17.2), but the difference was not statistically significant (P = 0.074) (Fig. 3f). Aspirin significantly reduces serum DBIL and ALP levels in DEN-induced liver cancer rats Compared with the control group and ASA group, the levels of total bilirubin (TBIL), direct bilirubin (DBIL), alanine aminotransferase (ALT), aspartate aminotransferase (AST), and CHE in the DEN group and DEN + ASA group were significantly increased (all P < 0.05) (Figs. 4h, 4a-d). In addition, the TP in the DEN group and DEN + ASA group was significantly higher than that in the control group (P = 0.048, P = 0.024) (Fig. 4e). Similarly, ALP in the DEN group was significantly higher than that in the control and ASA groups (P = 0.005, P < 0.001), and the ALP level in the DEN + ASA group was also higher than that in the ASA group (P = 0.011) (Fig. 4g). However, TG levels in the DEN and DEN + ASA groups were significantly lower than those in the control group (P = 0.029, P = 0.040) (Fig. 4j). In addition, ALB and TCHO did not show significant differences among the groups (Fig. 4f,i). By comparing the biochemical indexes of the DEN group and the DEN + ASA group, we found that the DBIL and ALP levels of the ASA intervention group were significantly lower than those of the model group (P = 0.038, P = 0.042) (Fig. 4b,g). In addition, there was no significant difference between the control group and the ASA group (all P > 0.05). Aspirin intervention can affect coagulation indexes in rats with liver cancer Analysis of hematological indexes showed that there was no significant difference between the DEN group and the control and ASA groups (P > 0.05). Interestingly, the PLT level in the DEN + ASA group was significantly lower than that in the ASA group (P = 0.046) (Fig.
5a); FIB and thrombin time measurements (TT) were significantly lower than those in the control (P < 0.001, P = 0.046) and ASA groups (P < 0.001, P = 0.038) (Fig. 5g,h).Compared with the DEN group, the TT in the DEN + ASA group was also significantly shortened (P = 0.031) (Fig. 5h).However, PT, prothrombin time ratio (PTR), international normalized ratio (INR), prothrombin activation (PT%), and APTT were not significantly different between the DEN group and the DEN + ASA group (Fig. 5b-f).In addition, there was no significant difference between the control group and the ASA group (all P > 0.05). Aspirin can inhibit the proliferation of liver cancer cell lines We conducted experiments with different concentrations and durations of intervention to investigate the impact of aspirin on the growth of liver cancer cell lines under various conditions.Except for the 6-h intervention group, the proliferation of Hep G2 cells in the remaining five groups was significantly suppressed as the intervention time and aspirin concentration increased (compared to the control group, all P < 0.01) (Fig. 6).Furthermore, aspirin treatment was administered to Hep 3B cells to further confirm its inhibitory effect on liver cancer cell proliferation.As depicted in Fig. 7, the proliferation of Hep 3B cells in each group was significantly inhibited with the increase of intervention time and aspirin concentration (compared to the control group, all P < 0.05). Aspirin can inhibit the expression of PD-L1 To further clarify the mechanism of aspirin inhibiting the development of liver cancer, we detected the expression of PD-L1 in different groups of liver tissues.The expression of PD-L1 in the DEN group was significantly higher than that in the ASA group (P = 0.0002) and the control group (P = 0.0026).And by comparing the DEN group with the DEN + ASA group, it can be shown that aspirin can significantly inhibit the expression of PD-L1 (P = 0.0495) (Fig. 8a).We further treated Hep G2 cells and Hep 3B cells with 2.5 mM and 5 mM aspirin for 24 h and analyzed PD-L1 expression.Figure 8b illustrates that the expression of PD-L1 was significantly lower in the 2.5 mM intervention group and the 5 mM intervention group compared to the control group (P = 0.0048; P = 0.0008).Similar results were observed when Hep 3B cells were treated with aspirin, as shown in Fig. 8c.The expression of PD-L1 in the 2.5 mM intervention group and the 5 mM intervention group was significantly lower than that in the control group as well (P = 0.0040; P = 0.0001). Discussion Our study preliminarily demonstrated that aspirin could inhibit the occurrence of HCC by reducing the expression of PD-L1, and this study is the first according to our literature review. 
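As an illustration of the kind of group comparisons reported above (relative proliferation under different aspirin concentrations compared against untreated controls, each summarised with a P value), the sketch below shows one conventional way such CCK-8 readouts could be analysed. The absorbance values, blank correction, group names and the choice of Welch's t-test are assumptions made for the example, not the authors' actual data or analysis pipeline.

import numpy as np
from scipy import stats

# Hypothetical CCK-8 absorbance readings (arbitrary units) for one time point;
# each array holds replicate wells. Values are illustrative only.
cck8 = {
    "control":       np.array([1.02, 0.98, 1.05, 1.00, 0.97, 1.03]),
    "aspirin_2.5mM": np.array([0.81, 0.85, 0.78, 0.83, 0.80, 0.84]),
    "aspirin_5mM":   np.array([0.62, 0.66, 0.60, 0.64, 0.63, 0.61]),
}
blank = 0.10  # hypothetical blank-well absorbance

control_corrected = cck8["control"] - blank
control_mean = control_corrected.mean()

for group, values in cck8.items():
    if group == "control":
        continue
    corrected = values - blank
    # relative viability: treated signal expressed as a fraction of the control mean
    viability = corrected / control_mean
    # Welch's t-test against control (an assumed choice of test for this example)
    t_stat, p_value = stats.ttest_ind(corrected, control_corrected, equal_var=False)
    print(f"{group}: viability = {viability.mean():.2f} "
          f"± {viability.std(ddof=1):.2f}, P = {p_value:.4f}")

The same per-group summary plus two-sample comparison pattern would apply to the gross indices (e.g., liver-to-body weight ratio) and western-blot densitometry reported in the study, whichever specific statistical test the authors actually used.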
HCC is one of the top three cancers causing the most deaths, and the survival and prognosis of HCC patients are poor due to delayed diagnosis and lack of effective treatment strategies.Undoubtedly, there is an urgent need for an efficient and low-toxic treatment method to prolong the overall survival time of HCC patients.Numerous previous studies have shown that aspirin plays an important role in cancer prevention and treatment.Our previous meta-analysis including large population studies also demonstrated that aspirin can inhibit the occurrence and progression of liver cancer 31 .In this experiment, we confirmed that aspirin can significantly inhibit the occurrence of liver cancer (P = 0.006) and reduce the tumor burden (P = 0.046).PD-L1, as an important immune checkpoint molecule and involved in weakening the immune response to infection, can allow cancer cells to escape immune surveillance 32 and has been shown in ovarian cancer, melanoma, colon adenocarcinoma, lung squamous cell carcinoma, breast adenocarcinoma, and many other cancers 33 .However, whether PD-L1 plays a role in aspirin inhibition of HCC remains unclear.In this experiment, we found that the level of PD-L1 in the DEN + ASA group was significantly lower than that in the DEN group (P = 0.0495).This result proves that aspirin can significantly reduce the expression of PD-L1 at the protein level in cancer cells, so PD-L1 is a new target for aspirin to inhibit the growth of liver cancer.Interestingly, we found that there was no significant difference in PD-L1 expression between the Control group and the ASA group (P = 0.1901).Therefore, we speculate that because PD-L1 is abundantly expressed in cancer cells, it leads to tumor immune escape, while normal cells express little or no PD-L1.Therefore, the inhibitory effect of aspirin is mainly reflected in cancer cells and has less effect on normal cells.Moreover, our in vitro experiments on two HCC cell lines additionally showcased that aspirin effectively suppressed the growth of HCC cells by inhibiting the expression of PD-L1.Although our study demonstrated that aspirin can reduce the expression of PD-L1 in HCC, the specific signaling pathway of aspirin action remains unclear.In the study of other cancers, Zhang et al. found that aspirin could inhibit the growth of lung cancer cells by targeting the TAZ/PD-L1 axis 16 .In addition, Xiao et al. 's study of ovarian cancer found that aspirin inhibited the expression of PD-L1 by inhibiting KAT5, thereby inhibiting the signaling pathway of PD-1 and PD-L1 to attenuate the progression of ovarian cancer 15 .Henry et al. showed that aspirin inhibited the growth of PI3K-mutant breast cancer by activating AMP-activated protein kinase (AMPK) and inhibiting the mammalian target of rapamycin complex 1 (mTORC1), independent of its effects on cyclooxygenase-2 (COX-2) and nuclear factor-kappa B (NF-κB) 34 .The above studies suggest that aspirin may act on multiple targets in HCC to suppress PD-L1 expression by regulating an integrated cellular signaling network.In addition, we note a study by Zuazo et al. demonstrating that PD-L1/PD-1 blockade induces the expansion of systemic CD8 + and CD4 + T cell subsets to exert a direct antitumor response 35 .Our study also showed that the use of aspirin can upregulate CD4 expression and inhibit PD-L1 expression, which coincides to some extent with their study.However, interestingly, our study found that the use of aspirin simultaneously inhibited CD8 expression (Fig. 
2b,d).The reason for this contradictory result may be that most of the previous studies used specific inhibitors of PD1/PD-L1, such as nivolumab and atezolizumab.However, aspirin has a wide range of action pathways, and the specific mechanism by which aspirin inhibits the expression of PD-L1 in liver cancer cells is still unclear.Therefore, aspirin may act on multiple signaling pathways and ultimately suppress CD8 expression. It is well known that inflammation is closely related to the development of tumors.In the inflammatory cascade in which inflammatory cells provide the basis for the development of the tumor microenvironment, disrupting this cascade may prevent further proliferation of malignant cells 36 .In this experiment, the DEN + ASA group exhibited a lower presence of fibrous capsules surrounding cancer and no noticeable infiltration of inflammatory cells under microscopic examination.These findings suggest that aspirin can potentially impede liver cancer progression by suppressing inflammation levels, minimizing the incidence of liver cancer, and alleviating the tumor burden.Our research further demonstrated that aspirin had a significant impact on reducing DBIL and ALP levels in HCC rats, as indicated in Fig. 4 of our blood biochemical analysis.It is well known that DBIL is converted from IBIL by UDP-glucuronosyltransferase 1A1, and both together form TBIL 37 .Previous studies have found that increased DBIL often indicates hepatocyte injury 38 .In a retrospective study of NAFLD patients, Salomone et al. found that unconjugated bilirubin levels were lower in patients with high degrees of liver inflammation and fibrosis, indicating more conversion of IBIL to DBIL 39 .Therefore, DBIL levels are closely related to the degree of liver inflammation.The significant decrease in DBIL level in the aspirin intervention group compared with the DEN group also suggested that the use of aspirin reduced the level of liver inflammation, thereby inhibiting the progression of HCC.ALP, a hydrolytic enzyme highly expressed in the liver, is associated with poor prognosis in HCC 40,41 .The abnormal increase of ALP is usually caused by cholestasis and liver inflammation, which can also lead to the development of HCC.In our study, the use of aspirin significantly reduced ALP expression levels in HCC rats compared with the DEN group, which should also be achieved by the anti-inflammatory effect of aspirin.Additionally, we observed significantly elevated levels of TBIL, ALT, AST, and CHE in both the DEN and DEN + ASA groups compared to the control group.However, there was no notable difference between the two treatment groups (Fig. 
4).As conventionally understood, TBIL, ALT, AST, and CHE are usually elevated in response to liver inflammation 42,43 .Consequently, all these indices were significantly higher in the DEN group than in the control group.Intriguingly, ASA treatment exhibited a tendency to mitigate liver inflammation and subsequently reduce these parameters, albeit without significant differences observed.TP and its major component, ALB, are mainly synthesized by the liver, and therefore liver cancer is often reduced 44 .However, in our study, TP was higher in the DEN group and the DEN + ASA group than in the control group, which may be due to the fact that the chronic wasting stage of HCC has not yet been entered.TCHO metabolism is closely related to the liver, and liver function disorders due to HCC affect TCHO metabolism.Our study revealed a slight elevation in TCHO levels in rats from both the DEN and DEN + ASA groups, although the difference was not statistically significant (Fig. 4i).TG is mainly synthesized by the liver, and previous studies have shown that TG reduction is closely related to the high risk of HCC 45 .A population study in Korea also showed that decreased TG levels increased the occurrence of HBV-related liver cancer 46 .In our study, TG levels in the DEN and DEN + ASA groups were significantly lower than those in the control group (Fig. 4j), suggesting that low TG levels are highly correlated with HCC, which coincides with the results of the above two population studies. However, our study also indicates a potential risk of bleeding with aspirin use.Given that the liver is the main site for the synthesis of coagulation factors, both exogenous and endogenous coagulation pathways are highly dependent on the liver 47 .Consequently, alterations in the quantity and quality of coagulation factors due to liver disease result in varying degrees of coagulation dysfunction.However, aspirin's anti-platelet and coagulation effects 48 might potentially heighten the bleeding risk among HCC patients.Our study demonstrated this risk, as indicated in Fig. 5, where PLT, FIB, and TT indexes in the DEN + ASA group were lower compared to the control or ASA groups.It is worth noting that although PT, PTR, INR, PT%, and APTT were not significantly different between groups, we still cannot ignore the potential risk of bleeding (Fig. 5b-f).Because a variety of cytokines are involved in the balance of hemostasis, PT and INR will not be sufficient to show the true state of the body when there is a lack of procoagulant and anticoagulant factors at the same time 47 . In conclusion, our findings identify a novel mechanism by which aspirin alleviates liver cancer.Aspirin has been shown to fight the growth of liver cancer.PD-L1 has been determined to be decreased in aspirin-suppressed liver cancer.Aspirin inhibits the expression of PD-L1 and causes liver cancer growth arrest.In addition, the anti-inflammatory effect of aspirin has also enhanced its effect of blocking the occurrence and development of liver cancer to a certain extent.Our findings suggest that aspirin may be a promising new drug for liver cancer treatment.In the future, the upstream molecular mechanism of its inhibition of PD-L1 expression in liver cancer cells should be explored in more detail, and a large-scale population study should be conducted to explore the advantages and disadvantages of its single drug and combined targeted drugs in the treatment of liver cancer. 
Establishment of animal liver cancer model and specimen processing Thirty male SD rats (purchased from the Experimental Animal Center of Xi'an Jiaotong University), weighing about 170-210 g, were reared in separate cages, fed with standard chow, regularly changed bedding, and fed ad libitum for one week.From the second week, the rats were randomly divided into the control group (n = 5), ASA group (n = 5), DEN group (n = 10), and DEN + ASA group (n = 10).DEN group and DEN + ASA group were prepared with distilled water to prepare a 0.01% DEN (McLean, 99.9% purity) solution.0.01%DEN was freely consumed for 6 weeks, followed by 1 week of discontinuation, followed by DEN feeding for 10 weeks and discontinuation 49 .The control group was given free drinking water; the ASA group and the DEN + ASA group were given aspirin (10 mg/kg per day) by gavage.The general conditions of the animal were observed every day 50 .Rats can be fed in individual cages if they are in very poor condition. Before euthanasia, all animals were anesthetized with sodium pentobarbital (30 mg/kg, intraperitoneal injection).After successful anesthesia, blood was taken and the liver and spleen were obtained.Then, the relevant data such as liver size, color, texture, weight, and presence or absence of cancer nodules were rapidly observed and recorded.Finally, liver tissues were fixed in 4% paraformaldehyde solution for 24-48 h, embedded in paraffin, and cut into 3-4 μm thick sections for HE staining. The experimental procedures were approved by the Medical Ethics Committee of the Second Affiliated Hospital of Xi'an Jiaotong University.The study was reported in accordance with the ARRIVE guidelines.All procedures were carried out in accordance with institutional guidelines. Immunohistochemical analyses Rat liver sections with a thickness of 5 μm were cut from paraffin blocks and mounted on slides.Deparaffinized hydration was performed using xylene and varying concentration gradients of alcohol.Then, antigen thermal repair was performed using a citric acid antigen repair solution at pH 6.0.After antigen repair was completed, endogenous peroxidase was blocked using 3%H 2 O 2 and 10% goat serum.Further, rabbit polyclonal antibodies against CD4 (Servicebio, China, dilution: 1:200) and CD8 (Servicebio, China, dilution: 1:500) were dropped onto each section and incubated overnight at 4 °C.The second antibody (goat anti-rabbit, Abcam, USA) was incubated Figure 2 . Figure 2. IHC staining of CD4 and CD8 in liver tissues of rats in different groups: (a,b) Expression of CD4 and CD8 in the DEN group was detected by IHC.(c,d) Expression of CD4 and CD8 in the DEN + ASA group was detected by IHC.(e)The expression of CD4 in the DEN + ASA group was significantly higher than that in the DEN group (P < 0.01).(f) The expression of CD8 in the DEN + ASA group was significantly lower than that in the DEN group (P < 0.001).The images for IHC staining were all at 20X magnification.**P < 0.01; ***P < 0.001. Figure 3 . Figure 3. Tumor burden of rats in each group: The rats in the DEN group, the DEN + ASA group, the control group, and the ASA group were compared in terms of (a) body weight, (b) liver weight, (c) liver-to-weight ratio, (d) spleen weight and (e) spleen-to-weight ratio.(f) The number of tumors and the sum of tumor length and diameter were compared between the DEN group and the DEN + ASA group.*P < 0.05; **P < 0.01; ***P < 0.001. Figure 6 . 
Inhibitory effect of aspirin on Hep G2 cell proliferation at different concentrations and time gradients: panels (a-f) show the proliferation of Hep G2 cells treated with different concentrations of aspirin for 6, 12, 18, 24, 48, and 72 h, respectively, compared with the control group. The aspirin concentration gradients were 1.25, 2.5, 5, and 10 mM. *P < 0.05; **P < 0.01; ***P < 0.001.
Figure 7. Inhibitory effect of aspirin on Hep 3B cell proliferation at different concentrations and time gradients: panels (a-f) show the proliferation of Hep 3B cells treated with different concentrations of aspirin for 6, 12, 18, 24, 48, and 72 h, respectively, compared with the control group. The aspirin concentration gradients were 1.25, 2.5, 5, and 10 mM. *P < 0.05; **P < 0.01; ***P < 0.001.
Figure 8. Aspirin significantly reduced the expression of PD-L1: (a) differences in PD-L1 expression in liver tissues of rats in the DEN, DEN + ASA, ASA, and control groups; (b) differences in PD-L1 expression in Hep G2 cells at different intervention concentrations compared with the control group; (c) differences in PD-L1 expression in Hep 3B cells at different intervention concentrations compared with the control group. The blots were cut prior to hybridization with antibodies during blotting, and images of all replicate blots are included in Supplementary Fig. 1. *P < 0.05; **P < 0.01; ***P < 0.001.
The Diurnal and Semidiurnal Patterns of Rainfall and its Correlation to the Stream Flow Characteristic in the Ciliwung Watershed , West Java , Indonesia Based on the data analysis of 16 years of TMPA dataset, the common patterns of rainfall over the Ciliwung River Basin are diurnal and semidiurnal. Those patterns can be associated by a stationary or moving rainstorm with different magnitude and direction. Based on hydrological model simulations, both the pattern and movement have a significant role to the discharge. At the downstream area, the discharge that triggered by semidiurnal pattern of rainfall can produces higher peak discharge and longer flood duration than diurnal pattern. This result open possibility to improve our prediction on design discharge. Introduction The spatial-temporal characteristics of rainfall have significant role on characteristic of stream flow [1]- [4].These characteristics strongly effected by moving rainstorm [2].The greatest effect of moving rainstorms occurred when the velocity of storms is half of stream flow velocity in downslope direction [1].de Lima and Singh simulated the moving rainstorm using numerical and physical models, and concluded the runoff characteristics are sensitive to the speed and direction of storm movement [3], [4]. An analysis of the spatial-temporal characteristic can be simplified by analyze the spatial distributions and the temporal patterns of rainfall.Based on the four severe floods that occurred in Jakarta, the spatial-temporal variation is inherent characteristics on the extreme rainfalls.Table 1 showed the distribution of rainfall and storm motions on flood events in Jakarta.The distribution of rainfall can be concentrated on upstream areas (example: 1996 and 2007 flood events), or downstream areas (example: 2002 and 2013 flood events) and the storms can moving to downslope (example: 2002) or upslope (example: 2013) or downslope-upslope (example: 2007).downstream downslope [5] [6] [7] 3 February 2007 upstream Downslopeupslope [7], [8] 4 January 2013 downstream upslope [9] The objective of this study is to study the common characteristics of rainfall and related to peak flow and floods durations.The characteristics shown by: spatial distributions (upstream, midstream, downstream), temporal patterns of rainfall and its combinations (diurnal and semidiurnal). Ciliwung River was diverted to West Flood Canal at Manggarai hydrological station (HS).This condition increasing the complexity of hydrological analysis in CRB, especially for the downstream area after the diversion.For this reason, this study is limited to the area before the diversion, precisely at M.T Haryono HS.Total area of CRB for an outlet at M.T Haryono HS reached 352 km 2 (Fig. 1).CRB has a complex topography.The upstream area consists of steep mountains with maximum altitude above 2982 m.The downstream area is a flat floodplain. Fig. 1.The map of study area. Rainfall Data The Tropical Rainfall Measuring Mission (TRMM) Multisatelite Precipitation Analysis (TMPA) was used in this study.The TMPA was produced by a calibrationbased sequential scheme for combines multiple satellites to produce precipitation data at fine scales, 0.25º×0.25ºand 3 hourly, and available since 1998 [2].This data was compare to other TRMM-derived products to analysis the extreme rains over the Maritime Continent [3].Nesbitt and Zipser (2003) used the previous product of TMPA to analysis the diurnal cycle of rainfall in the tropical area [4]. 
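The composite calculation applied to these series (detailed in the next subsection, following Ruhf and Cutrim, 2003) reduces to a grouped accumulation: the 3-hourly totals are summed for each (month, hour-of-day) pair over the 16-year record and then divided by the number of years. A minimal sketch is given below, assuming the series for one grid cell is already available as a pandas Series; the synthetic random input, variable names, and layout are illustrative assumptions, not the actual TMPA processing chain used in this study.

```python
import numpy as np
import pandas as pd

# Synthetic 3-hourly rainfall series for one TMPA grid cell (1998-2013),
# in mm per 3-hour interval; stands in for the real TMPA 3B42 data.
rng = pd.date_range("1998-01-01", "2013-12-31 21:00", freq="3h")
rain = pd.Series(np.random.gamma(0.3, 5.0, len(rng)), index=rng, name="rain_mm")

n_years = rain.index.year.nunique()  # 16 years in the study period

# Composite (after Ruhf and Cutrim, 2003): accumulate rainfall for the same
# 3-hour slot and month over all years, then divide by the number of years,
# giving the average annual accumulation per slot (mm/year).
composite = rain.groupby([rain.index.month, rain.index.hour]).sum() / n_years
composite.index.names = ["month", "hour"]

# Hour-by-month table: rows are months (1-12), columns are 3-hour slots (0-21 UTC).
hour_month = composite.unstack("hour")
print(hour_month.round(1))
```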
This study used 16 years period of TMPA data, 1998-2013 to produce the general pattern of daily rainfall over CRB.There is 3 grid over CRB named by the upstream, midstream and downstream grid (Fig 1). Diurnal variation analysis method The diurnal variation of rainfall identified composite analyzes according to the method that proposed by Ruhf and Cutrim (2003).The 3-hour rainfall data from TMPA was accumulated for the same hour (hh), for each month (mm), of 16 years (yy) study periods, then divided by the number of years.This method produced the average annual accumulation rainfall for each 3-hour (mm/year).However, since monthly analysis is required, the month is considered as a calculated variable (equations ( 1 Hydrological model and simulation scnearios The distributed hydrological model was used to investigate the influence of diurnal characteristics of rainfall to overland flow at CRB.The numerical model used in this study is Gridded Surface Subsurface Hydrological Analysis (GSSHA) [2].GSSHA is the distributed and physics-based hydrological modeling.The model also has two dimensional (2D) overland flow and 1D stream flow that has been fully coupled. The domain of the model is Ciliwung Watershed from the upstream area at Gede-Pangrango Mountain area to the outlet at M.T. Haryono hydrological station, DKI Jakarta Province.The domain has 250 m horizontal resolution and divided by 234 x 99 grids and 5704 grid is active cell (inside watershed).Fig. 1 and Fig. 2 showed the domain of the hydrological model. Fig. 2. Hydrological Domain in 2D visualization The topography, rivers, and land cover data are GISvector dataset with scale 1:25,000 which can be downloaded from the website of Badan Informasi Geospasial (BIG) at http://portal.ina-sdi.or.id/.The contour topography was converted to DEM and used as the surface elevation of the model.The river data was used to correct the DEM data, especially in the gently slope area.The land cover dataset is used as a proxy for roughness parameterization.There are four classifications for roughness parameterization that is: forest, cultivated area, water body, and urban area.The roughness value for This model was run to simulate two following scenarios, namely (1) the combination of rainfall pattern on upstream, midstream, and downstream area that produced by TMPA data analysis (section 3.1), and (2) the stationary and moving rainfall (~5 m/s) based on previous research.The diurnal variation of rainfall in CRB variated for each location and month.Fig. 3 shows the average of the 16 years of total annual rainfall for each month and each hour in the upstream, midstream, and downstream area of CRB based on TMPA data.The figure can represent the frequency of rain for each location, hour and month.In upstream and midstream area, the rain commonly occurs in afternoon.The morning rain, the rain that occurs in the morning, is more frequent in the midstream area than upstream.In the downstream, the morning rain becomes dominant than the afternoon rain.This indicates the diurnal patterns getting more dominant in the upstream area and vice versa for the semidiurnal pattern, getting more dominant in the downstream area.Where the diurnal pattern is a pattern with single rain event in a day and the semidiurnal pattern has two rain events in a day. Variation of Diurnal Rainfall The domination of semidiurnal pattern in downstream area was studied by Siswanto et al. 
( 2016) based on long term period of observation data at the Jakarta Observatory meteorological station [3].Semidiurnal pattern getting dominant in December-January-February (DJF) than the other month. May to September is a dry month and August is the driest month, therefore those months will not be further analyzed in this study.Besides of diurnal patterns, there are also other rainfall characteristics triggering the extremes discharge, which is rain propagation.Due to data limitations in Indonesia, there has few of research discuss this topic.However, several studies have mentioned their existence in CRB [6], [9], [8], [11].Wu et al. ( 2013) stated the convective system that triggered major floods event in 2013 propagate to upstream by 8 m/s of speed.Sulityowati showed that the speed of rainfall propagation can vary between 2.57 -15.42 m/s based on radar data on 9-15 February 2010 [11].Numerical model and laboratory experiment in artificial domains shows that propagated rain to downstream direction will lead to greater peak discharge.Based on this, a new parameter added to the scenario, which is the movement of rain.At this moment only considered the movement to downstream at 5 m / s of speed.The addition of this parameter produces four additional scenarios illustrated in Fig. 8. Fig. 8.As in Fig. 7, but for moving rainstorm to downstream directions The role of the rainfall pattern combination The GSSHA model was applied to produce hydrograph of discharge effected by diurnal-semidiurnal pattern of rainfall.This simulation consists of combinations of a simple spatiotemporal variability of daily rainfall over CRB that described in sections 3.1. Fig. 9. Mass curve of the calibration processes The discharge data provided by Ciliwung-Cisadane Large River Basin Agency (BBWS: Abbreviation in Indonesian) was used to calibrate the model.The calibration output showed a good agreement between observation data and simulation result for the volume of rivers discharges at outlet without any significant bias (Fig. 9).This result indicates the model parameterization was good enough to explain the role of rainfall volume on overland flow.The calibration process is produced by the roughness parameterization and the ratio of effective rainfall.That roughness value is 0.184, 0.01, 0.01, and 0.001 for forests, cultivated areas, urban areas, and water bodies, respectively.Effective rainfall was used in this study is 0.7.Comparison of the DD, DS, SS and 2SS scenarios showed: 1) diurnal pattern of rainfall produced higher peak flow than semidiurnal, but 2) semidiurnal pattern of rainfall triggered semidiurnal pattern on discharge and prolong flood duration (Fig. 11).The rainfall with 15 mm/hour of intensity and 3 hours on the duration that occurred on the whole watershed area have produced the hydrograph with 470 m 3 /s of peak flow and 8 hours of flood duration.If the semidiurnal rainfall with 7.5 mm/hour of rainfall intensity and 2 x 3 hour of durations occurs, respectively in the afternoon and early morning, the hydrograph shape will be turned into semidiurnal with the values of the first and second peak flow respectively are 0.38 and 0.5 times of the peak flow of scenario DD.Besides of that, the flood duration increased from 8 hours to 11 and 13 hours on the DS and SS scenarios, respectively. The simulation results in Fig. 
11 showed the existence of the role of the diurnal-semidiurnal patterns of rainfall against to in the diurnal-semidiurnal patterns of discharge.Existence of diurnal-semidiurnal patterns of the rivers water level, generally has a logarithmic function to the discharge, has shown by result of spectral analysis conducted by Sulityowati et al. ( 2014) at Manggarai hydrological station (about 5 km to downstream area from outlet of the hydrological model in this study) [11].The study showed the correlation between the diurnal pattern of rainfall and water level, but have no explanation about correlation for the semidiurnal pattern.The semidiurnal pattern of discharge (Fig. 11, blue curve), the 2nd peak flow showed a larger value than 1st peak flow.This condition is caused by a superposition between hydrographs that produced by the afternoon and morning rain on semidiurnal rainfall.The value of the second peak will be higher when the time difference of the afternoon and morning rain is shorter.These results indicate that the semidiurnal pattern has the potential to trigger greater peak discharge for maximum rainfall intensity occur in both afternoon and morning (15 mm / hour in this study, the 2SS simulation scenarios). The 2SS scenario has twice amount of daily rainfall than the SS scenario.This scenario accordance with the This condition has a possibility to produce two rainfall events that have maximum intensity.The result opens possibility to analyze the major flood events that occurred in 2002, 2007 and 2013 known have a long flood duration [9], [8], [12], [7]. In addition to the diurnal-semidiurnal pattern, Sulityowati et al. ( 2014) also explains about rain propagation that can occur to upstream or downstream direction with speeds varying between 2.57 -15.42 m/s based on radar data on 9-15 February 2010 [11].Wu et al. ( 2013) stated the convective system that triggered major floods event in 2013 propagate to upstream about 8 m/s [9].Rain propagation was also described in Hadi et al. (2006) and Trilaksono et al. (2012) which each addressed the flood events of 2002 and 2007 [6], [8].That research result indicates that, besides of the semidiurnal diurnal pattern, rainfall propagation also plays an important role in triggering the peak discharge that causes flooding. The DD-M, DS-M, SS-M and 2SS-M scenarios are rainfall scenarios that combine diurnal-semidiurnal patterns of rainfall and rain propagation.The simulation results show that rain propagation to downstream area will amplify peak discharge about 1.3, 1.1, 1.2 and 1.2 times for DD, DS, SS, and 2SS patterns, respectively (Fig. 12).However, this rain propagation will result in shorter flood duration.[3], [4].Those study also mentioned downstream movement direction will cause the opposite condition, ie lower peak discharge but longer flood duration.In the real case, the convective system causing major floods in Jakarta has varying speed and direction values, such as the 2013 flood event, the conventive system was propagated to upstream direction [9] and the 2007 flood event the convective system propagated to the downstream area at night but to the upstream area at morning [8]. 
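Throughout this comparison, peak flow and flood duration are the two summary metrics taken from each simulated hydrograph (their definitions are illustrated in Fig. 10). As a purely illustrative aid, the sketch below shows one way to extract both metrics from a discharge time series; the synthetic hydrograph and the flood threshold are assumptions, not GSSHA output.

```python
import numpy as np

# Illustrative only: a synthetic hourly hydrograph (m^3/s), not GSSHA output.
hours = np.arange(0, 48)
discharge = 80 + 390 * np.exp(-0.5 * ((hours - 18) / 4.0) ** 2)  # single peak near 470 m^3/s

def peak_flow(q: np.ndarray) -> float:
    """Maximum discharge of the hydrograph."""
    return float(q.max())

def flood_duration(q: np.ndarray, threshold: float, dt_hours: float = 1.0) -> float:
    """Total time (hours) the discharge stays above a flood threshold."""
    return float(np.count_nonzero(q > threshold) * dt_hours)

print(f"Peak flow      : {peak_flow(discharge):.0f} m^3/s")
print(f"Flood duration : {flood_duration(discharge, threshold=250.0):.0f} h")
```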
Conclusions The results of this study highlight the spatiotemporal variance of rainfall patterns and its role in peak flow and flood durations.In addition, the role of rainfall propagation also compared to the role of rainfall patterns.The result shows the rainfall pattern variated for each location and month.The semidiurnal pattern becomes dominant in the whole area of CRB in January and February which is the months with most frequent major floods in CRB.Based on the hydrological model simulation, for the equal amount of daily rainfall (a single event for semidiurnal pattern): (1) the diurnal pattern will produce higher peak-flow than the semidiurnal pattern of rainfall and (2) vice versa for flood duration.The double amount of rainfall for the semidiurnal pattern will produce higher peak-flow and flood durations.This conditions will be enlarger by the moving rainstorm to downstream directions. The results of this research indicate there are two factors that trigger major floods are: (1) semidiurnal pattern of rainfall that occurs in the whole of the watershed, (2) the rainfall propagation to downstream area.Both of these factors need to be studied further in real cases.This research was funded by Riset ITB 2017 and was the part of "Peranan Hujan yang Bergerak terhadap Debit Ekstrem di Sungai Ciliwung" research. )).The calculation result was plotted on the hour-month chart for represented the variability of diurnal variation for each month, Fig 3 and Fig 4 are the results of this method. Fig. 3 . Fig. 3. Total annual rainfall for each month and each hour in (a) upstream, (b) midstream and (c) downstream area of CBR. Fig. 4 . Fig. 4. The general patterns of diurnal rainfall for each month from October to April in three grid of TMPA data in CRB Fig. 5 . Fig. 5. Topographic profile of Ciliwung River from the upstream and the position of hydrology station (red arrows) and TMPA grid boundary (vertical black lines) Fig. 6 . Fig. 6.The cumulative probability function of: a) amount, b) duration, c) average intensity and d) maximum intensity for each rainfall event Fig. 7 . Fig. 7. Rainfall scenarios for hidrological simulation based on general patterns of diurnal rainfall Fig. 10 . Fig. 10.The illustration of hydrological simulations results and the definition of peak flow, flood volume, and flood duration that used in this study Table 1 . The rainstorm characteristics in the Ciliwung Watershed on extreme
The Effect of Sulfated Zirconia and Zirconium Phosphate Nanocomposite Membranes on Fuel-Cell Efficiency To investigate the effect of acidic nanoparticles on proton conductivity, permeability, and fuel-cell performance, a commercial Nafion® 117 membrane was impregnated with zirconium phosphates (ZrP) and sulfated zirconium (S-ZrO2) nanoparticles. As they are more stable than other solid superacids, sulfated metal oxides have been the subject of intensive research. Meanwhile, hydrophilic, proton-conducting inorganic acids such as zirconium phosphate (ZrP) have been used to modify the Nafion® membrane due to their hydrophilic nature, proton-conducting material, very low toxicity, low cost, and stability in a hydrogen/oxygen atmosphere. A tensile test, water uptake, methanol crossover, Fourier-transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), thermal gravimetric analysis (TGA), and scanning electron microscopy (SEM) were used to assess the capacity of nanocomposite membranes to function in a fuel cell. The modified Nafion® membrane had a higher water uptake and a lower water content angle than the commercial Nafion® 117 membrane, indicating that it has a greater impact on conductivity. Under strain rates of 40, 30, and 20 mm/min, the nanocomposite membranes demonstrated more stable thermal deterioration and higher mechanical strength, which offers tremendous promise for fuel-cell applications. When compared to 0.113 S/cm and 0.013 S/cm, respectively, of commercial Nafion® 117 and Nafion® ZrP membranes, the modified Nafion® membrane with ammonia sulphate acid had the highest proton conductivity of 7.891 S/cm. When tested using a direct single-cell methanol fuel cell, it also had the highest power density of 183 mW cm−2 which is better than commercial Nafion® 117 and Nafion® ZrP membranes. Introduction Because of their outstanding conversion efficiency, high power density, and zero pollution emissions, proton-exchange-membrane fuel cells (PEMFCs) are regarded as environmentally acceptable energy-conversion devices for both stationary and portable power applications [1]. Proton-exchange membranes (PEMs) are a key component of PEMFCs because they carry protons between the anode and the cathode while isolating electrons and avoiding fuel crossover. Electrochemical devices that are both durable and efficient, such as PEMFCs and beyond-Li-ion batteries such as Li-sulfur [2,3] and Li-O 2 batteries [4][5][6]. PEMs such as Nafion ® 117 maintain a greater conductivity and mechanical and chemical stability at lower temperatures in fuel cells [7][8][9][10][11]. The phase of separation between Nafion ® 's two major monomers (the hydrophobic Teflon-like backbone and the hydrophilic sulfonic-acid-terminated side chain) determines its characteristics. The thermo-chemical environment and material interfaces of Nafion ® play a major role in this segregation. However, when run at higher temperatures, these perfluorosulfonic acid membranes face issues such as increased fuel crossover and reduced proton conductivity due to water loss, as well as a higher cost, limiting their use in PEMFCs [12,13]. The insertion of nanosized inorganic fillers into the polymeric matrix to construct hybrid composite membranes has received a lot of interest among those investigating methods to synthesise efficient PEM materials [14]. 
At low to medium temperatures, the introduction of hygroscopic inorganic nanomaterials such as silica, titanium dioxide, zirconium dioxide, and nanoclays into the polymer matrix has improved features of composite membranes such as water retention capacity and ionic conductivity [15]. These hydrophilic fillers can provide many hydrogen bonding sites, allowing membranes to absorb a large amount of water. When the amount of filler is increased, it weakens the link between the organic polymer and inorganic filler which causes poor interfacial interaction, resulting in a loss of conductivity [6]. When inorganic acid such as sulfated zirconia is calcined at 300 • C, it improves proton conductivity (14.5 mS/cm), with better ion-exchange capacities (IEC) of 0.54 meq/g and greater water uptake due to sulfate ions, which raises the sulfonic acid content inside the membrane [16]. Furthermore, the addition of sulfated zirconia to the membrane gives an additional proton ion within the Nafion matrix. In addition, the modified Nafion ® membrane containing S-ZrO 2 nanoparticles exhibits less swelling, better mechanical properties, and lower methanol permeability. Although mesoporous sulfated zirconia offers the potential to broaden the applications of zirconia-based acid materials, its low thermal stability remains a major drawback, causing the mesoporous sulfated zirconia to collapse when the template is removed at a high temperature. Zirconia is predominantly cationic rather than polyxo in high acidic conditions. However, the polyoxo ions can occur when zirconia is sulfated with ammonia sulfate [Zr (OH) 2 (SO 4 2− ) x (H 2 O) y ] n n(2−2x) [17]. At temperatures above 100 • C, the hydroxyl groups on the oxide surface can effectively retain water molecules and prevent membrane dehydration. Furthermore, incorporating S-ZrO 2 nanoparticles into Nafion ® membranes enhances the sensitivity to high-temperature response. Zirconia oxide is the sole metal oxide with four chemical properties: acidity or basicity and a reducing or oxidizing agent [18]. The fascinating zirconium phosphate (ZrP), a layered acidic inorganic cation-exchange material with the formula Zr(HPO 4 ) 2 2H 2 O, has been extensively explored [19]. ZrP is known for its great thermal and chemical stability, as well as its high ion conductivity and mechanical strength. Its layered construction enables the incorporation of numerous guest species of diverse sizes between their layers [20,21]. ZrP has been integrated into several polymer-based nanocomposites in recent investigations. These have shown good mechanical, thermal, and barrier properties [22]. The major goal of this article is to use sulfated and phosphated zirconia nanoparticles to modify Nafion ® membranes to achieve high proton conductivity, good thermal and chemical stability, and improved water absorption. Membrane Nanocomposite Synthesis To eliminate contaminants, Nafion ® 117 membranes were boiled for 1 h in hydrogen peroxide (3 percent solution), then boiled in sulfuric acid (0.5 M), and finally soaked in distilled water at 80 • C for 1 h [23,24]. After pre-soaking the pure membranes in methanol to open the pores, 5 wt% of ZrP [9] and 5 wt% of S-ZrO 2 [25] nanoparticles were added. The membranes were soaked five times before being heated at 100 • C for 2 h [26]. The obtained solution was maintained at room temperature for 24 h. The thicknesses of the membranes were measured using a digital micrometer (0.18 mm). 
To obtain an accurate value, the thickness was read more than three times.

Tensile Test
The mechanical strength of the membranes was measured under a uniaxial testing system. The breadth, thickness, and length were measured using a Vernier calliper. All membranes had a clamping area of 4 mm × 10 mm. The stress applied to the sample was calculated using the observed thickness of 0.18 mm. Membranes were tested at 25 °C using the CellScale UStretch instrument at actuator speeds of 40, 30, and 20 mm/min.

Measurements of the Water Contact Angle
Contact angles were used to determine the hydrophilicity of the membrane surfaces (Phoenix 300 contact angle analyser equipped with a video system). For analysis, the membrane was cut into strips and placed on glass slides. With the tip of the syringe close to the sample surface, a droplet of deionized water (0.16 µL) was placed onto the membrane surface at ambient temperature. To obtain an average value, the measurement was repeated ten times at different positions on the membrane surface. The wetting process was recorded from the moment the water droplet adhered to the sample surface until no further change at the surface was noticeable.

Water Uptake (WU) and Swelling Ratio (SR)
The membranes were immersed in distilled water for 24 h at 80 °C, 60 °C, and 30 °C and then weighed and measured. The water uptake and swelling ratio of the soaked membranes were calculated using the equations below:

W_up (%) = [(m_wet − m_dry)/m_dry] × 100

SR (%) = [(L_w − L_d)/L_d] × 100

where W_up is the WU percentage, m_wet the membrane wet mass, m_dry the membrane dry mass, L_w the membrane wet length, and L_d the dried length of the membrane.

Ion-Exchange Capacity (IEC)
The IEC of the membranes was determined from the titration results using the equation below:

IEC (meq/g) = (V_NaOH × C_NaOH)/m_d

where V_NaOH is the volume of titrated NaOH, C_NaOH the concentration of NaOH, and m_d the dried mass of the membrane.

Measurements of the Methanol Permeability
A two-compartment permeation-measuring cell was used to determine the methanol crossover. Methanol solution (50 mL) was placed in compartment (A) and distilled water (50 mL) in compartment (B). The membrane was mounted between the two compartments, with a diffusion area diameter of 3.5 cm. The readings were collected at 30 °C, 60 °C, and 80 °C using 5 M and 2 M methanol solutions. The methanol permeability (P) was computed from the equation below, with P obtained from the slope of C_B(t) versus time:

C_B(t) = (A × P)/(V_B × L) × C_A × (t − t_0)

where C_B(t) is the methanol concentration in compartment B at time t, C_A the methanol concentration in compartment A, V_B the volume of distilled water in compartment B, A the effective permeating area, and L the membrane thickness.

Measurement of the Proton Conductivity
A four-point probe conductivity cell was used to measure the conductivities of the membranes. The proton conductivity was measured galvanostatically at a current of 0.1 mA over the frequency range 1 MHz to 10 Hz and estimated using the equation below:

σ = L/(R_s × A)

where R_s denotes the measured membrane resistance, A the area of the membrane normal to the current flow, and L the thickness of the membrane.

The Cell Polarization and the Fabrication of the Membrane Electrode Assembly
The performance of the membranes was tested in a direct methanol fuel cell (DMFC). The membrane electrode assembly (MEA) was prepared using 20% Pt on Vulcan XC-72R in Nafion® solution for the catalyst ink and Pt on carbon cloth as the electrode substrate. Pt on carbon cloth was used for both the anode and the cathode of the MEA. The MEA was assembled without hot pressing.
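Before moving on to the cell tests, the equations above can be cross-checked numerically. The short sketch below implements them directly; all input values, variable names, and unit choices are illustrative assumptions rather than measurements from this study.

```python
# Membrane property calculations following the equations above.
# All input values in the example run are hypothetical, not data from this study.

def water_uptake(m_wet_g: float, m_dry_g: float) -> float:
    """Water uptake in % of dry mass."""
    return (m_wet_g - m_dry_g) / m_dry_g * 100.0

def swelling_ratio(l_wet_mm: float, l_dry_mm: float) -> float:
    """Dimensional swelling ratio in % of dry length."""
    return (l_wet_mm - l_dry_mm) / l_dry_mm * 100.0

def ion_exchange_capacity(v_naoh_ml: float, c_naoh_m: float, m_dry_g: float) -> float:
    """IEC in meq per gram of dry membrane (mL * mol/L = mmol = meq for NaOH)."""
    return v_naoh_ml * c_naoh_m / m_dry_g

def methanol_permeability(slope_cb_m_per_s: float, v_b_cm3: float,
                          thickness_cm: float, area_cm2: float, c_a_m: float) -> float:
    """Permeability P in cm^2/s from the slope of C_B(t) versus time."""
    return slope_cb_m_per_s * v_b_cm3 * thickness_cm / (area_cm2 * c_a_m)

def proton_conductivity(thickness_cm: float, resistance_ohm: float, area_cm2: float) -> float:
    """Proton conductivity sigma in S/cm."""
    return thickness_cm / (resistance_ohm * area_cm2)

if __name__ == "__main__":
    print(f"WU    = {water_uptake(0.75, 0.52):.1f} %")
    print(f"SR    = {swelling_ratio(42.0, 32.0):.1f} %")
    print(f"IEC   = {ion_exchange_capacity(4.8, 0.1, 0.35):.2f} meq/g")
    print(f"P     = {methanol_permeability(8.0e-5, 50.0, 0.018, 9.6, 5.0):.2e} cm^2/s")
    print(f"sigma = {proton_conductivity(0.018, 0.35, 0.45):.3f} S/cm")
```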
At 60 • C, fuel cells were tested with a 2 M methanol solution. On a single fuel-cell test, the galvanostatic potential of the fuel cell was measured in the open air. Figure 1A shows the FTIR spectra of Nafion ® S-ZrO 2 and Nafion ® ZrP nanocomposite membranes in comparison to Nafion ® 117 membrane and Figure 1B shows the FTIR spectra of S-ZrO 2 and ZrP nanoparticles. Figure 1A(a-c) shows that the O-H stretching vibration of the membranes is 3456 cm −1 , which corresponds to physically adsorbed water [27,28]. However, as shown in Figure 1A(a-c), the peaks at 3456 cm −1 for Nafion ® S-ZrO 2 and Nafion ® ZrP nanocomposite membranes are significantly lower than those for commercial membranes. This may be due to the incorporation of nanoparticles into the nano-composite membranes, which increases the water content. Figure 1A(a-c) shows the O-H bending vibration of free water molecules at 1630 cm −1 , due to symmetric S-O stretching, the membranes have a comparable peak at 1060 cm −1 [29,30] and a band at 1145 cm −1 and 1201 cm −1 were formed due to symmetric C-F stretching [31]. Furthermore, the C-O-C stretching caused the peaks at 976 cm −1 and the 512 cm −1 band was due to symmetric O-S-O bending, whereas the 632 cm −1 band was due to C-S group stretching [31,32]. Asymmetric stretching vibrations of the Zr-O-Zr bond were also assigned to the peaks at 636 cm −1 and 515 cm −1 , respectively, which were identical to the Nafion ® 117 membrane's transmittance peaks [33]. This could be due to the Nafion ® matrix's well-distributed inorganic components. The bands at 1619 cm −1 were allocated to H-O-H bending vibration mode in Figure 1A(a), which was slightly similar to the bands at 1648 cm −1 and 1636 cm −1 for ZrP and S-ZrO 2 as shown Figure 1B(a,b); this may have been due to the sulfate group's coordinated molecular water [33]. The peaks of Zr-O and P-O 4 can be seen in Figure 1A(b) at 797 cm −1 , 509 cm −1 , and 446 cm −1 , respectively, this could be due to ZrP nanoparticles embedded in the Nafion ® membrane and the C-H stretching of the modified Nafion ® membrane, with stretch vibrations between 2925 cm −1 and 2852 cm −1 [34,35]. Membrane Morphology To produce Nafion ® /ZrP and Nafion ® /S-ZrO 2 nanocomposite membranes, 5 wt% nanoparticles (ZrP and S-ZrO 2 ) were incorporated into commercial Nafion ® 117 membrane. The morphology of the obtained membranes was examined using scanning electron microscopy (SEM). Figure 2a shows that the Nafion ® 117 membrane is dark in colour and free of nanoparticles. Figure 2b shows a Nafion ® /ZrP nanocomposite membrane with uniformly distributed ZrP nanoparticles and fewer agglomerates in the membrane matrix. This can also be seen in SEM micrographs of ZrP nanoparticles, which show the presence of well-oriented nanoparticles with a very smooth surface, as shown in the Figure 2b insert. Figure 2c shows the significant difference in surface morphologies observed under modified Nafion ® 117 with S-ZrO 2 nanoparticles that were well scattered and agglomerated. As illustrated in the Figure 2c insert, this could be because the synthesized sulfated zirconia was made into the tiniest particles, which clustered together and agglomerated in their varied shapes. The electrodes are expected to have the highest proton conductivity (ionic conductive groups of sulfated zirconia exist on its solid surface) as confirmed in Table 1 [36]. 
Figure 3 shows three-dimensional atomic force microscopy (AFM) surface images for Nafion ® /S-ZrO 2 and Nafion ® /ZrP nanocomposite membranes at a scan size of 10 µm by 10 µm. Figure 3a,b shows that the surface roughness of Nafion ® /S-ZrO 2 and Nafion ® /ZrP nanocomposite membranes was 41.46 nm and 18.59 nm, respectively, on topography images. The rougher surface of modified Nafion ® nanocomposite membranes increases electrode contact [37]. The brightest areas in these images show the highest point of the membrane surface, while the dark areas show the valleys or membrane holes, as seen in Figure 3a,b. The surface roughness of the Nafion ® /S-ZrO 2 membrane was higher than that of the Nafion ®/ ZrP nanocomposite membranes, as numerous small peaks and valleys were replaced by many small ones, resulting in a smooth membrane surface ( Figure 3a) and Table 1 [38]. Figure 3b shows the dark spots which are made up of a polymer matrix that does not contain any nanoparticles [39]. Furthermore, Figure 3a shows the inadequate bright spots which indicate the appropriate distribution and aggregation of particles in a Nafion ® matrix [39]. Figure 4A illustrates the XRD diffraction patterns of Nafion ® /S-ZrO 2 nanocomposite membranes, commercial Nafion ® 117 membranes, and Nafion ® /ZrP nanocomposite membranes, respectively. Figure 4A(a,b) reveals that the diffraction peaks of the Nafion ® /S-ZrO 2 and Nafion ® /ZrP nanocomposite membranes are at 17 • , which is slightly lower than that of the commercial membrane [40]. These can also be seen on the modified membranes' diffraction peaks at 39 • in Figure 4A(a,b), which are slightly lower than the commercial membrane. This could be due to the well-distributed nanoparticles within the Nafion ® matrix, as confirmed by SEM results, which reduces the intensity of the diffraction peak. The powder XRD patterns of the produced S-ZrO 2 and ZrP nanoparticles are shown in Figure 4B. The structure of ZrP is shown by a series of distinctive reflections in the range of 0-50 • , whereas the distinctive reflections of S-ZrO 2 are in the 0-100 • range. Figure 4A(c) indicates that the commercial Nafion ® 117 membrane only has two diffraction peaks at 17.5 • and 39 • 2θ, this is due to the ionomer's perfluorocarbon chains being semi-crystalline [41]. As a result of the broken hydrogen bonding within the Nafion ® 117 membrane, membranes incorporating nanoparticles tend to be amorphous with a decrease in crystallinity. Thermo-Gravimetric Analysis (TGA) TGA was used to determine the derivative thermogravimetric (DTG) and thermal stability of modified membranes and Nafion ® 117 membranes. To assess the thermal properties of the membranes, thermal stability tests were carried out. Thermal stability is critical in defining the operating temperature of a fuel-cell application. The TGA results of the Nafion ® 117 membrane, Nafion ® /ZrP, and Nafion ® /S-ZrO 2 nanocomposite membranes follow a three-stage deterioration pattern, as shown in Figure 5. The first step corresponded to absorbed water evaporation, thermal degradation's second stage, the polymer matrix was then thermally oxidized in the third stage. The thermal stability of modified Nafion ® membranes with S-ZrO 2 nanoparticles was better than that of modified Nafion ® membranes with ZrP nanoparticles in Figure 5(a,b), as it began to lose weight at temperatures above 300 • C, whereas Nafion ® /ZrP began to lose weight at temperatures below 150 • C. 
This could have been due to the well-distributed S-ZrO 2 in the form of small particles, as SEM results show. Furthermore, at around 150 • C, Nafion ® /ZrP began to lose weight, which corresponded to water adsorption as shown in Figure 5(b). The decomposition of the sulfonic acid groups caused the second weight loss at 340 • C [42]. The degradation of the polymer backbone chain may have been the cause of the third weight loss at 570 • C. This decreased thermal degradation could be attributed to the inorganic filler's composition and intimate interaction with the hydrophobic Nafion ® backbone, as opposed to the commercial Nafion ® 117, which decomposed at 380 • C [43]. Figure 5 (DTG insert) shows that the nanocomposite membranes had better heat stability about 340 • C, but the Nafion ® 117 membrane had better thermal stability up to 240 • C (DTG insert). This could be because of the inorganic nanofillers used in Nafion ® membranes [44] that operate as a better insulator and mass transport barrier to the volatile compounds produced during decomposition. As a result, it is ideal for fuel-cell applications. Due to the evaporation of adsorption bound water to the sulfonic groups, the commercial Nafion ® 117 membrane in Figure 5(c) initially lost weight at 100 • C. [8]. At 380 • C, the second weight loss could be attributed to sulfonic group degradation [42]. The degradation of the polymer backbone chain may have been the cause of the third weight loss at 550 • C [45]. We may conclude that reducing the mobility of the Nafion ® chain delays the initial weight loss and thermal degradation of modified membranes compared to unmodified membranes. Tensile Tests Tensile tests were used to determine the membrane's mechanical strength and the findings are shown in Figure 6. Figure 6a-c shows the stress-strain curves of the Nafion ® 117 membrane and the Nafion ® /ZrP and Nafion ® /S-ZrO 2 nanocomposite membranes at 20, 30, and 40 mm/min [46][47][48]. The elasticity and flexibility of the membranes at 0.6 stress versus strain are demonstrated at a stress rate of 20 mm/min. The modified membrane with inorganic nanofiller improved the tensile strength within the membrane, as shown in Figure 6b,c, which could be attributed to the nanofiller's incorporation into the Nafion ® matrix. When ZrP was added to Nafion ® , the tensile stress was lowered to 1300 kPa at a strain rate of 40 mm/min, this could have been due to the small agglomeration of ZrP nanoparticles in the Nafion ® matrices, which resulted in the modified membrane being brittlely fractured, whereas the Nafion ® /S-ZrO 2 shows a greater tensile stress of 2630 kPa at the same strain rate. This could be attributed to well-distributed S-ZrO 2 with minimal agglomeration, as seen by SEM and AFM data, as aggregated nanoparticles may have had an impact on mechanical strength. Furthermore, good contact between the membrane and nanoparticles would improve nanocomposite reinforcement and fuel-cell durability, which is a more important requirement for the production and operating process. Figure 6a-c shows the Nafion ® /S-ZrO 2 tensile stress-strain curves, which demonstrate a significant improvement as it achieved a tensile stress of 2630 kPa at 20, 30, and 40 mm/min, which was twice that of Nafion ® /ZrP (1630 kPa) and Nafion ® 117 (990 kPa). 
The enhanced tensile stress of Nafion ® /S-ZrO 2 membranes may be related to the presence of ammonia sulphate ions within the membrane, which promote the movement and flexibility of polymer chains, resulting in mechanical strength suitable for fuel-cell applications. Furthermore, the nanocomposite membrane had a higher stress-strain than the Nafion ® 117 membrane. Overall, the results demonstrated that adding sulfated zirconia to the Nafion ® membrane improved the stress-strain properties, which are a good DMFC features [49]. Methanol Permeability At different methanol concentrations (2 M and 5 M) and temperatures of 30 • C, 60 • C, and 80 • C, the methanol permeability of Nafion ® 117 membrane and Nafion ® /ZrP and Nafion ® /S-ZrO 2 nanocomposite membranes was measured. There was no methanol crossover seen for all membranes at varied temperatures and lower concentrations of 2 M methanol [50] as shown in Figure 7. Figure 7 shows that a membrane's methanol crossover is influenced by its affinity for both water and methanol, as well as the amount of empty space within the membrane [51]. However, because of the nanocomposite's dense internal structure and greater filler loading, the methanol molecules have a longer diffusion path. As a result, the permeability of methanol in nanocomposite membranes decreases. Furthermore, because methanol permeability is caused by the movement of molecules across the membrane, the size of the transport molecules must be considered while analysing methanol permeability. According to Yang et al., lowering the methanol concentration lowers the methanol crossover because the concentration gradient is lower [52]. As a result, a higher concentration of 5 M methanol solution was used in this study. At 60 • C, the methanol permeability of Nafion ® 117 membrane and Nafion ® /ZrP and Nafion ® /S-ZrO 2 nanocomposite membranes was 8.84 × 10 −7 cm 2 /s, 0 cm 2 /s, and 0 cm 2 /s (no crossover), respectively, as shown in Figure 7. The methanol permeability of modified and unmodified Nafion ® membranes increased as the temperature rose, as shown in Figure 6. When the temperature is raised to 80 • C, the results demonstrate that nanocomposite membranes have a lower methanol penetration, indicating that water permeation is greater than methanol permeation at high temperatures. This is because methanol molecules are larger than water molecules and are more likely to be obstructed by space limits inside the membrane structure [51]. As shown in Figure 7, the methanol permeability of Nafion ® 117 membrane and Nafion ® /ZrP and Nafion ® /S-ZrO 2 nanocomposite membranes was 1.99 × 10 −6 cm 2 /s, 1.55 × 10 −6 cm 2 /s, and 1.50 × 10 −7 cm 2 /s, respectively. The nanocomposite membrane had a lower methanol permeability than commercial Nafion ® 117, which was due to the addition of ZrP and S-ZrO 2 to Nafion ® 117, which improved the barrier properties of Nafion ® membrane towards methanol. Furthermore, by preventing methanol from migrating through the membrane, the well-dispersed nanoparticles may limit methanol crossing [53]. Because methanol crossover can affect fuel efficiency, a reduced or low methanol crossover is critical in DMFC applications. In addition, modified Nafion ® nanocomposite membranes appear to be potential electrolytes for use in fuel cells. 
Water Contact Angle, Water Uptake, Dimensional Swelling Ratio, Ion-Exchange Capacity, and Proton Conductivity Measurement In fuel-cell applications, water wettability within the membrane matrix is critical because it promotes protonic conductivity of the membrane by allowing protons to move through it [40]. Figure 8a shows how contact angle was used to determine water wettability. A polymer with a smaller contact angle is more hydrophilic, while high contact angle indicates a more hydrophobic polymer. Because of its hydrophobic nature, the commercial Nafion ® 117 membrane attained a contact angle larger than 90 • , as illustrated in Figure 8a [10]. As shown in Figure 8a, the contact angle of Nafion ® /S-ZrO 2 and Nafion ® /ZrP nanocomposite membranes was smaller, ranging from 80 • to 68 • , this could be owing to the introduction of inorganic material with a hydrophilic property that holds water [54]. In addition, the modified membranes demonstrated that inorganic material impregnating the Nafion ® membrane surface results in hydrophilicity [55]. The hydrophobicity of Nafion ® membranes increased when they are treated with hydrophobic nanoparticles. The dimensional swelling ratio at 30 • C, 60 • C, and 80 • C showed a slightly increase with the increases in temperature as shown in Figure 8b and Table 2. However, when the Nafion ® /ZrP nanocomposite membrane was soaked at the higher temperature of 80 • C, a higher dimensional swelling ratio of 35% was obtained when compared with Nafion ® 117 membrane (29%) and Nafion ® /S-ZrO 2 nanocomposite membrane (33%). Moreover, when the temperature increased, it also increased the dimensional stability and water uptake of the membranes. Water uptake % (30 • C) 30 43 40 Water uptake % (60 • C) 32 44 44 Water uptake % (80 • C) 34 49 47 Figure 8c and Table 2 shows the water uptake of Nafion ® 117 membranes and Nafion ® /ZrP nanocomposite membranes, and Nafion ® /S-ZrO 2 nanocomposite membranes at 30 • C, 60 • C, and 80 • C. As the temperature rose from 30 • C to 80 • C, all membranes exhibited an increase in water uptake [56]. At 80 • C, the Nafion ® /ZrP and Nafion ® /S-ZrO 2 nanocomposite membranes had the highest water uptake of 49% and 47%, compared to 34% for Nafion ® 117 membranes as shown in Figure 8(c). This could be due to the use of hydrophilicity of the ZrP nanoparticles, which helped the membranes retain water [57,58]. Moreover, this could be attributed to an excellent distribution of hygroscopic S-ZrO 2 nanoparticles that hold water within the membrane matrix. Table 1 shows that the modified membrane with ZrP and S-ZrO 2 nanoparticles demonstrated enhanced water uptake at a higher temperature of 60 • C than the unmodified membrane. This could be attributed to the hydrophilic character of acidic nanoparticles, which raises the acidity and surface areas of nanoparticles integrated into the Nafion ® matrix, as well as the existence of a high concentration of polymer-filler interfaces, which increases the free volume [59]. Furthermore, nanoparticle impregnation causes clusters in the pore of the Nafion ® membrane, resulting in the nanocomposite membrane's higher water uptake [40,60,61].This conclusion is consistent with the hydrophobic site's reduced contact angle in Figure 8a. The proton conductivity and IEC of Nafion ® 117 membranes, Nafion ® /ZrP nanocomposite membranes, and Nafion ® /S-ZrO 2 nanocomposite membranes are shown in Figure 8d and Table 2. 
The Nafion ® /ZrP and Nafion ® /S-ZrO 2 nanocomposite membranes had an IEC of 1.46 meg/g and 1.3 meg/g greater than the Nafion ® 117 membrane's IEC of 0.93 meg/g. This could be because acidic nanoparticles are impregnated into the Nafion ® membrane, which provides the membrane with a strong acid site [58], with the inclusion of sulfate ions as proton-exchange sites within the Nafion ® matrix [62]. The nanocomposites' IEC rises as more nanoparticles are incorporated into the membrane. The proton conductivity of a polymer electrolyte membrane in a fuel cell is the most essential factor that influences its performance. At room temperature, the proton conductivity of the Nafion ® /ZrP nanocomposite membrane was 0.031 Scm −1 , compared to 7.89 Scm −1 and 0.113 Scm −1 for the Nafion ® /S-ZrO 2 nanocomposite membrane and Nafion ® 117 membrane. It is possible that zirconia phosphate nanoparticles within the membranes are causing this decrease in proton conductivity [11,63] because their ionic activity and water mobility are both affected by high temperatures. Furthermore, as the length of the hydrophilic block rises, so does their ionic conductivity. In addition, sulphating zirconia nanoparticles with NH 3 SO 4 acid improved the proton conductivity of the nanocomposite membranes by promoting the migration of sulfonated groups to form cluster aggregates via the strong electrostatic contacts of the Na + counter ions. Fuel-Cell Performance Single-cell DMFC tests were done at 60 • C to further confirm the influence of acidic nanoparticles on the electrochemical performance of commercial Nafion ® 117 membrane. The polarization and power density graphs for DMFCs are shown in Figure 9 and Table 3. The peak density of the Nafion ® /ZrP nanocomposite membrane was 206.79 mW cm −2 , which is greater than the Nafion ® /S-ZrO 2 nanocomposite membrane (183 mW cm −2 ) and Nafion ® 117 membrane (126.04 mW cm −2 ) at the current densities of 189 mA cm −2 . Therefore, Nafion ® 117 membrane incorporated with ZrP obtained higher power density (145 mW cm −2 ) than commercial membrane, with current density of 350 mA cm −2 as shown in Figure 9a. This may have been due to the nanoparticles being well deposited within the membrane pores, that are good at water retention and enhance the conductivity of modified membrane [64]. The best fuel-cell performance is ascribed to the better water retention capabilities of the composite membrane with acidic nanoparticle filler. Furthermore, the increased power density could be attributable to the use of ZrP, which reduces the ohmic resistance of the Nafion ® membrane [65]. The Nafion ® /S-ZrO 2 nanocomposite membrane's superior performance in DMFC is attributed to its proton conductivity and decreased methanol permeability. The modified membrane had a higher voltage than the commercial membrane, as seen in Figure 9b, when compared to the Nafion ® 117 membrane (0.58 V), and Nafion ® /ZrP (0.91 V) and Nafion ® /S-ZrO 2 (0.85 V) nanocomposite membranes at current densities of 200 mAcm −2 . This indicates that the nanocomposite membranes are a good barrier to prohibit the crossover of both the fuel and the oxidant. Furthermore, this could be attributed to a larger percentage of ZrP in the Nafion matrix membrane. The improvement in voltage and current density can be seen by the decreased weight percent incorporation. 
Conclusions

The impregnation approach was used to successfully prepare Nafion®/ZrP and Nafion®/S-ZrO2 nanocomposite membranes with low methanol permeability and high proton conductivity. Because of the nature of the inorganic fillers and their tight interaction with the hydrophobic Nafion® backbones, the thermal stability of the nanocomposite membranes only began to degrade at a high temperature of 450 °C. Furthermore, when compared to the Nafion® 117 membrane, the water uptake, IEC, and linear expansion of the nanocomposite membranes were improved. The results revealed that the nanocomposite membranes obtained a lower water contact angle than the commercial Nafion® membrane. Moreover, the results show that incorporating S-ZrO2 in the Nafion® membrane enhances the conductivity compared to the membrane modified with ZrP nanoparticles. The results demonstrate a decrease in methanol permeability of the modified Nafion® membranes at the higher temperature of 80 °C and a 5 M methanol concentration when compared to the Nafion® 117 membrane, which may be due to the incorporation of the inorganic components within the membranes. The improved membranes' lower methanol permeability and strong proton conductivity further verified their feasibility for use in fuel cells. The inclusion of ZrP and S-ZrO2 in the membranes was confirmed by the SEM and FTIR findings, and it also improved water uptake. At 80 °C, the Nafion®/ZrP and Nafion®/S-ZrO2 nanocomposite membranes' water uptake and swelling ratio ranged from 47 to 49% and 33 to 35%, respectively. These findings suggest that the nanocomposite membranes have higher IEC with improved conductivity. The power density of the Nafion®/ZrP (206.79 mW cm−2) and Nafion®/S-ZrO2 (183 mW cm−2) nanocomposite membranes was higher than that of the commercial Nafion® 117 membrane (126 mW cm−2). The Nafion®/S-ZrO2 nanocomposite membrane produced a maximum power density of 188.6 mW cm−2 and an OCV of 0.98 V, indicating that Nafion®/S-ZrO2 nanocomposite membranes are promising for fuel cells. The results also showed that the membrane modified with ZrP nanoparticles obtained the highest fuel-cell performance, at a maximum power density of 188.6 mW cm−2 and an OCV of 0.98 V, but with a shorter lifetime compared with the Nafion®/S-ZrO2 membrane, which attained a longer life in the fuel-cell tests.

Funding: Financial support was provided by the University of South Africa (AQIP) and the National Research Funding Agency (NRF). UNISA is acknowledged for the SEM measurements and CSIR for the methanol crossover results. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: All materials for this study are presented in this article and are available on request from the corresponding authors.
v3-fos-license
2022-12-12T16:05:46.751Z
2022-12-10T00:00:00.000
254561128
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/jobe/2022/4851044.pdf", "pdf_hash": "fb5490ea47b9b2384b905320c9cfef146c4b4067", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2831", "s2fieldsofstudy": [ "Medicine" ], "sha1": "02c3235244233adfcdb7e223b128ce93a81b804f", "year": 2022 }
pes2o/s2orc
High Prevalence of Prediabetes and Associated Risk Factors in Urban Areas of Pontianak, Indonesia: A Cross-Sectional Study

Uncontrolled prediabetes can develop into type 2 diabetes mellitus (T2DM). The incidence of T2DM among adults in Pontianak, Indonesia, has been reported to be remarkably high. Therefore, this study aimed to investigate the risk factors for prediabetes in adults living in urban areas of Pontianak, Indonesia. A cross-sectional study was conducted in 5 subdistricts of Pontianak. A total of 506 adults underwent screening to obtain subjects with fasting blood glucose (FBG) of ≤124 mg/dL and aged >30 years. Blood pressure and body mass index (BMI) were measured. Interviews using a structured questionnaire were performed to obtain data on predictor variables (age, sex, education, income, health insurance, tobacco use, history of hypertension, gout, high cholesterol level, frequency of exercise per week, and diabetic education). The prevalence of prediabetes among the subjects was remarkably high (76.4%). Subjects were predominantly above 40 years, female, had low income and a low education level, and had health insurance. About a third of the subjects had a history of hypertension, gout, and high cholesterol level, respectively. The exercise frequency was mostly less than 3 times/week, and the BMI was mainly classified as overweight or obese. The result of Spearman's rho correlation showed that age (r = 0.146; p=0.022) and BMI (r = 0.130; p=0.041) significantly correlated with prediabetes incidence. Moreover, the chi-square analysis demonstrated that lack of health insurance ownership (OR = 4.473; 95% CI 1.824–10.972; p ≤ 0.001), history of hypertension (OR = 3.096; 95% CI 1.542–6.218; p=0.001), and history of gout (OR = 2.419; 95% CI 1.148–5.099; p=0.018) were associated with prediabetes incidence. For all these significant risk predictors except BMI, the significant associations were found only among female subjects after sex-specific analysis. Moreover, multivariate logistic regression showed that lack of health insurance ownership (OR = 5.956; 95% CI 2.256–15.661; p ≤ 0.001), history of hypertension (OR = 3.257; 95% CI 1.451–7.311; p=0.004), and systolic blood pressure (OR = 2.141; 95% CI 1.092–4.196; p=0.027) were risk factors for prediabetes. It is concluded that the prevalence of prediabetes is probably high, especially among urban people in Pontianak, Indonesia. Health insurance ownership and hypertension may have an important role in prediabetes management. The risk factors might differ between males and females.

Introduction

Prediabetes, also referred to as impaired glucose tolerance, is a serious health problem which can potentially develop into type 2 diabetes mellitus (T2DM). The blood glucose levels in prediabetes are higher than normal, but not yet high enough to be classified as diabetes [1]. The global prevalence of both prediabetes and diabetes is reported to be increasing rapidly [2]. The report showed that more than one-third of American adults had prediabetes, while over 84% were unaware of their condition, and about 5-10% progressed to diabetes. The prevalence of T2DM in Indonesia remains high, and the International Diabetes Federation has ranked Indonesia as the 7th country with the highest number of cases, with about 10.7 million people suffering from diabetes in 2019. Specifically, the prevalence of T2DM in West Kalimantan Province, Indonesia, according to the Indonesian Basic Health Research 2018, was 1.6%, which was somewhat lower than the national prevalence (2.0%).
Unfortunately, data from the Pontianak City Health Office, West Kalimantan, in 2019 showed that the incidence of T2DM was 42% among people aged over 40 years [3]. Also, it was notably reported that the cumulative prevalence of prediabetes and T2DM sharply increased from 2007 (5.7%) to 2018 (10.9%) [4]. Furthermore, prediabetes is associated with an increased risk of all-cause mortality and cardiovascular diseases in the general population, and the risk gets higher in patients with atherosclerosis [5]. Several factors may give rise to prediabetes, and its prevalence increases with age and obesity [6]. According to the Centers for Disease Control and Prevention (CDC), the risk factors for prediabetes are overweight, age of 45 years or older, having a family history of T2DM, lack of physical activity (<3 times per week), gestational diabetes, and polycystic ovary syndrome [7]. In addition, a study found that there are other risk factors for prediabetes and T2DM, including hypertension, a low level of HDL-cholesterol, having a first-degree relative(s) with diabetes, previously having elevated blood glucose levels, and membership of ethnic-minority communities [8]. Other risk factors have also been reported, such as smoking, low levels of education and income, and being female [9]. Prediabetes is a condition that can be managed, which could therefore prevent it from developing into T2DM. The management can be done by screening for the risk factors for prediabetes [10]. The screening involves both noninvasive and laboratory measures. The noninvasive method includes identifying age, sex, body mass index (BMI), blood pressure, family history, and lifestyle, while the laboratory measures include blood glucose tests [11]. Moreover, the criteria to define prediabetes and diabetes have been established by the American Diabetes Association, where prediabetes is categorized by an HbA1c test in the range of 5.7 to 6.4%, or fasting blood glucose (FBG) of 100 to 125 mg/dL, or an oral glucose tolerance test of 140 to 199 mg/dL [12]. Pontianak is located on Kalimantan Island, which has a tropical climate where fruit is supposed to be abundantly available. However, unlike on other islands such as Sumatera and Java, the fruit availability is highly dependent on the season, leading to irregular fruit consumption in Pontianak [13]. Moreover, fruit consumption is correlated with the prevention of glucose intolerance [14]. In addition, the preferred foods are mostly high in energy, fat, and sugar, such as fried rice, noodles, fritters, and cake [15]. Therefore, this study aimed to investigate the risk factors for prediabetes among adults over 30 years in Pontianak, where the incidence of diabetes has recently been very high.

Study Design. A community-based cross-sectional study was carried out in Pontianak, West Kalimantan Province, Indonesia. The design was used to determine the prevalence and risk factors of prediabetes. Data collection was conducted from February to April 2021, strictly following the standard COVID-19 prevention protocol provided by the Indonesian Ministry of Health. All protocols were approved by the Health Research Ethics Committee of the Faculty of Public Health, Diponegoro University, Indonesia (Certificate of Approval No. 226/EA/KEPK-FKM/2020). Target Population and Sampling Technique. The target population of this study was the inhabitants of all 5 subdistricts in Pontianak. Adult subjects were invited for screening tests, which were coordinated by the village heads.
From 512 participants in the screening tests, a total of 246 subjects met the inclusion criteria (aged >30 years, not pregnant, no history of chronic diseases, not under treatment with oral antidiabetics, and willing to participate by signing the informed consent). Based on a power of 80% and a 5% standard deviation, the calculated minimal sample size was 246 [16]. The flow diagram of subject recruitment can be seen in Figure 1 (a total of 512 subjects aged >30 years and without chronic illness underwent the FBG screening test (Roche); 262 subjects were excluded due to FBG >125 mg/dL, chronic illness, or not signing the informed consent; 250 subjects met the inclusion criteria; 6 subjects were excluded due to incomplete questionnaire responses; 246 subjects were analyzed).

Research Instrument and Measurement. The research instrument used in this study was a structured questionnaire which consisted of questions about age, sex, education level, income, tobacco use, health insurance, history of some diseases including hypertension, gout, and high cholesterol level, frequency of exercise per week, and participation in diabetes prevention education. The age of the subjects was divided into two categories (<40 years and ≥40 years); based on a previous study, age over 40 years increases the risk of developing diabetes [17]. Education level was grouped into low education level (primary and secondary school) and high education level (senior high school and higher education). The regional minimum wage/month was used to divide income into low (<Rp. 2.515.000) and high (≥Rp. 2.515.000) [18]. Data on tobacco use, health insurance ownership, history of hypertension, gout, and high cholesterol level, and participation in diabetes prevention education consisted of two categories (yes or no). Exercise activity per week was divided into two categories (≤3 times and >3 times, for 30-50 min per session); exercising more than 3 times per week can improve the glycemic control and body composition of T2DM patients [19]. This study also measured the nutritional status of the subjects using body mass index (BMI), calculated as weight in kilograms divided by the square of height in meters. The BMI of the subjects was then stratified according to the WHO classification as underweight (<18.5 kg/m2), normal (18.5-24.9 kg/m2), overweight (25-29.9 kg/m2), and obese (≥30 kg/m2). Hypertension status and FBG levels were measured at the Prodia Clinical Laboratory, Pontianak (accredited by the National Accreditation Committee; ISO 15189:2012). Blood pressure was measured using a calibrated Omron M6 Comfort following the method of Bell and Williams [20]. Prior to measurement, subjects were asked to relax and remain seated for 25 minutes. The cuff was then placed on the left upper arm in a relaxed position. The subject was categorized as hypertensive if the systolic blood pressure was ≥140 mmHg and/or the diastolic blood pressure was ≥90 mmHg [21]. The FBG levels were measured by collecting 5 ml of a blood sample drawn from the antecubital vein. The blood samples were collected in the morning after the subject had fasted for approximately 8 hours. The automated glucose oxidase method was used to obtain FBG levels [22]. The subjects were categorized as having prediabetes if the FBG level was between 100 and 125 mg/dL (according to American Diabetes Association criteria) [12]. FBG was used as the only parameter to diagnose prediabetes, as it is an adequate, simple, and safe procedure to diagnose prediabetes, suitable for a community-based study, and also has a notable correlation with HbA1c [1, 23].
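The categorizations just described are simple threshold rules, and a minimal Python sketch makes them concrete: the ADA fasting-glucose bands, the WHO BMI classes, and the blood-pressure cut-off used for hypertension. The example subject at the end is hypothetical and only illustrates how a record would be classified.

```python
def classify_fbg(fbg_mg_dl):
    """ADA fasting-glucose categories used in the study."""
    if fbg_mg_dl < 100:
        return "normal"
    if fbg_mg_dl <= 125:
        return "prediabetes"
    return "diabetes (excluded at screening)"

def classify_bmi(weight_kg, height_m):
    """WHO BMI classes; BMI = weight / height^2."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        cat = "underweight"
    elif bmi < 25:
        cat = "normal"
    elif bmi < 30:
        cat = "overweight"
    else:
        cat = "obese"
    return bmi, cat

def is_hypertensive(systolic_mmHg, diastolic_mmHg):
    """Hypertensive if SBP >= 140 mmHg and/or DBP >= 90 mmHg."""
    return systolic_mmHg >= 140 or diastolic_mmHg >= 90

# Hypothetical subject, for illustration only
bmi, bmi_cat = classify_bmi(68.0, 1.58)
print(classify_fbg(112), f"BMI = {bmi:.1f} ({bmi_cat})", is_hypertensive(150, 85))
```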
Data Analysis. Descriptive statistics were used to determine the frequency and percentage of all variables. Spearman's rho, chi-square tests, and multivariate logistic regression were performed to investigate the relationship between the independent variables and prediabetes as the outcome in the present study. The association was considered significant at a p value of <0.05. In addition, the data were analysed using the IBM SPSS version 21 software with a Diponegoro University license (https://www.ibm.com/support/pages/downloading-ibm-spss-statistics-21).

Results

Of the 506 subjects who underwent screening, 246 subjects met the inclusion criteria. The subjects were mostly over 40 years (90.7%), female (80.5%), and had a low income (60.6%) and education level (63.0%). More than half of the subjects were overweight or obese (65.5%). Predominantly, they had health insurance (71.5%). Tobacco use was relatively low among the subjects (36.6%). Most of the subjects had no history of hypertension (61.0%), gout (70.3%), or high cholesterol levels (65.9%). In addition, most of the subjects had an exercise frequency of less than 3 times per week (79.3%). The subjects mainly did not participate in diabetes prevention education (61.0%). The measurements showed that 31.7% of the subjects had hypertension, while the prevalences of high systolic blood pressure (≥140 mmHg) and high diastolic blood pressure (≥90 mmHg) were 48.0% and 38.6%, respectively. We notably found a high prevalence of prediabetes among the subjects, which reached 76.4% (188 subjects). The description of the subjects' characteristics can be seen in Table 1. The relationship between the predictor variables and prediabetes status can be seen in Tables 2 and 3. The present study revealed that there was no association between sex, level of education, income, tobacco use, history of high cholesterol level, frequency of exercise, participation in diabetes prevention education, and prediabetes status. However, the Spearman's rho test showed a weak significant correlation between age (r = 0.146; p = 0.022), BMI (r = 0.130; p = 0.041), and prediabetes incidence (Table 2). Furthermore, according to the chi-square test results, subjects with no health insurance ownership (OR = 4.473; 95% CI 1.824-10.972; p ≤ 0.001), a history of hypertension (OR = 3.096; 95% CI 1.542-6.218; p = 0.001), or a history of gout (OR = 2.419; 95% CI 1.148-5.099; p = 0.018) had a higher risk for prediabetes (Table 3). Due to the wide difference in the sex ratio, we conducted separate analyses for males and females to investigate sex differences in the risk factors of prediabetes (Tables 2 and 3).
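Although the authors used SPSS, the same pipeline — Spearman's rho for continuous predictors, a chi-square test with a crude odds ratio for each binary predictor, and multivariate logistic regression for adjusted odds ratios — can be sketched with standard Python libraries. The file name and column names below are hypothetical placeholders, not the study's actual dataset.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr, chi2_contingency
import statsmodels.api as sm

df = pd.read_csv("pontianak_prediabetes.csv")   # hypothetical file with 0/1-coded variables

# Spearman's rho between continuous predictors and prediabetes status (0/1)
for col in ["age", "bmi"]:
    rho, p = spearmanr(df[col], df["prediabetes"])
    print(f"{col}: rho = {rho:.3f}, p = {p:.3f}")

# Chi-square test and crude odds ratio for one binary predictor
tab = pd.crosstab(df["no_health_insurance"], df["prediabetes"])
chi2, p, _, _ = chi2_contingency(tab)
a, b = tab.loc[1, 1], tab.loc[1, 0]   # exposed with / without prediabetes
c, d = tab.loc[0, 1], tab.loc[0, 0]   # unexposed with / without prediabetes
print(f"chi-square p = {p:.3f}, crude OR = {(a * d) / (b * c):.2f}")

# Multivariate logistic regression: adjusted odds ratios and 95% CIs
X = sm.add_constant(df[["no_health_insurance", "hypertension_history", "high_sbp"]])
fit = sm.Logit(df["prediabetes"], X).fit(disp=0)
print(np.exp(fit.params))        # adjusted ORs
print(np.exp(fit.conf_int()))    # 95% confidence intervals
```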
Discussion

The present study found that the prevalence of prediabetes was remarkably high among the selected subjects in Pontianak: two-thirds of the subjects had FBG >100 mg/dL. The prevalence was comparable with the prevalence of diabetes according to the data from the local health office. Indeed, a previous study in Uganda reported that an increased rate of prediabetes (106%) coincided with the increase in obesity, central obesity, and diabetes prevalence [24]. That study is in line with our current study, which found a high prevalence of obesity among the subjects. A significant increase in the prevalence of prediabetes has also been reported in England, the US, Iran, and Turkey [24][25][26][27].

Age, BMI, health insurance ownership, history of hypertension and gout, and systolic blood pressure were found to be significantly associated with prediabetes in the present study. Age has been reported to be a nonmodifiable risk factor for diabetes [28]. Our results showed that the significant association between age and prediabetes was observed only in females after sex-specific analysis. In women, an age of ≥40 years is associated with having menopause, which is linked to body composition changes leading to a reduction of insulin sensitivity [29]. Moreover, this study revealed that BMI and prediabetes were significantly correlated, with no sex-specific difference. Consistent with the present study, higher BMI has been reported as a strong risk factor for prediabetes [30]. A higher percentage of body fat produces larger amounts of free fatty acids, glycerol, and proinflammatory cytokines that participate in the development of insulin insensitivity [31]. From the multivariate model, subjects with no health insurance were 5.9 times more likely to have prediabetes. The results are in line with a previous study showing that uninsured people had significantly lower diabetes control than insured people [32]. Having no health insurance might indicate that the subjects did not pay much attention to their health status, thus increasing their vulnerability to diseases [33]. Lack of health insurance ownership is commonly a limitation for obtaining routine and preventive therapy, which is crucial for people with prediabetes, for whom regular medical check-ups to monitor disease progression are essentially important [34]. Besides providing medications, health insurance can provide basic health information such as nutrition education, which is a fundamental strategy to prevent prediabetes from developing into T2DM [35]. In addition, people with health insurance have significantly greater use of healthcare, which indicates high awareness of their health status [36,37]. Hypertension has an important role in prediabetes incidence. In both the chi-square analysis and the multivariate logistic regression, a history of hypertension was significantly associated with prediabetes. In addition, the measured systolic blood pressure was found to be independently associated with prediabetes. The risk of prediabetes was 3.3 and 2.14 times higher in people with a history of hypertension and high systolic blood pressure, respectively. Hypertension is a degenerative disease that is still a major public health problem in Indonesia [38]. Moreover, hypertension is also a major risk factor for glucose intolerance and diabetic complications [39,40]. High blood pressure is correlated with lower insulin sensitivity, leading to impaired glucose uptake [41]. High insulin levels can increase sodium retention in the renal tubules, which can causatively induce hypertension [42]. This fact supports the statement that diabetes and hypertension are closely correlated and have a mutually interdependent effect [43]. Furthermore, our chi-square analysis revealed that a history of gout might be a risk factor for prediabetes. Several studies have reported that people with gout have a higher risk of developing T2DM [44][45][46]. Higher uric acid levels have a strong correlation with higher adiposity and increased visceral fat deposition, leading to insulin resistance [47]. However, along with age and BMI, a history of gout was not significantly related to prediabetes based on the multivariate analysis results. We suspect multicollinearity among these independent variables. Age is a nonmodifiable risk factor for obesity, hypertension, and gout [48][49][50][51]. In addition, hypertension, obesity, and gout have been found to be correlated with each other [45,52].
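For readers who want to see how the crude odds ratios quoted in this discussion (for example, OR = 3.096 with 95% CI 1.542-6.218 for a history of hypertension) are obtained, the sketch below computes an odds ratio and its Wald 95% confidence interval from a 2×2 table. The cell counts are invented for illustration and are not the study's actual counts.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
                 outcome+  outcome-
      exposed       a         b
      unexposed     c         d
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: exposure = history of hypertension, outcome = prediabetes
or_, lo, hi = odds_ratio_ci(a=60, b=20, c=90, d=76)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # ~2.53 (1.40-4.58)
```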
After the sex-specific bivariate analysis, we found significant associations of age, health insurance ownership, history of gout, and hypertension with prediabetes only among female subjects, but not in males. Some other factors, including consumption of sugar-sweetened beverages, alcohol, intergenerational transmission of diabetes, sleep deprivation, and work stress, may be more pronounced in males, which is proposed as a reason for these findings [53]. In addition, the limited number of male subjects might also explain the insignificant associations. There are some limitations and strengths in the present study. This study used a cross-sectional design in which exposure and outcome variables were obtained in the same period; therefore, the causal relationship could not be proven. The subjects were also not well distributed in age and sex, which might explain the lack of significant associations for several variables. Also, menopausal status, family history of diabetes, history of gestational diabetes, food consumption pattern, caffeine intake, and any hormonal treatment were not considered. We also did not measure insulin and adiponectin levels, although insulin resistance is the most common factor present in prediabetes and adiponectin signaling plays an important role in both prediabetes and newly diagnosed diabetes [54]. However, the findings of the present study provide a recent update on the probably high prevalence of prediabetes among people living in a tropical urban area. The results also emphasize the role of health insurance and hypertension monitoring in T2DM prevention.

Conclusions

The present study found that the prevalence of prediabetes among adults over 30 years in urban areas of Pontianak, Indonesia, was remarkably high. Also, the study found a high prevalence of hypertension among the subjects. This study suggests that health insurance ownership and hypertension may play important roles in the incidence of prediabetes among the subjects, especially in females. This study implies that encouraging people to have health insurance is essential for diabetes prevention.

Data Availability The data sets used and/or analysed during the current study are available from the corresponding author on reasonable request. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
v3-fos-license
2017-06-29T12:35:15.000Z
2017-06-04T00:00:00.000
119229537
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1016/j.physletb.2017.10.056", "pdf_hash": "c92002bd4cedf0d39c93972796535b667481a647", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2832", "s2fieldsofstudy": [ "Physics" ], "sha1": "c92002bd4cedf0d39c93972796535b667481a647", "year": 2017 }
pes2o/s2orc
Radiative nonrecoil nuclear finite size corrections of order $\alpha(Z \alpha)^5$ to the Lamb shift in light muonic atoms On the basis of quasipotential method in quantum electrodynamics we calculate nuclear finite size radiative corrections of order $\alpha(Z \alpha)^5$ to the Lamb shift in muonic hydrogen and helium. To construct the interaction potential of particles, which gives the necessary contributions to the energy spectrum, we use the method of projection operators to states with a definite spin. Separate analytic expressions for the contributions of the muon self-energy, the muon vertex operator and the amplitude with spanning photon are obtained. We present also numerical results for these contributions using modern experimental data on the electromagnetic form factors of light nuclei. The investigation of the Lamb shift and hyperfine structure of the muonic hydrogen and helium spectrum opened a new page of studies of the energy spectra of simplest atoms. The experiments that are currently being carried out by the collaboration CREMA (Charge Radius Experiments with Muonic Atoms) [1][2][3][4] will make it possible to additionally test the Standard model, obtain more accurate values for a number of fundamental parameters, and possibly answer the question of the presence of additional exotic interactions between the particles. The inclusion of other experimental groups in this field of research (see, [5][6][7]) will allow, as planned, not only to verify the experimental results of the CREMA collaboration, but also to lead to a further increase in the accuracy of the experimental results for separate intervals of fine and hyperfine structure. The already obtained results of the CREMA collaboration show that there is a significant difference between the values of such a fundamental parameter as the charge radius of the nucleus obtained from the study of electronic and muonic atoms. As has always been the case for a long history of precision studies of the energy spectra of simplest atoms in quantum theory, one of the ways to overcome the crisis situation is related to a new more in-depth theoretical analysis, recalculation of various theoretical contributions that can be amplified in the case of muonic atoms. In this way, the problem of a more accurate theoretical construction of the particle interaction operator in quantum electrodynamics, the calculation of new corrections in the energy spectrum of muonic atoms acquires a special urgency [8]. In this work we study radiative nonrecoil corrections of special kind of order α(Zα) 5 related with the finite size of the nucleus in the Lamb shift of muonic hydrogen and helium. A preliminary estimate of the possible magnitude of such a contribution to the Lamb shift can be obtained on the basis of general factor α 6 µ 3 /m 2 1 ≈ 0.012 meV (for the µp). It shows that the contributions of this order should be studied more closely. While the theoretical contribution of a certain order is not calculated accurately, the theoretical error caused by it is retained, which, when estimated by the main factor, can reach a considerable value. For precise determination of order α(Zα) 5 contribution we should account that the distribution of the nucleus charge is described by the nucleus electric form factor. Radiative nonrecoil corrections of order α(Zα) 5 are divided into three parts: muon self-energy correction, vertex correction and correction with spanned photon presented in Fig.1. 
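The order-of-magnitude estimate quoted above, α^6 µ^3/m_1^2 ≈ 0.012 meV for muonic hydrogen, is easy to verify: in natural units the combination already carries dimensions of energy. The short Python check below uses standard muon and proton mass values (not taken from this paper) and Z = 1; the n^3 and numerical coefficients of the actual corrections are deliberately omitted, since only the scale is being estimated.

```python
import math

alpha = 1 / 137.035999        # fine-structure constant
m_mu = 105.658                # muon mass, MeV
m_p = 938.272                 # proton mass, MeV

mu = m_mu * m_p / (m_mu + m_p)   # reduced mass of muonic hydrogen, MeV

# Leading scale of the alpha(Z*alpha)^5 corrections for Z = 1:
# E ~ alpha^6 * mu^3 / m_mu^2 (an energy in natural units; 1 MeV = 1e9 meV).
scale_meV = alpha**6 * mu**3 / m_mu**2 * 1e9
print(f"alpha^6 * mu^3 / m_mu^2 ~ {scale_meV:.3f} meV")   # ~0.012 meV, as quoted
```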
This work continues the investigation of the radiative nonrecoil corrections of order α(Zα) 5 made earlier in our works [9,10] to the case of the Lamb shift. It is necessary to point out that the contribution of amplitudes shown in Fig.1 in the case of the point nucleus was calculated many years ago in [11]. Taking into account the finite size of the nucleus, the contribution of these diagrams to the Lamb shift of hydrogen atom proportional to the square of charge radius < r 2 N > was obtained numerically in [12,13]. In analytical form the contribution of amplitudes in Fig. 1 proportional to < r 2 N > was obtained in [14,15]. In this study we obtain closed integral expressions for the contributions of individual diagrams, and then calculate them analytically in the approximation, when the electric form factor of the nucleus is replaced by the square of the charge radius, and numerically, taking into account the complete expression of the form factor obtained on the basis of experimental data. We have presented the required corrections in such a clear form that can be used in the future to evaluate numerically the contributions of the most diverse simple atoms. To study the Lamb shift of muonic atom, we use a quasipotential method in quantum electrodynamics in which the bound state of a muon and nucleus is described in the leading order in the fine-structure constant by the Schrödinger equation with the Coulomb potential [16][17][18]. The first part of important corrections in the energy spectrum is determined by the Breit Hamiltonian [16][17][18]. Other corrections can be obtained by studying the different interaction amplitudes of particles (see the detailed review of Eides, Grotch and Shelyuto [19]). To evaluate radiative nonrecoil corrections of order α(Zα) 5 we neglect relative momenta of particles in initial and final states and construct separate potentials corresponding to muon self-energy, vertex and spanning photon diagrams in Fig.1. The contribution of two-photon exchange diagrams to the Lamb shift and hyperfine structure of order (Zα) 5 was investigated earlier by many authors [19][20][21][22][23]. The lepton line radiative corrections to two-photon exchange amplitudes were studied in detail in [24,25] including recoil effects. Numerous studies have shown that corrections of this type are conveniently calculated in the Fried-Yennie gauge for radiative photon [26] because it leads to infrared-finite renormalizable integral expressions for muon self-energy operator, vertex function and lepton tensor describing the diagram with spanning photon. It is convenient to begin the study of corrections α(Zα) 5 from the most general expression for the amplitudes (direct and crossed) shown in Fig. 1. In order to demonstrate the calculation technique, let us consider a direct and a crossed amplitude with arbitrary muon tensor insertion omitting a number of nonessential factors: where p 1,2 and q 1,2 are four-momenta of the muon and proton (nucleus) in initial and final states: p 1,2 ≈ q 1,2 . k stands for the four-momentum of the exchange photon. The vertex operator describing the photon-proton interaction is determined by two electromagnetic form factors F 1,2 . The propagator of exchange photon is taken in the Coulomb gauge. The lepton tensor L µν has a completely definite form for each amplitude in Fig. 1. 
Using the FeynCalc package [27] we construct the exact expressions for leptonic tensors corresponding to muon self-energy and vertex corrections and the correction of the amplitude with spanning photon which have the following form: where the functions F µν are written in explicit form in our previous papers [9,10]. To extract the Lamb shift part of the potential, we use special projection operators in (1)-(2) on the states of particles with spin 0 and 1: Inserting (6) into (1)-(2) and calculating the trace and contractions over the Lorentz indices by means of the system Form [28] we obtain the contributions to the potential for states with spin 0 V ( 1 S 0 ) and spin 1 V ( 3 S 1 ). We give here general expressions for the numerators of direct exchange amplitudes for states 1 S 0 and 3 S 1 : After that we can present the Lamb shift part of the potential by means of the following relation [29]: Neglecting the recoil effects we simplify the denominators of the proton propagator as follows: The crossed two-photon amplitudes give a similar contribution to the Lamb shift which is determined also by relations (2)-(5) with the replacement k → −k in the proton propagator. As a result the summary contribution of direct and crossed amplitudes is proportional to the δ(k 0 ): In the case of muonic deuterium [9] the tansformation of the scattering amplitude and a construction of muon-deuteron potential can be done in much the same way. The main difference is related with the structure of deuteron-photon vertex functions and projection operators on the states with spin 3/2 and 1/2. As a result three types of corrections of order α(Zα) 5 to the Lamb shift in all cases of muonic hydrogen are presented in the integral form over loop momentum k and the Feynman parameters. Below, we present the complete integral expressions for the corrections under study, as well as the results of analytic integration in the case of the expansion of the form factor in a series with preservation of the leading term proportional to r 2 N : ∆E Ls vertex−2 = 8α(Zα) 5 µ 3 π 2 m 2 1 n 3 δ l0 where an expansion looks as follows: Such an expansion means that one of the vertex operators in Fig. 1 is the vertex operator of a point particle, and the other is proportional to r 2 N . The coefficient 2 acts as a combinatorial factor. We note that the vertex correction ∆E Ls vertex−2 includes the contributions of two terms with functions F There is another correction of the same order α(Zα) 5 , determined by the muon vacuum polarization effect [19]. It can be obtained by the same scheme as the previous corrections in Fig. 1. To construct the potential, we need to use the following substitution in one of the exchange photons, ρ(ξ) = ξ 2 − 1(2ξ 2 +1)/ξ 4 , m l is the lepton mass. After integration over the spectral parameter ξ, this correction can be represented as a one-dimensional integral (a 1 = 2m l /m 1 ): (21) After the replacement (19) and analytic integration in (21), we obtain the following result (the argument given in parentheses denotes the extraction of a contribution proportional to It is clear that we do not consider here the electronic vacuum polarization (m l → m e ), which must be taken into account separately because it presents the correction of a different order. All corrections (11)- (22) are expressed through the convergent integrals. In the case of expansion (19) all integrations can be done analytically. Some of the integrals contain terms which are divergent at k = 0 but their sum is finite. 
In Table I we present separate results for the muon self-energy, vertex, and spanning-photon contributions in the Fried-Yennie gauge. We mention here once again that the analytical result for these amplitudes in the case of a point nucleus, ΔE_Ls = 4(1 + 11/128 − ln 2/2) α(Zα)^5 µ^3 δ_{l0}/(m_1^2 n^3), was obtained in [11]. In paper [25] the expressions for the lepton tensors of the vertex and spanning-photon diagrams were constructed in a slightly different form, but they lead to the same contributions to the Lamb shift of S-states. In the numerical calculation of the integrals (11), (13), (15), (17), (21) with a finite-size nucleus we use the known parameterizations of the electromagnetic form factors of the nuclei, as in our previous works [9,30-32], namely the dipole parametrization with charge radii taken from [1-3,33]: r_p = 0.84184 ± 0.00067 fm, r_d = 2.12562 ± 0.00078 fm, r_t = 1.7591 ± 0.0363 fm, r_hel = 1.9661 ± 0.0030 fm, r_α = 1.6755 ± 0.0028 fm. The obtained numerical values are written to four digits after the decimal point for the central values of the charge radii. While the accuracy of the results for the deuteron and the proton can be considered high (an error of not more than one percent), for the other nuclei the errors in the parametrization of the form factors can be estimated at 5 percent. In the approximation in which the nuclear structure corrections are proportional to the square of the charge radius, our complete analytical result from Table I coincides with previous calculations [14,15,19] (the exact analytical result is written in the book [19]). Numerical results are obtained with the nuclear charge radii taken from [1-3,33]. It follows from the results in Table I that accounting for the nuclear structure by means of the electric form factor changes the results substantially compared with those in which the expansion (19) is used. For separate contributions, the magnitude of the correction decreases by more than a factor of two. Such a significant change in the magnitude of the corrections is due to the increase in the lepton mass. While in the case of electronic hydrogen the expansion (19) works well, for light muonic atoms it is necessary to use the exact integral relations (11), (13), (15), (17), (21) obtained by us and to carry out the calculations using the explicit form of the nuclear form factors. For comparison, we present in Table I two numerical results, obtained on the basis of the calculation of the integrals (11), (13), (15), (17) and of the analytical formulas (in parentheses). Although the corrections obtained are in general small, they are nevertheless important as the accuracy of the experiments increases. To construct the quasipotential corresponding to the amplitudes in Fig. 1 we develop the method of projection operators onto bound states with definite spins. It allows us to employ different systems of analytical calculations [27,28]. In this approach, more complicated nuclear structure corrections, for example radiative recoil corrections to the Lamb shift of order α(Zα)^5 m_1/m_2, can be evaluated if increased accuracy is needed. The results from Table I should be taken into account to obtain the total value of the Lamb shift in light muonic atoms for comparison with experimental data [1-4, 34, 35].
v3-fos-license
2017-06-05T17:36:26.218Z
2007-10-01T00:00:00.000
21940429
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.scielo.br/j/rlae/a/D3XkmjZgmgCMcMHkWMPkfTJ/?format=pdf&lang=en", "pdf_hash": "1e0bfe151a79b48406511ab59f44437a1990b00b", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2836", "s2fieldsofstudy": [ "Education" ], "sha1": "1e0bfe151a79b48406511ab59f44437a1990b00b", "year": 2007 }
pes2o/s2orc
BRAZILIAN NURSING AND PROFESSIONALIZATION AT TECHNICAL LEVEL: A RETROSPECTIVE ANALYSIS

This article presents a retrospective analysis of Brazilian nursing concerning the professionalization of workers at the technical level. It also provides some indication of the trends in professional education. There is a clear indication of increased intellectual and conceptual accumulation in the four decades in which professional education in nursing at the technical level has been part of the public policy agenda. This experience serves as a reference for the formulation of new actions directed at other technical-level professionals who deliver direct care to the population. The study shows that there was a reformulation of the nursing professional qualification issue, including in the discussion the need to improve the quality of educational processes and the extensive supply of continuous education to workers already inserted in the process, in order to keep up with the constant changes in the Brazilian health system.

This is an eminently qualitative documentary research. This approach can unveil new aspects of a theme or problem, using any written material about human behavior as a source of information (1). This method is appropriate in this situation, as it allows the reconstruction of the historical facts that constituted this professionalization process (2). The vertical division (3) occurs through different worker categories, according to the training levels that compose nursing - the nurse with higher education, the nursing technician with secondary education, and the nursing aid with basic education - which is an internal work division of the profession. The internal division of nursing work (4), a characteristic originating in its institutionalization as a profession from the 19th century onwards, deserves to be highlighted because of its influence on daily work, on the relations with other health areas and on the quality of care. In the division of responsibilities and roles (vertical division), nurses are in charge of teaching, supervision and management, while technical and auxiliary staff carry out most care activities.
Three important aspects stand out about the current context of the development and trajectory of the nursing profession: the position of the nursing category, manifested through its class organs, about the professionalization of technical-level workers, the regulation of professional exercise through the laws and main initiatives already taken by the public power to induce and promote the professionalization of secondary-level nursing workers in the health sector. PROFESSIONALIZATION OF SECONDARY-LEVEL WORKERS AND ITS HISTORICAL CONSTRUCTION The search for the professionalization of secondary-level nursing workers has been a priority on ABEn's and the public policy agenda, especially in health and education, particularly from the 1970's onwards. In this period, nursing workers' education was regulated by Law 775/49, which determined nursing education in Brazil through two courses: higher education training for nurses and training for nursing aids, to be administered by public schools. Without making any claim of looking at this issue in great depth, we should highlight the detailed analysis already made about the political, social and economic circumstances that gave rise to the official recognition of the course for nursing aids, through the publication of the above mentioned legal instrument, as well as the debates by nursing leaderships in that age, emphasizing the "considerable rejection of the idea of permitting another course for nursing staff" as a strategy to fight for the preservation of the spaces nurses worked so hard to conquer, as opposed to the pressure these nurses felt to solve the problem of a lack of nursing staff to attend to the country's needs (6) . This law interiorized some conflicts in the composition of nursing, in its vertical division and in independently of nursing education (5)(6) . In view of the great need to train nursing professionals in a context of a population with low education levels and insufficient places in higher education, changes were proposed in the degree requirements for education, including the creation of the nursing aid course and State subsidies to induce and stimulate the creation of new nursing schools. The nursing course only reached the higher education level in 1962 because, although "this law started to require a secondary education degree to enter, given the scarce demand for the course, this requirement was delayed for seven years and then for another five years, accepting a mere primary education degree as an admission requirement" (7) .The Law of the where there were no nursing schools, but which had hospitals with actual possibilities to train aids" (5) .The in two years, with a third additional year for obstetric and public health nurses" (5) .Both the duration as the curriculum should be approved by a national entity, guaranteeing the validity of the nursing technician degree across the country.The course would be valid as a secondary-level degree. We highlight some points in this debate due INCREASED EDUCATION VERSUS HISTORICAL DEFICIT According to records, the first course for nursing aids was created at Ana Neri School of Nursing, in 1941, before the regulation of nursing education, which only occurred in 1949 (5) .The creation Labouré" Technical Nursing School (5) . The curriculum of courses for nursing aids has also been discussed with a view to the adequate education of nursing professionals.which permitted the preparation of nursing aids at primary level to attend to the emergency situation (8) . 
LDB 9394/96 introduced changes in professional education, which started to be considered as articulated with different forms of education, with work, with science and technology (9) and this, together with secondary (9) . THE REGULATION OF PROFESSIONAL PRACTICE: STRATEGY TO QUALIFY ATTENDANTS As .This Law entitled six professional groups to practice the Nursing profession: 1) nurse; 2) obstetric nurse; 3) nursing aid; 4) midwife; 5) practice nurse; 6) practice midwife and creates the Federal Nursing Council. While the first Law recognizes and regulates the professional practice of a given composition of the nursing category, the second departs from the existing reality and gives another direction to this composition. Law 7498/86 established the different subcategories that compose the nursing category -the nurse (higher level), the nursing technician (secondary level), the nursing aid (primary level) and the midwife-, recognized the existence of nursing workers acting without adequate professional training -the Nursing Attendant -, and set a term of ten years to solve these professionals' situation. The approval of this Law was not immediately accompanied by effective and universal policies to grant nursing workers access to professional qualification. Once the legal term has expired, a technical-political discussion process starts in the national sphere, about the quality, problem-solving capacity and continuity of the nursing functions carried out in health establishments. The different positions in this debate were characterized as follows (10) : As a result, there was a strong polarization of the work force, with qualified professionals, physicians on the one hand and less qualified professionals like attendants and similar kinds on the other.Some data illustrate this problem.In 1976, nursing attendants occupied 35.8% of all health jobs; decreasing to 29.9% in 1984; 13.8% in 1992 and 5.3% in 1999 (11) . ROUTES ALREADY FOLLOWED FOR THE PROFESSIONALIZATION OF SECONDARY-LEVEL WORKERS The Project), implanted in different Brazilian states (12)(13)(14) ; the progressive creation of Technical Health Schools in the Single Health System (ETSUS) and Training Centers for Human Resources in Health (Cefor) (12)(13) and, finally, the Professionalization Project of Nursing were not relevant to solve the quantitative deficit of nursing aids (5) , it is important to highlight the debates that occurred at that time, which already indicated aspects orienting the qualitative improvement of the education.In the same year, studies carried out and discussed during the XV Brazilian Nursing Congress recommended the improvement of the teaching staff and the revision of these courses' curriculum, including courses on public health and maternal-infant health. Moreover, they recommended the offering of courses at secondary education level. In 1967, a seminar was held in Recife to assess the first five years of the program, which recommended 1) authorizing nursing aid courses to offer intensive courses who had finished the second year of secondary education; 2) adding devices so as to allow schools to offer the exclusively professional one-year course as well. 
In 1971, another assessment seminar was held in Curitiba, during which the following recommendations were made: 1) the elaboration of a student assessment system, using the models contained in the Guide on the Curriculum of the Intensive Nursing Aid Course as a reference; and 2) the teaching staff should be better qualified, completing their education with a teaching diploma in nursing.the nursing aid and technician courses (14) . It SYNTHESES AND NEW QUESTIONS In analyzing the scenario, it can be appointed that the nursing category has always debated on secondary-level professional training in the sphere of its class organs, despite a lack of consensus about a favorable position towards its stimulation.This is due to the understanding that its growth is opposed to nurses' quantitative and qualitative growth. education mechanisms: a) it proposed the establishment of the two nursing courses in colleges; b) it authorized the schools to receive candidates with a basic education degree only; c) it defined that nursing and nursing aid courses would be supervised by travelling inspectors with a nursing degree, subordinated to the Directory of Higher Education of the Brazilian Ministry of Education (MEC); d) the State would grant resources to schools funded for nursing education and would expand resources for existing schools; and e) it regulated obstetric nursing education that functioned Bill, on the other hand, which was analyzed by the Executive Power in 1957, established nursing and obstetrics teaching, guided by the offering of professional nursing education to young people with different educational backgrounds.This Bill waited for the dissemination of the Law of National Educational Directives and Bases (LDB).In 1962, when LDB 4024/61 was issued, which defined Brazilian education at three teaching levels (primary, secondary and higher), the large-scale education of technicians at secondary-level in any activity area became a priority in the country.In this Law, article 47 deserves special attention, which regulated technical courses in the industrial, agricultural and commercial areas and delegated to regulation of other areas to the different teaching systems.In 1963, ABEn's Legislation Commission sent an extract of a study called "Observations about auxiliary nursing teaching in the country" to the competent authorities, affirming that ABEn wanted three course levels: "maintenance of the current higher and nursing aid levels; and the creation of an intermediary course, with a possible duration of three years, to train technicians for hospital nurses, possibly to the contradictions they contain.The first, extremely positive, is ABEn's concern with the quality of nursing education and development, always supported by the legitimate discussion about the profession's search for social and economic recognition, for the construction of its own knowledge, for the primacy of its activities and for its constant qualification.The second is that, in this search for the recognition of the profession, a conflict is present: the nurses' desire for expansion in quantitative terms faced the obstacle characterized mainly by the Brazilian female population's low education level, which historically constitutes the main part of the nursing work force.Thus, technical education was faced as a factor that could decrease the demand for the higher education nursing course.This position was taken by a group of nurses who, even in view of the concrete demands posed by the health sector to increase the number of 
nursing technicians, sustained a corporative position, attending to this need, a position that is still present nowadays.The third point is that, in view of the lack of consensus about technical nursing education, the education processes seem to have been conducted by initiatives taken by the public power, with the support of the Law of National Educational Directives and Bases, without a nursing leadership. of the first technical schools that would graduate nursing aids, although with possible cases of imprecision in the records, occurred in 1965-66, with special attention to the following initiatives: a) In 1965, the State Council of the State of Guanabara creates the Secondary Course in Nursing, and the same was done by the Pernambuco State Education Council; b) In 1966, the State Councils of Goiás and Paraná create, respectively, the technical course at the São Vicente de Paulo School of Nursing in Goiânia and the Experimental Technical Course at the "Catarina The main points are related to the prerequisites for entry, mainly in terms of degree, and the content itself.In view of the low education level of the population interested in courses for nursing auxiliaries, historically, recommendations have been made to compensate for deficiencies, including subjects like Portuguese and mathematics.The curriculum contents, on the other hand, are defined according to each school's reality, but the main question has been the relative proximity of this education with the higher-level course and with the need for a better preparation in realities where nurses do not exist.This granted elasticity and plasticity to the curriculum.These concerns gave rise to many devices that would regulate this education, such as Opinion 3814/76, by the Federal Education Council, which set the minimum curriculum contents for nursing aids.In the next year, Resolution 07/77 was published, which established the nursing aid and nursing technician courses as secondary-level degrees, as well as Resolution 08/77 by the Federal Education Council, a) Nursing leaderships and supervising entities if professional nursing practice exerted pressure on employers to end the practice of hiring nursing attendants and to find an immediate solution for all workers who were active in this condition of aids without certification, which these workers translated as being equal to resignation; b) The unions representing uncertified workers exerted pressure on the government to give more time and offer resources in order to facilitate their transition to nursing aids, according to legal determinations; c) Employers started to rationalize the use of nursing aids to a maximum and, in many cases, used ruses to hide that they were employing attendants and even hired them under different names*.The characterization of this scenario is complemented by the situation of human resources in the health sector in the period of the Health Reform.The main characteristic was the distortion in the occupational structure, with the increasing offering of university-level professionals, mainly physicians; the reduced offering of secondary-level professionals and the high degree of incorporation of staff without any qualification whatsoever. 
State has assumed initiatives for the professional training of nursing technicians.Special attention is given to those implanted at national level: Program for the Training of Nursing Aids in the North, Northeast and Central West, implanted in 1963 by the Health Ministry (HM) in an agreement signed with the Ministry of Education, the Pan American Health Organization (PAHO), the World Health Organization (WHO) and the International Childhood Rescue Fund (ICRF), now called the United Nations Children's Fund (UNICEF), which continued from 1963 to 1973 (5) ; the Large Scale Health Staff Training Program (Large Scale Workers (PROFAE), funded by the Inter-American Development Bank (IDB) and the National Treasury, implanted by the HM in all Brazilian states (14) .First Period: 1960's and 1970's In 1963, the Training Program for Nursing Aids in the North, Northeast and Central West was implanted, aimed at training nursing aids for medical-health care services in the North, Northeast and Central West, through financial aid to the schools.Although the results Second Period: 1980's and 1990's In this second period, from 1981 onwards, the Large Scale Health Staff Training Program (Large Scale Project) was implanted, motivated by the observation that there were about 300 thousand health service workers, practicing health actions, without any kind of qualification, which represented 50% of the health work force in the 1970's.This project was marked by detailing and methodological deepening, departing from the characterization of the workers already inserted in the services.It previewed the integration between the subject and object in the work environment, supported by the theoretical constructions on adult learning, respecting the subjects' perception of reality, without denying their knowledge coming from practice, with a view to (de)constructing and reconstructing new and more elaborate knowledge.The contents were organized in four modules called "Curricular Guidelines for Nursing Aid Training", which became known as the Integrated Curriculum, due to its methodological design and pedagogical model.It proposed the realization of specific pedagogical training for the teaching staff, which included higher-education professionals inserted in services.This training proposal was assumed by State Health Secretaries (SHS) and federal universities during the 1980's and 1990's.The experience of SHS from the states of São Paulo and Rio de Janeiro and from the Federal Universities of Minas Gerais and Santa Catarina are ratified in studies that analyze the social and economic results related to the experience of educating young people and adults, who demanded differentiated educational strategies and the acknowledgment of daily experience in these workers' education process (12-13,15-16) .The progressive creation of public technical schools, under the responsibility of the Health Secretaries, which occurred in the 1980's and 1990's, can be accredited to the experience of implanting the Large Scale project.These schools were conceived as "function" schools, with lean and flexible technicaladministrative structures, using the physical, material and human resources of the health system itself, with the mission to train professionals for health work and retrain technical-level professionals.Their creation was legally supported by Law 5.692/71 and Opinion CFE 699/72 about adult education, additional education, professional qualification and the function school.Nowadays, they are supported by educational legislation 
and represent important public spaces for professional health training in Brazil*.In 1996, the National Professional Education Plan was implanted by the Ministry of Labor -PLANFOR, with resources from the Worker Support Fund, used for the professionalization of health workers, funding local initiatives proposed by schools and colleges.Information about its results was not surveyed in this study due to access difficulties.Third Period: From 2000 onwards In this last period, the PROFAE was implanted, initially motivated by the estimated number of 250 nursing care workers without formal qualification in 1999, as verified by the realization of two national registers to survey the actual demand.Existing Brazilian public and private technical schools were used to offer, in four years, professional training courses for all registered workers.This initiative made it possible to offer, nursing aid courses, complementary nursing aid courses for nursing technicians and complementary basic education courses and graduated approximately 280 thousand workers.The program was put in practice through specific actions aimed at providing high-quality training in a decentralized and controlled way.The following stand out: a) funding of courses with resource transfers to contracted professional training schools; b) formulation and distribution of didactical books for students from the nursing aid course, which contained the contents needed for their training; c) creation and implantation of the Specialization Course in Pedagogical Training in Technical-Level Professional Education for Health, offered to all nurses inserted in the health services that acted as teachers; d) monthly supervision of the courses by nurses and other related professions, hired through state institutions, with a monitoring and monthly assessment methodology that provided the national management with information for the decision-making process; e) creation of the journal Formação, aimed at registering experiences, socializing information and stimulating knowledge production in the area of professional training for the health sector; f ) creation of technical and financial stimulation mechanisms for the modernization of public technical schools; g) elaboration of an assessment and certification system for graduates from However, this position attends to the increasing needs which, as demonstrated by job market data, indicate the progressive incorporation of secondary-level professionals in the sector.The State's stimulus and funding of nursing aid training dates back as far as the 1960's, and has always been assumed in partnership between the health system and the education system.The first, with funding and with the demand and absorption of workers, and the second with the school institutions that make up the system and the whole legal framework that grants regularity and legality to the education processes.Thus, professional nursing education at technical level, in occupying the agenda of public education policies for human resources in health for more than four decades, has also accumulated intellectual and conceptual maturity, which can serve as a reference framework for the formulation of new educative actions directed at other secondary-level professionals involved in direct care to the population.It can be said that the PROFAE, as the most recent government proposal to intervene in the reality of secondary-level professional training in nursing, joined the main demands, recommendations and lessons from the proposals implanted 
before and its results provoked a redrawing of the professionalization problem of technical-level workers.Professionalization and professional qualification of secondary-level workers represents the best course to face the increasing incorporation of new technologies and changes in the technical division of work, as the work force goes through successive and constant alterations in terms of occupational composition, qualification and education, and the population's health care demands are also modified.The quantitative insufficiency of trained workers -at least in nursing -does not seem to be the problem, indicating that the debate scenario should be occupied by the quality and continuity of education.Therefore, technical education schools in health should see to the improvement of education processes, including the Constant training of the teaching staff, the reformulation of its pedagogical projects, the stimulus to construct new knowledge about health work in its different dimensions, the creation of didactical materials among other strategies.It means saying that the articulation between education and work needs to be translated into joint actions of schools, health services, management and social control instances, directed at professionals who are already inserted in the job market.The expansion of the activity base of health and nursing, which has taken form through the expansion of the service offer and the incorporation of new technologies, requires not only adequate and permanent training, but also the development of continuous knowledge construction processes, as the quality of care and education are related with critical reflection about the reality of the work process and the capacity to intervene and propose changes in this reality.
v3-fos-license
2017-08-15T20:07:41.064Z
2015-11-09T00:00:00.000
45750123
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=61405", "pdf_hash": "3e36042e83426ce0d3f0a7ba1a2cdea26fd70adc", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2837", "s2fieldsofstudy": [ "Psychology" ], "sha1": "3e36042e83426ce0d3f0a7ba1a2cdea26fd70adc", "year": 2015 }
pes2o/s2orc
Depression , Anxiety and Stress among Undergraduate Students : A Cross Sectional Study Background: The prevalence of moderate to extremely severe level of depression, anxiety and stress among undergraduate students in Malaysia was ranging from 13.9% to 29.3%, 51.5% to 55.0% and 12.9% to 21.6% respectively. Medical students have been shown to be more inclined to emotional disorders, especially stress and depression, as compared to their non-medical peers. Therefore, the objective of this cross-sectional study was to determine the prevalence of depression, anxiety and stress among undergraduate students in Melaka Manipal Medical College. Methods: Self-administered questionnaires consisted of 3 sections: demographic data, socioeconomic data and DASS 21 questions. Data processing was performed using Microsoft Excel 2010. The psychological status was categorized according to the presence or absence of depression, anxiety and stress. The data were analyzed using Epi InfoTM 7.1.4 and SPSS. Student’s t-test, Fisher Exact and Chi-square test were used to analyze the associations. P-value of <0.05 was considered as statistically significant. Multiple logistic regression was used to calculate the adjusted Odd Ratio. Results: A total of 397 undergraduates participated in this study. The prevalence of the depression, anxiety and stress, ranging from moderate to extremely severe, was 30.7%, 55.5%, and 16.6% respectively. Multiple logistic regression shows significant associations between relationship status, social life and total family income per month with depression. Only ethnicity has been shown to be significantly associated with anxiety. There are significant associations between ethnicity and total family income per month with stress. No other factors have been found to be significantly associated. Conclusion: Depression, anxiety and stress have a high detrimental effect to individual and society, which can lead to negative outcomes including medical dropouts, increased suicidal tendency, relationship and marital problems, impaired ability to work effectively, burnout and also existing problems of health care provision. With that, there is a need for greater attention to the psychological wellbeing of undergraduate students to improve their quality of life. Corresponding author. Introduction According to WHO definition, "Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity" [1].Many people perceive health as being physically well and free of any diseases, and thus they have neglected the importance of mental health.Therefore, mental health is an irreplaceable aspect of health.Poor mental health will lead to many life threatening diseases such as cardiovascular disease deaths, deaths from external causes or even cancer deaths, which was only associated with psychological distress at higher levels [2]. Depression, anxiety and stress levels in the community are considered as important indicators for mental health.Failure to detect and address to these emotional disorders will unfortunately lead to increased psychological morbidity with undesirable impacts all through their professions and lives [3]. 
In public medical universities, the prevalence of depression and anxiety ranged from 10.4% to 43.8% and 43.7% to 69% respectively.However, the prevalence of depression and anxiety among private medical students has been estimated to be 19% to 60% and 29.4% to 60% respectively [4].In Hong Kong, a web-based survey of stress among the first-year tertiary education students found that 27% of the respondents were having stress with moderate severity or above [5].While in India, a study was conducted to focus on the prevalence of current depression, anxiety, and stress-related symptoms among young adults, ranging from mild to extremely severe, which was 18.5%, 24.4%, and 20% respectively.Clinical depression was present in 12.1% and generalized anxiety disorder in 19.0%.Co-morbid anxiety and depression were high, with about 87% of those having depression also suffering from anxiety disorder [6]. A research conducted in Malaysia showed that the prevalence of moderate to extremely severe level of depression, anxiety and stress among undergraduate students was ranging from 13.9% to 29.3%, 51.5% to 55.0% and 12.9% to 21.6% respectively [7] [8]. With respect to the source of stressors, the top ten stressors chosen by the students were mainly academic and personal factors [9].As indicated by Porter, there were up to 60% of university dropouts recorded; the majority of these students leave within the first two years.Steinberg and Darling specified that 50% of university students who consulted mental health service complained of challenges in study, anxiety, tension, and depression which contributed to poor grades in courses [10]. In Malaysia, tertiary learning institutions offering medical degrees have expanded in numbers in the previous couple of years to meet the nation's demand for more graduate doctors and medical personnel.All things considered, the environment of medical education and practice has long been viewed as a distressing factor [11].Medical students have been shown to be more inclined to emotional disorders, especially stress and depression, as compared to their non-medical peers. Therefore, we conducted the cross-sectional study to determine the prevalence of depression, anxiety and stress among undergraduate students in Melaka Manipal Medical College. Methodology This cross-sectional study was done among undergraduate students, from September to October 2014 in Melaka Manipal Medical College (Melaka Campus), Malaysia. We calculated the sample size using prevalence of 55.0% [7].With the 95% CI and precision of 5%, we require a total sample size of 384 students.After accommodate the non-response rate of 10%, we distributed 430 sets of the questionnaires.A total of 397 undergraduate students participated in this study.Written informed consent was taken from every participant.The students who were absent for class on the day of data collection were excluded from this study. This study helps to arbitrate the differences in psychological distress with respect to the demographic variables among MMMC students.There are several stress reducing factors (stress busters) and are divided into 6 groups: friends, gym workouts, physical factors, co-curricular activities, teacher's patronage and personal hob-bies. 
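As a quick check on the sample-size figures reported above, the calculation can be reproduced with the standard single-proportion (Cochran) formula n = Z^2 p(1 - p)/d^2, inflated for the assumed non-response rate. The short sketch below is illustrative only; the function name is ours and the authors' exact rounding conventions may differ slightly from the 384 and 430 they report.

import math

def sample_size(prevalence, precision, z=1.96, non_response=0.10):
    """Cochran sample size for a single proportion, inflated for non-response."""
    n_required = math.ceil((z ** 2) * prevalence * (1 - prevalence) / (precision ** 2))
    # Distribute enough questionnaires so that the expected number of
    # responders still meets the target after the assumed non-response rate.
    n_to_distribute = math.ceil(n_required / (1 - non_response))
    return n_required, n_to_distribute

# Anticipated prevalence 55%, 95% CI (z = 1.96), 5% precision, 10% non-response.
print(sample_size(0.55, 0.05))  # roughly (381, 424); the paper reports 384 and 430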
Self-administered questionnaires consisted of 3 sections: Demographic data, socioeconomic data and DASS 21 questions.Demographic data consists of 8 questions based on personal details: age, gender, ethnicity, study course, residence, relationship status, academic performance and social life status.The socioeconomic data include parental marital status and total family income per month. The Depression Anxiety Stress Scale (DASS 21, Psychology Foundation of Australia) was used to screen mental health problems among the population [12].The DASS 21 is a 21 item self report questionnaire devised to measure and assesses the severity of a range of symptoms common to depression, anxiety and stress.However, it is not a categorical measure of clinical diagnoses of the said conditions [13]. In completing the DASS 21 questionnaire, the individual is required to indicate the presence of a symptom over the previous week.DASS 21 consists of 21 questions in total which was designated for participants to specify their emotional level for each statement.In total, there are 7 items for each depression, anxiety and stress assessment [14].Each item is scored from 0 (did not apply to me at all over the last week) to 3 (applied to me very much or most of the time over the past one week) [15].Because the DASS 21 is a short form version of the DASS (the Long Form has 42 items), the final score of each item groups (depression, anxiety and stress) must be multiplied by two (×2) [12].The minimum score is zero and the maximum score is 42.The final score of DASS can be categorized as in Table 1. Studies have shown that the DASS 21 score have validity in the measurement of the degree of depression, anxiety and stress in the person.It also has high reliability in terms of usage in a clinical and non-clinical setting [16] [17]. Data processing was performed using Microsoft Excel 2010.The psychological status was categorized according to the presence or absence of depression, anxiety and stress.Data was analyzed using Epi Info TM 7.1.4and SPSS.Descriptive statistics such as frequency (%), mean and standard deviation (SD) were also described.The Student's t-test, Fisher's exact test and Chi-square test were used for bivariate analysis.The variables which had P-value < 0.1 were included in multiple logistic regression analysis.P-value of <0.05 was considered as statistically significant. The study was carried out by giving a brief introduction on the purpose of the research and the procedures involved prior to distribution of questionnaire.Participants were then informed about their rights to not participate in the study and written consent was taken before they answered the questionnaire.Confidentiality of participants' information given was preserved.This study was conducted under the permission of the research committee of Melaka Manipal Medical College (MMMC). 
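To make the scoring procedure just described concrete, the sketch below sums the seven items (each scored 0-3) of every subscale and doubles the result, as the DASS-21 manual requires. The item-to-subscale assignment and the severity bands are the standard published DASS cut-offs and are assumed here to correspond to the paper's Table 1; the variable names are illustrative.

# Standard DASS-21 item groupings (1-based item numbers) and severity bands.
SUBSCALE_ITEMS = {
    "depression": [3, 5, 10, 13, 16, 17, 21],
    "anxiety":    [2, 4, 7, 9, 15, 19, 20],
    "stress":     [1, 6, 8, 11, 12, 14, 18],
}
SEVERITY_BANDS = {  # inclusive upper bounds for normal, mild, moderate, severe
    "depression": (9, 13, 20, 27),
    "anxiety":    (7, 9, 14, 19),
    "stress":     (14, 18, 25, 33),
}
LABELS = ("normal", "mild", "moderate", "severe", "extremely severe")

def score_dass21(responses):
    """responses: list of 21 item scores (0-3), ordered item 1 to item 21."""
    items = {i + 1: responses[i] for i in range(21)}
    result = {}
    for scale, idx in SUBSCALE_ITEMS.items():
        total = 2 * sum(items[i] for i in idx)   # DASS-21 subscale totals are doubled
        bands = SEVERITY_BANDS[scale]
        label = next((LABELS[k] for k, ub in enumerate(bands) if total <= ub), LABELS[-1])
        result[scale] = (total, label)
    return result

print(score_dass21([1] * 21))
# {'depression': (14, 'moderate'), 'anxiety': (14, 'moderate'), 'stress': (14, 'normal')}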
Results Table 2 shows the descriptive statistics of demographic and socioeconomic factors among respondents.The average age of the respondents is 21.9 years old with a range of 18 to 24 years old.63.2% of the respondents are female and the remaining 36.8% are male respondents.Chinese contribute to the largest portion of the ethnic group (34.5%), followed by Malay (33.3%),Indian (28.5%) and lastly others (3.8%).Many of the respondents are single (73.8%), followed by those who are in the relationship (26.2%).For the academic performance, 2.8% and 28.0% of the respondents are very satisfied and satisfied with their results respectively.However, most of the respondents (69.3%) have least satisfaction with their performance.Besides, 7.1% respondents are very satisfied with their social life, 49.1% are just satisfied, while 43.8% has least satisfaction.94.5% of the respondents' parents are happily married, 3.5% respondents are either orphan or from single parent family.Table 3 shows the prevalence of depression, anxiety and stress among undergraduates.Depression, anxiety and stress are divided into 5 categories, which are normal, mild, moderate, severe and extremely severe.In depression, 54.2% of the respondents are normal while 15.1%, 20.9%, 6.3% and 3.5% of the respondents have mild, moderate, severe and extremely severe depression respectively.Mean ± Standard Deviation for depression score is 9.8 ± 7.9.For the anxiety status, 36.0% of the respondents are free from it while the rest, ranging from 8.6% to 30.5% have mild to extremely severe anxiety.Mean ± Standard Deviation for anxiety score is 11.0 ± 7.7.Moreover, 68.0% of the respondents do not have any stress.Those who are with mild level of stress consist of 15.4%, followed by moderate level of stress (10.8%), severe level of stress (5.0%) and lastly extremely severe level of stress (0.8%).Mean ± Standard Deviation for stress score is 12.7 ± 12.8. Table 4 shows the association between socio-demographic factors and depression, anxiety and stress among the respondents.There are no significant association between socio-demographic factors and depression.However, the students who are least satisfied to social life (Unadjusted OR 2.0; 95% CI 1.3 -3.1) and the students who have total family income of <RM1000 per month (Unadjusted OR 3.4; 95% CI 1.0 -11.3) are significantly more likely to have depression.Regarding anxiety, there are no significant association between socio-demographic factors and anxiety, but Malay students are significantly more likely to have anxiety (Unadjusted OR 2.1; 95% CI 1.2 -3.4).Regarding stress, there are no significant association between socio-demographic factors and stress.However, the students who are least satisfied to social life (Unadjusted OR 1.6; 95% CI 1.0 -2.4) and the students who have total family income of <RM1000 per month (Unadjusted OR 6.2; 95% CI 1.9 -20.7) are significantly more likely to have stress. 
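The adjusted odds ratios reported next come from entering the variables with P-value < 0.1 into a multiple logistic regression, as stated in the methods. A minimal sketch of that step is shown below; it assumes a data frame with a binary depression indicator and candidate covariates (the file and column names are hypothetical) and uses statsmodels rather than the SPSS/Epi Info software actually used by the authors.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per student; 'depressed' is a 0/1 indicator (moderate or worse on DASS-21).
df = pd.read_csv("dass21_survey.csv")  # hypothetical file

model = smf.logit(
    "depressed ~ C(relationship_status) + C(social_life) + C(income_band)",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals, in the style of Table 5.
or_table = pd.DataFrame({
    "adjusted_OR": np.exp(model.params),
    "ci_low": np.exp(model.conf_int()[0]),
    "ci_high": np.exp(model.conf_int()[1]),
    "p_value": model.pvalues,
})
print(or_table.round(2))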
The variables which had a P-value < 0.1 in the bivariate analysis were included in the multiple logistic regression analysis. Table 5 shows the multiple logistic regression analysis of socio-demographic factors and depression, anxiety and stress. Regarding depression, students who are single (Adjusted OR 1.6; 95% CI 1.0-2.7), least satisfied with their social life (Adjusted OR 2.1; 95% CI 1.3-3.5) and with a total family income <RM1000 per month (Adjusted OR 3.8; 95% CI 1.0-13.8) are significantly more likely to have depression. There is no significant association between the other socio-demographic factors and depression. Similarly, there is no significant association between socio-demographic factors and anxiety, except that Malay students are significantly more likely to have anxiety (Adjusted OR 2.1; 95% CI 1.2-3.6). Regarding stress, Malay students (Adjusted OR 2.0; 95% CI 1.2-3.5) and students with a total family income <RM1000 per month (Adjusted OR 7.7; 95% CI 2.1-28.2) are significantly more likely to have stress. There is no significant association between the other socio-demographic factors and stress.

Discussion

The objective of the study is to determine the prevalence of depression, anxiety and stress among undergraduate students in Malaysia. In the present study, the prevalence of moderate to extremely severe depression, anxiety and stress is 30.7%, 55.5% and 16.6%, respectively. This is lower than one study among Malaysian university students in which the percentages were 37.2%, 63.0% and 23.7% for depression, anxiety and stress [18]. A higher prevalence of depression, anxiety and stress could be attributed to factors such as an enormous syllabus that has to be covered in a limited time period, a sudden change in style of studying, the thought of sitting or failing exams, and inadequate time allocated to clinical postings. Furthermore, social stressors such as relationships with peer groups and hostel friends, displacement from home and financial problems may also strongly influence undergraduate students psychologically. This study was conducted to determine the differences in elevated psychological distress with respect to the demographic variables among MMMC students.

To the best of our knowledge, no study has found an association between relationship status and depression. We hypothesised that single individuals are more likely to have depression because they may lack a partner with whom to share their daily stressors, thereby lacking social support and a social buffer. Social life has invariably been associated with depression. It has been shown that individuals who are satisfied with their social life, and thus have good social support, show more resilience to stressors in life, which acts as a buffer. This minimizes the risk of developing depression [19] [20]. In the present study, students with a total family income of less than RM1000 per month are more likely to have depression. This is consistent with studies showing that lower socioeconomic status is strongly associated with major depressive disorder and depressive symptomatology [21]. Lefkowitz et al. also found that lower family income is associated with a higher prevalence of childhood depression [22]. Students with a lower total family income per month may encounter problems with everyday expenses, which may act as precipitating factors for depression.
Malay ethnicity has been shown to be significantly more likely in developing anxiety and stress.According to Khadijah Shamsuddin et al., they found that Malay ethnicity has a higher stress score on DASS as compared to their other ethnic counterparts [18].This could be due to cultural differences.We postulate that Malays are more susceptible to stress due to cultural factors.However, this is in contradiction to an earlier study on medical students in a Malaysian university, which reported no difference in emotional distress among Malays, Chinese, Indians and students from other ethnicity [23].Total family income per month less than RM1000 is significantly associated with risk of having stress.We postulate that this to be due to addition of stressors to the lives of students, particularly to sustain everyday's living expenditures as well as the already-costly medical education.One study has also shown that socioeconomic status, especially parents' education and income, indirectly relates to children's academic achievement through parents' belief and behaviours [24]. Our study did not find any significant association between age, sex, study course, residence, academic performance and parental status with depression, anxiety and stress. To pinpoint some limitations of our study, we had chosen an analytical cross-sectional study which has the disadvantage of being unable to establish the incidence rate of the mental health status of MMMC students.We can only determine the prevalence of the psychological distress among the students.Besides, lack of baseline information concerning mental status of medical students has become a limitation of our study.Since our study was done only among the medical students from a single private medical college, who are more likely to have high levels of stress, selection bias might be present.Associations among all these might not be representative of the general population because this study is only focus on undergraduates. Other than this, the students may not remember the events happened last week which might disturb their emotion.Also, the life events happen might not cause an immediate change in an individual's mental status.Hence, to understand the temporal relationship and the mechanism of how these risk factors may affect one's mental state, it will require not only longitudinal data throughout the lifetime but also regular assessment of individual's mental health with the consistent measurement of level of exposure to each risk factor intermittently. 
Emotional disturbances in the form of depression, anxiety and stress exist at a high rate among undergraduate science students and require early intervention [25]. We recommend that, to achieve a healthy life as defined by WHO [1], students be encouraged to spend adequate time on their social and personal lives and that the importance of health-promoting coping strategies, which might be helpful in overcoming stress throughout their medical education, be emphasized. On the academic management side, a student counseling centre with adequate facilities and qualified staff should be established on campus to provide a medium for students to seek appropriate help for mental health problems. In addition, preventive programmes should be introduced early in medical education and address a wide variety of concerns, from academic issues to interpersonal relationships and financial worries. Early signs of depressive symptoms among students should be addressed. Intervention will help students cope with stress, make a smooth transition through medical college and adjust to the different learning environments of the different phases of medical education.

Conclusion

In conclusion, depression, anxiety and stress have a highly detrimental effect on individuals and society, and can lead to negative outcomes including medical dropouts, increased suicidal tendency, relationship and marital problems, impaired ability to work effectively, burnout and a worsening of existing problems in health care provision. With that, there is a need for greater attention to the psychological wellbeing of undergraduate students to improve their quality of life.

Table 1. Severity of depression, anxiety and stress.
Table 4. Bivariate analysis of socio-demographic factors and depression, anxiety and stress.
Table 5. Multiple logistic regression analysis of socio-demographic factors and depression, anxiety and stress.
v3-fos-license
2022-12-11T14:19:25.740Z
2016-04-11T00:00:00.000
254508121
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10693-016-0246-1.pdf", "pdf_hash": "a661964851fae14dfed423b068fd8dc8a9face31", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2845", "s2fieldsofstudy": [ "Economics" ], "sha1": "a661964851fae14dfed423b068fd8dc8a9face31", "year": 2016 }
pes2o/s2orc
Household Access to Mortgages in the UK We employ the propensity score matching approach to investigate household access to mortgages in the UK using information on 29,732 households between 2003 and 2010. We find that, on average, the probability of obtaining a mortgage is similar for White and Non-White households. However, we find that Black households with low incomes are less likely to have mortgages compared to White households with similar characteristics. Asian households, in contrast, do not seem to have a lower probability of having a mortgage. 1 Financial inclusion refers to a process whereby people have access and/or use financial services (such as basic bank and saving accounts) and products (such as loans and mortgages) in the mainstream market that are appropriate to their needs and enable them to lead a normal social life in the society in which they belong (European Commission 2008). Literature also defines households with access to various financial services as Bbanked^. economies find that households with lower usage of financial services are more likely to be from non-White background (see for example Hogarth et al. 2005;Devlin 2005;Carbo et al. 2007;Khan 2008;Honohan 2008;Simpson and Buckland 2009;Demirguc-Kunt and Klapper 2013). 2 In this paper we investigate differences in the probability of having a household mortgage in the UK. Previous evidence suggests that access to mortgage and other forms of household finance is influenced by ethnicity (Pollin and Riva 2001;Goodwin et al. 2000). Other evidence also suggests that in the UK various households may not have household credit at all (Kempson and Whyley 1999;Devlin 2005;Khan 2008;Deku et al. 2015). Recent public policy concerns have also been raised about the inability of ethnic groups to obtain SME finance (Clegg 2011) and in the literature access to finance has been cited as one of the most significant barriers facing ethnic minority businesses in the UK (Bates 2011;Carter et al. 2015). While there is evidence suggestive of prejudice in ethnic group usage of financial services, the aforementioned UK empirical literature does not consider a major household financial market, namely, mortgages. Our analysis seeks to fill this gap in the literature. Previous research on the likelihood of obtaining household mortgage finance mainly focuses on the US and shows that non-White minorities face inequality in access to mortgages on noneconomic grounds. This empirical literature finds that non-White households have higher mortgage application rejection rates and are offered less attractive terms than Whites with similar credit and other features (Black et al. 1978;Munnell et al. 1996;Ross and Yinger 1999). Building on this evidence, we investigate the probability of non-White households obtaining mortgages in the UK. Mortgages are important for households as the ability to obtain a mortgage is critical to a household's capacity to acquire a home, and it is typically the largest financial commitment most people make in their entire lives. Being a homeowner is an important component of wealth acquisition and can increase status and standing in society. Hence, the inability to obtain mortgage credit may significantly hinder household wealth and aggravate social exclusion. We seek to provide a better understanding of the key determinants of household access to mortgage credit in the UK which we hope can help inform public policy. 
We use the Propensity Score Matching (PSM) approach and a large household sample compiled from the Living Costs and Food Survey gathered by the Office of National Statistics (ONS). The sample, collected between 2003 and 2010, consists of information on the economic, social and demographic features of 29,732 households, of which 17,398 have a mortgage and 12,234 are renting. 3 In general, we find that non-White households have the same chance of obtaining a mortgage as White households. However, Black households with low incomes are less likely to have mortgages compared to White households at similar income levels and other characteristics. Asian households, in contrast, do not seem to have a lower probability of having a mortgage. The 2007/08 financial crisis does not seem to have reduced mortgage access for Black households. The remainder of the paper is organized as follows: Section 2 provides a brief background on the ethnic discrimination literature and also highlights the disadvantages faced by ethnic minorities in the UK. Section 3 discusses the developments in the UK housing market between 2001 and 2010. Section 4 describes the data sources, explains the methodology and provides descriptive statistics. The results are presented and discussed in Section 5. Section 6 concludes. discouragement and the possibility of loan denials rather than ethnic discrimination by banks (Coleman 2000;Fraser 2009). 4 One reason why ethnic minority business owners may feel discouraged from applying for bank loans is because they fear prejudicial treatment (Blanchflower et al. 2003;Kon and Storey 2003). Fraser (2009) argues that loan denials are linked to differences in creditworthiness rather than ethnic discrimination. 5 While there is a literature covering ethnic minority use of a variety of UK consumer financing, research on household mortgages remains limited. In contrast, there exists an established US literature. For instance, Black and Hispanic households are more likely to be rejected and offered less attractive terms for mortgages than Whites (Black et al. 1978;\Munnell et al. 1996;Ross and Yinger 1999). Non-Whites pay more for their mortgages even when factors such as income levels, property dates and the age of buyer are controlled for (Oliver and Shapiro 1997;Courchane and Nickerson 1997;Crawford and Rosenblatt 1999;Black et al. 2003;Cheng et al. 2015). Although these higher rates may be counteracted with more favourable terms, such as longer low rate lock-ins (Crawford and Rosenblatt 1999). Also, disparities in mortgage approval rates tend to fall substantially for Black households the longer their credit history (Han 2011). Redlining, outlawed in the US by the 1974 Equal Credit Opportunity Act, has also been widely researched. 6 Early work finds little evidence of such discriminatory mortgage lending practice (Schafer and Ladd 1981;Benston and Horsky 1992;Munnell et al. 1996;Tootell 1996). However, the majority of later studies document that minority neighbourhoods have lower access to mortgage funding (Phillips-Patrick and Rossi 1996;Siskin and Cupingood 1996;Ross and Yinger 1999) and are more likely to be subject to predatory lending practices (Calem et al. 2004;Williams et al. 2005;Dymski 2006). 7 Given the economic and social disadvantages faced by ethnic minorities in the UK, and evidence on their lower usage/access to various consumer financial services, we seek to expand this literature by examining the probability of different ethnic groups obtaining household mortgages. 
This evidence will add to evidence we already have on the US household mortgage market. Before we outline our data and approach the following section highlights developments in the UK housing market over our period of study. 4 Banks in the UK typically use credit scoring techniques to assess the riskiness of borrowers. Antidiscrimination legislation prevents the use of ethnicity, gender, disability or religious beliefs in determining credit scores. However, scoring techniques are often used as a complement to relationship lending, involving close contact between the entrepreneur and bank manager, which introduces the possibility for credit assessments to be tainted by personal prejudices (Fraser 2009). 5 Fraser (2009 finds that Black African firms are significantly more likely to miss loan repayments and/or exceed their agreed overdraft limit and this behavior seems to largely account for their much higher loan denial rates. 6 Redlining is the refusal to lend to certain neighborhoods due to non-economic features. In the US, in addition to the 1974 Equal Credit Opportunity Act, Community Reinvestment Act of 1977 made it illegal for lenders to have a smaller amount of mortgage funds available in minority neighborhoods compared to similar White neighborhoods. 7 There is also literature on discrimination in the US consumer credit market, however, the findings are not uniform. Some studies conclude that non-Whites are not discriminated against in terms of access to consumer credit (Lindley et al. 1984;Hawley and Fujii 1991). Others find that loan approval rates are lower for non-Whites (Duca and Rosenthal 1993) and that they pay higher interest rates (Edelberg 2007). Lenders chose to discriminate against non-Whites because, on average, they have higher default risk (Lin 2010). In addition, a number of studies look at auto loan pricing and find no evidence of discrimination (Goldberg 1996;Martin and Hill 2000) although this could be because non-price terms differ for minorities compared to Whites leading those discriminated against to drop out of the market (Ayres and Siegelman 1995). Snapshot of developments in the UK housing market The UK has one of the most persistently volatile housing markets, with four boom and bust cycles since the 1970s (Stephens 2011). This is important as the consequences of these cycles may have a disproportionate impact on different segments of the population. Boom and bust cycles may influence housing choices and bank lending behavior, inhibit house building and increase wealth inequalities. We present a set of key indicators of the housing market over 2001 to 2010 in Table 1. Average house prices in the UK increased 65.6 % between 2001 and 2007 from £138,281 to £227,735 and dropped significantly afterwards with the impact of the financial crisis. UK housing supply, unlike the US, is argued to be much less sensitive to price changes and increases in demand for housing has a large impact on price but not volume (Meen and Andrew 2005). High levels of house completions were achieved in the period between 2003 and 2007. However, after the financial crisis new house construction fell notably, failing to cover the demand from newly formed households. Hence, the ratio of rented to owner occupied houses increased steadily, from 4.44 % in 2006 to 5.23 % in 2010 and average monthly rent increased 4.9 % during this period, above average yearly inflation level of 2.1 %. 
These figures signal an increase demand for rental properties perhaps because of tighter lending policies of banks during and after the financial crisis. The number of households in the UK increased steadily over this period, an average of 0.77 % a year with average household size remaining unchanged at around 2.36. The average age of mortgage holders also gradually increased after the crisis. A combination of rising household numbers and lower housing completions together with tighter credit conditions are likely to have forced poorer and younger households out of the market. In the pre-crisis period borrowing costs for mortgages became more expensive for households, increasing from 3.81 to 5.79 % between 2003 and 2008 (based on 2 year variable rates). However, other lending conditions became more favorable. Households were able to borrow more (mortgage payments to average pay increased two fold between 2001 and 2007) with the requirement of lower initial deposits. For first time borrowers the average deposit amount was 16.4 % in 2006 compared to 23 % in 2003. These favorable conditions indicate a relaxing of mortgage lending standards prior to the financial crisis and a tightening thereafter. 8 Looking at the age distribution of borrowers, younger households increased their share of the housing market pre-crisis. However, this trend soon reversed as the share of households under 34 years fell from 47.2 % in 2007 to 40 % in 2010. Different types of mortgage products are available in the UK. Households can borrow for maturities ranging from 5 to 30 years. Borrowers may choose to repay the capital along with the interest or opt out for paying only interest. 9 Rates may be fixed (typically 2 to 5 years) or variable (determined by the lender or tracking an official rate-such as the Bank of England's base rate). Borrowers can switch from fixed to variable rates although penalty costs (often 3 months mortgage payments) are charged. Borrowers need to make deposits to obtain a mortgage and these vary according to the lender. After the financial crisis minimum deposit levels increased to around 25 % of property valuation, whereas pre-crisis deposits typically were around 5 %. The increase in initial deposits has disadvantaged poorer and younger House prices adjusted for retail prices. This uses the Office for National Statistics Retail Price Index (RPI) to convert nominal prices to current prices households who typically have more limited savings. Property can be repossessed if the borrower falls into arrears-under current law lenders have to give at least 3 months notice before repossession actions take place. 10 In the UK, unlike the US, households can still be pursued through the courts for any negative equity even after the repossession of the property by the lender. Overall one can see that the access to mortgage finance was more favorable to poorer households prior to the crisis and conditions have tightened thereafter. Given that ethnic minorities are disproportionately represented in the lower household income groups, it could be that they have also suffered more from the changed mortgage market environment. Data and methodology 4.1 Data source We collect our data from the Living Costs and Food Survey gathered by the Office of National Statistics (ONS) in the UK for the years between 2003 and 2010. This is an annual exercise to collect data on private household expenditure on goods and services. 
Most of the questions address issues relating to household characteristics such as, race, family relations, employment details, as well as information on household spending and income features. Following previous literature on the UK (Kempson and Whyley 1999;Devlin 2005;Deku et al. 2015), the household reference person is assumed to be the most influential within the household even though certain responses require that variables are aggregated for all household members. Our sample consists of 29,732 households, of which 17,398 have a mortgage and 12,234 are renting. We do not include observations where the household owns the property outright as the historical source of ownership (whether inheritance or paid mortgage) is unknown. Non-White households constitute 5.68 % (1,681 households) of the entire sample. Within the non-White group the distribution of Asian and Black households are 944 and 737, respectively. 11 We exclude mixed race and other race categories from our sample as it is not possible to identify ethnic background of these categories accurately. Propensity score matching Here we aim to answer the following question: Are non-White households, ceteris paribus, less likely to have mortgages than White households with comparable economic and other characteristics? A potential selection bias emerges as being a non-White household is likely to be endogenous and related to various other observable characteristics. As such we follow Rosenbaum and Rubin (1983) and use PSM as a way to reduce selection bias. Matching restricts inference to the sample of non-White households (the treatment group, denoted T i = 1 for household i) and White households (the control group, denoted T i = 0). The treatment group is matched with the control group on the basis of its propensity score which is a function of households' observable characteristics (X i ): Following Dehejia and Wahba (2002), we match the households based on the nearestneighbor with the replacement. Propensity scores are estimated via a probit model utilizing household head's characteristics (age, employment, occupational classification, education, gender and marital status) and other household characteristics (household size, income, benefits and regional location) as independent variables, which are drawn from the aforementioned literature. Age is categorized into 10 year bands ranging from 16 to 65+. Employment status is defined as employed, unemployed, retired or unoccupied. Occupational classification indicates the skill level and content of the head of household's employment into six categories as: higher managerial, professional or working for a large employer; lower managerial; clerical and intermediate; small employers or self owned business; lower supervisory or technical; and routine/semi-routine manual or service. For education we use three UK educational qualification levels as GSCE (typically at 16 years of age), A-levels (typically at 18 years of age) and higher education (including further and higher university education). Marital status is grouped as married, co-habiting or single (including widowed, divorced and separated). Household size is the number of persons in a household. Income indicates the total weekly income of the household. Income gap equals 1 if household expenditure is more than its income and 0 otherwise. Benefits represent those households receiving any form of government benefit payments (from the Department for Work and Pensions or the Social Security Agency). 
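To illustrate the matching estimator just described (the covariate list continues immediately after this sketch with the region and year controls), the sketch below estimates propensity scores with a probit, performs nearest-neighbour matching with replacement on the score, and computes the average treatment effect on the treated. It is a schematic illustration with hypothetical file and column names, not the authors' actual code.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per household; 'non_white' is the treatment indicator (0/1) and
# 'no_mortgage' the outcome; the covariates are the observable characteristics.
df = pd.read_csv("lcf_households.csv")  # hypothetical file
covariates = ["age_band", "employment", "education", "income", "household_size"]
X = sm.add_constant(pd.get_dummies(df[covariates], drop_first=True).astype(float))

# Step 1: propensity scores from a probit of treatment status on observables.
probit = sm.Probit(df["non_white"], X).fit(disp=0)
pscore = pd.Series(probit.predict(X), index=df.index)

treated = df["non_white"] == 1
ps_t = pscore[treated].to_numpy()
ps_c = pscore[~treated].to_numpy()
y_t = df.loc[treated, "no_mortgage"].to_numpy()
y_c = df.loc[~treated, "no_mortgage"].to_numpy()

# Step 2: nearest-neighbour matching with replacement on the propensity score.
# (A brute-force distance matrix is fine for a sketch; use np.searchsorted or a
# KD-tree for samples as large as the one used in the paper.)
nearest = np.abs(ps_t[:, None] - ps_c[None, :]).argmin(axis=1)

# Step 3: ATT, the difference in the probability of not having a mortgage
# between treated households and their matched controls.
att = (y_t - y_c[nearest]).mean()
print(f"ATT: {att:.3f}")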
Region is where the household is located. 12 We also use a set of year dummy variables to capture the effect of the macroeconomic environment. We acknowledge the limitations of our analysis, which is common to the extant literature. PSM has advantages and shortcomings. It does not depend on the assumption of functional form and has the advantage of restricting inference to the sample of White and non-White households that are actually comparable in their observable characteristics. On the other hand, the PSM procedure relies on the assumption of selection on observables (namely the Conditional Independence Assumption), and it only corrects for selection bias among included observable characteristics. While we control for a rich set of covariates to explain access to mortgages, it cannot be completely ruled out that the existence of unobservable characteristics (such as social and cultural differences that could be correlated with racial differences) may still bias the treatment effect (Berkovec et al. 1996;Pager and Shepherd 2008;Han 2011). Controlling for all characteristics that affects the outcome requires very informative data on individuals, especially on the credit worthiness of borrowers. Unfortunately this information is not available in the database we are utilizing and this may lead to bias in our results. In an effort to minimize these potential biases, we examine subgroups of households where some economic indicators (such as being employed, having higher income or savings) aim to capture indirect dimensions of the creditworthiness of potential mortgage applicants. Table 2 provides summary statistics for the sample. In the first three columns, we display statistics for the whole sample and divide the sample by tenure. Non-White households constitute 5.7 % of all households and this ratio is higher, 7 %, for the renting group. The percentage of household heads that are employed for mortgage owners are 91.7 compared to 42.2 of the renting group. Average weekly income for mortgage owners is GBP 797, which is more than double the renting group (GBP 342). Household heads are more likely to hold a higher education qualification if they are home owners. Renting households are more likely to be female and less likely to be married when compared to mortgage owners. Descriptive statistics We also present characteristics of White and non-White households within each tenure choice. In the renting group White households are older than non-White households with mean ages 48.4 years and 40.8 years, respectively. A larger percentage of White renting households are over 65 and retired. In terms of size White households are smaller (mean size 2.18 members) than non-White ones (mean size 2.88 members), and a significantly lower percentage of White households have higher educational attainment. Although there are large observable differences between non-White and White household renters, these groups look much closer in terms of households' characteristics for households with mortgages. They have similar age distributions, educational and employment characteristics. One difference between the two groups is the average weekly income where White households earn on average £39 more per week. In the latter four columns of Table 2 we divide the sample by racial origin. A higher percentage of non-White households (51.5 %) do not have mortgages compared to White households (40.9 %). Within the non-White group, the ratio seems to be driven by Black households, of whom 64.9 % do not have a mortgage. 
For Asian households this ratio is 41.6 % and close to the White households' mortgage ownership. Black households have a lower employment rate, are employed in lower paid occupations and have the lowest average weekly income. Compared to White households a lower percentage of Black households have a basic educational attainment (up to GCSE level), although they are more likely to have received (post-16 years of age) higher education. Furthermore, a higher proportion of Black household heads are female (55.2 %) and single (64.2 %). A larger fraction of Asian households are employed (73.3 %) with 20.4 % of them occupying managerial roles. Asian households' gross weekly income is the highest of all groups and they are more likely to have a degree in higher education and to be married. Overall the descriptive statistics illustrate that non-White UK households are a diverse group with Asian households on average being economically better-off, while Black households are most economically disadvantaged. White versus non-White households We present propensity score estimations for the whole sample as well as the sub-samples in Table 3. Briefly, we find that non-White households have lower incomes and a larger household size. They are also more likely to experience income gap and be a benefit recipient. Non-White household heads are more likely to be married and less likely to work in ***, **, and * indicate statistical significance at the 1 %, 5 %, and 10 % levels, respectively managerial positions. The results, sign and significance of the coefficients, are consistent across sub-groups, the only exception being variables in occupation classification. We match non-White households with one, four and eight corresponding White households. To verify the quality of matching we plot the distribution of the propensity score for both groups before and after matching for the whole sample (Fig. 1). In the unmatched sample, the propensity score distribution of the White households is skewed to the left, whereas it is very close to that of the non-White households in the matched sample. This result suggests that the matches are appropriate. We present the average treatment effect on the treated (ATT) in Table 4. Estimations include sub-groups of households where we are interested to see whether not having a mortgage is prevalent in sub-groups that have similar income levels, employment status, age, savings and income gap. 13 In Column 1 we observe that for the whole sample ATT is significant. In other words, for a household, on average, the effect of being non-White increases the likelihood of not having a mortgage. Subsequently, we match propensity scores for two sub-groups above and below median income. We find that the difference between non-White and White households remains statistically significant for the below median income group. In contrast, ATT loses its significance for households above median income. This suggests that at higher levels of income non-White households have similar chances having a mortgage as White households. Looking at the households with an employed head, we still find ATT to be statistically significant. Thus, employment itself does not alleviate the chances of non-Whites have a lower chance of obtaining a mortgage. Next we scrutinize the sample by age, specifically for younger households who need access to mortgages. In 2013 the average UK house buyer was likely to be aged 36 when they acquired their first property (Office of National Statistics 2014). 
Hence, we examine the sample with household heads younger than 37 years old. We find that ATT is still significantly present for this group. It could be argued that the probability of holding a mortgage is also related to the households' ability to put down a deposit; therefore, having savings may be a precondition to borrowing. With this in mind we examine the sub-group of households reported to have savings. Results show that non-White households have a lower likelihood of having a mortgage even if they have savings. We also look at the circumstances where households Fig. 1 Distribution of property score of white and non-white households before and after matching 13 We recognize that these variables are all endogenous in the model. While this is an important caveat to keep in mind for results presented in this section, we think it is nonetheless interesting to see the results for sub-groups. ***, ** and * represents significance levels at 1 %, 5 % and 10 %, respectively are less likely to save a deposit. Income gap shows the cases where the household's expenditure is higher than its income. The result does not change. As shown in Section 2, the UK housing market changed dramatically following the start of the financial crisis. Bank mortgage lending contracted sharply in the post crisis period with the poorer households (namely, the young and ethnic groups) most likely being the most adversely affected. To capture differences in accessing mortgages between these two periods, we repeat our analysis separately for the pre- (2003-2006) and post-(2007-2010) crisis period. Results are presented in the first two columns of Table 5. We find that results are similar as the effect of being a non-White household increases the likelihood of not having a mortgage in both periods. However, it is worth noting that, in the post 2007 period, the statistical significance of ATT weakens. We also examine whether our results differ in regions where larger communities of non-White households live. We divide the regions into two groups using a 5 % threshold of non-White household presence. Regions that have more than 5 % non-White households to total residents include the North West, Yorkshire and Humber, East Midlands, West Midlands, London and the South East (Office for National Statistics 2012). We hypothesize that in regions where there are more non-White residents lending institutions will be less likely to not provide mortgages to these households as they constitute a large customer base. In addition lending personnel may have developed expertise to assess mortgage risks and have learned to deal appropriately with specific issues that are distinctive to various ethnic groups. On the other hand, in regions where non-White residents are rare, lending institutions may not develop such expertise and therefore in such regions ethnic minorities are less likely to have mortgages. We present our results in the latter two columns of Table 5. We find that the ATT is larger and has higher statistical significance in the sample from regions where non-White households constitute less than 5 % of the total residents. This result provides some evidence that non-White households living in regions where fewer minorities reside are less likely to have a mortgage. Asian and Black households This section compares Asian and Black household separately to White households as they have been shown to have different socio-economic characteristics noted earlier in this paper. Our findings are presented in Table 6. 
The main results for the whole sample are shown in the first two columns. We find that ATT is insignificant when Asian and White households are Table 5 Pre-and post-crisis periods and regional analysis. This table reports results for the propensity score matching estimates of the average treatment effect (ATT) of being a non-White household on the likelihood of not having a mortgage. Robust standard errors are bootstrapped Number of controls matched Pre-crisis period (2003)(2004)(2005)(2006) During and post-crisis period (2007)(2008)(2009)(2010) Regions with more than 5 % non-White population Regions with less than 5 % non-White population ***, ** and * represents significance levels at 1 %, 5 % and 10 %, respectively ***, ** and * represents significance levels at 1 %, 5 % and 10 %, respectively ***, ** and * represents significance levels at 1 %, 5 % and 10 %, respectively matched. The effect of being an Asian household does not reduce the likelihood of not having a mortgage when compared to White households. On the other hand, ATT is positive and significant when we match White households with Black households. Black households are less likely to have a mortgage when compared to White households. We re-do our analysis for sub-groups pre-and post-crisis period. Our results are presented in columns two to five in Table 6. Similar to the aforementioned findings, we find that, when compared to White households, only ATTs for Black households are significant. Results for Asian households are not statistically significant. These findings are consistent in both the pre and post 2007 periods, with ATTs becoming slightly larger in the latter period when we compare Black to White households. We also separate the regions with higher or lower non-White household populations (again using the 5 % threshold) and compare the sub-groups of households with White households within these two regional groups. Results, presented in the latter four columns of Table 6, show that Black households, whether or not they live in a region with a higher portion of non-White residents, are less likely to have a mortgage. It is worthwhile to note that ATT are larger for the regions with less than a 5 % non-White population. Overall, our sub-group results reveal that our main findings for the non-White households are driven by the Black household sample. Asian households do not seem to have a lower probability of having a mortgage when compared to White households. Hence, we shift our focus to Black households and stratify this sample by income levels, employment status, age, presence of savings and income gap. Results are presented in Table 7. We find that the difference between Black and White households remains statistically significant when we examine households that have below median income, have an employed household head and have savings. Younger Black households also seem to be less likely to have mortgages in comparison to their White peers. In contrast, ATTs are not significant for above median income groups. Black households with higher levels of income seem to have similar chances of obtaining a mortgage as White households. Conclusion We investigate the probability of households obtaining mortgages in the UK using a sample of 29,732 households between 2003 and 2010. Using PSM we compare mortgage ownership of White and non-White households which are very similar in terms of their observable characteristics. 
On average, we find no difference in the probability of obtaining a mortgage for White and non-White households. However, we do find that low-income Black households are less likely to hold mortgages. We do not observe differences between Black and White households at higher income levels. The 2007/08 financial crisis does not seem to have had any influence in making the situation worse for lower-income Black households, as they have a higher chance than Whites of not having a mortgage both pre- and post-crisis. There are geographical disparities in our findings: Black households are less likely to have mortgages in regions where fewer minorities reside. In contrast, we find that Asian households, unlike Black households, do not seem to have a lower probability of having a mortgage compared to White households. Despite possible limitations of our methodology and data, we still argue that more work needs to be done to explain why Black households appear to make less use of mortgage finance. One shortcoming of our analysis is that we only observe the tenure status of households as a reduced-form outcome; therefore, we cannot determine whether the lower probability of having a mortgage is a result of demand- or supply-side factors. It may also be claimed that the results capture self-exclusion due to Black households' possible cultural or religious differences. Our findings parallel those from the literature on SME financing in the UK, where Black business owners are found to have the lowest usage of bank finance. Perhaps, similar to the Black business case, Black households with low incomes may feel discouraged and do not apply for mortgages due to the belief that they would be rejected. An avenue for research may be to further investigate these issues. Policy makers in the UK should seek to develop mechanisms that require lenders to demonstrate that they are not discriminating against certain groups in the mortgage market on the basis of non-economic criteria, and should also consider ways to reduce possible barriers that may inhibit Black households' use of the mortgage market.
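For readers who want to see the mechanics of the matching exercise described above, the sketch below illustrates one way to compute a nearest-neighbour propensity score matching estimate of the ATT with bootstrapped standard errors, in the spirit of the table notes. It is an illustrative outline only, not the authors' implementation: the column names (non_white, no_mortgage), the logistic propensity model and single-neighbour matching are all assumptions made for the example.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_psm(df: pd.DataFrame, covariates: list,
            treat_col: str = "non_white", outcome_col: str = "no_mortgage") -> float:
    """Nearest-neighbour propensity score matching estimate of the ATT."""
    X = df[covariates].to_numpy()
    t = df[treat_col].to_numpy()
    y = df[outcome_col].to_numpy()

    # Propensity score: probability of being a non-White household given covariates.
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

    # Match each treated household to the control household with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(ps[t == 0].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[t == 1].reshape(-1, 1))

    # ATT: mean gap in the outcome between treated units and their matched controls.
    return float(np.mean(y[t == 1] - y[t == 0][idx.ravel()]))

def bootstrap_se(df: pd.DataFrame, covariates: list, n_boot: int = 200) -> float:
    """Bootstrapped standard error of the ATT, mirroring the paper's table notes."""
    rng = np.random.default_rng(0)
    atts = [att_psm(df.sample(frac=1.0, replace=True, random_state=int(s)), covariates)
            for s in rng.integers(0, 2**31 - 1, size=n_boot)]
    return float(np.std(atts, ddof=1))
```

Sub-group results such as the pre-/post-crisis split or the regional 5 % threshold would simply rerun the same routine on the corresponding subsets of households.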
v3-fos-license
2022-08-26T06:17:06.928Z
2022-08-24T00:00:00.000
251810541
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1073/pnas.2210321119", "pdf_hash": "8bb6d8761fc5f6a0bb54490b5e6559802474908f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2846", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "05a750229173ccfd5925b7ae3999577bced7d310", "year": 2022 }
pes2o/s2orc
Long noncoding RNA CHROMR regulates antiviral immunity in humans Significance An effective innate immune response to virus infection requires the induction of type I interferons and up-regulation of hundreds of interferon-stimulated genes (ISGs) that instruct antiviral functions and immune regulation. Deciphering the regulatory mechanisms that direct expression of the ISG network is critical for understanding the fundamental organization of the innate immune response and the development of antiviral therapies. We define a regulatory role for the primate-specific long noncoding RNA CHROMR in coordinating ISG transcription. CHROMR sequesters the interferon regulatory factor (IRF)-2/IRF2BP2 complex that restrains ISG transcription and thus is required to restrict influenza virus replication. These data identify a novel regulator of the antiviral gene program in humans and provide insights into the multilayered regulatory network that controls the innate immune response. RNA isolation, cell fractionation and qPCR. Total RNA was isolated using TRIzol reagent (Invitrogen) and Direct-zol RNA MicroPrep columns (Zymo Research). For cell fractionation experiments RNA was isolated from separate cytoplasmic and nuclear fractions using the PARIS kit (Thermo Fisher Scientific). Upon isolation, RNA was reverse transcribed using iScript cDNA Synthesis kit (Bio-Rad Laboratories) and quantitative PCR analysis was conducted using KAPA SYBR green Supermix (KAPA Biosystems) according to the manufacturer's instructions and quantified on Quantstudio 3 (Applied Biosystems). Fold change in mRNA expression was calculated using the comparative cycle method (2 −ΔΔCt ) normalized to the housekeeping gene GAPDH. A list of primers used in this study can be found in SI Appendix, Table S6. global impact of trimming. Reads were then aligned to the human genome (hg38) using Bowtie2 (5). Alignments were sorted and indexed using Samtools for downstream processes (6). MACS2 was then used to identify significant peaks (7). Peaks with a Q-value of less than 0.05 were retained. Custom scripts and the ChIPQC R package were used to assess ChIP-seq peak quality and reproducibility (8). Peaks present in all replicates from each condition were retained for differential enrichment analysis. Peaks were annotated using ChIPseeker package from Bioconductor (9). Peaks that overlapped a 4kb window centered at an annotated transcription start site were annotated as promoter peaks. Differential enrichment analysis was performed using DiffBind package from Bioconductor (10). Peaks with a false discovery rate (FDR-)adjusted P-value of 0.1 or less were considered differentially enriched between the knockdown and control conditions. Functional analysis of significantly enriched peaks was performed using Genomic Regions Enrichment Annotations Tool (GREAT) with default parameters (11). Motif enrichment analysis was performed on 3kb windows centered at the TSS of genes with differentially enriched promoter peaks using Hypergeometric Optimization of Motif EnRichment (HOMER) with the command 'findMotifs.pl' (12). ChIP-seq data are deposited in the GEO under the accession number GSE190413. Chromatin Isolation by RNA Precipitation (ChIRP). 
Cell harvesting, lysis, disruption, and chromatin isolation by RNA purification were performed as previously described (13) with the following modifications: 1) Cells were cross-linked in 3% formaldehyde for 30 min, followed by 0.125 M glycine quenching for 5 min; 2) Hybridization was performed for 16h; 3) For mass spectrometry (MS) experiments, lysates were pre-cleared by incubating with 30 mL washed beads per mL of lysate at 37°C for 30 min with mixing; 4) As a negative control, lysates were pooled and aliquoted into equal amounts and RNA was removed by incubating with RNase A (1 μg/mL, Sigma), and subsequent incubation at 37°C for 30 min prior to hybridization steps. RNA, DNA, protein isolation was performed as described (13) and further detailed below for ChIRP followed by DNAseq (ChIRP-seq) or Comprehensive Identification of RNA-binding Proteins by Mass Spectrometry (ChIRP-MS). RNA extraction was performed for validation of lncRNA enrichment. A list of probes used in this study can be found in SI Appendix, Table S6. ChIRP followed by DNA-seq (ChIRP-seq). DNA was eluted from hybridized magnetic beads and subjected for Illumina sequencing. In short, beads were washed at room temperature with ChIRP wash buffer (EMD Millipore, #17-10494). Beads were subsequently captured using a DynaMag magnet (Thermo Fisher Scientific) and DNA was eluted by suspending beads in elution buffer (20 mM Tris pH 7.4, 1% SDS, 50 mM NaHCO3, 1 mM EDTA). ChIRP eluates were reverse crosslinked at 65°C for 4h, digested with Proteinase K (EMD Millipore) at 55°C followed by incubation with RNase cocktail (Ambion). ChIRP purified DNA was cleaned using PCR purification columns (Zymo Research) and subjected to Illumina sequencing. Reads were trimmed using Trimmomatic (14) and mapped to hg19 using BWA (15). Peaks were then called for each probe set and replicate using the 'callpeak' function from MACS2 (7) relative to the input from the same replicate. Peaks were then imported into the DiffBind package from Bioconductor (10) and differential peaks were called between even and odd probe sets. Only peaks with no differential binding between the probe sets were retained. Peaks were then assigned to their nearest genomic location using ChIPseeker package from Bioconductor (9). ChIRP-seq data are deposited in the GEO under the accession number GSE190413. Comprehensive Identification of RNA-binding Proteins by Mass Spectrometry (ChIRP-MS). Protein was isolated from magnetic beads and analyzed by MS. To elute protein beads were collected on magnetic stand, resuspended in biotin elution buffer (12.5 mM D-biotin (Thermo Fisher Scientific), 7.5 mM HEPES pH 7.5, 75 mM NaCl, 1.5 mM EDTA, 0.15% SDS, 0.075% sarkosyl, and 0.02% sodium deoxycholate). Trichloroacetic acid (25% of total volume) was added to the clean eluent and proteins were precipitated at 4°C overnight. Proteins were pelleted at 16,000 g at 4°C for 30 min, washed with cold acetone and pelleted again at 16,000 g at 4°C for 5 min. Proteins were immediately solubilized in desired volumes of Laemmli sample buffer (Invitrogen) and boiled at 95°C for 30 min with occasional mixing to reverse crosslinking. Final protein samples were sizeseparated in Bis-Tris SDS-PAGE gels (Invitrogen) and submitted for MS analysis by the Proteomics Laboratory at NYU Langone Health. Individual samples were subjected to liquid chromatography (LC) separation with MS using the autosampler of an EASY-nLC 1000 (Thermo Fisher Scientific). 
Subsequently, peptides were gradient eluted from the column directly to a Q Exactive mass spectrometer using a 1 h gradient (Thermo Fisher Scientific). High-resolution full MS spectra were acquired with a resolution of 70,000, an AGC target of 1 × 10⁶, a maximum ion time of 120 ms, and a scan range of 400 to 1,500 m/z. Following each full MS scan, twenty data-dependent high-resolution HCD MS/MS spectra were acquired. All MS/MS spectra were collected using the following instrument parameters: resolution of 17,500, AGC target of 5 × 10⁴, maximum ion time of 120 ms, one microscan, 2 m/z isolation window, fixed first mass of 150 m/z, and NCE of 27. MS/MS spectra were searched against a UniProt human database using Sequest (16) within Proteome Discoverer (Thermo Fisher Scientific). Only high-confidence peptides, based on a better than 1% FDR searched against a decoy database, were included for peptide identification. RNA Immunoprecipitation. Human histone H3, IRF2BP2, and HNRNPLL were immunoprecipitated from PMA-differentiated THP-1 macrophages. All immunoprecipitations were done using the MagnaRIP RNA-Binding Protein Immunoprecipitation Kit (EMD Millipore) according to the manufacturer's instructions. Briefly, an antibody targeting human histone H3 (Abcam, ab1791), IRF2BP2 (Abcam, ab220155), or HNRNPLL (Cell Signaling, 4783), or an isotype-matched control antibody (Sigma, 12-370 or 12-371), was bound to magnetic beads and incubated with lysed cells at 4°C for 24 h. Beads were isolated, bound proteins were digested with proteinase K, and coprecipitated RNA was purified. qPCR analysis of total RNA was performed to detect enrichment of CHROMR variants and control genes in the protein-of-interest precipitated fraction; enrichment was determined as a percentage of the 1% input control. Mutagenesis studies. The interaction between IRF2BP2 and CHROMR3 was analyzed by mutating the putative interaction site between IRF2BP2 and CHROMR3. Two GG-doublets in the sequence of a plasmid overexpressing CHROMR3 (2) were replaced with two CC-doublets, creating a plasmid overexpressing CHROMR3-G4mut. All mutations were performed using the QuikChange XL kit (Stratagene) with the primers indicated in SI Appendix, Table S6. Bioinformatics. Enrichment analysis of interferon-stimulated genes was performed using Enrichr (17), a web-based tool for gene set enrichment analysis, against the ChIP Enrichment Analysis (ChEA) 2016 database (18), which contains results from transcription factor ChIP-seq studies extracted from supporting material. Results were manually curated to remove duplicate studies or studies performed with non-human transcription factors. The catRAPID algorithm (19) was used to estimate the binding propensity of IRF2BP2 and CHROMR. The interaction score is generated using the interaction propensity distribution of a reference set, as described (20). The QGRS Mapper (21) was used for recognition and mapping of putative quadruplexes in CHROMR. RNAfold, part of the Vienna RNA Websuite (22), was used to predict the minimum free energy secondary structure of CHROMR3, and the RNA plot was created with RNArtist, developed by Fabrice Jossinet and available at https://github.com/fjossinet/RNArtist. (Table: CHROMR expression levels in whole blood of patients infected with influenza. *Patient status at time of whole blood RNA sequencing.)
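The two qPCR read-outs used in these methods, fold change by the comparative cycle (2^-ΔΔCt) method and RIP enrichment expressed as a percentage of the 1% input control, reduce to a few lines of arithmetic. The sketch below is a minimal illustration under our own function names and example values; it is not code from the study.

```python
import numpy as np

def fold_change_ddct(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Comparative cycle (2^-ΔΔCt) fold change of a target gene relative to GAPDH,
    normalised to a reference (e.g. untreated) sample."""
    delta_ct = ct_target - ct_gapdh              # ΔCt of the sample of interest
    delta_ct_ref = ct_target_ref - ct_gapdh_ref  # ΔCt of the reference sample
    return 2.0 ** -(delta_ct - delta_ct_ref)     # 2^-ΔΔCt

def percent_of_input(ct_ip, ct_input, input_fraction=0.01):
    """RIP enrichment as a percentage of input, where ct_input was measured on a
    1% input aliquot and is first adjusted to represent 100% of the input."""
    ct_input_100 = ct_input - np.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_100 - ct_ip)

# Example with hypothetical Ct values: enrichment of a CHROMR variant in an IRF2BP2 RIP.
# print(percent_of_input(ct_ip=28.1, ct_input=26.5))
```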
v3-fos-license
2020-02-12T14:04:23.322Z
2020-02-10T00:00:00.000
211078077
{ "extfieldsofstudy": [ "Medicine", "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/s12909-020-1956-5", "pdf_hash": "3b73d9314ac1efd28cac3130072e326d50ac4b1e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2847", "s2fieldsofstudy": [ "Psychology" ], "sha1": "7f34d828e266d3be5d598a92fdf0fa528cc4d63c", "year": 2020 }
pes2o/s2orc
Pilot study of the influence of self-coding on empathy within an introductory motivational interviewing training Background Motivational interviewing (MI) is a framework for addressing behavior change that is often used by healthcare professionals. Expression of empathy during MI is associated with positive client outcomes, while absence of empathy may produce iatrogenic effects. Although training in MI is linked to increased therapeutic empathy in learners, no research has investigated individual training components’ contribution to this increase. The objective of this study was to test whether a self-coding MI exercise using smartphones completed at hour 6 of an 8-h MI training was superior in engendering empathy to training as usual (watching an MI expert perform in a video clip for the same duration at the same point in the training). Methods This was a pilot study at two sites using randomization and control groups with 1:1 allocation. Allocation was achieved via computerized assignment (site 1, United Kingdom) or facedown playing card distribution (site 2, United States). Participants were 58 students attending a university class at one of two universities, of which an 8-h segment was dedicated to a standardized MI training. Fifty-five students consented to participate and were randomized. The intervention was an MI self-coding exercise using smartphone recording and a standardized scoring sheet. Students were encouraged to reflect on areas of potential improvement based on their self-coding results. The main outcome measure was score on the Helpful Responses Questionnaire, a measure of therapeutic empathy, collected prior to and immediately following the 8-h training. Questionnaire coding was completed by 2 blinded external reviewers and assessed for interrater reliability, and students were assigned averaged empathy scores from 6 to 30. Analyses were conducted via repeated-measures ANOVA using the general linear model. Results Fifty-five students were randomized, and 2 were subsequently excluded from analysis at site 2 due to incomplete questionnaires. The study itself was feasible, and overall therapeutic empathy increased significantly and substantially among students. However, the intervention was not superior to the control condition in this study. Conclusions Replacing a single passive learning exercise with an active learning exercise in an MI training did not result in a substantive boost to therapeutic empathy. However, consistently with prior research, this study identified significant overall increases in empathy following introductory MI training. A much larger study examining the impact of selected exercises and approaches would likely be useful and informative. Motivational interviewing (MI) Motivational Interviewing (MI) has a 35-year research history and is considered an efficacious clinical framework for resolving ambivalence and addressing behavior change, especially related to behavioral healthcare and addictions [1]. For example, MI is often included as an element in education and training on screening, brief intervention, and referral to treatment (SBIRT) [2]. As research on MI training and applications has progressed, increasing focus has been placed on the positive influence of therapeutic empathy on MI-consistent counseling behaviors [3], synchrony of language used between client and counselor [4], direct client-level behavioral outcomes [5], and general cohesion with the spirit of MI [6]. 
Notably, low therapist empathy may predict poor treatment outcomes [5]. There is therefore value in focusing specifically on acquisition of therapeutic empathy within MI training. At the same time, measurement of MI training outcomes is complicated by the fact that training formats vary in terms of delivery and methods. For example, one meta-analysis of 28 MI training studies identified seven studies lasting fewer than 8 hours, 16 studies lasting between nine and 16 h, and five studies featuring extended timeframes [7]. MI trainings typically are delivered in a workshop format, though trainings can also include addons such as teleconferencing and booster sessions [8]. Research has indicated that a variety of workshop-driven formats, including those incorporating feedback and coaching, but also standalone workshops, produce superior proficiency to self-study controls [9]. MI skills development appears to be more sustainable when coaching and feedback are provided post-training [8]. Of particular interest for this study, researchers have also used the Helpful Responses Questionnaire (HRQ) [10], a measure of learner empathy, as a means of assessing the impact of MI training [11][12][13]. This work has generally found that MI training improves HRQ scores by a significant and meaningful amount. Teaching techniques within MI workshops The existence of a formal Motivational Interviewing Network of Trainers (MINT) and competency requirements [14] provides some internal consistency of training workshop components. MI workshops with a MINT trainer often begin with a two-day workshop (e.g., [15]). The workshop generally includes didactic content, roleplay and real-play (role-play in which the individual processes a scenario as him/herself in a realistic context), and video observation of expert MI practitioners. Roleplay and real-play are thought to be especially important, not only in terms of practicing applicable skills, but also because the type of learning that occurs in the context of self-reflection produces stronger outcomes than those attributed to an exclusively didactic style of delivery [16]. Purpose The present investigation began with a supposition based on observations of the lead author that a selfcoding exercise was the point in his own MI training workshops where learners seemed to grasp the clinical application of MI. There has been little research into MI self-coding within workshops, with 1 notable exception [17], and no research has been conducted regarding the effects of specific components of MI training workshops on development of learning outcomes, including therapeutic empathy. At the same time, the importance of investigating 'within workshop' MI training elements was noted in a recent editorial outlining necessary directions for MI research [18]. General health and medical education research suggests that a self-coding exercise following a brief real-play may be an especially effective MI training element, as it combines aspects of experiential adult learning [19,20] and structured assessment following role-play [21]. However, there is no extant research regarding the effect on learner outcomes, including development of therapeutic empathy, attributable to any single component of an MI workshop. This paper therefore describes a pilot study conducted among undergraduate students in both the United States (USA) and United Kingdom (UK). 
The study investigated whether a standard eight-hour MI workshop with an MI self-coding exercise (intervention) delivered 6 hours into the workshop was superior in building participant empathy when compared with the same workshop with students watching a video of an MI expert performing MI (control) in place of the self-coding exercise. Ethics The institutional review boards at both study sites approved this study (Sheffield Hallam University, #ER5231303, and Indiana State University, #1151112-2). Participants During the semester designated for the study, all students who either registered for and attended an undergraduate screening, brief intervention, and referral to treatment elective class within the Department of Social Work (of which 8 h were MI training) at Indiana State University, USA, or who registered for and attended a third year undergraduate nutrition class (of which 8 h were MI training) at Sheffield Hallam University, UK, were recruited. These potential participants were healthcare students either studying to become social workers or nutritionists. The MI approach can be used by a wide variety of fields, and has been taught to numerous healthcare disciplines, including social work and nutrition [22]. Thus, the only exclusion criterion was refusal to participate after reading the study information sheet. Excluded students still participated in the eight-hour training but were not asked to complete any study questionnaires. Interventions All participants first received a six-hour training block of introductory MI training conducted by one of two study authors (TS and MD), who are members of MINT; the training content was commensurate with recommendations by MINT for an introductory MI training [23]. Then, participants randomized to the intervention were led to a separate area to complete a self-coding exercise with a partner. Participants randomized to the control group remained in the classroom and watched a video of an expert performing MI. All participants completed the remainder of the MI training (approximately 100 additional minutes) after completing either the intervention or the control exercise. The self-coding intervention was a real-play experience where each participant was asked to identify an aspect of their lives that they felt ambivalent about changing and were comfortable both discussing with a classmate and recording. Exemplar topics included physical activity, diet, smoking, or alcohol consumption, but no topic was specifically excluded. Each member of each pair counseled the other about the identified behavior using applicable MI skills. Participants were instructed to audio record their session as the helping professional. Audio recording was completed using each participant's personal smartphone (using memo recording, voice recording, or a camera function without video enabled), with recording devices placed between members of the pair. After recording was completed for both partners, each participant listened to his/her own recording (where they were the helping professional) and completed a self-coding exercise using a coding sheet developed by the first author (see Additional file 1). For the coding exercise, participants were instructed to mark the appropriate box for both MI-consistent (e.g., Affirmations) and MI-inconsistent (e.g., Authoritarian statements) behaviors using tally marks to indicate the number of times each behavior occurred. Space was also provided for participants to add examples. 
Participants were told that they could pause, rewind, and replay the recording as needed. Finally, participants were asked to reflect to themselves, after completing the coding sheet, what went well during their recorded sessions and what, if anything, they would change about their practice in subsequent sessions. To reduce social desirability bias, the self-coding sheet was neither collected nor evaluated by the instructor. Study structure This study was a pilot project using a two-group parallel, randomized controlled design with 1:1 allocation. Outcome measure The HRQ is a six-item free-response questionnaire measuring therapeutic empathy [10] and commonly used to assess learner outcomes in MI training [7]. Participants completed the HRQ at the beginning of the study, and again at the end of the eight-hour training. The tool asked participants to respond to a series of vignettes in an open-ended style, and they were instructed to "think about each paragraph as if you were really in the situ-ation… in each case write the next thing that you would say if you wanted to be helpful" (p. 444) [10]. HRQ scoring was completed by independent expert reviewers using standard criteria; each open-ended response was scored by external reviewers from one to five, with a '1' not only indicating no reflection, but also a 'roadblock' (a response that interrupts dialogue between counselor and client), and a '5' indicating a complex reflection of the client's feeling (or similar metaphor) with no roadblock content present. Total scores therefore can range from 6 to 30. The reviewers were not part of the study team and were blinded to both the group assignment (intervention/control) and the administration time (pre/ post). HRQ scores were the mean of coders' ratings for each individual at each administration point. Interrater reliability Interrater reliability of the two coders was calculated at baseline and follow-up using Krippendorff's alpha [24] with the level of measurement set as interval and 1000 bootstrap samples used to generate confidence intervals. This metric can range from zero to one, with '1' representing perfect reliability. At both baseline and followup, coders exhibited excellent agreement (Baseline: α = .965, LL 95%CI = .944, UL 95%CI = .983; Follow-Up: α = .961, LL 95%CI = .940, UL 95%CI = .975). Sample size and randomization There was no precedent for an estimated effect size of a training modification such as this intervention on learners' therapeutic empathy. Because of this, and given the naturalistic setting of our pilot study within preexisting university classes, the protocol did not utilize an a priori power analysis, choosing instead to invite all enrolled students to participate in the study (n = 79 eligible students, n = 53 analytic sample; see Participant Flow). In the US cohort, simple randomization was achieved using facedown playing cards, and in the UK it was achieved using a computerized random number generator to separate participants [25]. We selected which card suits (US) or numbers (UK) were intervention and control indicators prior to using the mechanisms to sort participants. In the US, an assistant, rather than a member of the study team, passed out the facedown cards. In the UK, a study team member applied the randomly sequenced numbers to the participants as generated. In this way, allocation concealment can be inferred. All individuals generating outcome measure scores (the 'coders') were blinded to both group assignment and measurement point (pre/post). 
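The scoring pipeline described above, six open-ended responses each rated 1-5 by two blinded coders, summed to a 6-30 total and averaged across coders, together with an interval-level Krippendorff's alpha, can be sketched as follows. The ratings shown are hypothetical, the third-party krippendorff package is one possible implementation, and the bootstrap confidence intervals reported in the paper are not reproduced here.

```python
import numpy as np
import krippendorff  # third-party package implementing Krippendorff's alpha

# Hypothetical ratings: two blinded coders score each of the six HRQ items (1-5)
# for each participant; rows are participants, columns are items.
coder_a = np.array([[1, 2, 1, 3, 2, 1],
                    [4, 3, 5, 4, 4, 3],
                    [2, 2, 3, 2, 1, 2]])
coder_b = np.array([[1, 2, 2, 3, 2, 1],
                    [4, 4, 5, 4, 3, 3],
                    [2, 3, 3, 2, 1, 2]])

# Each coder's total per participant (possible range 6-30); a participant's HRQ
# score is the mean of the two coders' totals.
totals = np.stack([coder_a.sum(axis=1), coder_b.sum(axis=1)])  # coders x participants
hrq_scores = totals.mean(axis=0)

# Interval-level interrater reliability across all individual item ratings.
reliability_data = np.stack([coder_a.ravel(), coder_b.ravel()])
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="interval")
print(hrq_scores, round(alpha, 3))
```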
Statistical assumptions and methodology The outcome of interest was the interaction effect of HRQ administration time and group allocation, as it was expected that both groups would naturally display improved therapeutic empathy, but that the experimental group's improvement would be significantly greater. Thus, repeated measures ANOVA was used to generate statistical estimates of effect size and significance via the general linear model, IBM SPSS Statistics 25, and then the plot of means was interpreted [26,27]. Separate analyses of pre-post data by group were completed using Student's t-test and included in Table 1 to more clearly illustrate changes in measured therapeutic empathy over time as a result of the full training, but these analyses should not be used to interpret the effects of the intervention. Data exhibited high levels of skewness and kurtosis, especially at baseline (skew = 2.346 [SE = .327]; kurt = 4.549 [SE = .644]), and Shapiro-Wilk tests of normality indicated violations in both cases (Baseline w = .544, df = 53, p < .001; Follow-Up w = .928, df = 53, p = .003). This is typical for pilot data of this type [28]. There was one univariate outlier slightly exceeding an absolute value of Z = 3.29, but this case did not meaningfully affect overall skewness and kurtosis, so it was retained [29]. Multiple transformations (log, modified log, reciprocal, exponential) were attempted but were unable to achieve non-significant Shapiro-Wilk test values. However, parametric comparison of means is generally robust to violations of normality in the absence of extreme outliers and at least 20 degrees of freedom [29]. Parametric tests also allow for estimation of effect size, in keeping with CONSORT 2010 recommendations [30]. Therefore, the planned comparison strategy was retained over the potential alternative of using non-parametric tests [31]. Participant flow Seventy-nine undergraduates (n = 50 UK, n = 29 US) were eligible for this trial. Only the first 29 students in the UK arm were utilized for analysis to avoid potential overrepresentation bias from different instructors, field of study, or course location in the UK versus the US. After potential participants were provided with a study information sheet, three US students declined to participate. The remaining 55 students were randomized into the self-coding (n = 27) intervention group and the video viewing (n = 28) control group. One US student failed to complete the pre-test (but completed the post-test), and a separate US student failed to complete the post-test (but completed the pre-test). Both students were excluded from primary analyses but their data were included in calculations of interrater reliability. A full participant flow diagram is included as Fig. 1. Empathy characteristics At baseline, both the control and experimental groups demonstrated little therapeutic empathy, with mean scores of 7.00 (SD = 2.74) and 8.17 (SD = 3.79), respectively (within a possible range of 6 to 30). Both groups presented significantly improved empathy (p < .001) by the end of the MI training, with mean scores of 12.48 (SD = 4.40) and 15.41 (SD = 4.05), respectively (see Table 1). Primary analysis A mixed ANOVA using the general linear model found a significant main effect for the MI training program across all students (F(1,51) = 110.83, p < .001). The partial η² statistic (.685, LL 90%CI = .554, UL 90%CI = .757) suggested that the training resulted in a large increase in mean therapeutic empathy for all students, in aggregate.
Although baseline differences between the control and experimental groups were, by definition, random, the between-subjects main effect of group allocation was significant (F(1,51) = 5.79, p = .020) with a partial η² statistic of .102 (LL 90%CI = .001, UL 90%CI = .240). The interaction effect measured the degree to which the change in therapeutic empathy over time was different for the experimental and control groups. This effect was nonsignificant (F(1,51) = 2.12, p = .151), with a partial η² statistic of .040 (LL 90%CI = .000, UL 90%CI = .154), a small effect but one with potential practical implication [32] (see Table 2). The plot of estimated marginal means (Fig. 2) illustrates the implications of the GLM output, as the slope of the experimental group's increase is somewhat sharper, but both groups increased relatively uniformly. Interpretation The notion that experiential learning is useful alongside or instead of didactic delivery of information is not a new concept. Role-playing and self-evaluation are often used when developing adult learning curricula [33]. The question of whether a single exercise within an MI workshop might, by itself, increase therapeutic empathy above more passive information transfer via observation of an expert was heretofore unexplored. This pilot study used randomization and a control group to test the hypothesis that a self-coding exercise at hour six of an eight-hour MI training was superior in building therapeutic empathy to watching a video of an MI expert performing MI. The study outcome did not support rejecting the null hypothesis. While we had speculated that the isolated self-coding exercise might, in and of itself, result in a substantial boost in therapeutic empathy relative to passive learning, our measured effect was non-significant and small (.040), even at the upper bound of the 90% CI. One possible implication of failing to reject the null hypothesis may be that there is no one single point where learners experience a large increase in ability to express empathy, but rather that each separate component of the MI training synergistically builds on the others in increments, resulting in the aggregate gain in therapeutic empathy at workshop conclusion observed in this and other studies. An assessment of whether that is the case would require a larger sample size and, ideally, multiple study arms testing additional learning conditions and approaches. In addition to the general finding about MI workshops, there are two supplemental areas where education research might be influenced. First, prior to this study, the range of realistic effects on therapeutic empathy that might be expected from a single exercise within an MI workshop was unknown. While it is not recommended to base study power analyses solely on effect sizes from pilot tests [34], data from this study suggest that a medium or large effect would likely not be reasonable to expect from a single training modification of this type. Second, our failure to reject the null hypothesis does not imply that the self-coding exercise did not support building therapeutic empathy, but rather that it was not measurably superior, within the context of an introductory MI training, to a passive learning exercise (video viewing). Madson and colleagues [18] described a need to "seek to better understand the effective training ingredients." For practitioners interested in this work, the present study is one of the first steps in this undoubtedly long and complex process.
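Because time has only two levels, the time-by-group interaction from the mixed ANOVA reported above is numerically equivalent to an independent-samples t-test on the pre-to-post change scores, which offers a quick way to check the F(1,51) and partial η² values outside SPSS. The sketch below is illustrative and assumes four arrays of HRQ scores; it is not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def interaction_via_change_scores(pre_ctrl, post_ctrl, pre_exp, post_exp):
    """With a two-level within factor, the time x group interaction of a mixed
    ANOVA equals the squared independent-samples t on pre-to-post change scores."""
    change_ctrl = np.asarray(post_ctrl) - np.asarray(pre_ctrl)
    change_exp = np.asarray(post_exp) - np.asarray(pre_exp)

    t, p = stats.ttest_ind(change_exp, change_ctrl)
    df_error = len(change_ctrl) + len(change_exp) - 2
    f = t ** 2                           # F(1, df_error) for the interaction
    partial_eta_sq = f / (f + df_error)  # same effect-size metric as the paper
    return f, df_error, p, partial_eta_sq

# Example call with hypothetical score arrays:
# f, df, p, pes = interaction_via_change_scores(pre_c, post_c, pre_e, post_e)
```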
Strengths and limitations This study has several limitations. First, outcomes were observed only among undergraduate students enrolled in universities, so extrapolation of the findings to other commonly-trained groups (e.g., experienced therapists) should be done with caution. Second, both the trainers involved in the present investigation are members of MINT, limiting generalizability to workshops run by trainers who are not MINT members (e.g. potentially less experienced). Third, prior experience with MI was not elicited at enrollment for this study. At the same time, since these were undergraduate courses, it is somewhat unlikely that any student would have had extensive prior MI experience. Finally, the study's focus was solely on therapeutic empathy, so findings cannot be generalized to other potential outcomes from MI training, such as lower-level skills (e.g., use of affirmations). This study also has several strengths: The study included students from two different countries (USA and UK), and included students studying several different disciplines, allowing increased generalizability outside of the field of social work to other health-supportive fields that may use MI. We also note a correspondence with prior research on MI workshops that captured HRQ data, as the overall significance and effect size of the MI training on therapeutic empathy in this study mirrors that work [11][12][13]. This supports the overall validity of the study. Conclusions Our findings suggest that a single active learning exercise within an MI workshop for undergraduate learners in social work and nutrition may not be superior to a passive learning exercise in building therapeutic empathy. However, the pilot study itself was eminently feasible, with few barriers to completion, even across continents, raising the potential of developing a larger and more thorough assessment of MI workshop content in order to optimize within-training outcomes across desired domains like empathy. Further, our findings continue to reinforce the probability that even brief (8-h) MI training workshops are likely to increase participants' empathy.
v3-fos-license
2019-01-22T22:22:00.932Z
2018-11-01T00:00:00.000
56481504
{ "extfieldsofstudy": [ "Business", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1016/j.techsoc.2018.07.007", "pdf_hash": "a328e342a97cb754214bde790290e731c0691a81", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2848", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Economics" ], "sha1": "a328e342a97cb754214bde790290e731c0691a81", "year": 2018 }
pes2o/s2orc
Perception and adoption of a new agricultural technology: Evidence from a developing country Adoption of new agricultural technologies is always at the center of policy interest in developing countries. In reality, despite the visible benefits of many of the new agricultural technologies, including machinery and management practices, farmers either do not adopt them or it takes a long time to begin the adoption process and scaling up. To enhance the provision of irrigation using surface water and to enhance irrigation efficiency, Bangladesh has been trying to introduce the axial-flow-pump (AFP) appropriate for surface water irrigation, which can lift up to 55% more water, conditional on the water head, than a conventional centrifugal pump. Despite the visible benefits of the AFP, the uptake of the AFP for irrigation is low in the targeted zone of Bangladesh. The present study demonstrates that the new technology must be modified to adapt to local demand and specifications. Most importantly, the price of the new technology must be competitive with the prices of the existing available substitute technologies to ensure a rapid uptake and scaling up of this new agricultural technology. Introduction Nearly 11% of the 7.42 billion world population is extremely poor. They are concentrated mostly in the rural areas of Southern Asia and sub-Saharan African countries, 78% of whom rely on agriculture for their livelihoods [1,2]. As the majority of the rural poor depend on agriculture for their livelihoods, agricultural growth can have paramount impacts on rural poverty alleviation. In fact, agricultural GDP growth is at least twice as effective in reducing poverty as GDP growth in other sectors [3]. It means if a 1% increase in GDP in any non-farm sector can lead to a reduction of poverty by 1%, the poverty reduction will be 2% with 1% growth in the agricultural GDP [3]. Because of its profound impacts on poverty alleviation, ensuring agricultural growth is the center of the development policies, particularly in povertystricken agrarian developing countries. However, the sustainable growth of the agricultural sector critically depends on the adoption of improved, scale-appropriate and ecofriendly technologies, including new disease-resistant and climate-adjusted seeds, modern management practices, and conservation of resources using scale-appropriate new agricultural machinery. The adoption of new technology in agriculture is, therefore, at the core of agricultural growth and, thus, rural poverty alleviation. Unfortunately, the adoption of new agricultural technology, including agricultural machinery, is seldom rapid [4], as a large number of factors can affect the adoption process [5][6][7][8][9][10][11]. This is because, new agricultural technologies are often correlated with risks and uncertainties about proper application, scale appropriateness and suitability with the prevailing environment, and importantly with farmers' perceptions and expectations [3]. Examining farmers' perceptions of a new agricultural technology is, therefore, critically important to ensure the adoption and scaling up of the technologies, thereby, ensuring sustainable growth and development of the agriculture sector. 
Using primary data collected from 70 sampled irrigation service providers in Bangladesh, who were given an axial-flow-pump (AFP) for free for a season under a demonstration program, and by examining the users' perceptions, the present study demonstrates the need for continuous modification of a new technology based on the requirements of the farmers. Modifications should be done at least in the initial stage of the development and deployment of a technology for the rapid adoption of a new agricultural technology. It can minimize the gap between the actual and the expected performance of a new agricultural technology, which in turn can critically influence the adoption and diffusion of that new technology. In addition, the price of a new technology must be competitive with alternative competing available technology. The case is worth investigating for several reasons. The physical size of Bangladesh (147,570 km 2 ) is almost equivalent to the US state of Georgia, yet the population of Bangladesh (158.9 million) at present is almost half of the entire US population [12]. Yet, Bangladesh is one of the few countries that has achieved rice production self-sufficiency. In the early 1970s, with a population of less than 70 million, the country faced massive food shortages and famine [13,14]. In 1970, the total cereal production in Bangladesh (rice, wheat and maize) was 16.8 million tons, which increased to 56.4 million tons by 2016 [15]. Consequently, currently with a population of nearly 160 million, the country is almost self-sufficient in food production. In 2016, with a production of 52.6 million tons of only paddy rice, Bangladesh ranked as the fourth largest paddy-rice-producing country in the world, after China, India and Indonesia [15]. The remarkable success in cereal production, and particularly in paddy production, thereby achieving rice-food production self-sufficiency, is mainly attributed to the rapid adoption of modern highyielding varieties (HYV) along with the expansion of the ground-waterbased, private-led, small-scale shallow tube well-based irrigation system [14]. However, the massive extraction of groundwater for irrigation in the entire Indo-Gangetic Plain (IGP) including Bangladesh has substantially reduced the groundwater table in Pakistan, India and Bangladesh [16]. Despite recurrent floods in Bangladesh, the groundwater level in the northwest and southwest regions has been declining by between 0.01 and 0.05 m/year [17,18]. Considering the consequences of rapidly-depleting ground-water reserves on sustainable development in the future, some studies considered the phenomenon as a major policy failure [19]. Alarmingly, in addition to the problem of the declining ground water, global climate change can also generate severe threats on sustainable agriculture in the entire IGPthe most densely-populated and intensively-cultivated region in the world. In the arid and semi-arid regions of Asia, it is estimated that the irrigation water demand will increase by 10% at the lowest for a 1°C increase in temperature [20,21]. As an enhancement of irrigation effectiveness can reduce water requirements by nearly 50% [22], the expansion of the directly-renewable surface water-based irrigation system can be an effective remedy to the problems related to over extraction of ground water in the entire IGP including Bangladesh. In this case, the deployment of hydraulically efficient AFPs in suitable cases can be instrumental in expanding surface water-based irrigation systems. 
Second, in Bangladesh, farmers generally use centrifugal pumps for irrigation. A number of carefully-controlled scientific experiments by the International Maize and Wheat Improvement Center (CIMMYT), and its national partners in Bangladesh confirm that an AFP can lift from 28% to 55% more water conditional on the water head [23]. The closer the water head, the more efficient is the AFP [23]. A rapid adoption of AFPs in suitable cases, and AFP replacement of centrifugal pumps, where suitable, therefore, can reduce the costs of production of boro rice. The rice production costs in Bangladesh have been spiraling over the years [24]. Currently, the boro rice cultivation cost in Bangladesh is USD1319/ha, of which irrigation cost is 13.4% (USD178/ha) of the total cost [25]. As the benefit-cost ratio of cultivation of boro rice is 0.82 [25], a reduction in the irrigation costs due to a potentially rapid adoption of AFP in suitable areas can significantly improve the current benefit-cost situation in boro rice cultivation in the targeted regions. Under the initiative of CIMMYT, Bangladesh, from 2012 to 13, AFPs were made available in the southern region of Bangladesh for farmers' purchase. However, from October 2013 to September 2017, so far only 888 AFPs have been purchased by the lead farmers, who also provide irrigation services to other client farmers, in the southern region of Bangladesh, and the current land coverage using AFPs is 19,287 ha [26]. Currently in Bangladesh, there are 173.2 thousand surface waterbased low-lift pumps (LLPs), 1417 thousand ground-water-based shallow-tube wells and nearly 37 thousand deep tube wells are engaged in irrigating 5.31 million hectares of land [27]. The LLPs irrigate 1.16 million hectares of land, which is nearly 22% of the total irrigated land in Bangladesh [27]. The largest number of LLPs are deployed in Chattogram Division (41,514), Sylhet Division (41,384), Khulna Division (32,741), Dhaka Division (20,581) and in Barishal Division (14,459) [27]. Currently, out of a total of 173,179 low-lift pumps in Bangladesh, 163,764 were diesel engine-based and 9415 were electric motor-based irrigation pumps [27]. Overall, a replacement of the less-efficient diesel engine-based centrifugal pumps with the more efficient AFPs can improve Bangladesh's terms of trade by reducing diesel imports, because the water-lifting capacity of an AFP is high and, consequently, the required operation time for an irrigation machine with an AFP will be less than before. Consequently the total diesel demand will be lower than before. A replacement of the diesel engine-based centrifugal pumps by AFPs can also generate positive environmental externalities by reducing emissions from the existing diesel engine-based LLPs. Finally, the government of Bangladesh has developed and approved a master plan for agricultural development in the southern region for agricultural intensification by expanding surface water irrigation facilities [28]. Note that while the average cropping intensity of Bangladesh is 194 (i.e. a piece of land is cultivated at least 1.94 times a year), the cropping intensity in the southern region of Bangladesh (Barishal, Khulna and Patuakhali) ranges between 146 and 187 [29]. Because of the current vulnerability and potential of the southwest region, the government of the United States of America has announced a special program called Feed the Future, which is initiatives to improve the livelihoods of the poor by improving the agricultural sector [30]. 
In Barishal and Khulna divisions, and in Chattogram and Dhaka divisions, a sizeable amount of land is kept fallow partly due to the high establishment and operation costs for irrigation [31]. A rapid diffusion of the AFP, particularly in the southern region targeting bringing suitable fallow land under cultivation in the dry season by irrigating using AFPs, can be an efficient strategy in implementing the agricultural master plan of the government of Bangladesh. The present study explores the factors that affect the adoption of AFPs in Bangladesh and the estimates of the expected price of an AFP. The rest of the study is organized as follows: Section 2 includes the materials and methods and elaborates the data collection process; Section 3 specifies the econometric estimation process; Section 4 presents the major findings and Section 5 presents the conclusions and policy implications. Materials and methods: study design, study area and sampling The International Maize and Wheat Improvement Center (CIMMYT), Bangladesh, under the Cereal Systems Initiative for South Asia -Mechanization and Irrigation (CSISA-MI) project, introduced AFPs in Bangladesh through imports from other Asian countries. Although the use of AFPs in Asia started in the 1970s, first in the Mekong delta of Vietnam [32], the AFP is a completely new technology in Bangladesh. Under a joint venture agreement with a local Bangladeshi private business organization, Rangpur Foundry Limited (RFL), and in collaboration with an international NGO, iDE (International Development Enterprise), Bangladesh, CIMMYT, Bangladesh imported and tested the performance of AFPs. A carefully-controlled scientific experiment shows that the hydraulic performance of an AFP is higher at low lift, which ranged from 28% higher at 3-m water heads to 55% higher at 2-m water heads [23]. In general, the nearer the water head, the more efficient is an AFP. The efficiency of an AFP is also influenced by the slope of an AFP: a more parallel setting of a pump provides more water lifting efficiency. To introduce this highly-efficient irrigation pump to farmers, CIMMYT, Bangladesh organized a number of AFP-based irrigation demonstrations in the southern part of Bangladesh. Under the program, AFPs were provided for free for a season to a selected number of irrigation service providers, who were using centrifugal pumps. In addition, mechanical and technical supports and fuel subsidies were provided during demonstrations to the service providers who were selected for using AFPs under the demonstration program, keeping their centrifugal pumps idle for a season. The major objectives of AFP deployment and demonstrations were to generate awareness among the irrigation service providers and to understand the perception of the service providers on AFPs, compared to the centrifugal pumps that the service providers had been using. Under the program, each selected service provider was provided with an AFP. The ultimate objective of the AFP demonstration program was to encourage irrigation service-provider farmers' to purchase AFPs. In the series of demonstrations in 2014-15, CIMMYT, Bangladesh deployed 68 AFPs in Barishal, Bhola, Barguna, Jhalokati, Patuakhali and Pirojpur districts in Barishal Division, two in Faridpur and Rajbari districts in Dhaka Division and 13 in Narail, Khulna and Satkhira districts of Khulna Division. In Barishal and Dhaka divisions, service providers used AFPs mainly for irrigating boro rice. 
In contrast, AFPs in Khulna Division were mainly used for water conveyance between or among hatcheries and ponds for aquaculture (see Fig. 1). At the end of the boro season in June 2015, using a standard questionnaire, the perceptions of service providers, including their willingness to purchase an AFP, were collected. As the use of AFPs in the Khulna region was for purposes other than crop irrigation, in this study we have not included the perceptions of the AFP users in Khulna Division. The present study relies on information collected from 70 irrigation service providers located in Barishal, Bhola, Barguna, Jhalokati, Patuakhali and Pirojpur districts of Barishal Division, and Faridpur and Rajbari districts of Dhaka Division (Fig. 2). In the perception analysis process, the sampled service providers were asked to rank a number of attributes of an AFP compared to the centrifugal pump. For an attribute with the lowest level of satisfaction, service providers ranked a 1; for the highest level of satisfaction, they ranked a 4. The comparison of each attribute between an AFP and the centrifugal pump, as revealed by a sampled user, was then tested applying a two-sample mean test, the t-test. In addition to a simple two-by-two table revealing the price of the AFP that the users were willing to pay, applying a two-part estimation method, we also estimated the expected price of an AFP that an AFP user offered. The application of a two-part model estimation procedure is suggested to examine the factors that affect the decision to purchase and the price to be offered for an AFP by an irrigation service provider, as it permits the censoring mechanism (irrigation service providers for whom the demand is 0) and the price function to be estimated in separate processes. More clearly, the two-part model is a special type of mixture model, in which the zero and non-zero values (the price of an AFP that the users offered, in this case) generate different, separate density functions. Particularly, for the irrigation service providers who were not willing to purchase an AFP, zeros are generally handled by developing a model only for the probability of a positive outcome as follows:

Pr(y > 0 | Z) = F(Zδ) (1)

where Z is a vector of variables that includes all explanatory variables, δ is the related vector of parameters to be estimated and F is the cumulative distribution function of the error term, which is independently and identically distributed. For the corresponding positive outcomes, the model can be specified as:

f(y | y > 0, Z) = g(Zϒ) (2)

where Z is the vector of independent variables, ϒ is the corresponding vector of parameters to estimate and g is the density function for y | y > 0, where the density function is to be selected based on the distribution of y | y > 0. The likelihood contribution of an observation can be written as:

ℓ(y | Z) = [1 − F(Zδ)]^1(y = 0) × [F(Zδ) g(Zϒ)]^1(y > 0) (3)

where 1(·) is the indicator function. Then the log-likelihood contribution of an observation can be written as:

log ℓ(y | Z) = 1(y = 0) log[1 − F(Zδ)] + 1(y > 0)[log F(Zδ) + log g(Zϒ)] (4)

As the δ and ϒ parameters in the log-likelihood contribution for every observation are additively separable, it is possible to estimate separate models for the zeros and the positives. Based on the first principle of a statistical decomposition of a joint distribution into marginal and conditional distributions, the overall mean can be written as the product of expectations from the first (Equation (1)) and the second part (Equation (2)) of the model as follows:

E(y | Z) = Pr(y > 0 | Z) × E(y | y > 0, Z) = F(Zδ) × E(y | y > 0, Z) (5)

The detailed specification of the two-part model can be seen in Belotti et al. (2015) [33].
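A two-part specification of this kind can be estimated by fitting the two parts separately, as the additive separability noted above implies. The sketch below assumes a probit first part (F taken as the standard normal CDF) and a linear regression for the positive offered prices, with illustrative column names standing in for the Z vector listed in the next paragraph; it is an outline rather than the estimation routine actually used (the paper cites Belotti et al. (2015) for the full estimator).

```python
import pandas as pd
import statsmodels.api as sm

# Illustrative regressor names standing in for the Z vector described in the text.
Z_COLS = ["age", "educ_dummy", "spouse_schooling", "farm_occupation",
          "earning_members", "two_wheel_tractor", "boro_dummy",
          "land_owned_ha", "discharge_diameter", "pump_length", "bhola_dummy"]

def fit_two_part(df: pd.DataFrame):
    """Two-part model: probit for willingness to purchase (offered price > 0) and a
    linear price equation on the positive offers; combined as E(y|Z) = F(Zδ)·E(y|y>0, Z)."""
    Z = sm.add_constant(df[Z_COLS])
    positive = df["offered_price"] > 0

    part1 = sm.Probit(positive.astype(int), Z).fit(disp=False)                 # Equation (1)
    part2 = sm.OLS(df.loc[positive, "offered_price"], Z.loc[positive]).fit()   # Equation (2)

    expected_price = part1.predict(Z) * part2.predict(Z)                       # Equation (5)
    return part1, part2, expected_price
```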
Empirically, to identify the factors that affect the willingness to purchase and the price offered for an improved surface water irrigation pump, the AFP, in the southern regions of Bangladesh, the vector of variables Z includes:
- age of the sampled service provider;
- an education dummy that assumes a value of 1 if a sampled service provider has formal education of at least five years, or 0 otherwise;
- years of schooling of the spouse;
- a major occupation dummy that assumes a value of 1 if agriculture was the major occupation of the irrigation service provider, or 0 otherwise;
- number of earning family members;
- a two-wheeled tractor dummy that assumes a value of 1 if a sampled service provider used an AFP as an attachment to his two-wheeled tractor, or 0 otherwise;
- a crop dummy that assumes a value of 1 if the AFP was used to irrigate boro rice, and 0 otherwise;
- land owned by the sampled irrigation service provider (ha);
- discharge diameter (inches) of the AFP that the sampled service provider used;
- length (feet) of the AFP that the sampled service provider used; and
- a dummy that assumes a value of 1 if the sampled service provider was in Bhola District, and 0 otherwise.
Descriptive findings: information on service providers and their perceptions of the AFP Out of 70 sampled AFPs deployed for demonstrations in the 2014-15 boro season, 25 AFPs were deployed in Barishal District, 31 in Bhola District, three in Barguna District, six in Patuakhali District, two in Pirojpur District and one each in Jhalokati, Faridpur and Rajbari districts (Fig. 2). The AFPs under demonstration differed in length and diameter (Table 1). Out of the 70 AFPs deployed for demonstration, 47 were 20 feet long, 20 were 16 feet long, two were 18 feet and one was 14 feet long. Usually the price of an AFP is strictly fixed, based on the length and discharge diameter of the machine. In the initial years, to boost the adoption of AFPs, a subsidy of Bangladesh taka (BDT) 8,000 was provided per AFP through the sales agents. The subsidy was independent of length and discharge diameter. The actual prices of AFPs are, therefore, BDT 8,000 plus the prices shown in Table 1. The long AFP with the wider discharge diameter was in high demand, as it can better serve boro rice irrigation, which requires more water than any other winter crop in Bangladesh. The lowest-priced AFP used for demonstration was 20 feet long with a four-inch discharge diameter, the price of which was BDT 9,700 excluding the subsidy. Twelve of these pumps were deployed for demonstration in the 2014-15 boro season (Table 1). In contrast, the costliest AFP was 20 feet long with a six-inch discharge diameter, the price of which was BDT 14,500 per pump excluding the subsidy. Eighteen of these pumps were deployed for demonstration (Table 1). The AFPs under demonstration were mainly imported from Thailand and Vietnam, and sold in Bangladesh under five brand names: Bell, Parrot, RFL, Two-birds and Whale. The RFL brand of AFP is the AFP imported by the RFL Company, which used its own brand name for marketing AFPs in Bangladesh. The basic human capital and demographic information of the sampled service providers selected to conduct demonstration irrigation using an AFP in the 2014-15 boro season in Bangladesh is presented in Table 2, based on whether or not a service provider was willing to purchase an AFP.
A total of 55 irrigation service providers out of 70 expressed their willingness to purchase an AFP, and offered BDT 12,310 on average as the price of an AFP (Table 2). Out of the 70 sampled service providers, 56 were from Barishal and Bhola districts. At the time of the survey, a sampled service provider had, on average, 6.6 years of schooling, 11 years of experience in the irrigation service business and a spouse with 6.6 years of schooling (Table 2). Of the sampled service providers, 17% had their main occupation in the non-farm sector. The statistical differences in the means (t-values) show that the sampled service providers whose main occupation was in the non-farm sector were less willing to purchase an AFP compared to other service providers, and the mean difference is statistically significant at the 1% level. On average, a sampled service provider had nearly two earning family members. It is possible to run an AFP as an attachment to a two-wheeled tractor, which is popularly used in Bangladesh for land tilling [34]. A total of 26% of the sampled service providers used two-wheeled tractors to run their AFPs (Table 2). The sampled service providers who used two-wheeled tractors to run their AFPs were more willing to purchase an AFP. Of the sampled service providers, 73% used an AFP for irrigating boro rice and the rest irrigated wheat, watermelon and other crops. The average discharge diameter of all AFPs in the demonstration was 5.3 inches, with a length of 18.7 feet. The length and discharge diameter were not decisive factors in expressing a willingness to purchase an AFP.

The comparative perceptions of the sampled service providers of the AFP they used for irrigation in the 2014-15 season and of the centrifugal pump they generally use for irrigation are summarized in Table 3. In general, the sampled service providers ranked the water-lifting capacity and the fuel and labor cost-saving attributes of an AFP highly compared to the centrifugal pump (Table 3). For an AFP, as the impeller is submerged under water, priming (filling the suction line with water up to the suction point from the water head) is not required. In contrast, for a centrifugal pump, priming is a necessary requirement. Therefore, an AFP makes it possible to save some labor cost, as there is no need for priming, which was ranked as one of the most positive attributes of an AFP by the users (Table 3). In terms of the availability of spare parts and mechanical services, the sampled irrigation service providers ranked the AFPs and centrifugal pumps almost equally. However, in the case of the engine cooling attributes, the sampled service providers ranked the centrifugal pump higher than the AFP. In Bangladesh, in many cases, irrigation service providers make a direct water line to cool down the engine attached to the pump. This facility can be created more easily for a centrifugal pump than for an AFP. For an AFP, to create a direct water line to cool the engine, it is necessary to make a precise mechanical hole with the provision of a water pipe, which requires mechanical and technical skills. The respondents ranked the machine and engine setting attributes of an AFP and a centrifugal pump equally. Currently, an AFP does not have a built-in chassis. Therefore, it is somewhat problematic to connect an AFP to an engine, and an imperfect setting can lead to repeated loss of the belts that connect the engine to an AFP. In contrast, almost all centrifugal pumps are set with the engine on the same chassis and are thereby easy to operate.
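As a minimal illustration of the attribute comparison summarized in Table 3, the snippet below applies SciPy's two-sample t-test to hypothetical 1-4 satisfaction ranks for a single attribute; the rank values are invented for illustration only, and a paired test is shown as a natural alternative since each respondent ranks both pump types.

```python
# Two-sample mean test on hypothetical 1-4 satisfaction ranks
# for a single attribute (e.g., water-lifting capacity).
import numpy as np
from scipy.stats import ttest_ind, ttest_rel

afp_ranks = np.array([4, 4, 3, 4, 3, 4, 4, 3, 4, 4])          # ranks given to the AFP
centrifugal_ranks = np.array([2, 3, 2, 2, 3, 2, 2, 3, 2, 2])  # ranks given to the centrifugal pump

t_stat, p_val = ttest_ind(afp_ranks, centrifugal_ranks)
print(f"two-sample t = {t_stat:.2f}, p = {p_val:.4f}")

# Since the same respondent ranks both pumps, a paired test is also reasonable:
t_paired, p_paired = ttest_rel(afp_ranks, centrifugal_ranks)
print(f"paired t = {t_paired:.2f}, p = {p_paired:.4f}")
```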
Finally, the sampled service providers were more satisfied overall with the performance of the AFPs compared to the centrifugal pumps. During our survey, we asked the service providers whether or not they realized a reduction in fuel and labor costs: 93% of the respondents realized a reduction in fuel cost and 90% reported a reduction in labor cost. The study revealed that, on average, compared to a centrifugal pump, using an AFP can save BDT 913/ha due to a reduction in fuel requirements, and BDT 457/ha due to a reduction in labor requirements, primarily related to priming (Table 4). Currently, there are 173,179 units of low lift pumps (surface water irrigation pumps) in operation in Bangladesh, which irrigate 1,164,603 ha of land [27]. Based on the findings (Table 4), even if it is possible to irrigate only 10% of the 1,164,603 ha of land using AFPs by replacing the relatively less efficient centrifugal pumps, simple calculations show that it is possible to save BDT 159.5 million (approximately USD 1.92 million) through a reduction in diesel and labor demands in a single season.

We asked the respondents about the attributes that need to be adjusted or re-designed to make the AFP user-friendly for the irrigation service providers in Bangladesh. They said that the AFP should come with a generic chassis that will be compatible with existing popular engines (Fig. 3). Of the sampled service providers, 17% suggested increasing the thickness of the pipe of the AFPs to increase longevity by reducing the chance of denting and/or breaking the machine. After using the AFP, these service providers also mentioned that quality material for the bearings, pulley and shaft of the machine should be ensured, and suggested that no leakage of water from the oil cell should occur, as water spilling through the oil cell can easily damage the bearings of AFPs. Currently, the AFPs available in Bangladesh are directly imported from abroad. The suggestions of the sampled service providers indicate the need to develop and expand local manufacturing and assembling capacity for AFPs in Bangladesh to better fit local demand (Fig. 3).

During our survey we informally asked the sampled respondents what the major constraints in switching from centrifugal pumps to AFPs were. They stressed two issues as major constraints to AFP adoption. First, all of the sampled service providers were already using centrifugal pumps. Despite the visible gains from the higher water-lifting capacity and the absence of priming, switching from a centrifugal pump to an AFP requires a substantial amount of new investment. Second, and most importantly, the price of an AFP is much higher than that of a centrifugal irrigation pump, which is sold as an attachment to engines of various capacities, starting from a 4-horsepower engine. For an AFP, a new irrigation service provider must purchase the AFP and the engine separately, pairing the engine with the capacity of the pump. Currently, the AFPs available in Bangladesh require at least an 8-12 horsepower engine to accrue the benefits of hydraulic efficiency. The popularity of small horsepower-engine-driven centrifugal pumps and the requirement for a high horsepower engine to run an AFP are barriers to the rapid adoption of AFPs in Bangladesh.
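The back-of-the-envelope aggregate saving quoted earlier in this section can be reproduced directly from the per-hectare figures; the exchange rate shown below is an assumption backed out from the numbers in the text rather than a reported parameter.

```python
# Reproduce the aggregate saving estimate from the per-hectare figures.
fuel_saving_per_ha = 913        # BDT/ha
labor_saving_per_ha = 457       # BDT/ha
irrigated_area_ha = 1_164_603   # ha currently served by low lift pumps
share_switched = 0.10           # assume 10% of the area switches to AFPs

total_saving_bdt = (fuel_saving_per_ha + labor_saving_per_ha) * irrigated_area_ha * share_switched
print(f"Total seasonal saving: BDT {total_saving_bdt/1e6:.1f} million")  # ~159.6, i.e. the ~BDT 159.5 million quoted

implied_rate = total_saving_bdt / 1.92e6  # USD 1.92 million quoted in the text
print(f"Implied exchange rate: ~{implied_rate:.0f} BDT/USD")             # ~83 BDT/USD (assumption backed out)
```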
The expected price that the sampled AFP users offered is calculated in Table 5. The price of an AFP is strictly determined by its length and diameter. Therefore, the different prices that the sampled users faced arise simply because they used AFPs of different lengths and diameters. Fifteen of the sampled service providers were unwilling to purchase an AFP; in contrast, 55 of the sampled AFP users expressed their willingness to purchase the AFP they had used at the market price. Note that the prices presented in Table 5 are after-subsidy prices. On average, a sampled AFP user offered BDT 9672.9 for an AFP.

Table 4 Reported reduction in fuel and wage costs from using an AFP for irrigation.

Econometric findings

Table 2 shows that, out of the 70 irrigation service providers who used an AFP, 55 expressed an interest in purchasing an AFP (price > 0) and 15 did not (price = 0). This allows for the application of the two-part model estimation approach to estimate the price to be offered for an AFP by an irrigation service provider. Table 6 presents the estimated functions explaining the probability of willingness to purchase an AFP (yes = 1, no = 0), and the price that a sampled irrigation service provider offered to purchase the AFP that they had been using, provided for free for a season by CIMMYT, Bangladesh, for irrigation demonstration. In addition, the overall marginal effects of each of the variables are presented, as some of the variables affected the choice function and the price function differently. It shows that, on average, the level of education of a service provider positively but weakly affects the willingness to purchase an AFP (p < 0.10). A service provider with five or more years of schooling is more likely to purchase an AFP, and is willing to offer BDT 2229.2 (p < 0.10) more on average for the AFP he used compared to others. Probably, for the relatively more educated irrigation service providers, it is easier to calculate more accurately the future stream of benefits from the current, relatively higher investment in an improved irrigation machine. This positively affects their decision to purchase and to offer a relatively higher price for an AFP. The service providers who are engaged in agriculture full-time are more willing to purchase an AFP (p < 0.001) and, on average, they are willing to pay BDT 3866.2 more for an AFP than the other service providers. The service providers who are completely dependent on agriculture for their livelihoods are more eager to increase their income from agriculture and, therefore, are ready to invest in more efficient agricultural machinery than others. Conversely, the service providers who depend on the non-farm sector for their livelihoods, and for whom agriculture is only a part-time occupation, can receive a higher return from investment in the non-farm sector. Therefore, this group of service providers is not interested in investing in agricultural machinery. Interestingly, the number of earners in a family is negatively associated with the willingness to purchase an AFP and, on average, an increase in the number of earning family members by 1 reduces the offered price for an AFP by BDT 1423. Importantly, the command area, that is, the size of the land that an irrigation service provider served, positively and significantly affects the willingness to purchase an AFP and the overall price a service provider is willing to offer for the AFP.
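For reference, the overall marginal effect of a regressor z_k reported in Table 6 combines both channels of the two-part model. Assuming, for illustration, a probit first part (the paper's exact choice of F is not restated here), a sketch of the standard decomposition is:

$$
\frac{\partial E(y \mid Z)}{\partial z_{k}} \;=\; \phi(Z\delta)\,\delta_{k}\,E(y \mid y>0, Z) \;+\; \Phi(Z\delta)\,\frac{\partial E(y \mid y>0, Z)}{\partial z_{k}},
$$

where Φ and φ denote the standard normal CDF and density; the first term works through the purchase decision and the second through the offered price.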
The estimates also show that, on average, a 1 ha increase in the command area that a service provider served increases the revealed price offered for an AFP by BDT 248 (p < 0.10), probably because an AFP can lift up to 55% more water than a centrifugal pump and is therefore more beneficial for a service provider who provides irrigation services to more land and more clients. On average, the sampled service providers who attached an AFP to a two-wheeled tractor to run the machine were more interested in purchasing an AFP (p < 0.05), and offered BDT 4622 more for the AFP than others. At a minimum, the engine of a two-wheeled tractor is 16 horsepower, whereas the minimum requirement to run an AFP is an 8-12 horsepower engine. This means that an AFP run as an attachment to a two-wheeled tractor with a 16-horsepower engine performs much better than others, which positively affects the willingness to purchase and the price offered by the service provider. The physical capital of the service provider, in terms of his own land (ha), positively affects the price offered for an AFP, but the overall marginal effect is statistically insignificant. The discharge diameter of an AFP negatively and significantly affects the willingness to purchase an AFP, but is positive and significant in explaining the price a service provider offered. The overall marginal effect of the discharge diameter is insignificant. Similarly, the length of an AFP positively and significantly affected only the price of the AFP that the service provider offered. The overall effect of the length is insignificant in deciding to purchase an AFP and in offering a price for it (Table 6). The service providers located in Bhola District are more willing to purchase an AFP and, overall, they are ready to pay BDT 2076 more for an AFP than the service providers in other districts. Ancillary parameters in Table 6 indicate that the model fit well, with a pseudo R2 of 0.34, and the linktest results suggest that there was no problem with the model specification. Finally, based on the estimated function, our study shows that, on average, an irrigation service provider in Bangladesh is expected to offer BDT 9650 for an AFP. The expected price calculated in Table 5 from the descriptive analysis (BDT 9672.9) and the expected price calculated from the econometric analysis (BDT 9650, Table 6) are almost the same. Although the sample size is small, the similarity in the calculated expected price of an AFP indicates the robustness of our findings.

Conclusions and policy implications

Extreme poverty is widespread among farm households in the rural areas of developing countries. The rapid proliferation of useful technologies can have profound impacts on rural poverty. The present study demonstrates the problems related to the diffusion of a new agricultural technology despite the visible gains from its adoption. Using primary data collected in the 2014-15 boro rice season from 70 irrigation service providers in a demonstration experiment of the axial flow pump (AFP) in eight districts in Bangladesh, this study examined the factors affecting the willingness to purchase an improved agricultural technology in a developing country. In the experiment, the irrigation service providers were selected based on the fact that they were using conventional low-lift centrifugal pumps to provide irrigation services to client farmers using surface water.
Under this demonstration program, an AFP was provided to the selected irrigation service providers free of charge for one season. At the end of the season, the irrigation service providers were requested to rank a number of attributes of the AFP they had used in comparison to the centrifugal pump they had been using previously. The findings of the present study confirm the claim that, in general, AFPs are more efficient in lifting water than centrifugal pumps [23]. In addition, our study demonstrates that, by using an AFP instead of a centrifugal pump, a surface water-based irrigation service provider in Bangladesh can save at least BDT 913/ha due to a reduction in fuel requirements, as an AFP can lift more water than a centrifugal pump and thus requires less machine time to irrigate the same amount of land that was irrigated the previous year using a centrifugal pump. However, despite the visible benefits of an AFP over centrifugal pumps, the uptake of the machine is low: from October 2013 to September 2018, only 888 units of AFP were sold. A rapid scaling up of the AFP where its use is feasible, particularly in the southern region of Bangladesh where surface water is abundant, could reduce irrigation costs, and therefore overall crop production costs, significantly. Based on the findings, the present study suggests conducting more demonstrations and awareness programs of AFP-based irrigation in areas with high potential for the expansion of irrigation using surface water, particularly in Chattogram, Khulna, Sylhet and Barishal divisions, where surface water irrigation is prominent. Relatively well-educated irrigation service providers and service providers who own two-wheeled tractors can be targeted for the rapid diffusion of the AFP in Bangladesh. In addition, based on users' requirements, AFPs in Bangladesh should be adjusted to local demand. For this, the government can provide the necessary technical support to establish local assembling units and workshops. The Government of Bangladesh may also assist in producing AFPs locally. Finally, the price of AFPs must be competitive with the existing centrifugal pumps. Our study shows that around BDT 10,000 might be a reasonable average price for an AFP, but this is still much higher than the price of a centrifugal irrigation pump. The policy implications of the present study can be generalized to other agricultural technologies in developing countries. This study shows that it is necessary to make a new technology compatible with local demand and the environment and, importantly, that its price must be competitive with the existing alternatives. Although it is often assumed that the market mechanism alone can drive the adoption and scaling up of a technology because farmers are rational, in many cases initial support in the form of subsidies and technical assistance can facilitate the scaling up of a useful technology. The present study therefore urges international donor agencies, together with the national government, to support the scaling up and adoption of AFPs in Bangladesh, and more broadly to support the diffusion of useful agricultural technologies in poverty-stricken developing countries. Note that this study is based on information collected from 70 sampled AFP users in Bangladesh.
In addition, the sampled respondents were selected only from Barishal and Dhaka Divisions, although there is a high potential for the expansion of AFP-based surface water irrigation systems in Chattogram and Sylhet divisions. Considering these factors as limitations of the present study, future research endeavors should expand the AFP-based irrigation demonstration program to all potential areas of Bangladesh.

Table 6 Function estimated applying a two-part model estimation approach explaining the factors that affect the probability of purchase and the price offered for an axial flow pump by an irrigation service provider in Bangladesh. Source: Author's calculation based on Survey, 2015. Note: Values in parentheses are robust standard errors. *Significant at the 10% level; ** significant at the 5% level; *** significant at the 1% level.
Luminescence Characteristics of the MOCVD GaN Structures with Chemically Etched Surfaces

Gallium nitride is a wide-direct-bandgap semiconductor suitable for the creation of modern optoelectronic devices and radiation tolerant detectors. However, formation of dislocations is inevitable in MOCVD GaN materials. Dislocations serve as accumulators of point defects within the space charge regions covering the cores of dislocations. Space charge regions may also act as local volumes of enhanced non-radiative recombination, deteriorating the photoluminescence efficiency. Surface etching has appeared to be an efficient means to increase the photoluminescence yield from MOCVD GaN materials. This work aimed to improve the scintillation characteristics of MOCVD GaN by a wet etching method. An additional blue photoluminescence (B-PL) band peaking at 2.7-2.9 eV and related to dislocations was discovered. The intensity of this B-PL band appeared to depend on the wet etching exposure. The intensity of the B-PL was considerably enhanced when recorded at rather low temperatures, a behavior consistent with thermal quenching of the B-PL centers at elevated temperatures. The mechanisms of the scintillation intensity and spectrum variations were examined by coordinating the complementary photo-ionization and PL spectroscopy techniques. Analysis of dislocation etch pits was additionally performed by scanning techniques, such as confocal and atomic force microscopy. It was proved that this blue luminescence band, which peaked at 2.7-2.9 eV, is related to point defects that decorate dislocation cores. It was shown that the intensity of this blue PL band was increased due to the enhancement of light extraction efficiency, dependent on the surface area of either a single etch-pit or the total etched crystal surface.

Introduction

Gallium nitride (GaN) is a wide-direct-bandgap semiconductor suitable for the creation of modern optoelectronic devices and radiation-tolerant detectors [1][2][3][4][5][6]. This material can be synthesized using various crystal growth methods [7]. Metalorganic chemical vapor deposition (MOCVD) is commonly used for growing rather thin GaN crystal layers on various substrates. However, the density of dislocations in these crystals can reach 10^10 cm^-2 [8,9], while a small density of vacancies is inherent to MOCVD materials [10]. Modern growth methods of bulk GaN crystals, such as ammonothermal (AT), hydride vapor phase epitaxy (HVPE), and lateral epitaxial overgrowth (LEO) techniques, enable a reduction in the density of dislocations to electronic-grade values of 100 cm^-2 [11,12]. Nevertheless, a rather high density of voids is inevitable for AT materials [13][14][15]. Edge, screw, and mixed-type dislocations serve as accumulators of point defects within the space charge regions covering the cores of dislocations. Space charge regions act as local volumes of enhanced recombination and reduced carrier mobility [16,17]. The extent of the space charge regions also depends on the free carrier density in GaN crystals doped with various impurities, such as Si, Mg, etc. In MOCVD GaN, these dislocations might compose disordered material networks with inherent carrier transport and recombination features. The disorder might be a reason for considerably delayed signals of photoconductivity and photoluminescence [18]. The disorder might also lead to stretched exponential relaxation (SER) of excess carriers.
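For reference, the stretched-exponential (SER) decay mentioned above is conventionally written as

$$
I(t) \;=\; I_{0}\,\exp\!\left[-\left(t/\tau\right)^{\beta}\right], \qquad 0 < \beta \le 1,
$$

where τ is an effective relaxation time and the dispersion parameter β quantifies the degree of disorder (β = 1 recovers a simple exponential decay).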
Nevertheless, MOCVD GaN can be employed for the fabrication of double response radiation sensors, in which both the electrical and the scintillation signals are recorded simultaneously. Therefore, enhancement of the efficiency of the electrical and scintillation responses of MOCVD GaN materials is desirable. Surface etching appears to be an efficient means to increase the photoluminescence yield from MOCVD GaN materials. Surface regeneration can be implemented by wet (chemical) etching [9,[19][20][21]. GaN crystal surface modification (dry etching) can also be performed using reactive ion plasma etching (RIE) through the creation of nanowires by a maskless lithography process [22][23][24][25][26]. Light extraction efficiency (LEE) from dislocation-rich GaN scintillators is limited by internal reflection and polarization factors [2,3,6]. LEE can be enhanced by modifying the geometry of the crystal surface and its effective area. Wet etching leads to the formation of pits due to the different etching rates assigned to various crystalline planes and types of dislocations [19][20][21]. Macroscopic polarization is inherent to GaN and its alloys due to spontaneous and piezoelectric polarization, leading to modifications of light extraction efficiency dependent on local strains. Local stress can also be relieved by increasing the density and size of the etching pits [21]. Thereby, wet and dry etching can also cause changes in the photoluminescence characteristics of GaN crystals [27][28][29][30].

This work aimed to improve the scintillation characteristics of MOCVD GaN double response sensors using wet etching techniques. Defects responsible for the luminescence spectrum variations were identified by coordinating the complementary photo-ionization and PL spectroscopy techniques. An additional blue luminescence band peaking at 2.7-2.9 eV and dependent on wet etching exposure was revealed. The intensity of this PL band was considerably enhanced when scintillation was recorded at rather low temperatures. Localization of the radiative recombination centers responsible for this blue PL band was identified by combining scanning techniques, such as confocal and atomic force microscopy. The origins of the prevailing radiative recombination centers were separated by employing different excitation wavelengths, namely 354-nm pulsed laser beams suitable to efficiently stimulate electronic transitions within C_N O_N complexes and 408-nm laser diode illumination to activate the C_N centers, according to the configuration diagram models presented in Ref. [31]. The hydrogen complexes with carbon-originated point defects appeared to be the non-homogeneously distributed blue luminescence centers that decorate the cores of dislocations. These B-PL centers appeared to be sensitive to thermal quenching. It was shown that the intensity of this blue PL band was increased due to the enhancement of light extraction efficiency, dependent on the surface area of either single etch-pits or the total etched crystal surface. The etch-modified surface area increases with the duration of the etching process and the temperature of the etchant. Thereby, etching technologies can serve as tools to govern the scintillation properties of MOCVD-grown GaN crystals and devices.

Samples and Measurement Techniques

In this work, 3.8 µm-thick GaN epi-layers grown on sapphire substrates were examined. These GaN epi-layers were grown using an MOCVD close-coupled showerhead reactor.
The 430 ± 25 µm-thick sapphire substrate, with a surface roughness of <0.2 nm, was c-plane oriented, inclined by about 0.20 degrees in the m-plane direction. Trimethylgallium (TMG), silane (SiH4), and ammonia (NH3) were used as the Ga, Si, and N precursors, respectively. The processes from the sapphire surface pre-treatment to the growth of the final Si-doped GaN layer were conducted in a hydrogen atmosphere. A 0.9 µm-thick buffer layer of un-doped GaN (u-GaN) was initially deposited. The functional GaN was grown at a temperature of T_gr = 1040 °C and a pressure of p = 100 mbar. This functional GaN layer was doped with Si to a nominal concentration of N_Si ≈ 10^17 cm^-3. A free electron concentration of 10^17 cm^-3 and a mobility of up to 430 cm^2/V·s were determined by Hall effect measurements at room temperature. Concentrations of carbon impurities were estimated to be N_C > 10^16 cm^-3 in MOCVD GaN layers grown using a similar regime.

The photoluminescence (PL) spectra and scintillation intensity topography were initially examined. Rather homogeneous samples (of 6 × 6 mm^2 area), with respect to scintillation spectrum structure and intensity, were cut from the MOCVD GaN-layered wafers. Values of PL intensity within the yellow-green spectral band peaks were further exploited for normalization of the spectral and intensity changes dependent on the temperature and etching regimes. The examined MOCVD GaN layers were etched using orthophosphoric acid (85% H3PO4), varying its temperature in the range of 90-160 °C. The etch exposure t_i was varied in the range of 0-1600 s, increasing the duration of this procedure in 200 s steps (i = 0-8). Thereby, a collection of samples created using different temperature and exposure parameters was examined. Additionally, a single sample that aggregated a set of etching exposures was also controlled after each etching procedure. Scanning electron microscope (SEM) imaging was initially employed for etch-pit control after short etching exposures. An Olympus BX51 microscope (Olympus Corporation, Tokyo, Japan) was used for the preliminary inspection of etched layers under ultraviolet (UV) light illumination. The average density of dislocations of 8 × 10^8 cm^-2 was extracted by counting the etch-pits within definite areas of the SEM images, including all dislocation types. Profiling of the dislocation etching pits was implemented using atomic force microscope (AFM) imaging performed by a WITec microscopy system Alpha 300 (WITec GmbH, Ulm, Germany). The latter AFM system enabled etch-pit profiling with vertical and in-plane resolutions of 1 and 10 nm, respectively. AFM scans were used to estimate the density of dislocations within etched MOCVD GaN layer fragments and to identify the types of prevailing dislocations using etch-pit shape analysis.

Photoluminescence spectroscopy (PL) and topography were performed under steady-state and pulsed laser excitation. The pulsed PL spectroscopy measurements were performed with the sample temperature varied in the range of 20-300 K. A closed-cycle He cryogenic system, together with the sample mounting arrangement, was used for temperature-stabilized measurements. Pulsed 400-ps excitation was implemented using a UV 354-nm laser (STA-03, "Standa") beam. PL spectra were recorded by accumulating and averaging hundreds of PL response pulses within an Avantes AvaSpec-ULS2048XL-EVO spectrophotometer (Avantes B.V., Apeldoorn, The Netherlands).
These measurements were combined with room-temperature PL spectroscopy correlated with the etched surface topography. The latter PL measurements were performed using the multi-functional WITec system Alpha 300 (WITec GmbH, Ulm, Germany), implemented in confocal microscopy mode. The PL light signal captured within the confocal image was transmitted via an optical fiber to a thermoelectrically cooled CCD camera and a UHT300 spectrometer. A continuous-wave (CW) laser diode emitting at a fixed wavelength of 405 nm was used for PL excitation. A high numerical aperture (NA = 0.9) objective was applied to focus the excitation beam on the sample, providing an in-plane spatial resolution of around 250 nm and a vertical resolution of about 1000 nm. Different excitation wavelengths were combined to clarify the origins of the prevailing radiative recombination centers. The 354-nm pulsed laser beams were suitable to efficiently stimulate electronic transitions within C_N O_N complexes, while 408-nm laser diode illumination can be employed to activate the C_N centers, according to the configuration diagram models presented in Ref. [31]. The correlated PL and photoconductivity transients were additionally examined to evaluate the temporal parameters of the pulsed signals.

Additionally, pulsed photo-ionization spectroscopy (PPIS) was employed to correlate the Stokes shifts between excitation and PL spectral bands ascribed to the same scintillation centers. The PPIS method was implemented at varying excitation photon wavelengths in the range of 210-2300 nm, generated by a 4-ns Ekspla NT342B Optical Parametric Oscillator (OPO) instrument (Ekspla UAB, Vilnius, Lithuania). The photon-electron interaction spectral steps were recorded in contactless mode by 20 to 22 GHz microwave probing of the photo-ionized carrier density and its relaxation rate. PPI step-structure spectra appeared due to variations in the MW-PC response amplitudes when the energy of excitation photons was scanned over a rather wide spectral range. A spectral step peak appeared when the photon energy (hν) matched the energy between the electron ground state and that of a deep level associated with a defect. The activation energy E_d of photo-absorption due to defect photo-ionization is related to the photon-electron interaction cross-section σ_ph-e. The most comprehensive approach to estimating σ_ph-e is the Kopylov-Pikhtin model [32] (Equation (1)), in which M_ik, the matrix element of a dipole transition from an initial (i) trap level to the continuum (k) state, enters the cross-section, and the integration is performed over all the conduction band states E. The electron-phonon coupling is also determined by the broadening factor Γ, which depends on temperature (thermal energy k_BT) through the Huang-Rhys factor S [32,33]. The photon-electron interaction cross-section σ_ph-e is thereby related to the Franck-Condon shift and the energy of the vibrational modes [34]. The cross-section σ_ph-e of the photon-electron coupling directly determines the efficiency of the conversion of absorption into emission [34,35].
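A commonly cited form of the Kopylov-Pikhtin photo-ionization cross-section, given here as a sketch of the fitting function referred to as Equation (1) (the normalization prefactor containing M_ik and the exact form used in [32] may differ):

$$
\sigma_{ph\text{-}e}(h\nu) \;\propto\; \frac{|M_{ik}|^{2}}{h\nu}\int_{0}^{\infty}\frac{\exp\!\big[-(E+E_{d}-h\nu)^{2}/\Gamma^{2}\big]\,\sqrt{E}}{(E+E_{d})^{2}}\,dE,
$$

where E_d is the activation energy of the photoactive center and Γ is the temperature-dependent broadening factor governed by the Huang-Rhys factor S.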
Correlation of PL Spectra with Photo-Ionization Characteristics

Photoluminescence excitation in our experiments was performed by UV (354 nm) and violet (408 nm) laser beams, as mentioned above, to separate the prevailing radiative recombination centers. The UV laser beam generates excess carriers through inter-band electron transitions, which later recombine through excitonic annihilation and radiative processes via deep electronic states ascribed to various defects. Together with the inter-band processes, the excess carriers can be generated by photo-ionization of defects, leading to a step-like spectrum of absorption. The pulsed photo-ionization spectra (PPIS) recorded at room temperature in MOCVD GaN are illustrated in Figure 1a. Several carrier photoactivation centers (up to six traps for photon energies in the range of 1.0-3.3 eV) are inherent to MOCVD GaN materials. Separation of the PPI spectral steps was accomplished by controlling whether the same step appeared in different samples. Non-resonant excitation was implemented using fixed-wavelength light (354 and 408 nm) illumination with relative efficiency (determined by the relative absorption coefficients and denoted in Figure 1a by square points) attributed to various defects. An absorption coefficient α_d(hν) = σ_d(hν)n_d related to a definite trap also depends on the filling of its levels n_d. A similar structure of the photo-ionization spectra has been revealed [36] for MOCVD GaN layers grown on 2 mm-thick Si substrates using close growth parameters (T_gr = 1040 °C, p = 100 mbar, N_C ≈ 5 × 10^16 cm^-3). The correlation between the photo-ionization spectra obtained for MOCVD GaN layers grown on different substrates (such as Si [36] and sapphire in this work) using similar growth parameters indicates that the same radiative recombination centers prevail. Figure 1a shows the σ_d(hν) spectrum illustrating α_d(hν) with the normalized n_d distribution inherent for MOCVD GaN materials. It can be deduced from Figure 1 that excess carriers excited by the UV laser are capable of inducing photoluminescence from all the traps (highlighted as the spectral steps), including exciton annihilation followed by phonon replicas, while the laser diode radiation at 408 nm wavelength affords ionization of the centers with activation energy less than that of the E5 trap. The origin of the traps, tentatively identified by coinciding peak values of activation energy referenced in the literature, is denoted in Table 1.

Figure 1. (a) The photo-activation centers (E1-E6) resolved from the photo-ionization spectral steps, measured on MOCVD GaN (scattered circles: data) and simulated using the Kopylov-Pikhtin [32] approach (Equation (1): dashed lines). These traps were tentatively identified using the activation energy values taken from the literature referenced. The peak values of the spectral bands were adjusted by varying the trap concentration parameters. Arrows denote the photon energy for the fixed wavelength excitation of photoluminescence employed in these experiments. The gray rectangles indicate the relative efficiency of PL excitation given by the intersection of the photon energy with the spectral shapes of the identified photo-ionization centers.
(b) The PL spectrum (black circle symbols) recorded on the MOCVD GaN material at room temperature using the multi-functional WITec system Alpha 300 in confocal microscopy mode. This measured spectrum was fitted (gray solid line) using the van Roosbroeck-Shockley model (Equation (2)). The simulated gray solid line represents the resultant PL spectrum of the simultaneous action of the photoactive centers revealed by the pulsed photo-ionization spectroscopy (PPIS) techniques. The contribution of each center (dashed lines) was simulated using parameters extracted from the PPIS analysis. Fitting of the experimental PL spectrum (scattered circles) by the simulated resultant PL spectrum (solid gray curve) was implemented by slightly adjusting the contribution of the various centers (dashed lines). The latter procedure was performed by varying the peak amplitudes (related to the concentrations of definite centers).

Table 1. Activation energy estimated by fitting the PPI spectral steps recorded on MOCVD GaN and associated with different defects. The origin of these defects was identified using the experimental activation energy values and theoretical parameters estimated based on the configuration diagram models referenced in the literature. Columns: Photo-Active Center; Activation Energy (eV) ± 0.14 eV; Γ; Defect Type.

Several resolved deep photoactive centers with activation energies in the range of 1.3-3.3 eV indicate a large nomenclature of point defects inherent to MOCVD GaN materials. This large number of different species of point defects complicates the analysis of each definite center. A more reliable estimation of the optically active centers is obtained by fitting the conversion from absorption to light emission spectra. The van Roosbroeck-Shockley relation [33,34], based on the detailed balance condition, is acceptable for describing the relation between the emission rate ΔP_d(hν) and the temperature-dependent cross-section σ(hν/k_BT) attributed to the d-th center (Equation (2)) [40]. In Equation (2), n_i = 2 × 10^-10 cm^-3 is the intrinsic carrier concentration for GaN [41]; k_B is the Boltzmann constant; h is the Planck constant; ρ_d is the surface density of photons ascribed to a fixed frequency ν within the absorption spectra for the spectral range Δ(hν), inherent to a dedicated trap of concentration N_d; and n_ex,Δ(hν) is the excess carrier density generated through photo-ionization in the definite spectral range Δ(hν). This approach enables prediction of the Stokes shifts between the outspread PPIS steps and the respective PL band peaks. The temperature dependence in σ(hν/k_BT) appears through the broadening factor Γ(hν/k_BT), which is rather weak in the range of low and moderate temperatures.

A predicted PL spectrum in MOCVD GaN (simulated using the photo-ionization spectroscopy data) is illustrated in Figure 1b. The red-shifted (relative to the photo-ionization spectral peaks) bands of yellow-green luminescence (YG-PL), composed of the radiative recombination through the E1-E4 traps, and blue (B-PL) luminescence are in line with our experimental observations in the photo-ionization and photoluminescence spectra. Here, the YG-PL band is determined by the radiative recombination through the E1-E4 traps, while the B-PL appears due to the E5 traps. An additional recombination channel, observed as a violet PL (V-PL) band (within the spectra illustrated in Figure 2d-f), peaked at 3.2-3.3 eV, and it has traditionally (Ref. [42]) been attributed to donor-acceptor pair recombination.
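As a sketch of the conversion used above, a generic form of the van Roosbroeck-Shockley relation links the defect-related emission rate to the absorption cross-section through the equilibrium photon occupation; the prefactor (containing n_i^2, ρ_d and n_ex,Δ(hν)) depends on the normalization adopted in [40] and is collected here into a constant C_d as an assumption:

$$
\Delta P_{d}(h\nu) \;=\; C_{d}\,\sigma_{d}(h\nu)\,\frac{(h\nu)^{2}}{\exp(h\nu/k_{B}T)-1},
$$

so that each photo-ionization step σ_d(hν) maps onto an emission band whose peak is red-shifted with respect to the corresponding absorption threshold.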
Correlation of PL Spectra with Etch-Pit Profiling Scans in MOCVD GaN Layers

PL spectra in the etched samples were examined under pulsed UV excitation in a wide temperature range of 20-300 K. The evolution of these spectra as a function of the 150 °C H3PO4 etching exposure is illustrated in Figure 2. The changes in the spectrum structure and PL intensity were mainly obtained within the blue PL bands when analyzing the samples covering the entire etching exposure range. Therefore, the spectra ascribed to the initial and final stages of etch processing and to the intermediate regime of 600 s exposure are illustrated in Figure 2. No surface pits could be resolved in the pristine sample (t_0 = 0 s). Etching pits appeared after even a short etching duration (t_3 = 600 s). Either cone- (t_3) or hexagon- (t_8 = 1600 s) shaped pits (Figures 2 and 3c,d) appeared due to the different rates of lateral and depth removal of material. The lateral dimensions of the etching pits clearly increased with etching exposure when comparing the spatial extension of the etch-pit profiles after the t_3 = 600 s and t_8 = 1600 s (Figure 3) exposures. It can be deduced from Figure 2d-f that the intensity of the violet PL peaking near 3.2 eV increased with the sample temperature in both pristine and etched samples.
This outcome can be explained by an increase in the excess carrier density with temperature, excited by the UV laser light into the conduction/valence bands; these carriers then recombine through donor-acceptor states. Only a weak short-wavelength B-PL wing appears on the background of the YG-PL in the non-etched samples when using pulsed excitation. An additional B-PL band peaking at 2.7-2.9 eV appears in the etched samples starting from the shortest etching exposures (t_1). This band has not been debated in detail in the literature. The intensity of the latter B-PL band increases with sample cooling and etching exposure. The intensity of the YG-PL also slightly increases with reduction in the sample temperature. The latter result can be understood by assuming variations in the concentration of the excess carriers recombining through the photoactive centers, as identified from the photo-ionization spectroscopy. The correlation of the B-PL band intensity with the dimensions of the etching pits, observed as an increase in the B-PL peak amplitude with exposure at a fixed temperature (e.g., T = 20 K), implies a relation of these B-PL centers to dislocations. There, no 2.7-2.9 eV B-PL under pulsed UV excitation and no pits are observed for the pristine, non-etched samples. Meanwhile, B-PL appears, and its peak amplitude increases with exposure (t_3), in the etched samples, in which etch-pits simultaneously manifest. The dimensions of the etch-pits and the size of the hexagonal valleys (revealed by the AFM scans, Figure 3c,d) also increase with etching duration. The left column of Figure 3 illustrates a scanned profile of a rather bright pit, starting from the periphery of the 271 nm-wide and 65 nm-deep etch-pit. The peripheral zone (labelled as spectrum 1) and the central area (spectra labelled as 3-5) exhibit PL spectra with a prevailing YG-PL band. There, a difference in spectral structure (a V-PL is lacking), relative to the spectra recorded using the UV pulsed laser (Figure 2d-f), appears due to peculiarities of the WITec instrument. The WITec scanner-spectrometer enables only room temperature measurements. On the other hand, the excitation density obtained using a sharply focused laser-diode beam in the WITec system significantly exceeds that of a pulsed UV laser beam. Therefore, the intensity of the B-PL band peaking at 2.7-2.9 eV also becomes resolvable at room temperature using the WITec scanner. Thereby, it can be inferred from the analysis of the evolution of PL topography that the B-PL band (within the spectrum labelled as 2 in the left column of Figure 3) appears when the excitation beam is localized at the boundary of the single etching-pit. This B-PL band is observable only for a narrow range (~10%) of etch-pit lateral dimensions. A long exposure time, which leads to wide-area and deeply etched material valleys, determines the YG-PL dominated spectral bands, though these bands spread out into the B-PL range (see the right column of Figure 3). Such a structure of the spectral band, with the peak shifted toward short wavelengths, is better highlighted within the boundaries of the etched hexagons. This observation also implies that the B-PL peaking at 2.7-2.9 eV should be ascribed to the boundaries of the etching pits. Analysis of the shape of the etching pits can be employed for rough identification of the dislocation type [21]. The triangle/cone-shaped etch-pit (left column, Figure 3) seems to be caused by an edge dislocation. The hexagonal etch pits (right columns of Figures 2 and 3) are inherent to screw dislocations.
The wide-area hexagonal valleys of etched GaN material containing local deep cones imply mixed dislocations.

Figure 3. The related optical images (bottom figures (a,b)) obtained in reflected light on the examined sample areas. The atomic force microscopy profiles scanned close to a single etch-pit (c), highlighted after a short etching exposure, and rather extended areas (d) of the intersecting etched surfaces, formed under long etching procedures. The shape of the dislocation-ascribed single etch-pit can be employed for identification of the dislocation type (as for the edge-dislocation-ascribed etch pit illustrated in (c)). The deep etch-pits ascribed to either screw- or mixed-type dislocations can be assumed by analyzing the intersections of the wide-area etched hexagon valleys (d). The blue PL band spectral components appear (e,f) only when the confocal microscopy probe is localized either close to the core (location 2 in (a,c,e), assuming the dimensions of the space charge region R_0) of the single dislocation or to the steep planes of the hexagonal valleys ((b,d,e), respectively). Numbers (1-8) denote locations near the etch-pits where the PL spectra were recorded.

Discussion

The relations between the photo-ionization (PI) and photo-luminescence (PL) spectra enable the prediction of the structure of the Stokes-shifted PL bands on the basis of photo-excitation spectroscopy. Additionally, analysis of these correlations could serve for a more reliable identification of the origin of the photoactive centers and their roles in the formation of the PL bands. In the literature, it has been widely reported that, within the photoluminescence spectra, the yellow-green (YG) band prevails in MOCVD GaN. This emission is probably carbon impurity related [10,[44][45][46][47]. However, in earlier publications, this yellow luminescence band was alternatively associated with either the gallium vacancy (V_Ga) or a complex of a gallium vacancy with oxygen on a nitrogen site (V_Ga O_N) [42,[48][49][50][51]. On the other hand, either V_Ga O_N-2H or V_Ga-3H complexes could be the reason for the appearance of yellow luminescence in highly vacancy-rich samples, for instance, heavily irradiated GaN [10]. The carbon on nitrogen site (C_N) defects or more complicated complexes composed of carbon on a nitrogen site and oxygen on a nitrogen site (C_N O_N) can also cause the appearance of yellow luminescence (2.2 eV) in MOCVD GaN materials [10,45,46,52]. Therefore, it can be inferred that the YG band of GaN luminescence is composed of several spectral components attributed to different point defects. However, combining different excitation wavelengths indicates a prevalence of the carbon-attributed C_N and C_N O_N complexes, as 354 nm excitation can efficiently stimulate electronic transitions within C_N O_N complexes, while 408 nm illumination activates the C_N centers [31]. Interstitial hydrogen is a widespread impurity in GaN crystals.
It may interact with carbon-originated point defects, causing the appearance of blue luminescence. These hydrogen complexes with carbon-originated point defects seem to be non-homogeneously distributed blue luminescence centers that decorate the cores of dislocations. This B-PL is weak in the non-etched samples at elevated temperatures, when excess carriers diffuse out of the dislocation cores and B-PL light extraction is inefficient. The presented characteristics of our experiments imply that the YG-PL band, peaking at 2.2 eV, is composed of PL processes ascribed to several E1-E4 photoactive centers, namely Ga vacancies (V_Ga) and charged carbon impurities localized on N sites (C_N^-). This YG-PL band has a more sophisticated origin (relative to that ascribed to a single type of defect [10]), as deduced from the correlation of the photo-ionization and photo-luminescence spectra, compatible with the comprehensive experimental results. The observed V-PL band, ascribed to radiative recombination through donor-acceptor pairs, can only be induced using UV excitation of excess carriers through inter-band transitions.

The revealed B-PL in our experiments should be ascribed to point defect complexes decorating the cores of dislocations. This ascription is inferred from the appearance of the B-PL correlated with the etch-pit profile and the formation of B-PL-containing wide PL bands at the boundaries of the deeply etched hexagonal valleys, as illustrated in Figure 3. The evolution of the B-PL band with etching exposure can be explained by assuming local variations in the PL light extraction efficiency when the area and geometry of the cone or hexagon terrace boundaries are modified. As explained in [21], the rather high refractive index of GaN material determines the trapping of most B-PL photons generated in the vicinity of dislocation cores, due to total internal reflection. There is a very small critical angle between air and GaN due to the difference in the refractive index [21], and the relative area of the dislocation core of radius R_0 = [f/(πΔa(N_D − N_A))]^(1/2) [53] is considerably small for a non-etched GaN layer surface. Here, f is the Fermi distribution function; N_D and N_A are the concentrations of donors and acceptors, respectively, which determine the extension of the space-charge volume; and Δa is the distance between broken bonds, of the order of several angstroms. For Si-doped MOCVD GaN with N_Si = 10^17 cm^-3, R_0 ≤ 100 nm. This leads to a non-resolvable intensity of B-PL for non-etched GaN layers. The etching pits appear due to the different etching rates for horizontal (r_h) and vertical (r_v) removal of GaN material. The etch-pit depth d and width l are related to the etching exposure t as d = r_v t and l = r_h t, respectively, where it always holds that l > d. The etched surface provides more paths for B-PL light to escape from the space-charge region of the dislocation cores. The PL intensity I_B-PL, measured as the photon number Φ per time unit normalized to the surface area, also increases due to the extraction of more photons via light scattering from the sidewalls of the etch pits. The etched terraces also have a larger surface area per unit volume of crystal, from which B-PL is collected. However, the defect complexes decorating the dislocation core constitute a κ fraction of the lattice atoms. The B-PL photons are scattered from the sidewalls of the etched cone with a side-surface area of 2πld. The average number of recorded B-PL photons is proportional to a cone's base area of S ≈ π(r_h t/2)^2.
Thereby, the B-PL intensity is expressed as I_B-PL ≈ Φπ(r_h t/2)^2 for a single etch-pit. The total intensity is also proportional to the dislocation density D. Thereby, I_B-PL shows a parabolic increase with etch exposure when the density of dislocations is invariable and the etch-pits do not coalesce. In the case of terrace formation through the merging of wide etch-pits, the total B-PL light collection area increases by a partial component A of the area removed in each (m-th) etching session, and I_B-PL then obeys a generalized dependence on the etching exposure (Equation (3)). The coefficient M ~ (r_h t)^2/R_0 for the total enhancement of I_B-PL can be estimated using the ratio of light extraction areas, M ≈ (A/R_0) = (π/R_0)(r_h t/2)^2 for m = 1, when comparing these values in etched (~t^2) and non-etched (~R_0) samples. The dislocation-decorating traps comprise only about 0.1R_0, as revealed from scans of the B-PL constituent within the spectrum recorded on the periphery of the etch-pits. The etched GaN surfaces might also increase the efficiency of PL excitation (due to enhancement of the excited area), resulting in a greater number of excess carriers and radiative recombination photons within the etched samples. Variations in the PL spectral structure and the B-PL peak intensity dependent on etching exposure are illustrated in Figure 4. The I_B-PL dependence on etching exposure, simulated using the approximation of Equation (3), is compared in Figure 4b with that measured at T = 20 K. This dependence indicates a nearly quadratic increase in the peak intensity after the primary sessions of etching, despite a later trend toward saturation appearing when the etched terrace area stabilizes.
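A minimal numerical sketch of the light-collection argument above is given below. The lateral etch rate is backed out from the 271 nm-wide pit observed after 600 s, R_0 and the dislocation density are taken from the values quoted in the text, and the coalescence cap is an illustrative assumption; none of the resulting numbers reproduce Figure 4 quantitatively.

```python
# Sketch of the predicted B-PL enhancement versus etching exposure:
# a single pit collects light from an area ~ pi*(r_h*t/2)^2, the total signal
# scales with the dislocation density D, and growth saturates once pits coalesce.
import numpy as np

r_h = 271e-9 / 600        # assumed lateral etch rate, m/s (271 nm-wide pit after 600 s)
R0 = 100e-9               # space-charge (dislocation core) radius, m
D = 8e12                  # dislocation density, m^-2 (8e8 cm^-2 from the text)
exposures = np.arange(0, 1601, 200)   # etching exposures t_i, s

pit_area = np.pi * (r_h * exposures / 2) ** 2   # collection area per pit, m^2
max_area_per_pit = 1.0 / D                      # coalescence limit: pits tile the surface
collection = np.minimum(pit_area, max_area_per_pit)

# Relative enhancement with respect to the non-etched core area pi*R0^2
M = collection / (np.pi * R0 ** 2)
for t, m in zip(exposures, M):
    print(f"t = {t:5d} s   relative B-PL collection ~ {m:6.2f}")
```

The quadratic rise followed by saturation produced by this toy calculation mirrors the trend described for Figure 4b.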
Variations in the PL spectral structure and the B-PL peak intensity dependent on etching exposure are illustrated in Figure 4. The IB-PL dependence on etching exposure, simulated using approximation Equation (3), is compared in Figure 4b with that measured at T = 20 K. This dependence indicates a nearly quadratic increase in the peak intensity after primary sessions of etching despite a later trend toward saturation appearing when the etched terrace area stabilizes. The horizontal etch rate r h additionally depends on H 3 PO 4 temperature. The etching rate r h was evaluated using the relation r h = l/t and measurement data of lateral dimension l and exposure time t, extracted from etch-pit scans under the control of the acid temperature. This characteristic is illustrated in Figure 5. The horizontal etch rate rh additionally depends on H3PO4 temperature. The etching rate rh was evaluated using the relation rh = l/t and measurement data of lateral dimension l and exposure time t, extracted from etch-pit scans under the control of the acid temperature. This characteristic is illustrated in Figure 5. It can be deduced from Figure 5 that fast removal of GaN material is achieved at elevated temperatures of 160 °C close to the boiling temperature of H3PO4, while a nearly exponential decrease in etching rate appears with the reduction in the acid temperature. Summary It has been found that red-shifted, relative to photo-ionization spectral peaks, bands of yellow-green luminescence (YG-PL) and blue (B-PL) luminescence are observed under pulsed UV excitation. The cross-correlations of the photo-ionization and photo-luminescence spectra, ascribed to the point photoactive centers, can be well simulated using the Kopylov-Pikhtin approach in the description of the absorption steps and vanRoosbroeck-Shockley relation to transform the photon-electron interaction cross-section data into photoemission spectra. It has been shown that this cross-correlation enables prediction of the Stokes shifts between the outspread PPIS steps and the respective PL band peaks. The recombination channel, ascribed to the violet PL band and peaking at 3.2-3.3 eV, corresponds to the donor-acceptor pair recombination. The additional B-PL band peaking at 2.7-2.9 eV has been revealed in the etched samples. The combination of different excitation wavelengths enabled the estimation of the prevailing PL centers. Carbon-oxygen CNON complexes prevail when 354 nm of excitation can efficiently stimulate electronic transitions within CNON defects, while 408 nm of illumination activates the CN centers. The hydrogen complexes with carbon-originated point defects seem to be the non-homogeneously distributed blue luminescence centers, which decorate cores of dislocations. This B-PL is weak in non-etched samples at elevated temperatures when excess carriers diffuse out of dislocation cores, and B-PL light extraction is inefficient. The intensity of the latter B-PL band increases with sample cooling and etching exposure. Correlation of the B-PL band intensity with dimensions of etching-pits implies the relations of these B-PL centers to dislocations, where B-PL appears, and its peak amplitude increases with increased etching duration (t). The transforms of the etch-pit shape and dimensions simultaneously manifest with the increase in exposure. 
It can be deduced from Figure 5 that fast removal of GaN material is achieved at elevated temperatures of 160 °C, close to the boiling temperature of H3PO4, while a nearly exponential decrease in etching rate appears with the reduction in the acid temperature. Summary It has been found that red-shifted, relative to the photo-ionization spectral peaks, bands of yellow-green (YG-PL) and blue (B-PL) luminescence are observed under pulsed UV excitation. The cross-correlations of the photo-ionization and photoluminescence spectra, ascribed to the point photoactive centers, can be well simulated using the Kopylov-Pikhtin approach in the description of the absorption steps and the van Roosbroeck-Shockley relation to transform the photon-electron interaction cross-section data into photoemission spectra. It has been shown that this cross-correlation enables prediction of the Stokes shifts between the outspread PPIS steps and the respective PL band peaks. The recombination channel, ascribed to the violet PL band and peaking at 3.2-3.3 eV, corresponds to donor-acceptor pair recombination. The additional B-PL band, peaking at 2.7-2.9 eV, has been revealed in the etched samples. The combination of different excitation wavelengths enabled the estimation of the prevailing PL centers. Carbon-oxygen C_N O_N complexes prevail when 354 nm excitation can efficiently stimulate electronic transitions within C_N O_N defects, while 408 nm illumination activates the C_N centers. The hydrogen complexes with carbon-originated point defects seem to be the non-homogeneously distributed blue luminescence centers, which decorate cores of dislocations. This B-PL is weak in non-etched samples at elevated temperatures, when excess carriers diffuse out of dislocation cores and B-PL light extraction is inefficient. The intensity of the latter B-PL band increases with sample cooling and etching exposure. Correlation of the B-PL band intensity with the dimensions of etch-pits implies the relation of these B-PL centers to dislocations, where B-PL appears, and its peak amplitude increases with increased etching duration (t). The transforms of the etch-pit shape and dimensions simultaneously manifest with the increase in exposure. It has been inferred from the analysis of the evolution of PL topography that the B-PL band appears when the excitation beam is localized at the boundary of a single etch-pit, ascribed to a single dislocation. The wide-area and deeply etched hexagon-shaped valleys appeared under prolonged etching exposures, followed by the formation of broad spectral bands spreading out into the B-PL range. The revealed B-PL could be ascribed to the point defect complexes decorating these cores of dislocations. This outcome has been deduced from the appearance of the B-PL correlated with the etch-pit profiles and the formation of B-PL-containing wide PL bands at the boundaries of deeply etched hexagonal valleys. The evolution of the B-PL band with etching exposure has been explained by assuming local variations in PL light extraction efficiency when the area and geometry of cone or hexagon terrace boundaries are modified.
The etch-pits appear due to the different etching rates for horizontal (r_h) and vertical (r_v) removal of GaN material, as established from scans of etch-pit dimensions dependent on acid temperature and etching exposure time t. It has been shown that the coefficient M ~ (r_h t)²/R_0 for the total enhancement of B-PL intensity can be estimated using the ratio of light extraction areas, M ≈ (π/R_0)(r_h t/2)², when comparing these values in etched (~t²) and non-etched (with radius R_0 of the dislocation core) samples. The etched GaN surfaces might also increase the efficiency of PL excitation (due to enhancement of the excited area), resulting in a greater number of excess carriers and radiative recombination photons within etched samples. A model of the evolution of B-PL intensity with etching exposure has been proposed and validated. Funding: This research was funded by the LR Ministry of Education, Science, and Sport through CERN-related activities.
v3-fos-license
2018-10-17T04:40:02.167Z
2016-02-01T00:00:00.000
53332311
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "http://centaur.reading.ac.uk/59895/1/creating-a-nonword-list-to-match-226-of-the-snodgrass-standardisedpicture-set-jpay-1000109.pdf", "pdf_hash": "8c856743a5b97ef80ff82dd17cea378480a3c06f", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2852", "s2fieldsofstudy": [ "Psychology" ], "sha1": "8c856743a5b97ef80ff82dd17cea378480a3c06f", "year": 2016 }
pes2o/s2orc
Creating a Non-Word List to Match 226 of the Snodgrass Standardised Picture Set Creating non-word lists is a necessary but time-consuming exercise often needed when conducting behavioural language tasks such as lexical decisions or non-word reading. The following article describes the process whereby we created a list of 226 non-words matching 226 of the Snodgrass picture set [1]. In order to examine phoneme monitoring in fluent and non-fluent speakers we used the Snodgrass pictures created by Snodgrass and Vanderwart [1]. We also wished to look at phoneme monitoring in non-words, so began creating a list of words that were matched to the Snodgrass pictures. The non-words created were matched on the following dimensions: number of syllables, stress pattern, number of phonemes, bigram count, and presence and location of the target sound when relevant. These properties were chosen as they have been found to influence how easy or difficult it is to detect a target phoneme. Introduction Creating non-word lists is a necessary but time-consuming exercise often needed when conducting behavioural language tasks such as lexical decisions or non-word reading. The following article describes the process whereby we created a list of 226 non-words matching 226 of the Snodgrass picture set [1]. In order to examine phoneme monitoring in fluent and non-fluent speakers we used the Snodgrass pictures created by Snodgrass and Vanderwart [1]. We also wished to look at phoneme monitoring in non-words, so began creating a list of words that were matched to the Snodgrass pictures. The non-words created were matched on the following dimensions: number of syllables, stress pattern, number of phonemes, bigram count, and presence and location of the target sound when relevant. These properties were chosen as they have been found to influence how easy or difficult it is to detect a target phoneme. Rationale for creating a non-word list The nature of non-words used in experimental work has been shown to be extremely important to the results of the study they are used for. For example, how similar a non-word is to a real word affects the speed at which a lexical decision is made [2-5]. Gibbs and Van Orden [3] found that lexical decisions were fastest when the non-words used contained illegal letter strings - strings of letters that do not appear together in the language used, e.g., /gtf/. Keuleers and Brysbaert [6] state that, due to the impact non-words have on lexical decisions, they should only contain legal letter strings, thus more closely approximating real words. Phonotactic probability is the frequency with which different sound segments and segment sequences occur in the lexicon [7-11]. For example, /bl/ occurs commonly in English and is therefore thought to have a high phonotactic probability. It has been found that sensitivity to phonotactic probability develops in childhood and increases as our lexicon grows [8,12-14]. Munson and Bable [15] suggested that this increase in sensitivity is reflective of our lexical representations becoming more segmental. As our lexicon expands, so too do the phonotactic possibilities, and we become more sensitive to those segments which appear most often, e.g., /bl/. Coady and Aslin [12], Storkel [8], and Zamuner, Gerken and Hammond [16] have found that phonotactic probability is reflected in the accuracy of speech in young children, e.g.
the lower the phonotactic probability, the less accurate the speech. This finding, when applied to the two-step model of lexical access [17], can be explained in terms of the level of activation. When a speaker attempts to access a word in their lexicon, this model proposes two steps: lemma retrieval and phonological retrieval. These two steps are not sequential, and activation spreads throughout the retrieval network from semantic features to phonological features and back again. The most active phoneme units are then selected and positioned into the phonological frame. The model would suggest that those units with higher phonological probability have higher activation and are, therefore, more readily retrieved. For this reason it may be easier to detect /l/ when it is in a /bl/ combination rather than a /nl/ combination, as /bl/ occurs more often in English than /nl/. As our list was created for a phoneme monitoring task, controlling for the number of letter bigrams was especially important. In Levelt et al.'s [18] model of speech production it is noted that we have the ability to monitor the phonological code that is generated in the syllabification process which occurs before word production. Tasks such as phoneme monitoring can be used to test our ability to monitor phonological code, which is what Schiller [19] did. Adult Dutch speakers were given a silent phoneme monitoring task in which the phoneme they had to monitor for occurred in the syllable-initial and stress-initial position, compared to when it occurred in the syllable-initial but not stress-initial position. It was found that phoneme monitoring occurs fastest when the phoneme occurs in the initial stress position. Dutch, like English, is a language in which the majority of multisyllabic words have their syllable stress on the initial syllable, so results can be generalised to English. Coalson and Byrd [20] conducted a study asking participants to monitor for a phoneme in non-words. They found similar results to Schiller [19] and also suggest that fluent adults monitor for phonemes more slowly in non-words as opposed to real words. It can be seen from this work that controlling for the position of the phoneme within the word, and whether it occurs in the stressed syllable, is important as it affects speed of monitoring. Purpose of the list - current study We created this non-word list as, in our subsequent study, we wished to examine phoneme monitoring in real and non-words in adults who are fluent vs. adults who are dysfluent. As we also wished to do this in a silent picture phoneme monitoring paradigm, we chose to use the Snodgrass picture set [1]. Snodgrass and Vanderwart created their set of 260 line drawings which they standardised on four variables: familiarity, image agreement, name agreement and visual complexity. These variables must be controlled for as they affect cognitive processing in pictorial and verbal form. More familiar items are more easily named, as are words learnt at a younger age; those with higher name and image agreement, and less visual complexity, are also more easily named [21-23].
Generating the non-words Initially we excluded some of the Snodgrass words, e.g., those which are not regularly used in British English, such as wrench (in British English we would use spanner); noun phrases were also excluded, e.g., wine glass. We then transcribed each word orthographically and phonologically, detailing the position of primary stress, the total number of syllables and the total number of phonemes. A letter bigram count was also calculated by hand. This count, taking account of phonological transcription, was vital as English orthographic transcription does not consistently agree with phonological transcription. Once we had all of this information we could begin creating our non-words. In order to create the non-words we used two software programs. The first was the ARC Nonword Database [24]. This database was created so that researchers could access monosyllabic non-words or pseudo-homophones, chosen on the basis of a number of properties including: the number of letters, the neighbourhood size, summed frequency of neighbours, number of body neighbours, summed frequency of body neighbours, number of body friends, number of body enemies, number of onset neighbours, summed frequency of onset neighbours, number of phonological neighbours, summed frequency of onset neighbours, bigram frequency - type, bigram frequency - token (both position specific and position non-specific), trigram frequency - type, trigram frequency - token (both position specific and position non-specific) and the number of phonemes. Values for each of these can be set (upper and lower limits) and the fields you wish to have output for can also be selected. Non-words and pseudo-homophones can be chosen to be only orthographically existing onsets, be only orthographically existing bodies, only legal bigrams, monomorphemic only syllables, polymorphemic only syllables and morphologically ambiguous syllables. The ARC software, whilst extensive, could only be used to create non-words for all of the monosyllabic words in the Snodgrass set (121 words of the 226 total). Each word was chosen from a list of possible options given by the ARC database; when the target sound needed to be present, non-words had to be selected that also had the target sound in the same position. It was not possible to ask the software to do this for us, which added additional workload. For the remaining 105 multisyllabic words we used the Wuggy software (Keuleers and Brysbaert, 2010) to create the non-words. Once again, words were matched to real words in terms of phoneme length, syllable length, presence or absence of the target sound, place in which the target sound occurred when it occurred, and stress pattern. Wuggy is a multilingual pseudo-word generator designed to elicit non-words in Basque, Dutch, English, French, German, Serbian (Cyrillic and Latin), Spanish, and Vietnamese. This software was developed to expand upon what ARC offers as it can generate multisyllabic words. A word or non-word can be inputted and the algorithm can generate pseudo-words which are matched in sub-syllabic structure and transition frequencies. In the Wuggy software, after the language has been selected, it is possible to select whether real or pseudo-words are required. Output restrictions can then be applied including: match length of sub-syllabic segments, match letter length, match transition frequencies (concentric search) and match sub-syllabic segments e.g.
2 out of 3. There are also output options similar to ARC, including: syllables, lexicality, OLD 20, neighbours at edit distance, number of overlapping segments and deviation statistics. Each of the remaining 105 words was put into Wuggy and one of the options generated was chosen based upon whether it had the target sound (when applicable) in the correct location. Once each non-word had been chosen and transcribed orthographically and phonologically, a manual bigram count was taken. To ensure no bigrams were missed, the total number of phonemes was calculated (980 phonemes in each list - words and non-words); following this, the total number of possible bigrams was calculated (754 bigrams in each list - words and non-words). Bigram frequency data was calculated for real and non-words, and a Wilcoxon signed-rank test showed similar frequencies across the two word lists (z = -0.123, p = 0.902). None of the non-words differed from the real words by more than 2 standard deviations (more than 5 bigrams), and the greatest difference was 6 occurrences of a bigram vs 1 occurrence of it. By ensuring that the lists are as similar as possible we have minimized the chance of any differences between performances on each list being down to factors other than the word/non-word distinction. Outcome The completed non-word list with corresponding Snodgrass words can be found in Table 1. The target phonemes that we used in the subsequent phoneme monitoring task are highlighted in bold (where applicable). It should be noted that whilst this list is matched and the bigram frequencies are such that there is no significant difference between the two lists, this is only the case when all 226 words are used. If exclusions are made in any work using them then a new bigram count must be taken to ensure that lists remain well matched. Table 1: The completed non-word list with corresponding Snodgrass words.
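The re-matching check recommended above can be automated. The sketch below uses hypothetical toy transcriptions (not the published lists): it recounts phoneme bigrams for each list after any exclusions and re-runs the Wilcoxon signed-rank comparison of bigram frequencies.

```python
# Minimal sketch of the bigram-matching check: recount phoneme bigrams per list and
# compare their frequencies with a Wilcoxon signed-rank test after any exclusions.
# The transcriptions below are hypothetical placeholders, not the published lists.
from collections import Counter
from scipy.stats import wilcoxon

def bigram_counts(transcriptions):
    """Count phoneme bigrams over a list of phoneme-level transcriptions."""
    counts = Counter()
    for phonemes in transcriptions:
        counts.update(zip(phonemes, phonemes[1:]))
    return counts

real_words = [["b", "l", "ae", "k"], ["t", "r", "ii"], ["s", "p", "uu", "n"]]
non_words = [["b", "l", "e", "k"], ["t", "r", "oo"], ["s", "p", "ii", "n"]]

real_bg, non_bg = bigram_counts(real_words), bigram_counts(non_words)
all_bigrams = sorted(set(real_bg) | set(non_bg))
x = [real_bg[bg] for bg in all_bigrams]       # per-bigram frequency in the word list
y = [non_bg[bg] for bg in all_bigrams]        # per-bigram frequency in the non-word list

stat, p = wilcoxon(x, y, zero_method="zsplit")  # paired comparison across bigrams
print("Wilcoxon signed-rank: statistic=%.3f, p=%.3f" % (stat, p))
```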
v3-fos-license
2018-08-13T15:41:25.018Z
2018-07-27T00:00:00.000
52023989
{ "extfieldsofstudy": [ "Business" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://innovation-entrepreneurship.springeropen.com/track/pdf/10.1186/s13731-018-0085-4", "pdf_hash": "10fc322601eec1dca15913aecdb49ab8629486e0", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2853", "s2fieldsofstudy": [ "Economics", "Business" ], "sha1": "0516069c82598d5db864ecc8ba02467eca1503ab", "year": 2018 }
pes2o/s2orc
Enterprise innovation in developing countries: an evidence from Ethiopia Enterprise innovation has gained the interest of development policymakers and scholars as the bases for the industrial development. This study comprehensively analyzes the drivers of enterprise innovation in developing countries. The study uses survey data to analyze the determinants of enterprise innovation in Ethiopia using a multivariate probit (MVP) model. For this study, enterprises were grouped into four categories: all-sized, large-sized, medium-sized, and micro- and small-sized enterprises. It appears that engagement in R & D, on-the-job training, and website ownership significantly determine enterprise innovation. This study, unlike previous studies, comprehensively analyzes drivers of innovation by considering enterprises in different sizes and all at the same time. This helps identify factors most relevant for enterprise innovation at all stage which help policymakers get focused on strategy development. Based on the findings, further emphasis on engagement in R & D would help enterprises to become innovative for all categories of enterprises. Furthermore, strengthening the available formal training and diversifying type of the training that is related to skills, knowledge, and techniques that help achieve the long-term objective of the enterprises are worth considering. Enterprises also need to subscribe to different sites that help learn more and access information. transformation, and innovativeness of its enterprises (Ethiopian Science and Agency Technology (ESTA) 2006). Governments and donors in the developing countries have shown increasing interest in promoting enterprise innovations and entrepreneurship to encourage enterprises. This is due to the potential role enterprise innovation play in the enterprise development for the industrial and economic development. Enterprises create job opportunities and income for the youth and poor in a developing country. The impression is that innovation is important for enterprises to become and remain competitive, to move to higher return activities, and to grow and graduate to a larger enterprise status, hence creating new employment and income opportunities. However, the effectiveness of such interventions by understanding the role of innovation in the growth and development of the economy depends on determining factors influencing innovation (Abdu and Jibir 2017). Most business enterprises in developing countries like Ethiopia are small-and medium-sized and face various challenges including lack of processed technological information, inadequate training capabilities at technical and vocational education training (TVET) centers, lack of access to financial and other resources and absence of consultancy support (FDRE 2010), poor infrastructural base, and unfavorable government policies which weaken their innovation activities (Abdu and Jibir 2017;Adebowale et al. 2014;Choi and Lim 2017;Dotun 2015;Egbetokun et al. 2016). It is interesting to observe that despite all the difficulties, a large share of firms can still innovate in the African context (Egbetokun et al. 2016;Abdu and Jibir 2017). The greatest challenge to understanding the role of innovation in the growth and development of the economy has been lacking meaningful data to determine the factors influencing innovation. 
Moreover, there has been a development of new data sources like the Enterprise Data Survey (EDS) collected by the World Bank which have spurred many empirical studies, in the developed countries, on the determinants of a firm's innovation. Adebowale et al. (2014) argue that some ideas and concepts which have emerged in the innovation systems community have been derived from specific experiences in rich countries and cannot be universal templates. Perhaps the conclusion to be drawn from these studies may be misleading, inconclusive, and difficult to generalize to enterprises in developing countries. Empirical studies on determinants of innovation by small firms in Africa are relatively scarce (Abdu and Jibir 2017a;Adebowale et al. 2014). Studies conducted on enterprise innovation so far suffer from several limitations. First, they focus only on product and process innovation determinants. The conclusions and policy recommendations derived from these studies cannot be generalized to other innovation types. This is due to the fact that what fosters innovation in process innovations may inhibit/not affect organizational innovations at all. For instance, Stojčić and Hashi (2014) found that cost factors affect product innovations but do not affect process innovations. The study further reveals that firm size fosters new process innovations while it hinders new product innovations. The implication is that some determinants of innovations are specific to the type of innovation the enterprises engaged in. Second, enterprises are almost not homogeneous in size, capability, background, and sector types. Under this circumstance, it is impossible to expect the same factors determining innovation of enterprise (Gebreeyesus 2011). This study reveals that large-sized firms and firms in the manufacturing sector are more likely to engage in innovative activities. Similarly, Hashi and Stojcic (2013) noted that under different circumstances, firm size could positively/negatively determine innovation. This proves that the "one-size-fits-all" principle does not work. Third, most of the studies of the enterprise innovation are conducted in the developed countries which may challenge generalization to the developing countries. As developing countries deviate from the developed countries in institutional structures and development infrastructures, it needs due emphasis. This is because the business environment in which enterprises practice may mask the effect of the different factors on the innovation of the enterprise. There is no comprehensive empirical evidence on determinants of enterprise innovation in developing countries including Ethiopia. The existing few studies in Africa examined the determinants of innovative activity and attributes of innovation (Gebreeyesus 2011). Given the above research gap, this paper contributes to the narrow literature on innovations of enterprises in Ethiopia in the following ways. First, it analyzes not only the determinants of product and/or process innovations but also the determinants of four types of innovations (that is, a new product innovation, a new method of production innovation, a new marketing innovation, and a new organizational structure). Distinguishing enterprises into different sizes helps to identify important factors regarding firms' size. 
Second, to address the bias that might arise from pooling a heterogeneous group of firms, this study tries to investigate the determinants of innovation by classifying enterprises into all-sized, large-sized, medium-sized, and micro-and small-sized enterprises. Third, contrasting to most of the earlier studies, this study covers not only manufacturing but also retail services and non-retail services. The rest of the paper is organized as follows. The next section presents a brief literature review. In the third section, the data and method of data analysis is presented. Results and discussions are discussed in the "Results and Discussion" section. Lastly, the conclusions and policy implications are discussed. Literature review The Organization for Economic Cooperation and Development (OECD) defines innovation more broadly as the implementation of a new or significantly improved product (that is, a physical good or service), a process, a new marketing method, or a new organizational method in business practices, workplace organization, or external relations (Organizations for Economic Co-operation (OECD) 2010). Enterprise innovations can arise at different points in the development process, including conception, R & D, transfer of the technology to the production organization, production, and marketplace usage (Atkinson 2013). A wide range of factors affects innovation process, including firm size and age, research and development (R & D) efforts, the quality or skill level of managers/employees, employee participation and motivation, managerial practices and inter-departmental cooperation and knowledge exchange, factors related to the firms' network and its interactions with outside organizations, and factors specific to the industry (Egbetokun et al. 2016). External market target, capacity building, facilitative support to enterprises, and entrepreneur's characteristics determine the innovation ability of enterprise. Enterprises' characteristics such as size of the enterprises (Hadhri et al. 2016;De Mel et al. 2009;Stojčić and Hashi 2014;Zemplinerová and Hromádková 2012) and enterprise's maturity (Zakic et al. 2008) determine innovation of the enterprises. By implication, larger and mature enterprises are more innovative than the smaller and less mature enterprises. Studies show that enterprises' external market target and strategic relation formation determine enterprises' innovation. Foreign market access for the enterprises would help enjoy the large market size for their goods and services and help earn foreign currency which will have a multiplier effect on their activities. Strategic relation behavior of the enterprises would help them with whom to make collaboration in international and national entities. This would help enterprises advance their business. Enterprises that use foreign inputs and that have collaboration with foreign are interrelated (Avermaete et al. 2004). Foreign market orientation of enterprises also determines enterprise innovation (De Mel et al. 2009;Stojčić and Hashi 2014;Zakic et al. 2008;Zemplinerová and Hromádková 2012). This shows that firms that are foreign market-oriented have experience and strategic relation with foreign sectors and are more innovative than their counterparts. The enterprise's capacity level related to investment in human capital of the enterprises determines the enterprise's innovation. Investment in human capital affects the ability, skills, and knowledge of the workforce of the enterprises. 
These investments affect innovation of the enterprise. Several studies proliferated this issue. For instance, van Uden et al. (2014) analyzed the impact of human capital innovation in developing countries (Kenya, Tanzania, and Uganda) using data from the Enterprise Surveys of the World Bank and found that human capital spurs innovation. Mahendra et al. (2015) also argued that human capital affects innovation abilities of enterprises in Indonesia. Mahendra et al. (2015) further showed that different combinations of human capital affect innovative output depending on the context in which these combinations are implemented (manufacturing or service sector). Moreover, Audretsch et al. (2016) added that academic-based human capital encourages innovativeness of enterprises while business-based human capital does not play a role. The firms' extent of investment in the R & D, skills of the firms' workforce, the firms' investment in know-how (Avermaete et al. 2004;Dotun 2015;Raymond and St-Pierre 2010;Romijn and Albaladejo 2002), and the use of known technology transfer mechanisms (Hadhri et al. 2016) determine the enterprises' propensity to innovate. This shows that the capacity of the enterprises explains their innovative ability. Empirical evidence shows that the most important factor of innovation is R & D activity though findings are mixed. El Elj and El Elj (2012) argued that the value of the R & D activity is related to the core competencies of the firm and to its efficient innovative processes in Tunisia. However, Aralica et al. (2008) found that continuous engagement in R & D and R & D cooperation has turned out to be insignificant in relation to the share of sales of innovative products in Croatia. Some argue in low-and medium-technology industries, creativity, not technological knowledge, is the driver of innovation, because in those industries, innovation is based on the general knowledge stock of the firm and the creativity to transform such a stock, instead of scientific research (Goedhuys et al. 2014;Santamaría et al. 2009). Studies also witness that owner's and entrepreneur's specific characteristics determine enterprise innovation. Owner's characteristics such as the educational background of the owner, prior experience of owner-manager (Avermaete et al. 2004), owner's ability personality traits (De Mel et al. 2009), the age of the entrepreneurs, and the gender of the entrepreneurs (Gebreeyesus 2011) explain firms' innovativeness. Here, the higher the educational level, the younger, and the more the male owners, the more the firms are innovative. Factors exogenous to the enterprises are also found to be important determinants of the enterprises. These factors are less controllable by the enterprises by themselves. Enterprises that are more active in using available external resources and supports are more likely to be innovative. Few studies showed that facilitative supports such as government support, availability of patent and copyright (Dotun 2015), better institutional quality at the local, access to finance (Mahendra et al. 2015), and the use of external sources of information (Avermaete et al. 2004) determine firms' innovation. These studies' focus contended that external support to the firms determines the firms' innovation. There are factors which positively and negatively affect enterprise innovations. Other studies have emphasized on the importance of the innovation for survival in a volatile environment (Johnson et al. 1997). 
Some studies that have dealt with enterprises' innovation even did not conduct their study by unraveling firms into different respective sizes. Distinguishing enterprises in terms of their size help to identify more relevant factors affecting enterprise innovation. Factor that is more important for a small enterprise may not be important for the large or medium enterprise and vice versa. Identifying factors important in all cases is also worth dealing as it helps policymakers to get focused in devising enterprise innovation and industrial development strategies. The literature that deals with the characteristics of enterprise innovation activities and connects innovation and other enterprise activities are concerned with the context and content of innovation processes. The focus of the literature, in this case, is whether enterprise innovation activities are related to the existence of R & D activities. The R & D activity is an indispensable part of enterprise innovation activities. A significant amount of innovation and improvements originates from design improvements like "learning by doing" and "learning by using" (Arrow 1962;Mowery and Rosenberg 1989), and such informal efforts are embodied in people and organizations (Teece 1986a(Teece , 1986b. These literatures stress an importance of the experience of the enterprises that emanates from the on-the-job training. Other literatures point out the link between innovation and enterprise-level determinants of innovation characteristics such as firm size (Aralica et al. 2008;Mahendra et al. 2015). Following the work by Schumpeter (1942), there has been a wide-ranging debate on the differences and complementary qualities of small and large firms in the face of innovation and technological change. As per Schumpeter (1942), large firms have advantages in comparison with small ones when taking part in innovation activities and, what is more, these advantages increase according to firm size. In addition, size emerges as a primary internal force driving technological innovation (Alsharkas 2014) and its relevance is motivated by several intertwined arguments. This hypothesis has been reviewed in various empirical studies without any definite conclusion being reached that there is a positive relationship between the propensity to innovate and firm size for Sri Lanka (De Mel et al. 2009), for Lebanese (Hadhri et al. 2016), for Nigeria (Moohammad et al. 2014), and for Ethiopia (Gebreeyesus 2011). On the other hand, some scholars (Martínez-ros and Labeaga 2002); Plehn-Dujowich 2009) argue that firm's size and innovation abilities are inversely related because they are more dynamic in the decision to innovate. Some studies found innovation to be negatively related to firm size for Croatia (Aralica et al. 2008). Some of the authors found an inverted-U relationship between firm size and R & D intensity, i.e., the ratio of R & D expenditure or personnel to size, or between firm size and the ratio of patents to size (Koouba, Karim et al. 2010).Others found a positive relationship up to a certain threshold and no significant effect for larger firms. The inconclusive results regarding the effect of firm size on innovative capacity of the firms justify the inclusion of many control variables to get robust results (Hadhri et al. 2016). For instance, a systematic review by Becheikh et al. (2006) shows there are about 40 determinants concerning the characteristics of innovating firms. According to Becheikh et al. 
(2006), these driving forces of innovation are categorized into internal determinants of innovation and contextual determinants of innovation. Internal determinants of innovation include firms' general characteristics (age of the firm, ownership structure, past performance), firms' global strategies (export/internalization, external/internal growth), firms' structure (formalization, centralization, and interaction), management team (leadership variables and manager-related variables), and functional assets and strategies (R & D, human resources, finance, etc.). Contextual determinants of innovation are firms' industry-related variables (sector, demand growth, industry concentration), firm's regional variables (geographic location and proximity advantage), networking, knowledge/technology acquisition, government and public policies, and surrounding culture. The impact of these internal and contextual determinants of firm's innovation activities have been studied in developing countries showing varying, inconclusive, and contradictory results (Becheikh et al. 2006;Hadhri et al. 2016). Descriptive statistics results Description of variables used in the study and their descriptive statistics are presented in Table 1. The descriptive result shows that, during the last 3 years, 40% of enterprises introduced a new product innovation; 34% of them introduced a new method of production innovation; 30% of them introduced a new organizational structure innovation; and 34% of them introduced new marketing methods. From the descriptive result, it is showed that 1.74% enterprises were micro-enterprises, 46.38% of them were small enterprises, 30.63% of them were medium enterprises, and 21.25% of them were large enterprises. The result also showed that majority of the enterprises (62%) is in the capital city of the country. About 10% enterprises had a female top manager. There were 38% enterprises that had and own a website. During the last 3 years, about 22% of the enterprises conducted formal training programs for their permanent full-time employees, while only 14% of them spend on formal R & D activities. The average age of the enterprises is 14 years. The mean top manager's experience is 14 years. On average enterprises, the share of the direct export is about 5%. Out of the full-time permanent workers of the enterprises, 68% of them have above secondary school education level. Table 1 presents the minimum, maximum, and standard deviations. Econometric analysis Tables 2 and 3 present the estimated effects of the multivariate probit model on factors affecting enterprises' innovation (new product innovation, a new method of production innovation, a new marketing method innovation, and a new organizational structure innovation) based on two scenarios (regardless of enterprises' size and based on their size). Analyzing determinants of enterprises' innovation endeavors by segregating enterprises into different size helps to identify important size-dependent factors that affect enterprises' innovation. It helps to uncover factors which determine enterprises' innovation abilities regardless of the enterprises' size. In what follows, we present and discuss the determinants of enterprise innovation. Then, we conclude and recommend. 
Innovation in all-sized enterprises The multivariate probit regression result shows that website ownership, the percentage of full-time permanent workers who completed secondary school, the availability of formal training programs for permanent full-time employees, and engagement of the enterprises in R & D activities significantly affect the four enterprises' innovations irrespective of the enterprises' size (see Table 4 for a summary of the main results). The implication of this finding is that enterprises, regardless of their size, that have access to information, have more educated permanent full-time workers, have a regular on-the-job training program for the workers, and conduct research and development are more innovative than their counterparts. Enterprises which have their own website are more likely innovative than those that do not have a website. Social networking sites, like a website, provide information about individuals and their networks, which enables enterprises to create online social communities shared by external stakeholders. A website helps enterprises interact with external actors such as customers and public institutions. This helps the enterprise get, transfer, and assimilate external knowledge within the enterprise and then generate innovation. Moreover, according to the triple helix theory, the success of innovation endeavors depends on the integration and cooperative interaction that develop between academia, the private sector, and the government, which is shaped by social networking sites. Our finding is in line with that of Scuotto et al. (2016), Martins (2016), Guo et al. (2016), and Del . Having a website may help enterprises to use all possible available resources in the world via the Internet. These resources may be related to new technologies (production), knowledge, and techniques helpful in upgrading the method of production, management of resources, marketing of the products, and so on. They may also use the Internet to identify areas of more demanded products they focus on. Enterprises may conduct an assessment of their product, method of production, and management through an online survey using their website. Thus, website ownership may determine enterprise innovation through the provision of important information, resources, and online survey services. Tables 2 and 3 show that different aspects of human capital (general level of schooling and formal on-the-job training) ignite enterprises' innovation of all types, regardless of size. Enterprises investing in formal training programs for their permanent and full-time employees are more likely innovative than otherwise, because it is a worker with knowledge and skill who can generate new knowledge and the ability to absorb new knowledge created by other enterprises' employees. Another component of human capital which is a driving force for innovation in this study is the level of schooling attained by the permanent employees. The result shows that the percentage of employees of the enterprises who completed secondary school increases the enterprises' chance of innovativeness. This is because a high number of workers who completed secondary school generates a high level of knowledge and techniques and induces enterprises to develop innovative new practices. Employees of the enterprises with a high school education level may learn from each other, and this may have spillover effects. The spillover effects of this education may even spread to the enterprise's employees with a lower level of education.
In this way, even employees with a lower level of education may gain experience and this would stimulate the whole activities of the enterprise. The enterprise's employee with more than a high school education may also have different technical education and experience. Thus, schooling and on-the-job training are an enabling factor in profitable innovation which suggests that investments in skills help expand the group of firms in the economy that have the potential to innovate. This finding is corroborated by Abdu and Jibir (2017) The result reveals that an enterprise's propensity to introduce innovation is higher when it spends on R & D. Involvement in research and development would help the enterprise search new things, to adopt, to develop, and to use them to achieve the enterprise's objectives. As research and development are concerned with searching new mechanisms that solve problems, enterprises also use research and development to advance to their predetermined goals. Research and development may help enterprises to use the available internal and external resources. Research and development added to the on-the-job training would enhance the absorption capacity and stock of knowledge of the enterprises that would induce innovation of the enterprises. The findings of Rehman (2016) conducted in India and those of Abdu and Jibir (2017) in Nigeria corroborate this finding. The study support that R & D has a positive impact on the product and process innovation. Another study also shows that enterprises that received a grant for research and development increase the probability that a firm introduces new goods and services to the world (Jaffe and Le 2015). A study by Yuan et al. (2014) shows that R & D investment intensity positively determines the firm's innovation though the relationship is weak. However, this study showed that the effects of R & D on process innovation and any product innovation are much weaker. The top manager's experience in years determines a new method of marketing innovation. A longer time the manager stays in the enterprises enriches the experience of the manager in every aspect of the enterprises. It might also provide an opportunity for the manager to deal with the innovation of the enterprise. The manager of the enterprise knows the areas that need improvement, and probably, it is the top manager that is exactly keen for the accomplishment of the strategic objective of the enterprise. In this case, the longer the stay of the top manager in the enterprise, the more experienced is the manager about the enterprise. It is argued that managers are likely to have better insights into future business opportunities, threats, niche markets, products, technologies, and market development; in this case, top managerial experience is expected to be positively related to innovative activity and its performance. Managerial experience enhances both the propensity to innovate and the innovative firm performance, as measured by the share of sales accounted for by new products (Balsmeier and Czarnitzki 2014). Thus, the experience of the top manager would help peruse marking innovation that helps achieve the objective. However, a study by Yuan et al. (2014) indicated that the top management team's tenure and firm innovation are negatively related. An enterprise's size determines innovation in all cases except the new market innovation. The size of the firm goes with the capital and human capital. 
The higher the size of the enterprises, the more they can afford training, R & D, and education and the more the enterprises are innovative. A larger enterprise can amortize fixed costs over a broader base and will, therefore, be more innovative than smaller firms. Moreover, due to their broad base of resources and capabilities, large enterprises are more likely innovative as compared to small ones. The assertion that the size of the enterprise positively affects the innovativeness of the enterprise is also supported by van Location of the enterprise significantly determines new product innovation. The fact that enterprises that are located in the capital city of the country are more innovative than enterprises located outside the capital city can be explained by the compounding effect of the city and the localization (urbanization) economies of the enterprises. The compounding effect of the city is related to the government emphasis on all sectors in the city including the enterprise development. The localization (urbanization) economies' effect is related to that enterprise densely populated in the city which may easily learn from each other either in the formal or informal or in both ways. This may thus help enterprises located in the capital city to be more innovative than others. The assertion here is that in the capital cities' information, the capital (human and physical) easily and freely moves from one enterprise to another. This finding is corroborated by the case of Silicon Valley which is well known for being a learning region and where a successful innovation system has been implemented (Doloreux 2003). Porter and Stern (2001) also argue that location matters for innovation; particularly, most attractive locations enhance the environment for innovation. Innovation in large-sized enterprises For the large-sized enterprise, the MVP regression shows that only the availability of formal training programs for permanent full-time employees and the engagement in formal R & D activities significantly determine the four types of enterprise innovations. This finding suggests that enterprises that emphasize on the on-the-job training of the employee and research and development are more innovative than others. This finding convinces that enterprise innovation whether it is a new product or a new process or new management or new market innovation, human capital accumulation through training, research, and development is indispensable. In this study, it is also indicated that in the all-sized enterprises, training and R & D enhance the innovation of the enterprises. Innovation in medium-sized enterprises Regarding medium-sized enterprise innovation, the MVP regression shows that availability of formal training programs for permanent full-time employees and engagement in formal R & D activities determine new product innovation. Engagement in R & D activities determine a new method of the production innovation. The new organizational structure innovation is determined by website ownership, the availability of formal training programs for permanent full-time employees, the number of permanent full-time workers, and the engagement in formal R & D activities. Here, permanent full-time worker increases determines the new organizational structure innovation. The explanation for this is that with an increased number of the full-time permanent workers, diverse ideas and experiences would interact that adds to the enterprise innovativeness. 
Website ownership and engagement in R & D determine a new method of marketing innovation. In the medium-sized enterprises, the only variable that affects the four enterprise innovations is an engagement in formal R & D. Innovation in micro-and small-sized enterprises For the micro-and small size, the regression result shows that new product innovation is determined by website ownership, a percentage of full-time permanent workers who completed secondary school, the availability of formal training programs for permanent full-time employees, and engagement in formal R & D activities. The new method of production innovation is determined by the sex of the top manager, the website ownership, and the availability of formal training programs for permanent full-time employees and engagement in R & D activities. Here, micro-and small enterprises, which have a female as a top manager, are more innovative in a new method of production innovation. Some empirical studies also contend that female representation in top management improves firm performance that focuses on innovation (Dezsö and Ross 2012). In contrast, in hiring more female managers, companies can be more innovative, but having a top female at the top position negatively influences the innovation if the number of female is lower in the top management team (Lyngsie and Foss 2017). The new organizational structure innovation is determined by website ownership, a percentage of full-time permanent workers who completed secondary school, availability of formal training programs for permanent full-time employees, and engagement in R & D activities. The new method of marketing innovation is determined by years of experience of a top manager, website ownership, a percentage of full-time permanent workers who completed secondary school, availability of formal training programs for permanent full-time employees, and engagement in R & D activities. The top manager's experience determines the new method of marketing innovation in micro-and small enterprises. This is explained that as the top manager works more in the sector, he/she will be experienced in dealing with the selling of product and services. For the micro-and small enterprise, the regression results showed that website ownership, availability of formal training programs for permanent full-time employees, and engagement in R & D activities affect the four enterprises innovations. Conclusions This study comprehensively examined the main determinants of an enterprise's innovation in Ethiopia using a secondary data collected by World Bank. To achieve the objective, the study MVP model was used. This study categorized the enterprises into four groups, unlike other studies which focus on either enterprise of a specific size or enterprises regardless of size. Our findings show that in all-sized enterprises, website ownership, a percentage of full-time permanent workers whose education is above the secondary school, availability of on-the-job training, and engagement in R & D activities are factors that affect enterprises' innovations. The MVP regression result indicated that for the large-sized enterprises, only the availability of formal training programs for permanent full-time employees and the engagement in R & D activities determine the four enterprise innovations. For the medium-sized enterprises, the regression result shows that engagement in R & D fosters the four innovations. 
In the case of micro-and small enterprises, the variables that affect the four enterprise innovations are website ownership, the availability of formal training programs for permanent full-time employees, and engagement in R & D activities which encourage four of the innovations for micro-and small enterprises. The finding of the study has strong theoretical implications. First, the finding that schooling and training and R & D drive innovativeness in performance of the enterprise goes with several empirical findings. For instance, schooling and training are important sources of innovation (Abdu and Jibir 2017;D'Este et al. 2014;Dostie 2014Dostie , 2018van Uden et al. 2014;van Uden et al. 2016). Further, R and D contributes to the innovation (Abdu and Jibir 2017; Jaffe and Le 2015; Yuan et al. 2014). And this goes back to replicate Becker's (1964) notion that maintaining humans possess human capital (skills, knowledge, ability) that can be improved and can impact how people act and affect the business entity. Second, the finding that shows website ownership drives an innovation of enterprise replicates the works of Scuotto et al. (2016), Martins (2016), Guo et al. (2016), Del , and Bresciani and Ferraris (2016) which contend that social networking sites, global knowledge, and enterprise embeddedness contribute to the innovation performances of enterprises. And this further goes in line with the phenomenon reflected by Schumpeter (1942) that creative destruction produces product and process innovation and knowledge-intensive entrepreneurship (that can be obtained with the help of information through a website) for entrepreneurs that strive to cope with uncertainty generate changes or creative destructions. The finding of the study also has strong policy implications. It suggests that development partners, policymakers, and enterprises should emphasize on R & D activities, regular on-the-job training, education, and development of a website (information access via the Internet). Specifically, the following policy recommendations help an enterprise enhance their innovation performance: first, conducting on-the-job training on a regular basis to upgrade employee's schooling, skill, and efficiency; second, developing and expanding enterprise's website for acquiring reliable information; and third, activating new and strengthening the existing R & D activities which are salient strategies that can promote enterprises' innovations and achieve their objectives. Conducting on-the-job training on a regular basis to upgrade employee's skill and efficiency would boost the capacity of the employee of the enterprises. This can be conducted based on the identified areas on which employee needs training. In this case, careful human power planning that considers the needs of the enterprises and employees is vital. Training that can be pursued can be specific to the enterprise innovations or general. Indeed, it should also be conducted in a regular, sustainable, and variety of manner that ensures the sustainability of the enterprise operation. Website ownership of the enterprises is indispensable to get information worldwide in this globalization era. Here, an enterprise needs to develop their own website for gaining reliable information that boasts their innovation. Only, developing website does not suffice for promotion of the enterprise innovation; the enterprise also needs to subscribe to the international institutions that encourage their betterment. 
The R & D activities can be strengthened by allocating a reasonable amount of budget for R & D, by encouraging their workers to conduct R & D, and by making some linkages with institutions that have ample experience in R & D activities. Concerned bodies may incentivize their worker to conduct R & D that result in important enterprise innovations that would have a long-lasting impact on the productivity and profitability of the enterprises. Finally, this study is limited to the Ethiopian enterprises and difficult to generalize to all developing countries. The study also used all enterprises. Therefore, future researchers that emphasize on the enterprises' innovation better consider different countries in the developing countries. Future researchers may also study innovation performances of enterprises based on the sector type, for instance, manufacturing enterprises and trade enterprises. Data source and analytical methods This study used the 2015 Ethiopia Enterprise Surveys (ES) data collected by the World Bank (World Bank 2016) from June 2015 to February 2016. The ES is a panel data which are an ongoing World Bank project in collecting both objective data based on enterprises' experiences and enterprises' perception of the environment in which they operate. The sample for the 2015 Ethiopia's enterprise survey was selected using stratified sampling, following the standard methodology. Three levels of stratification were used in the country: industry, establishment size, and region. Industry stratification was designed in the way that follows: the universe was stratified into four manufacturing industries (food and beverages), textile and garments including leather, non-metallic mineral products, and other manufacturing and three service sectors (transportation, retail) and other services. Size stratification was defined as follows: small (5 to 19 employees), medium (20 to 99 employees), and large (more than 99 employees). Regional stratification for the 2015 Ethiopia ES was done across six geographic regions: Addis Ababa and Dire Dawa City administrations and Amhara, Oromia, SNNPR, and Tigray regional states. For this study, the data were pooled together. Stating the multivariate probit model Behavioral response models with more than two possible outcomes are either multinomial or multivariate. Multinomial models are suitable when respondents can choose only one outcome among the set of mutually exclusive and collectively exhaustive choices. However, in this study, the innovation variables are not mutually exclusive, considering the possibility of the simultaneous involvement of innovation types and the potential correlations between them. Specifically, we examine factors related to different innovations with the following enterprise innovations: new product innovation, new method of production innovation, new organizational structure innovation, and new organizational innovation. The first innovation-dependent variable, new product innovation (h1), takes the value 1 if the enterprise has introduced new or significantly improved products or services during the last 3 years, otherwise 0. The second innovation-dependent variable, new method of production innovation (h3), takes the value 1 if the enterprise has introduced any new or significantly improved methods of manufacturing products or offering services during the last 3 years, otherwise 0. 
The third innovation-dependent variable, new organizational structure innovation (h5), takes the value 1 if the enterprise has introduced any new or significantly improved organizational structures or management practices during the last 3 years, and 0 otherwise. The fourth innovation-dependent variable, new marketing method innovation (h6), takes the value 1 if the enterprise has introduced new or significantly improved marketing methods during the last 3 years, and 0 otherwise. We apply the multivariate probit (MVP) model to estimate the jointly determined dependent variables within a system of simultaneous equations. The MVP model is a special case of seemingly unrelated regression (SUR) in which the dependent variables are categorical. In circumstances where cross-equation error terms are correlated and the explanatory variables are the same across equations, the MVP model can generate more efficient parameter estimates than single-equation estimation approaches. The regression model can be described as follows:
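A minimal sketch of the standard multivariate probit specification, written to be consistent with the four binary outcomes h1, h3, h5, and h6 defined above (this is the textbook form of the model, not necessarily the authors' exact notation):

\[
y_{im}^{*} = \mathbf{x}_i'\boldsymbol{\beta}_m + \varepsilon_{im}, \qquad
y_{im} = \mathbf{1}\{\, y_{im}^{*} > 0 \,\}, \qquad m = 1, \dots, 4,
\]
\[
(\varepsilon_{i1}, \varepsilon_{i2}, \varepsilon_{i3}, \varepsilon_{i4}) \sim \mathrm{MVN}(\mathbf{0}, \boldsymbol{\Omega}),
\]

where \(y_{im}^{*}\) is the latent propensity of enterprise \(i\) to adopt innovation type \(m\) (product, production method, organizational structure, marketing method), \(y_{im}\) is the observed binary indicator, \(\mathbf{x}_i\) is the vector of explanatory variables (e.g., website ownership, formal training, R & D engagement), and the error terms follow a multivariate normal distribution with zero means, unit variances, and correlation matrix \(\boldsymbol{\Omega}\), whose off-diagonal elements capture the correlations between innovation types that motivate joint estimation.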
Swine Model of Thrombotic Caval Occlusion Created by Autologous Thrombus Injection with Assistance of Intra-caval Net Knitting

To evaluate the feasibility of a swine model of thrombotic inferior vena cava (IVC) occlusion (IVCO) created by autologous thrombus injection with the assistance of intra-caval net knitting, sixteen pigs were included and divided into two groups: Group A (n = 10), IVCO model created by knitting a caval net followed by autologous thrombus injection; Group B (n = 6), control model created by knitting a net followed by normal saline injection. Venography was performed to assess each model and the associated thrombotic occlusion. The vessels were examined histologically to analyse the pathological changes postoperatively. The IVCO model was successfully created in all 10 animals in Group A (100%). Immediate venography showed extensive clot burden in the IVC. Postoperative venography revealed partial caval occlusion at 7 days, and complete occlusion coupled with collateral vessels at 14 days. Histologically, Group A animals had significantly greater venous wall thickening, with CD163-positive and CD3-positive cell infiltration. Recanalization channels were observed at the margins of the thrombus. By contrast, no thrombotic occlusion of the IVC was observed in Group B. The thrombotic IVCO model can therefore be reliably established in swine, and the inflammatory reaction may contribute to caval thrombus propagation following occlusion.

Angiographic features. Group A. The venogram of the IVC prior to operation showed a normal appearance with a mean diameter of 9.90 ± 0.71 mm. The intraoperative venogram performed immediately after knitting showed slight coarctation at the knitting site and signs of a 'filling defect' in the infrarenal IVC, indicating thrombus that was captured in situ (Fig. 1A,B). The captured thrombus extended above the net. The appearance of the IVC above the level of the renal veins was normal. Pulmonary arteriography revealed no signs of PE. The venograms repeated at 7 days revealed partial occlusion of the infrarenal IVC and the common iliac veins in all subjects. At 14 days, the infrarenal IVC was completely occluded in five subjects, indicating thrombus propagation over time (Fig. 1C). Collateral vessels had developed, including the pelvic vein plexus, ascending lumbar veins, and inferior epigastric vein. The estimated thrombus volume was 2.98 ± 0.16 cm3, 3.33 ± 0.29 cm3, and 5.6 ± 0.95 cm3 at 0, 7, and 14 days post injection, respectively. The difference in thrombus volume was significant at 14 days versus 0 days (P = 0.026) and at 14 days versus 7 days (P = 0.017), indicating progression of thrombus size over time. There was no difference in thrombus volume between 0 days and 7 days (P = 0.178).

Group B. The venogram of the IVC prior to operation showed a normal appearance with a mean diameter of 10.17 ± 0.74 mm. The intraoperative venogram performed immediately after knitting showed slight coarctation at the knitting site and no signs of captured thrombus. The venograms repeated at 7 and 14 days also revealed no evidence of thrombus. The IVC remained patent without visible collateral vessels.

Manual aspiration and rheolytic thrombectomy for thrombotic IVCO. Three IVCO models in Group A were successfully treated with manual aspiration and rheolytic thrombectomy. The intra-caval net was not an obstacle to manipulating these endovascular devices (Fig. 2A-C).
Immediate venography after aspiration and thrombectomy showed partial removal of the thrombus from the iliac vein and the IVC, with successful reconstruction of blood flow in all three subjects (Fig. 2D). The estimated thrombus volume was significantly reduced from 5.88 ± 0.22 cm 3 prior to endovascular treatment to 0.78 ± 0.19 cm 3 post treatment (P = 0.002). Laboratory testing results. In Group A, the levels of proinflammatory (IL-6, hsCRP) and prothrombotic (D-dimer, fibrinogen, TF, PAI-1) markers were significantly higher at 7 and 14 days post-model compared with levels at 0 days (Tables 1 and 2). However, these markers were not changed in Group B (Tables 1 and 2). Histopathology. In Group A, intraluminal thrombus was observed in all animals and was firmly adhered to the IVC endothelium at 7 days. Histological analysis of the thrombus revealed a mixture of platelets, erythrocytes, and fibrin in varying proportions by light microscopy. Platelet aggregates were most often attached peripherally to the vein wall. Erythrocytes were more randomly distributed. At 14 days, a moderately organized thrombus was observed. The venous endothelium and wall were thickened and showed extensive inflammatory cell infiltration (Fig. 3A). The changes were more apparent and severe at 14 days compared with those at 7 days. In Group B, no animals showed intraluminal thrombus, and the venous endothelium was smooth without any leukocyte infiltration within vein wall (Fig. 3B). Phosphotungstic acid haematoxylin-stained sections showed that collagen fibres were deposited and arranged irregularly in the caval wall and thrombus at 14 days (Fig. 4A,B). Recanalization channels were also observed at the margins of the thrombus (Fig. 4C,D), some of which bridged the caval wall and the thrombus. In Group A, CD3-positive cells (lymphocyte) were also found infiltrating the caval wall and the thrombus at 7 days (2.9 ± 1.8% and 2.8 ± 1.7%, respectively) and at 14 days (13.6 ± 3.8% and 7.7 ± 1.8%, respectively). There were no differences between the index of CD3-positive cells at 7 days and that at 14 days (caval wall, P = 0.054; thrombus, P = 0.071). The index of CD3-positive cells in the caval wall in Group B (0.54 ± 0.72%) was significantly less than Group A (P < 0.001). Discussion In the present study, we established an IVCO model in swine by injecting an autologous thrombus that was captured by an intra-caval knitted net. The entrapped thrombus became a core from which extensive IVC occlusion developed over time. Our data also suggest that inflammatory reactions play an important role in development of the thrombotic occlusion. To our knowledge, this is the first report of an IVCO model created with this technique, without the requirement for surgical ligation or balloon occlusion, a widely-accepted method for creating models of venous thrombosis or thrombotic occlusion. DVT and PE are relatively common diseases, and disproportionately affect older populations 7 . Acute DVT and PE may progress into chronic disease, manifesting as venous occlusion, chronic venous insufficiency, and chronic pulmonary hypertension. Unlike acute thrombosis, the treatments for chronically thrombotic occlusion are often challenging using surgical conversion or endovascular recanalization. Indeed, the Surgeon General's Call to Action to Prevent Deep Vein Thrombosis and Pulmonary Embolism 7 invited multiple stakeholders to work together in a coordinated effort to combat this serious health problem. 
Thus, the creation of ideal animal models is fundamental to our understanding of venous thrombosis and the occlusion that follows. Models of venous thrombosis or thrombotic occlusion are typically developed in mice or large animals 8,9. Murine models are categorized on the basis of the involved veins 10. Small vein models can be induced by mechanical injury 11, endothelial stimulation 12,13, and photochemical injury 14, while large vein models include the ferric chloride model 15, IVC ligation model 16-18, IVC stenosis model 19-22, and electrolytic vein model 10,23. The main disadvantage of murine models is their relatively small vessel size, which limits their applicability to fields such as endovascular research. Non-human primates, pigs, and dogs have a venous anatomy similar to that of humans, which overcomes some of the disadvantages associated with murine models. The most widely accepted methods to induce venous thrombosis in large animals are based on flow impairment produced by endovascular balloon occlusion 24,25 or surgical ligation of the targeted vein. The advantages of endovascular models include minimal invasiveness and technical safety. However, a dwell time of 6 h to several days is usually required for indwelling venous balloon catheters, with potential for procedure-related complications including balloon migration, renal failure, and PE 26. Other drawbacks include the higher cost associated with a catheter balloon and other special endovascular devices, and thrombus instability after balloon removal. Surgical interruption to induce venous thrombosis usually requires additional ligation of the branches of the targeted vein to promote spontaneous thrombosis, making the procedures more complicated. The advantages of surgical approaches include quantifiable amounts of vein wall tissue and production of a stable and durable thrombus. However, surgical models have a relatively higher mortality. Lack of blood flow due to the venous obstruction may also reduce the maximal efficacy of systemic therapeutic agents on the thrombus and vein wall. Moreover, the interruption of the vein may impair the navigation of some endovascular devices. Several adjunctive methods, including injection of thrombin 24,25,27 or soluble ethanol 28, are useful for promoting thrombosis and venous occlusion.

In the present study, we established a novel IVCO model in swine. We knitted an intra-caval net followed by injection of autologous thrombus, which developed into a venous occlusion over time. This procedure was very simple and could be accomplished within an hour. Our method has significantly lower costs and fewer complications compared with endovascular models, as catheter balloons and other endovascular devices were not required. No major complications were observed in any animals in our study. Further, the intra-caval net allowed guidewires and catheters to pass through the meshes and did not impair manipulation of the AngioJet. In our pilot studies, this venous occlusion model also responded well to treatment using mechanical aspiration and thrombectomy.

In the present study, light microscopy examination indicated involvement of an inflammatory component in thrombus propagation. In the IVCO models, the venous wall was thickened with diffuse infiltration of inflammatory cells. There was a high number of CD163-positive cells (monocytes/macrophages) infiltrating the caval walls and the thrombus.
These changes were greater at 2 weeks postoperatively, suggesting that the thrombus itself was not inert but rather may dictate the venous wall response. These data support reports that the longer a DVT is in contact with the vein wall, the greater the damage 29,30. Inflammation plays a key role in thrombosis development and reflects an interaction between the thrombus and the vein wall. We also observed sporadic infiltration of CD3-positive cells (lymphocytes) into the caval walls and the thrombus. Venous thrombi can exhibit fibrillar collagen deposition and recanalization 31,32, while macrophages express a wide range of collagen isoforms 31. Further, the formation of new endothelial-lined channels in thrombus recanalization may represent a type of angiogenesis 32. In the present study, the induced thrombus presented similar properties. Given the similarity in the inflammatory process between humans and our IVCO model, we suggest that this model is a useful tool to further our understanding of the mechanisms of venous thrombus evolution following thrombotic occlusion.

A local inflammatory response in the vein wall and activation of the coagulation cascade are associated with the release of proinflammatory factors including IL-6 and CRP, as well as coagulation and fibrinolytic system proteins including fibrinogen and PAI-1 1. In response to cytokines and inflammatory mediators (IL-6, hsCRP), endothelial cells express and show an increased activity of TF, or tissue thromboplastin. Thromboplastin is the cellular receptor of circulating factor VII, and their interaction initiates the coagulation cascade 1,2. Once clots are formed, they may undergo uncontrolled growth, resulting in various degrees of venous obstruction. PAI-1 also plays an active role in this process 1.

Table 2. Comparison of prothrombotic markers (TF, ng/ml; PAI-1, ng/ml; fibrinogen, g/l; D-dimer, μg/l) between the two groups. All variables are expressed as mean ± standard error of the mean. *, comparison between D0 and D7; **, comparison between D0 and D14; ***, comparison between D7 and D14; †, comparison between Groups A and B. P < 0.05 was considered statistically significant. TF, tissue factor; PAI-1, plasminogen activator inhibitor-1.

In the present study, although the caval clots were not spontaneously formed, we found evidence that similar biochemical markers may play a role in the process of propagation 33,34.

There are some potential limitations of the IVCO model. First, the model carries an injected thrombus, unlike real venous thrombosis in humans, and is therefore not suited to thrombogenesis research. However, real venous thrombosis often propagates, as we observed for the IVC thrombosis, existing mainly as an extension from the iliac veins or as a complication following caval filter placement. The IVC thrombus can grow over time from its origin and finally cause chronic IVCO. Our preliminary data also show inflammatory cell infiltration, collagen fibre deposition, and recanalization in the caval thrombus, which are observed in patients with venous thrombosis. Additionally, follow-up venography indicated that the caval obstruction was a result of thrombus propagation. Thus, we believe that our model can duplicate human venous thrombus and its evolution to thrombotic occlusion. Second, because of the differences in IVC calibres between animals, we could not determine the exact size of the meshes while knitting the intra-caval net.
In our experience, four sutures were enough to lodge the injected thrombus effectively in swine, without sacrificing the capacity of endovascular device navigation through the meshes. However, the standard procedure of knitting such a net should be validated further. Finally, endovascular treatment in our IVCO model was beneficial, most likely because the caval thrombus was acute in nature. Further studies should be performed to evaluate the relationship between the thrombus age and various endovascular treatments. In conclusion, we created a successful IVCO model in swine by injecting an autologous thrombus with assistance of an intra-caval net. This model exhibited a steady progression from thrombus lodging to thrombotic venous occlusion, which may be due to inflammation. We believe that this venous model will be ideal for studying the interaction between the thrombus and vein wall, and can also be used to test various endovascular devices. Methods This study strictly complied with the Guide for the Care and Use of Laboratory Animals. The protocol was approved by the Animal Ethics Committee of Nanjing First Hospital, Nanjing Medical University. Animal model. All swine were purchased from our institutional Laboratory Animal Centre. Sixteen Hanford miniature swine were included in the study (eight males, eight females; age 16-20 weeks; weight 15-20 kg). Anaesthesia was induced by administering 300 mg ketamine and 10 mg diazepam intramuscularly. Each subject was then administered a suspension of 100 ml:5 g glucose injection and 20 ml:200 mg propofol via the auricular vein for maintaining general anaesthesia (1 ml/min). All surgical procedures were performed under sterile conditions. Each animal was placed in a supine position on a digital subtraction angiography table to allow fluoroscopic guidance during the procedure. A 6F introducer sheath (Terumo) was inserted into the left or right femoral vein using the modified Seldinger technique with the guidance of ultrasound. A 4F pigtail catheter (Cook) was then introduced into the iliac vein through the sheath and a venography (Omnipaque 350; GE Healthcare; Shanghai, China) was performed to visualize the iliac vein and the IVC. Knitting a net within IVC lumen. After venography, a 0.035-inch guidewire (Terumo) was kept in the IVC through the sheath. The abdomen was incised at midline, and the IVC carefully exposed and isolated. The previously placed guidewire served as a marker of IVC, as it could be detected by fluoroscopy or felt by hand. Two vessel clips were then used to temporarily block the IVC bloodstream. One clip was placed on the IVC just below the renal veins and the other on the IVC just proximal to the bifurcation of the common iliac veins. Next, four 4-0 polypropylene sutures (PROLENE ™ ; Ethicon, USA) with a needle were used to pass through the IVC (2 cm below the renal veins) individually at anterior-posterior, transverse, and oblique directions (Fig. 6A). The intra-caval net was then made, consisting of four sutures intercrossed at the centre of the IVC lumen (Fig. 6B). Each suture was tied end-to-end at the outside wall of IVC. The net was used to capture the downstream thrombus. The vessel clips were finally removed and repeated venography was performed to inspect the IVC. The abdomen was closed, and the skin was sutured with 4-0 silk. Autologous thrombus preparation and administration. 
All swine with an intra-caval net were divided into two groups: Group A (n = 10; five males, five females) with autologous thrombus injection; Group B (n = 6; three males, three females) with normal saline injection. Blood (10 ml) was collected via the femoral vein using a 20-ml syringe. The blood was then mixed with 500 U lyophilized thrombin powder (Hunan Yige Pharmaceutical Co. Ltd., China) in the syringe for several minutes until it turned into a fresh and soft thrombus. The thrombus was empirically regarded as good for injection if it came out as soft strips when gently pushed by the syringe. The thrombus was administrated manually into the IVC via the 6F sheath. This sheath was placed via the femoral vein with its end at the common iliac vein. The side tube of the sheath was then connected with the syringe. Following injection, the thrombus mainly distributed into the infrarenal IVC that was trapped by knitted net, while a small fraction entered into the iliac vein and the IVC above the net. The thrombus volume in the IVC immediately post injection was calculated, assuming that the thrombus was cylindrical. For all subjects in Group B, 10 ml normal saline was injected. After injection, venography was performed to inspect the IVC clot burden. A pulmonary arteriography was also performed to inspect the PE via a pigtail catheter placed into the pulmonary artery trunk. Finally, the sheath was removed and the animal was allowed to completely recover and permitted free access to water and food. All animals were administered an antibiotic (cefradine; 10 mg/kg; intramuscular) for 3 days postoperatively. Follow-up venography and testing of endovascular devices. A venogram was repeated at 7 and 14 days postoperatively. The procedures were performed under general anaesthesia as previously described. A 6F introducer sheath (Cook) was inserted into the left or right femoral vein, and the contrast agent (Omnipaque 350) was administered intravenously, as described above. Thrombus volume was also estimated at the last venography, assuming that the thrombus was cylindrical. At 14 days post operation, three IVCO models (two females; one male) were all treated with mechanical thrombectomy (manual aspiration plus AnjioJet rheolytic thrombectomy). First, conventional catheters (such as pigtail and headhunter catheters) and 0.035 inch guidewires were introduced via the sheath into the iliac vein and the IVC to test if they could pass though the thrombus and the knitted net. Aspiration of the thrombus was then manually performed using an 8F guiding catheter (Envoy, Cordis). The AngioJet Rheolytic Thrombectomy System (Boston Scientific, Natick, MA, USA) was then primed following the manufacturer's instructions. For each model, two runs of rheolytic thrombectomy were performed over the occluded segment. Finally, repeated venography was performed to visualize the outcomes of thrombus removal. Laboratory testing. Blood for laboratory testing was collected from the auricular vein of the models. Tissue sampling and histological analysis. Two female and three male models in Group A and one female and two male models in Group B were euthanized at 7 days post operation, while the remainder were euthanized at 14 days. The IVC between the renal veins and the bifurcation of the common iliac veins was isolated and removed, and immediately fixed in formalin at 4 °C overnight. The tissues were then rinsed with distilled water, dehydrated through graded alcohol solutions, and embedded in paraffin. 
The venous tissue cross-sections (5 μm) were stained with haematoxylin and eosin (H&E) and phosphotungstic acid haematoxylin (PTAH). Immunohistochemical staining using anti-CD163 antibody (ab183476; Abcam Inc.) for macrophages and anti-CD3 antibody (ab16669; Abcam Inc.) for lymphocytes was performed to assess inflammatory cell infiltration in the vein wall and thrombus, defined as the percentage of all cells positive for the respective marker. Image analysis was performed using Image-Pro Plus version 6.0 software in randomly selected vessel fields from each section. The pathologist who reviewed all the specimens and performed the analysis was blinded to animal randomization, treatment procedure, and follow-up protocol. Statistical analysis. Data are presented as the range (mean ± standard error of the mean). Intra- and inter-group comparisons were performed using the independent-samples t-test. P < 0.05 was considered statistically significant. IBM SPSS for Windows, Version 19.0 (IBM Corp., Armonk, NY, USA) was used to perform statistical analyses.
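The thrombus volume was estimated from venography under a cylindrical assumption, and group comparisons used independent-samples t-tests; the paper gives neither the formula nor code, so the following is a minimal illustrative sketch in which the function name, the example diameter and length, and the per-animal volumes are hypothetical placeholders rather than study data.

import math
from scipy import stats

def cylindrical_volume_cm3(diameter_mm, length_mm):
    # Approximate thrombus volume as a cylinder, V = pi * r^2 * L, converted from mm to cm.
    radius_cm = (diameter_mm / 10.0) / 2.0
    length_cm = length_mm / 10.0
    return math.pi * radius_cm ** 2 * length_cm

# Hypothetical per-animal volumes (cm^3) at day 0 and day 14, for illustration only.
day0 = [2.9, 3.1, 2.8, 3.0, 3.1]
day14 = [5.2, 6.1, 5.8, 5.4, 5.5]

# Independent-samples t-test, as described in the Statistical analysis paragraph.
t_stat, p_value = stats.ttest_ind(day0, day14)
print(f"Example volume: {cylindrical_volume_cm3(9.9, 38):.2f} cm^3, t = {t_stat:.2f}, p = {p_value:.4f}")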
Burnout Syndrome during Pediatric Residency Training

Background: Burnout syndrome is a common professional problem causing mental fatigue, depersonalization, and diminished self-value. Burnout during pediatric residency can significantly influence the resident's performance and the quality of their training. Objectives: To evaluate the burnout status of pediatric residents across Jeddah, KSA. Methods: A cross-sectional, descriptive study involving pediatric residents across Jeddah, Saudi Arabia was conducted from the 1st of August to the 1st of December, 2012. The Maslach Burnout Inventory was utilized in addition to questions about the residents' work environment and lifestyle. Results: Sixty pediatric residents (67% females) were included, with ages ranging between 25 and 30 years (mean 26.5). They practiced in various institutions, mostly (41%) in ministry of health hospitals. Burnout scores were abnormal in 49 (82%), and in 19 (32%) the syndrome was severe. Males were more likely to reach a severe burnout category when compared to females (32% vs 19%, p = 0.01). Residents working in the university hospital (23%) were more likely to have severe burnout when compared to those working in other hospitals (p = 0.002). Junior residents (R1 and R2) were also more likely to have severe burnout when compared to senior residents (34% vs 21%, p = 0.013). Conclusions: Many pediatric residents are suffering from burnout syndrome. It is more common among males, junior residents, and those working in a university hospital setting. Specific strategies should be developed to prevent resident burnout.

Burnout syndrome causes mental fatigue, depersonalization, and diminished self-value [1]-[4]. Affected individuals are unable to cope with ongoing work stress, resulting in negative feelings and attitudes toward their medical team members and patients. If burnout syndrome continues with no intervention, it will progress and eventually result in dissatisfaction with work and training [5]. In general, burnout syndrome is more prevalent in stressful environments that are physically demanding or require higher levels of commitment, such as pediatric residency programs [6]. During such training, the resident is expected to have frequent in-house calls, participate in invasive procedures, and look after patients requiring intensive or emergent care [7]. Some pediatric rotations can be more stressful and demanding than others, including emergency or intensive care services, neonatology, oncology, and neurology [8]-[10]. Looking after patients with progressive or untreatable conditions may further increase residents' level of frustration and emotional stress. Pediatric residents are also often confronted by anxious or fatigued parents, which further adds to their work stress. Lack of sleep and fatigue because of frequent and busy in-house calls was perceived by many residents to have a major impact on their personal lives and their ability to perform their work [11]. All these factors contribute to burnout, which in turn affects the resident's performance and the quality of their training. This area has received limited previous study. Therefore, this study was designed to evaluate the burnout status of pediatric residents across training programs in Jeddah, KSA and to explore possible contributing and correlating factors. We hypothesize that many pediatric residents are suffering from burnout syndrome.
Methods

A cross-sectional, descriptive study involving pediatric residents across the Jeddah area of Saudi Arabia was conducted over 4 consecutive months (1st of August to 1st of December, 2012). Pediatric residents at various levels of training were included. They were enrolled from various pediatric residency training programs in Jeddah, Saudi Arabia, including university, ministry of health, military, and other major hospitals. The Maslach Burnout Inventory was utilized to examine three domains, including emotional exhaustion, depersonalization, and personal accomplishment at work, as summarized in Table 1 [5]; items were rated on a five-point scale (1, strongly disagree; 2, disagree; 3, moderate; 4, agree; 5, strongly agree). The original English version was used. Demographic variables and questions regarding work- and lifestyle-related factors that could correlate with burnout were also included (Table 2). The study design and questionnaires were approved by the King Abdulaziz University hospital ethics committee. The questionnaires were distributed to program directors to maximize the response rate. Before consenting for the study, all included residents were assured that their participation was voluntary and that the collected data would be confidential.

Data were collected in Excel sheets and statistical analysis was performed using SPSS 17 (SPSS, Inc., Chicago, IL, USA). Descriptive analyses were performed and the variables were examined in 2 × 2 tables using the chi-square test. Statistical significance was defined as P values of less than 0.05.

Results

Seventy-two questionnaires were distributed and 60 (83%) were returned. Of these 60 pediatric residents, 67% were females, with ages ranging between 25 and 30 years (mean 26.5, SD 2.3). Most residents (61%) were single and 29% had children of their own, ranging in number from 1 to 3 (mean 1.5). Most of those included (67%) were junior residents in their first 2 years of training. The majority practiced in a ministry of health hospital (41%) or the university hospital (23%), with variable monthly income depending on the sponsor of their position.

Burnout scores were abnormal in 49 (82%) and in 19 (32%) the syndrome was severe (Table 3). The resident's sex, hospital setting, and training level correlated with their burnout status. Males were more likely to reach a severe burnout category when compared to females (32% vs 19%, p = 0.01). Residents working in the university hospital (23%) were more likely to have severe burnout when compared to those working in other hospitals (p = 0.002). Finally, junior residents (R1 and R2) were also more likely to have severe burnout when compared to senior residents (34% vs 21%, p = 0.013). Age, marital status, and income had no correlation with burnout status. Those with other significant stresses in their lives (Table 2) constituted 14%; this factor was not reflected in their burnout scores. This was also true for those with a history of other medical (16%) or psychiatric (2%) illness.

Table 2. Factors that were evaluated for possible correlation with severe burnout.
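The Methods state that associations were examined in 2 × 2 tables with the chi-square test. As a minimal sketch of that analysis, assuming SciPy is available, the snippet below runs a chi-square test on a 2 × 2 table of sex by severe burnout; the cell counts are hypothetical placeholders, not the study's raw data.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: rows = sex (male, female), columns = severe burnout (yes, no).
table = np.array([[8, 12],
                  [11, 29]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")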
Discussion

This study confirms that many pediatric residents are suffering from burnout syndrome. Few other studies have assessed the prevalence of burnout syndrome in pediatric healthcare workers; it reached 41% among hospital workers attending pediatric patients [12]-[14]. In other studies, burnout and depression were found to be major problems among pediatric residents, resulting in significant medical errors [15] [16]. In our study, males, junior residents, and those working in the university hospital were at increased risk of burnout syndrome. Other authors found females at higher risk of developing burnout in an oncology study [17]; this may not apply to other rotations that deal with less life-threatening illnesses. In general, females prefer a career in pediatrics when compared to males [18], which may explain their higher work satisfaction and therefore lower burnout risk. We also found that junior residents (R1 and R2) were more likely to have burnout when compared to more senior residents. Increased work load, responsibilities, and reduced experience may explain this trend. Sleep loss and fatigue are also more common during these years because of more frequent in-house calls, adding to the increased risk of burnout. An additional possible contributing factor is frequent "paging": "beepers" were found to frequently interrupt pediatric residents involved in patient care activities and scheduled educational conferences [19]. Finally, higher burnout levels in residents training in the university hospital can be explained by the increased burden of work that includes teaching and supervising medical students and interns and participating in periodic medical school exams. This increased risk of burnout was previously reported in a study of medical residents in a teaching hospital [20]. Age, marital status, and income had no correlation with burnout status. Those with other significant stresses in their lives were not at an increased risk of burnout. The lack of significant associations may be related to our relatively small numbers. Burnout is a common problem that has to be addressed and highlighted in pediatric residency programs. Specific strategies should be developed and implemented to prevent or limit burnout, keeping in mind its negative effects on patient safety. This is particularly relevant in our region, where this problem is not well studied [21] [22]. Resident well-being is closely connected to professional development and requires varying degrees of self-sacrifice and rebalancing of personal priorities, all dependent on the residents themselves [23]. The findings of this study should be considered by training programs that are interested in enhancing resident well-being. Affected pediatric residents may develop negative attitudes toward themselves and their professional activity, and eventually lose interest in pediatric care and suffer low productivity and self-esteem. Routine exercise should be encouraged and promoted, as it is associated with lower burnout scores [17] [24]. Physicians who are satisfied with their lives outside of work are also less likely to have burnout. Periodic social and community activities should be encouraged and promoted during residency. The availability of direct communication with program directors, with periodic meetings for debriefing, could also prevent burnout. Finally, more research is needed in this area, particularly regarding the development and implementation of effective interventions aimed at preventing and treating residents' burnout.
Conclusion

To conclude, at least one third of pediatric residents training in the Jeddah region are suffering from burnout syndrome. This finding further substantiates the growing concern about the potential impact of burnout on professional development. These observations should be taken into account in developing new training guidelines and educational interventions for pediatric residents.

Burnout inventory items (Table 1), rated on the five-point agreement scale. Indicate how strongly you agree or disagree with the following: I often have a desire to escape. I have a sense of inner emptiness. I am indecisive. I have erratic or incongruent emotions. I often have a "don't care" attitude. I don't feel like I have any control over my life. I don't have much motivation to be with people. My interest in friendship, food, entertainment is low. I feel emotionally exhausted. I feel depressed. I rarely have a good day. I am chronically tired and may even wake up exhausted. I have symptoms such as heart palpitations, recurrent or lingering sickness, chest pains, or aching. I feel "wiped out" a lot. I feel "run down". I feel trapped. I feel hopeless. I feel worthless. I feel anxious most of the time.

Table 2 (excerpt). Factors evaluated for possible correlation with severe burnout. Demographic factors (partial list): 4) number of children, if any; 5) history of medical or psychiatric illness. Work-related factors: 1) work type (junior, senior)*; 2) hospital type (ministry of health, university, military, other)*; 3) number of in-house calls; 4) number of clinics per week; 5) monthly income. Other possible sources of stress over the last year: 1) loss of a child; 2) loss of a partner; 3) loss of a parent; 4) financial loss; 5) job change. * Factors associated with severe burnout (p < 0.05).

Table 3. Burnout severity scores in the study sample (n = 60).
Rice grain nutritional traits and their enhancement using relevant genes and QTLs through advanced approaches Background Rice breeding program needs to focus on development of nutrient dense rice for value addition and helping in reducing malnutrition. Mineral and vitamin deficiency related problems are common in the majority of the population and more specific to developing countries as their staple food is rice. Results Genes and QTLs are recently known for the nutritional quality of rice. By comprehensive literature survey and public domain database, we provided a critical review on nutritional aspects like grain protein and amino acid content, vitamins and minerals, glycemic index value, phenolic and flavonoid compounds, phytic acid, zinc and iron content along with QTLs linked to these traits. In addition, achievements through transgenic and advanced genomic approaches have been discussed. The information available on genes and/or QTLs involved in enhancement of micronutrient element and amino acids are summarized with graphical representation. Conclusion Compatible QTLs/genes may be combined together to design a desirable genotype with superior in multiple grain quality traits. The comprehensive review will be helpful to develop nutrient dense rice cultivars by integrating molecular markers and transgenic assisted breeding approaches with classical breeding. Background Rice is the most well known cereal and staple food which serves as major carbohydrate for more than half of the world population. Half of the world's population is suffering from one or more vitamin and/or mineral deficiency (World Food Program 2015). More than three billion people are affected by micronutrient malnutrition and 3.1 million children die each year out of malnutrition (Gearing 2015) and the numbers are gradually increasing (FAO 2009;Johnson et al. 2011). The developed countries are managing deficiency by adopting fortification programs, but same programs are not affordable to poor countries. Therefore, an alternative and less expensive strategy is to modify the nutritional quality of the major cereals consumed by the people. To improve the nutritional value of rice, research programs should be reoriented to develop high yielding cultivars with nutrient dense cultivars either by selective breeding or through genetic modification (Gearing 2015). Increase in literacy percentage and awareness of diet, people tend to be more health conscious and interested to have nutritionally enriched food. The quality of rice is an important character to determine the economic value in the export market and consumer acceptance (Pingali et al. 1997). The genetic basis of the accumulation of micronutrients in the grain, mapping of the quantitative trait loci (QTL) and identification of genes will provide the basis for preparing the strategies and improving the grain micronutrient content in rice. Integrating marker assisted breeding with classical breeding makes, the possibility to track the introgression of nutritional quality associated QTLs and genes into a popular cultivar from various germplasm sources (Fig. 1). Till date classical breeding has a significant impact on improving biofortification of rice cultivars by making crosses, backcrosses and selection of the desired superior rice cultivars with high nutritional value. 
However, by availing technologies such as DNA markers, genetic engineering and allele mining offers an opportunity to use them as a tool to detect the allelic variation in genes underlying the traits and introgression of nutrition related QTLs/genes to improve the efficiency of classical plant breeding via marker-assisted selection (MAS). Molecular markers such as SNPs (Ohstubo et al. 2002;Bao et al. 2006;Bert et al. 2008;Mammadov et al. 2012), SSRs (Anuradha et al. 2012;Nagesh et al. 2013;Gande et al. 2014), STS (Chandel et al. 2011;Gande et al. 2014), etc. have been developed. Integration of the markers into the breeding programs for effective selection of the plants at early stage of crop growth provides an opportunity to achieve the target earlier than the classical breeding program. Genomic approaches are particularly useful when working with complex traits having multigenic and influence of environment. In this new plant breeding era, genomics will be an essential aspect to develop more efficient nutritional rich rice cultivars (Perez-de-Castro et al. 2012), for reducing human health problems relating to mineral nutrition. Therefore, this is an effective approach for future rice breeding to reduce the malnutrition. By availing the different molecular approaches and advanced genomic technologies such as SNPs array, genome sequencing, genome-wide association mapping, transcriptome profiling, etc. could be strategically exploited to understand molecular mechanism and their relation between the genotypes and phenotypic traits leading to development of improved rice varieties (Chandel et al. 2011;Varshney et al. 2014;Malik et al. 2016;McCouch et al. 2016;Peng et al. 2016). Traits for improvement of the grain nutritive value In the present situation, attention on grain quality and nutritional value has become a primary thought for producers and consumers. Rice grain is relatively low in some essential micronutrients such as iron (Fe), zinc (Zn) and calcium (Ca) as compared to other staple crops like wheat, maize, legumes and tubers (Adeyeye et al. 2000). However, rice grain consists of ~80% starch and its quality is dependent on combination of several traits. Another component of nutritive value of rice is bran, an important source of protein, vitamins, minerals, antioxidants, and phytosterols (Iqbal et al. 2005;Liu 2005;Schramm et al. 2007; Renuka and Arumughan 2007). Rice bran protein has a great potential in the food industry, having unique (Saunders 1990) and reported as hypoallergenic food ingredient in infant formulations (Helm and Burks 1996) and having anti-cancer properties (Shoji et al. 2001). Improvement in these components in the grain can be useful to reduce malnutrition. Grain protein and amino acid content Protein energy malnutrition affects 25% of children where their dietary intake is mainly on rice and staple crops have low levels of essential amino acids (Gearing 2015). Therefore, attempts to improve the nutritional value of rice have been concentrated on protein content (PC) and other nutritional quality (Fig. 2). The amount of PC in rice is relatively low (8.5%) as compared to other cereals like wheat (12.3%), barley (12.8%) and Millet (13.4%) and an average of PC in milled rice is about 7 and 8% in brown rice. The total seed protein content of rice is composed of 60-80% glutelin and 20-30% prolamin, controlled by 15 and 34 genes respectively (Kawakatsu et al. 2008;Xu and Messing 2009). 
Rice supplies about 40% of dietary protein to humans in developing countries, and the quality of rice protein is high because it is rich in lysine (3.8%) (Shobha Rani et al. 2006). Therefore, improvement of PC in rice grain is a major target for plant breeders and biotechnologists. So far, classical breeding efforts have achieved only limited success because of the complex inheritance of protein content and the large effect of the environment on it (Coffman and Juliano 1987). According to Iqbal et al. (2006), more than 170 million children and nursing mothers suffer from protein-calorie malnutrition (PCM) in developing Afro-Asian countries. In comparison with meat, plant proteins are much less expensive but nutritionally imbalanced because of their deficiency in certain essential amino acids (EAAs). In general, cereal proteins are low in lysine (Lys, 1.5-4.5 vs. the 5.5% WHO recommendation), tryptophan (Trp, 0.8-2.0 vs. 1.0%), and threonine (Thr, 2.7-3.9 vs. 4.0%). Pulses and most vegetable proteins contain 1.0-2.0% of sulfur-containing amino acids (methionine and cysteine), compared with 3.5% in the WHO reference protein (Sun 1999). Therefore, these EAAs become the limiting amino acids in cereals and legumes.

Fig. 2. Diagram of molecular marker positions associated with grain nutritional quality of rice, distributed on the 12 chromosomes and compiled from a comprehensive literature survey; molecular markers are shown on the right and their positions (cM) on the left side of the chromosomes. MPGQ, milling properties of grain quality; GA, grain appearance (red); CP, cooking properties (blue); NF, nutrition factors (pink); FRG, fragrance of rice grain (green); colors indicate markers related to nutritional quality traits in rice.

Recently, Han et al. (2015) compared the quality of rice bran protein (RBP) with animal and vegetable proteins. The digestibility of RBP (94.8%) was significantly higher than that of rice endosperm protein (90.8%), soy protein (91.7%) and whey protein (92.8%), and the same as that of casein. Among the total grain PC, rice bran protein appears to be a promising protein source with good biological value and digestibility. Mohanty et al. (2011) reported 16.41 and 15.27% crude protein, on a dry weight basis, in brown rice of ARC 10063 and ARC 10075, respectively. They observed higher total free amino acid content in these accessions, and lysine content was positively correlated with grain protein content, contrary to the view of Juliano et al. (1964) and Cagampang et al. (1966). Subsequently, by exploiting ARC 10075 as a donor, the rice variety CR Dhan 310 (IET 24780) was developed with a high protein content of 11% and rich in threonine and lysine (NRRI Annual Report 2014-2015). Several reports describe varying levels of PC from 4.91 to 12.08%, lysine from 1.73 to 7.13 g/16 g N, and tryptophan from 0.25 to 0.86 g/16 g N in rice accessions (Banerjee et al. 2010). Utilizing the efficiency of molecular marker technology, PC in brown and milled rice has been mapped using various rice populations (Tan et al. 2001; Aluko et al. 2004; Weng et al. 2008; Zhang et al. 2008; Yu et al. 2009; Zhong et al. 2011; Yun et al. 2014).
The wide spread occurrence of anemia and osteoporosis due to deficiency of iron and calcium respectively was observed in most developing countries as well as developed countries (Welch and Graham 1999). In the scenario, plant breeders started to pay more attention to improve the nutrient qualities especially mineral elements of major food grain crops (Zhang et al. 2004). Several researchers have reported genetic differences of mineral elements in rice (Gregorio et al. 2000;Zhang et al. 2004;Anandan et al. 2011;Ravindra Babu 2013;Jagadeesh et al. 2013). However, limited number of reports was observed for molecular level study and QTLs for vitamin and mineral content in rice. Brown rice is an important source of vitamins and minerals and by polishing the brown rice, several nutritional components such as dietary fiber, vitamins and phenols are eliminated that are beneficial to human health. Glycemic index value Glycemic index (GI) is an indicator for the response of blood sugar levels based on the amount of carbohydrate consumption (after ingestion), which can be measured by rapidly available glucose (RAG). Rice, as a staple food contains 80% of starch and increased consumption leads to risk of type II diabetes (Courage 2010) and is predicted to affect almost 330 million people by 2030 (Misra et al. 2010). Brand-Miller et al. (2000) categorized glycemic index foods into low (GI value <55), medium (GI value 56-69) or high (GI value >70) GI foods. Recent studies have shown the ability of lower GI value will help to improve glycemic control in diabetics and cardiovascular diseases (Brand-Miller et al. 2003;Srinivasa et al. 2013). Low GI foods more slowly convert the food into energy by the body, thereby blood glucose levels become more stable than diets based on high GI foods. Therefore, identification of lower GI crops would play a major role in managing the disease. Thus, the diabetic sufferers in lowincome countries such as Bangladesh, India, Indonesia, Malaysia and Sri Lanka may offer an inexpensive way for managing the disease (Fitzgerald et al. 2011). GI range may vary among the genotypes as well as the growing regions. GI varied from 54 to 121 among rice genotypes (Manay and Shadaksharaswamy 2001). The degree of gelatinization is proportional to the amount of amylose; the less amylose there is, the greater the degree of gelatinization and vice versa. In other words, starches with lower amylose content will have higher Glycemic Indexes. Inversely, starches with a higher amylose content will be less susceptible to gelatinization, that is, to breaking down into glucose, that which makes for low Glycemic Indexes. The amount of amylose content (AC), Waxy haplotype and digestibility of rice are significantly correlated (Fitzgerald et al. 2011) and observed that AC plays a key role in rate of starch digestion and GI (Kharabian-Masouleh et al. 2012). Apparent amylose content is primarily controlled by the Waxy gene which codes for granule bound starch synthase (Chen et al. 2008a). The combination of two singlenucleotide-polymorphism (SNP) markers in the Waxy gene allows for the identification of three marker haplotypes in this gene. The first SNP is at the leader intron splice site (In1 SNP), and the second polymorphism is in exon 6. The haplotypes explained 86.7% of the variation in apparent amylose content and discriminated the three market classes of low, intermediate and high AC rice from each other. Chen et al. (2008a, b), Larkin and Park (2003) and Kharabian-Masouleh et al. 
(2012) reported that Waxy gene showed four haplotypes viz., In1T-Ex6A, In1G-Ex6C, In1G-Ex6A and In1T-Ex6C used for the classification of AC in rice. Conversely, Cheng et al. (2012) identified intron1 is insufficient to explain the genetic variations of AC in rice. Therefore, the study based on the AC and molecular analysis would be helpful for the selection of appropriate nutritional quality rice for diabetic. Angwara et al. (2014) characterized 26 Thai rice varieties for RAG and Waxy haplotype (In1-Ex6) as GI indicators. The four haplotypes, classified 26 Thai rice varieties into grups consisting four varieties having G-A, nine varieties harboring G-C, 13 varieties carrying T-A or T-C allele associated with high, intermediate and low amylose respectively and the varieties having G-A haplotype exhibited low RAG. Phenolic and flavonoid compounds of rice grain The phytochemicals such as phenolic compounds (tocopherols, tocotrienols and γ-oryzanol) and flavonoids (anthocyanidin) are responsible for good source of natural antioxidant and grain colour respectively. Kernel of red rice is characterized by the presence of proanthocyanidins whereas black rice is characterized by the accumulation of anthocyanins, mainly cyanidin-3-glucoside and peonidin 3-glucoside. These compounds help in decreasing the toxic compounds and reduce the risk of developing chronic diseases including cardiovascular disease, type-2 diabetes, reduction of oxidative stress and prevention of some cancers (Ling et al. 2001;Kong et al. 2003;Hu et al. 2003;Iqbal et al. 2005;Yawadio et al. 2007;Shao et al. 2011). Red rice has phenolic compounds in the range of 165.8-731.8 mg gallic acid equivalent (GAE) 100 g −1 (Shen et al. 2009) and black/purple rice reported to have higher amount of Fe, Zn, Ca, Cu and Mg than red rice (Meng et al. 2005). On the other hand, pigmented rice reported to have higher amount of antioxidative activity (Zhang et al. 2006;Nam et al. 2006;Chung and Shin 2007;Hiemori et al. 2009). The concept of the total antioxidant capacity, which represents the ability of different food antioxidants to scavenge free radicals, has been suggested as a tool for evaluating the health effects of antioxidant rich foods. In non-pigmented rice varieties, the bran fraction has a total phenolic content (TPC) of 596.3 mg GAE 100 g −1 , which is close to that of the husk (599.2 mg GAE 100 g −1 ) followed by the whole grain (263.9 mg GAE 100 g −1 ) and the rice endosperm (56.9 mg GAE 100 g −1 ) (Goufo and Trindade 2014). The phenolic compounds are mainly associated with the pericarp colour, darker the pericarp higher the amount of polyphenols (Tian et al. 2004;Zhou et al. 2004;Yawadio et al. 2007). Shen et al. (2009) characterized coloured parameters of rice grain (white, red and black rice) in wide collection of rice germplasm and found significantly associated with total phenolics, flavonoid and antioxidant capacity in three types of rice grain. Moreover, the correlations among the white rice accessions are rather weak. Goffman and Bergman (2004) evaluated different colour of rice genotypes and their total phenolic content ranged from 1.90 to 50.32 mg GAE g −1 of bran, and between 0.25 and 5.35 mg GAE g −1 of grain. Recent evidence of Goufo and Trindade (2014), showed 12 phenolic acids are generally identified in rice ranging from 177.6 to 319.8 mg 100 g −1 in the bran, 7.3 to 8.7 mg 100 g −1 in the endosperm, 20.8 to 78.3 mg 100 g −1 in the whole grain, and 477.6 mg 100 g −1 in the husk, depending on the rice color. 
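To make the glycemic index classification and the Waxy In1/Ex6 haplotype grouping described above concrete, the sketch below encodes them in code; the cut-offs follow Brand-Miller et al. (2000) and the haplotype-to-amylose-class mapping follows the four haplotypes reported above, while the function names and example inputs are hypothetical and the treatment of boundary values (55, 70) is an assumption.

def gi_category(gi_value):
    # Classify a glycemic index value using the cut-offs of Brand-Miller et al. (2000):
    # low (<55), medium (56-69), high (>70); boundary handling here is an assumption.
    if gi_value <= 55:
        return "low"
    elif gi_value <= 69:
        return "medium"
    return "high"

# Waxy gene haplotypes (intron 1 splice-site SNP, exon 6 SNP) and the amylose class
# they were associated with in the studies cited above (G-A high, G-C intermediate,
# T-A and T-C low).
WAXY_AMYLOSE_CLASS = {
    ("G", "A"): "high amylose",
    ("G", "C"): "intermediate amylose",
    ("T", "A"): "low amylose",
    ("T", "C"): "low amylose",
}

# Hypothetical example: a variety with GI 68 carrying the In1T-Ex6A haplotype.
print(gi_category(68))                 # -> "medium"
print(WAXY_AMYLOSE_CLASS[("T", "A")])  # -> "low amylose"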
This suggest that, rice bran has highest source of phenolic acids than others consumable part of rice. Numerous literatures have shown that consumption of colored rice reduces oxidative stress and simultaneously increases in antioxidant capacity. Consumption of colored rice varieties is very limited in Western countries, but in some growing areas of Asia, traditional varieties with colored pericarp are particularly valued in local markets (Finocchiaro et al. 2007). Effect of phytic acid in rice grain An important mineral storage compound in seed is phytate, a mixed cation salt of phytic acid (InsP6) accounted approximately 75% of total phosphorus in seeds (Lott 1984;Suzuki et al. 2007;Raboy 2009). A considerable part of the phosphorus taken up by plants from soil is translocated ultimately to the seed and synthesized into phytic acid (PA). Therefore, this compound represents a major pool in the flux of phosphorus and recently estimated that, the amount of phosphorus synthesized into seed in the form of PA by crops each year represents a sum equivalent to >50% of phosphorus fertilizer used annually world-wide (Lott et al. 2000). Phytate being vital for seed development and higher seedling vigour, often considered as an anti-nutritional substance, but may have a positive nutritional role as an antioxidant, anti-cancer agent, lowering chronic disease rates, heart diseases in humans and prevents coronary diseases (Bohn et al. 2008;Gemede 2014). PA is considered as an anti-nutritional factor, as it forms complexes with proteins in seeds and essential minerals, such as Fe, Zn and Ca. (Reddy et al. 1996;Mendoza 2002;Bohn et al. 2008;Tamanna et al. 2013). However, Welch and Graham (2004) finding indicates that, PA have no much negative effects on Fe and Zn bioavailability. Prerequisite for improvement of Fe and Zn content in rice grain Iron and zinc micronutrients are the most important elements, deficiency of which is a major cause for malnutrition. More than half of the world population is suffering from bioavailable nutrient deficiencies particularly in developing countries (Seshadri 1997;Shahzad et al. 2014). The main reason of these deficiency occurred due to consumption of polished cereal based food crops as rice, wheat and maize (Pfeiffer and McClafferty 2007). Modern high yielding rice varieties are poor sources of essential micronutrients like Fe and Zn (Zimmerman and Hurrel 2002). On an average, polished rice has 2 mg kg −1 , while the recommended dietary intake of Fe for humans is 10-15 mg kg −1 . Therefore, globally more than 3 billion people were affected by Fe deficiency, particularly in developing countries Welch and Graham 2004). Pregnancy maternal mortality by anemia leads to 1.15 lakh deaths per year, resulting in 3.4 million disability-adjusted life-years (DALYs), has been recognized to Fe deficiency (Stoltzfus et al. 2004). Hence, improvement of Fe content in rice grain is necessary, which is a major challenge to the plant breeders. In plants, Zn plays a significant role in the biosyntheses and turnovers of proteins, nucleic acids, carbohydrates and lipids, with functional aspects as integral cofactor for more than 300 enzymes, coordinating ion in the DNA-binding domains of transcription factors and equally important as Fe and vitamin A (Marschner 1995). Males within the age bracket of 15-74 years require approximately 12-15 mg of Zn daily, while females within 15-74 years of age group need about 68 mg of Zn (Sandstead 1985). 
Generally, the Zn content of polished rice averages only 12 mg kg−1, whereas the recommended dietary intake of Zn for humans is 12-15 mg per day (FAO 2001). About 17.3% of the global population is at risk of Zn deficiency, and in some regions of the world the figure is as high as 30% owing to dietary inadequacy (Wessells and Brown 2013). Enhancing the concentration of these micronutrients in rice grain should therefore be possible, given the vast genetic potential present in diverse rice germplasm, by adopting appropriate genetic approaches (Fig. 1). To date, however, most attention has been paid to the identification and development of genetically engineered rice grains with increased bioavailable Fe and/or Zn content. Rice cultivars with dense micronutrient content are listed in Table 1. Recently, the Indian Institute of Rice Research, Hyderabad, developed a genotype (IET 23832) with high Zn content (19.50 ppm). Although brown rice contains higher amounts of Fe and Zn, more than 70% of these micronutrients are lost during polishing (Sellappan et al. 2009) because they are located in the outer layers of the kernel. Martinez et al. (2010) found 10-11 ppm Fe and 20-25 ppm Zn in brown rice, compared with 2-3 ppm Fe and 16-17 ppm Zn in milled rice.

QTLs for protein content in rice

Protein content in rice grain is a key factor for nutritional value and influences the palatability of cooked rice (Matsue et al. 1995). Tan et al. (2001) mapped two QTLs for PC: one in the interval C952-Wx on chromosome 6, near the waxy gene, with 13% PV and a LOD score of 6.8, and another in the interval R1245-RM234 on chromosome 7, which accounted for 4.7% of the PV with a LOD score of 3.2. Aluko et al. (2004) identified four QTLs on chromosomes 1, 2, 6 and 11 in a DH population from an interspecific cross between O. sativa and O. glaberrima; one of these, on chromosome 6, is closely associated with the Wx gene influencing rice quality. Three QTLs, qPC1.1, qPC11.1 and qPC11.2, were associated with PC of brown rice (Qin et al. 2009). Of these, qPC11.1 and qPC11.2 were identified on chromosome 11, exhibiting 22.10% and 6.92% PV with LOD scores of 4.90 and 2.75, respectively; qPC11.2 was consistent over two years of trials and linked to the marker RM287. Yu et al. (2009) detected five QTLs for PC and four for fat content in 209 RILs. The five PC QTLs (qPC-3, qPC-4, qPC-5, qPC-6 and qPC-10) were detected on chromosomes 3, 4, 5, 6 and 10 with LOD scores of 6.25, 2.87, 2.28, 9.78 and 4.50, respectively. Among these, qPC-6 was located near the Wx locus between RM190 and RZ516 on the short arm of chromosome 6 and explained 19.3% of the PV, while the other four QTLs explained 3.9-10.5% of the PV. Zhong et al. (2011) reported two consistent QTLs for PC in milled rice, qPr1 and qPr7, detected over two years in the marker intervals RM493-RM562 and RM445-RM418 on chromosomes 1 and 7, respectively. More recently, three QTLs for PC, qPro-8, qPro-9 and qPro-10, were detected in 120 DH lines: on chromosome 8 flanked by RM506-RM1235 with a LOD score of 2.57, on chromosome 9 in the interval RM219-RM23914 with a LOD score of 2.66, and on chromosome 10 in the interval RM24934-RM25128 with a LOD score of 6.13, respectively.
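The LOD scores quoted throughout this section can be hard to interpret on their own. The sketch below converts a LOD score into an approximate pointwise p-value via the standard likelihood-ratio relationship (the test statistic 2 ln 10 × LOD is roughly chi-square distributed under the null). The single degree of freedom is an illustrative assumption; the appropriate value depends on the cross and model, and genome-wide significance in real QTL studies is normally established by permutation rather than by this pointwise approximation.

```python
# Rough conversion of a QTL LOD score to a pointwise p-value.
# LOD = log10(likelihood ratio), so the likelihood-ratio statistic is 2*ln(10)*LOD,
# which is approximately chi-square distributed under the null hypothesis.
# df = 1 is an illustrative assumption; the proper df depends on the mapping population and model.
import math
from scipy.stats import chi2

def lod_to_pvalue(lod: float, df: int = 1) -> float:
    return chi2.sf(2.0 * math.log(10.0) * lod, df)

for lod in (2.57, 4.90, 6.13, 9.78):   # LOD scores quoted above
    print(f"LOD {lod:>5.2f} -> pointwise p ~ {lod_to_pvalue(lod):.1e}")
```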
QTLs associated with amino acids in rice

Amino acid (AA) composition has been mapped in milled rice using 190 RILs, with eighteen chromosomal regions detected for 17 of the 20 AAs (all except tryptophan, glutamine and asparagine), for total essential AA and for total AA content of the grain. Two major QTL clusters, in RM472-RM104 (1-19) and RM125-RM542 (7-4, 5), were detected consistently in two years and explained about 30 and 40% of the PV. Zhong et al. (2011) detected 48 and 64 QTLs related to AA in 2004 and 2005, respectively. Most of these QTLs co-localized, forming 29 QTL clusters, with three major clusters detected in both years on chromosomes 1, 7 and 9. The two QTL clusters for amino acid content, qAa1 and qAa7, influenced almost all the traits, and the third cluster, qAa9, increased lysine content. These QTLs and their associations with particular grain nutrient traits will therefore be useful for identifying candidate genes and favourable alleles to be transferred into elite rice cultivars through marker-assisted breeding.

QTLs responsible for mineral contents in rice

Several QTLs related to nutritional quality traits have been reported in rice from different genetic backgrounds of intraspecific and interspecific crosses using molecular markers. The grain nutrient traits associated with various QTLs and their linked or flanking markers are summarized in Table 2 and Fig. 2. Three loci explaining 19-30% of the variation in Fe content on chromosomes 7, 8 and 9 were observed by Gregorio et al. (2000). A major QTL explaining 16.5% of the PV for Fe content on chromosome 2 was identified in a DH population derived from a cross between IR64 and Azucena (Stangoulis et al. 2007). In addition, Garcia-Oliveira et al. (2008) reported a QTL for Fe content close to the marker RM6641 on chromosome 2 in introgression lines derived from a cross between Teqing and Oryza rufipogon. Wild rice (O. rufipogon) contributed favourable alleles for most of the QTLs (26 QTLs), and chromosomes 1, 9 and 12 harboured 14 QTLs (45%) for these traits. One major-effect QTL for zinc content, accounting for the largest proportion of phenotypic variation (11-19%), was detected near the simple sequence repeat marker RM152 on chromosome 8. James et al. (2007) used a DH population and identified three Fe-linked QTLs on chromosomes 2, 8 and 12, explaining 17, 18 and 14% of the total PV, respectively; they also reported two QTLs for Zn content on chromosomes 1 and 12, explaining 15 and 13% of the PV, respectively. Norton et al. (2010) reported ten QTLs for five mineral elements (Cu, Ca, Zn, Mn and Fe), with the Fe QTL (qFe-1) explaining the highest PV of 25.81% at a LOD score of 7.66. Anuradha et al. (2012) identified 14 QTLs for Fe and Zn in unpolished rice of Madhukar/Swarna RILs: seven QTLs each for grain Fe and Zn content were identified on chromosomes 1, 3, 5, 7 and 12, with PV ranging from 29 to 71%. In addition, Gande et al. (2014) identified 24 candidate gene markers for Zn content, and four candidate genes, OsNAC, OsZIP8a, OsZIP8c and OsZIP4b, showed significant PV of 4.5, 19.0, 5.1 and 10.2%, respectively. Garcia-Oliveira et al. (2008) identified 31 putative QTLs associated with microelements (Fe, Zn, Mn, Cu) and macroelements (Ca, Mg, P and K) on all chromosomes except chromosome 7; chromosomes 1 and 9 harboured the highest numbers, with five QTLs each.
Earlier reports have described several QTLs for mineral content in different chromosomal regions of rice: QTLs for K on chromosomes 1 and 4 and for P on chromosomes 1 and 12 (Wissuwa et al. 1998; Ming et al. 2001; Wissuwa and Ae 2001a, b; Wang et al. 2008), and for Mn on chromosome 10 (Wang et al. 2002). Lu et al. (2008) observed 10 QTLs for Ca, Fe, Mn and Zn accumulation in rice grains on seven chromosomes. Zhang et al. (2014) reported 134 QTLs for 16 elements in unmilled rice grain, of which six were considered strongly associated and were validated.

QTLs for phenolic compounds in rice grain

The Rc locus regulates pigmentation of the rice bran layer, and selection for the rc allele (white pericarp) occurred during domestication of the crop. Two loci, Rc and Rd, were found to be responsible for the formation of pericarp colour (Sweeney et al. 2006; Furukawa et al. 2007). Rc alone produces a brown pericarp and seed coat, and together with Rd it produces a red pericarp and seed coat, while Rd alone has no phenotype. Rc encodes a regulatory protein (a basic helix-loop-helix protein) that allows the accumulation of proanthocyanidins (Sweeney et al. 2006), while Rd encodes the enzyme dihydroflavonol reductase (DFR), which acts in the anthocyanin and proanthocyanidin pathway (Furukawa et al. 2007). The wild-type allele (Rc), the domestication allele (rc) and a mutant allele (Rc-s) have been cloned and sequenced; the rc allele was found to be a null allele with a 14-bp deletion responsible for a frameshift mutation and a premature stop codon (Brooks et al. 2008). Through classical genetic approaches, Yoshimura et al. (1997) identified two loci for the anthocyanin pigmentation of the black rice pericarp, Pb (Prp-b) and Pp (Prp-a), located on chromosomes 4 and 1, respectively. Wang and Shu (2007) subsequently mapped the Pb gene responsible for purple pericarp on chromosome 4 and suggested that Pb may be a mutant of the Ra gene caused by a two-base (GT) deletion within exon 7 of Ra. Bres-Patry et al. (2001) identified two QTLs controlling rice pericarp colour, located on chromosomes 1 and 7. Using association mapping, Yafang et al. (2011) and Shao et al. (2011) reported that RM339 and RM316 were common markers for antioxidant, flavonoid and phenolic content, and that Ra and Rc were main-effect loci for pericarp colour and phenolic compounds.

Associated QTLs for phytic acid

In rice, phytic acid (PA) is a major source of P supporting seedling growth on P-deficient soils, while also acting as an anti-nutritional factor. Liu et al. (2005) measured PA and protein content (PC) in 24 rice cultivars and found no significant correlation between them. Among these cultivars, PA content ranged from 0.68% in Xiu217 to 1.03% in Huai9746, with a mean of 0.873%, and PC ranged from 6.45% in Xiu52 to 11.10% in K45, with a mean of 8.26%. Understanding the molecular mechanism and genetics of phytate accumulation in rice grain is necessary for designing breeding programs. James et al. (2007) identified two QTLs for phytate concentration on chromosomes 5 and 12 with LOD scores of 5.6 and 3.5, explaining 24.3 and 15.4% of the PV, respectively. They also reported a significant positive correlation of phytate with inorganic P and total P (R = 0.99), indicating that the majority of P in the grain was stored as phytate.
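As a quick check on how much grain phosphorus such PA levels represent, the sketch below converts a PA concentration into the phosphorus stored as phytate using the molecular composition of phytic acid (C6H18O24P6). The 0.873% input is the mean PA content from Liu et al. (2005) quoted above; the calculation is illustrative only.

```python
# Phosphorus locked in phytate, estimated from a phytic acid (InsP6, C6H18O24P6) concentration.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "P": 30.974}
COMPOSITION = {"C": 6, "H": 18, "O": 24, "P": 6}

mw_phytic_acid = sum(ATOMIC_MASS[el] * n for el, n in COMPOSITION.items())
p_mass_fraction = (ATOMIC_MASS["P"] * COMPOSITION["P"]) / mw_phytic_acid  # ~0.28

pa_percent_of_grain = 0.873  # mean PA content reported by Liu et al. (2005)
p_as_phytate_percent = pa_percent_of_grain * p_mass_fraction

print(f"Phytic acid MW ~ {mw_phytic_acid:.1f} g/mol; P mass fraction ~ {p_mass_fraction:.1%}")
print(f"P stored as phytate ~ {p_as_phytate_percent:.2f}% of grain dry weight")
# Since total grain P is typically only a few tenths of a percent, a value of this size
# is consistent with phytate holding most of the grain P, as the high correlation above suggests.
```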
Achievements through transgenic approaches to enhance nutritional values

Genetic engineering, an alternative approach to enhancing nutritional value, is considered a potentially sustainable and efficient strategy for increasing nutritional quality traits in target tissues of plants (Uzogara 2000; Lucca et al. 2001; Zimmerman and Hurrel 2002; Dias and Ortiz 2012). The world population is likely to reach 8 billion by 2030, and the problem of malnutrition is expected to be further aggravated, by an estimated 93% (Khush 2005, 2008). Evidence is accumulating that the bioavailable nutrient content of rice grain can be significantly increased by transferring biofortification genes through biolistic and Agrobacterium-mediated transformation (Table 3). Using transgenic approaches, Goto et al. (1999) and Khalekuzzaman et al. (2006) observed increased Fe content in T1 brown seeds and T2 polished rice seeds compared with controls, with Fe content increasing more than 2-fold in the transgenic lines. Subsequently, many researchers have attempted to increase Fe content in rice endosperm by overexpressing genes involved in Fe uptake from the soil and its translocation from root, shoot and flag leaf to the grain, and by increasing the efficiency of Fe storage proteins (Kobayashi and Nishizawa 2012; Lee et al. 2012; Bashir et al. 2013; Masuda et al. 2013; Slamet-Loedin et al. 2015). Several studies have shown associated increases in Fe and Zn content in rice grain through overexpression or activation of nicotianamine synthase (NAS) genes or of other transporter genes (Table 3). Masuda et al. (2013) introduced multiple genes, namely OsSUT1 promoter-driven OsYSL2, a ferritin gene under the control of an endosperm-specific promoter, a barley IDS3 genome fragment and NAS overexpression, and observed significant 1.4-fold, 2-fold, 6-fold and 3-fold increases in Fe concentration, respectively, in polished seeds relative to non-transgenic rice. These results suggest that targeting multiple genes is likely to be more successful in enhancing the nutritional value of rice. Rice endosperm lacks the ability to produce β-carotene, the precursor of vitamin A. Ye et al. (2000) developed Golden Rice, which yields 1.6-2.0 μg g−1 β-carotene in dry rice; the vitamin A derived from it supports formation of the visual pigment in the retina, ultimately reducing night blindness, which is particularly relevant for people in developing countries. This was achieved by introducing four major genes, phytoene synthase, phytoene desaturase, β-carotene desaturase and lycopene β-cyclase, into rice.

Advanced genomic technologies

The ever-increasing demand for rice production with higher quality drives the identification of superior and novel rice cultivars. To meet these challenges, plant breeders and biotechnologists together have to explore efficient breeding strategies that integrate genomic technologies with available germplasm resources, opening a new era in plant breeding through a better understanding of the genotype and its relationship to the phenotype, in particular for complex traits. Genomic approaches are particularly useful when working with complex traits subject to multigenic and environmental effects. In this new plant breeding era, genomics will be essential for developing nutritionally rich rice cultivars that reduce human health problems related to mineral nutrition (Perez-de-Castro et al. 2012). In 2011, Zhao et al. genotyped 413 diverse accessions of O.
sativa with 44,100 SNPs and phenotyped them for 34 traits, including grain quality parameters. Deep transcriptional analysis by MPSS and SBS revealed several differentially expressed genes that affect milling yield and eating quality in rice (Venu et al. 2011); these genes were found to be involved in starch biosynthesis, aspartate amino acid metabolism, seed maturation and storage proteins. Peng et al. (2016) developed a stable variant line (YVB) with greatly improved grain quality traits and analysed it using restriction-site associated DNA sequencing (RAD-seq) in a BC1F5 backcross population (Zhao et al. 2005). Deep re-sequencing of the genomes of both parents, V20B and YVB, showed read coverage of 89.04 and 93.13% and sequencing depths of 41.26-fold and 87.54-fold, respectively, and a total of 322,656 homologous SNPs were identified between V20B and YVB. Seventeen QTLs for rice grain quality were detected on chromosomes 3, 5, 6, 8 and 9 through genetic map analysis, with PV ranging from 5.67 to 35.07%. The invention of SIM technology, enabling the introduction of exogenous DNA, helped to create a large number of new rice germplasm accessions, and the variants were analysed using molecular markers (Pena et al. 1987; Zhao et al. 2005).

Conclusion

Enriching the nutritional value of rice grain is essential to reduce malnutrition in developing countries in the post-Green Revolution era. The current gain in knowledge of genes and QTLs related to nutritional value will help to develop the desired genotypes for humankind. The availability of gene-based markers and advanced tools will assist breeders in accumulating specific alleles of genes known to play a role in nutritional grain quality traits in rice. In recent years, significant achievements have been made in genetic studies on grain protein and amino acid content, vitamins and minerals, glycemic index, phenolic and flavonoid compounds, phytic acid, and zinc and iron content, along with the QTLs linked to these traits, but more research is needed on processing and curative properties. The recent release of high-protein and zinc-rich rice varieties in India is a positive sign of progress in rice crop improvement programs. Transgenic approaches will further help to enrich grain nutrition to the desired level rapidly, and recent developments in genomic technologies can augment the improvement of nutritional quality in rice when they go hand in hand with breeding programs.

[Table fragment (gene, associated trait, reference): ... increases of iron content in grain, Paul et al. (2012); 15 MOT1 (molybdenum transporter 1), grain molybdenum concentration, Norton et al. (2014); 16 COPT1 and COPT2 (copper transporters), grain copper concentration, Norton et al. (2014); 17 Lsi1 (arsenic transporter), inter- and extracellular transport of arsenic, Ma et al. (2008), Norton et al. (2014)]
The Contribution of Next Generation Sequencing Technologies to Epigenome Research of Stem Cell and Tumorigenesis

The epigenome contains another layer of genetic information, one that is not as stable as the genome. The dynamic epigenome can serve as an interface for explaining the role of environmental factors. Stem cells and tumorigenesis are reported to be closely associated with epigenome modifications. Next generation sequencing (NGS) technologies have directly led to the recent advances in epigenome research on stem cells and cancer. DNA methylation and histone modification are the two major epigenetic modifications. Four NGS-based approaches have been developed to identify these modifications: whole genome bisulfite sequencing (WGBS), methylated DNA immunoprecipitation sequencing (MeDIP-Seq), reduced representation bisulfite sequencing (RRBS) and chromatin immunoprecipitation sequencing (ChIP-Seq). This paper reviews the recent advances of WGBS, MeDIP-Seq and RRBS for DNA methylation and of ChIP-Seq for histone modification in the field of stem cells, and describes the potential contribution of epigenetic modifications to tumorigenesis. At present, epigenome research still faces the limitations of current sampling strategies and the largely unknown network patterns of epigenetic regulation. In the future, worldwide collaboration and the application of the latest sequencing technologies are expected to address these problems and offer new insights into epigenome research.

Introduction

Genome sequencing has had a great positive effect on human disease research since its emergence, enabling researchers to explore and understand the mechanisms of disease development at the nucleic acid level. This effect has been clearly demonstrated by several international collaborative projects [1,4]. The Human Genome Project, begun in 1990 and completed in 2003, constructed the first map of the human genome, which is widely used as the reference sequence for subsequent human genome research [1]. The International HapMap Project, officially started in 2002 and first published in 2005, described the haplotype map of the human genome, revealing common patterns of human genetic variation. The single nucleotide polymorphism (SNP) information from the HapMap project is fundamental for exploring common genetic variants affecting human health and disease [2,3]. Furthermore, the 1000 Genomes Project, launched in 2008, is expected to uncover more genetic variants with larger samples and resources and to build the most comprehensive catalogue of human genetic variation [4]. The project is designed to sequence 2,500 genomes of individuals from 27 populations and to obtain a comprehensive set of genetic variants contributing to diversity in the human population, such as structural variants (SV) and copy number variants (CNV). The pilot study of the project was finished in 2010 and revealed an unprecedented number and variety of genetic variants [4]. The achievements of these large projects ushered in a "big science" mode of human disease research through the collaboration of scientists worldwide. They are regarded as milestones, setting clear goals and references for the numerous subsequent human disease studies based on genome sequencing. However, as more genome sequencing studies emerged, it became clear that genetic variants at the genome level were not enough to fully explain human disease mechanisms.
It was speculated that another layer of information beyond the genome sequence determines the state of human health and disease, for two reasons. First, as a multicellular organism, the human body produces a variety of cells with distinct functions. Since all human cells share the same DNA sequence, information beyond the DNA sequence must exist to direct the development of particular cell types functioning in different tissues [5]. Second, gene expression is regulated by environmentally induced changes, such as nutrients, toxins, drugs, infection, behavior and stress [6,7]. Genome sequencing can only describe the diversity among individuals, populations and ethnic groups in terms of detected genetic variants; genome-level results cannot explain how external factors regulate gene expression to produce this diversity, especially when similar genomes yield different phenotypes. For example, monozygotic twins are born with identical genome sequences but may develop different diseases as they grow up. Studying this additional layer is therefore expected to help resolve the mystery. This further layer of information regulating differential gene expression was described early on as "epigenetic control" by Nanney in 1958 [8]. Although there is some debate about the precise definition, epigenetics fundamentally refers to heritable changes in cell- or tissue-specific gene expression without alteration of the DNA sequence [6]. These heritable changes, transmitted from cell to cell and from generation to generation, are mostly established during cellular differentiation and are stably maintained through multiple cycles of cell division [9]. The underlying regulatory mechanisms mainly include DNA methylation, histone modifications, nucleosome positioning, chromatin remodeling, genomic imprinting and ncRNA regulation. Together, these multilevel epigenetic mechanisms constitute the system that regulates gene expression in cells, and through cell-specific regulation they are crucial for developmental processes such as embryogenesis and cell differentiation [10]. Accordingly, aberrations in the epigenetic regulation system are reported to be associated with a wide range of diseases [11,14]. Like the genome, the epigenome contains a layer of heritable information, representing the overall epigenetic state of a cell; unlike the genome, however, it is not stable, varying under the influence of internal and external factors. Through such alterations, many different epigenomes can originate from a single genome. Since most human diseases are well recognized to be jointly affected by genetic and environmental factors, the epigenome can consequently serve as a vital bridge for gene-environment interactions. The epigenome has been shown to play an important role in the development and function of cells, especially early embryo development [15,17]. Understanding the epigenome is clearly beneficial to human disease research, and the growing body of epigenome research over the past decade has laid a good foundation for this understanding (Figure 1). Here, we review recent epigenomic research advances in human disease, focusing on the application of next generation sequencing (NGS) technologies [18,19] to demonstrate the contribution of the epigenome to stem cells and tumorigenesis.
NGS platforms (Roche 454 GS FLX, Illumina GA and HiSeq, and Life Technologies SOLiD) can sequence very large numbers of reads in parallel. Owing to this high-throughput data output, NGS has significantly accelerated scientific discovery in epigenome research (Figure 1). Massively parallel sequencing also allows researchers, for the first time, to obtain comprehensive maps of the epigenome in different states. Compared with previous techniques, NGS-based genome-wide epigenome mapping achieves unprecedented resolution, and several effective NGS-based approaches are now well developed and widely used [20,22,29]. These advantages have caused epigenomic research on stem cells to blossom, more than this review can cover. In this section, we focus on the application of four NGS-based approaches, WGBS, MeDIP-Seq, RRBS and ChIP-Seq, to the two primary forms of epigenetic marks, DNA methylation and histone modification (Table 1).

NGS epigenome and stem cell research

DNA methylation: DNA methylation is the most well-studied epigenetic mechanism, referring to the addition of a methyl group at the carbon-5 position of cytosine by DNA methyltransferase (DNMT) enzymes in the human genome. De novo methylation of cytosines in newly synthesized DNA is catalysed by the DNMT3A and DNMT3B enzymes. Cytosine methylation is associated with gene silencing, particularly hypermethylation of promoter CpG islands. Most CpG sites in the genome are methylated, but the CpG islands in the promoter regions of most human genes are not [23]. DNA methylation is involved in a number of important processes, such as maintaining genome stability, transcriptional silencing and genomic imprinting. As a stable and heritable epigenetic mark, correct patterns of DNA methylation are crucial for normal development and lineage commitment [24,25]. Thus, NGS-based approaches for revealing the methylome are crucial for human disease research. Three innovative NGS techniques are widely used in DNA methylation research: WGBS, MeDIP-Seq and RRBS.

• WGBS: Whole genome bisulfite sequencing (WGBS) is the gold-standard method for detecting and quantifying DNA methylation, and NGS technologies enable WGBS to study DNA methylation at single-base resolution [26,28]. Treatment of DNA with sodium bisulfite converts unmethylated cytosines to uracil (read as thymine after amplification), while methylated cytosines remain unchanged [27,28]. In the first genome-wide map of methylated cytosines in a mammalian genome, Lister et al. [27] compared human embryonic stem cells (hESCs) and fetal fibroblasts. The proportion of non-CG methylation was much higher than expected: nearly one-quarter of all methylated sites identified in embryonic stem cells were in a non-CG context. Non-CG methylation was enriched in gene bodies and depleted in protein binding sites and enhancers. Furthermore, non-CG methylation disappeared upon induced differentiation of the embryonic stem cells and was restored in induced pluripotent stem cells. These results strongly suggest that embryonic stem cells may rely on high levels of non-CG methylation, and the distinct regulatory patterns it provides, to maintain pluripotency.
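The per-site arithmetic behind WGBS methylation calls is simple and may help make "single-base resolution" concrete. The sketch below is a minimal illustration, not a real pipeline: it assumes reads have already been aligned with a bisulfite-aware aligner and that the base calls covering a single cytosine position have been extracted.

```python
# Minimal illustration of per-site methylation calling from bisulfite sequencing data.
# Assumes bisulfite-converted reads are already aligned and the bases covering one
# cytosine position (on the original strand) have been collected. After conversion,
# an unmethylated cytosine is read as 'T', while a methylated cytosine remains 'C'.
from typing import Iterable, Optional

def methylation_level(base_calls: Iterable[str], min_depth: int = 5) -> Optional[float]:
    calls = [b.upper() for b in base_calls if b.upper() in ("C", "T")]
    if len(calls) < min_depth:          # too few informative reads to call the site
        return None
    methylated = calls.count("C")
    return methylated / len(calls)      # fraction of reads supporting methylation

# Hypothetical pile-up at one CpG site: 9 reads keep the C, 3 read T.
print(methylation_level("C C C T C C C T C C C T".split()))  # -> 0.75
```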
The Lister et al. study also implies that epigenomic regulation changes across the stages of cell differentiation. As mentioned above, Laurent et al. [29] likewise reported dynamic changes in the human methylome during differentiation using WGBS. Three cultured cell types were selected: hESCs, a fibroblastic differentiated derivative of the hESCs, and neonatal fibroblasts; mature peripheral blood mononuclear cells (monocytes) were used as a reference, being a fully differentiated adult cell type. Developmental stage was reflected in both the level of global methylation and the extent of non-CpG methylation: hESCs, representing the earliest stage of differentiation, had the highest level of methylation; monocytes, representing the final stage, had the lowest; and fibroblasts, in the middle stage, were intermediate. Thus, epigenetic marks dynamically regulate the development of different cell types so that they function correctly at each stage. In addition to hESCs, WGBS can also be used to study induced pluripotent stem cells (iPSCs). iPSCs are derived from somatic cells that are epigenetically reprogrammed to lose tissue-specific features and gain pluripotency; like hESCs, they can theoretically differentiate into any cell type [30]. However, the reprogramming mechanism of iPSCs differs from that of ESCs, so distinguishing the epigenomes and genomes of iPSCs and ESCs is a research hotspot. Lister et al. [31] reported the first genome-wide DNA methylation profiles of iPSCs at single-base resolution. By comparing the methylomes of human ES cells, somatic cells, and differentiated iPSCs and ES cells, they found differences in DNA methylation status between iPSCs and ESCs. Human iPSCs exhibited substantial reprogramming aberrations, including somatic memory and aberrant reprogramming of DNA methylation. Moreover, analysis of iPSC differentiation into trophoblast cells revealed that errors in reprogramming CG methylation were transmitted at high frequency, demonstrating that an iPSC reprogramming signature is maintained after differentiation. Epigenetic reprogramming of DNA methylation, an important regulatory mechanism in development, occurs frequently during differentiation, and the differentiation state of iPSCs is intermediate between embryonic stem cells and somatic cells. It can therefore be expected that studies of epigenetic reprogramming will increasingly use WGBS on iPSCs to reveal the underlying mechanisms. WGBS can be used to study not only the stem cell types mentioned above but also adult somatic cells [28]. Wang et al. [32] studied the methylome of human peripheral blood mononuclear cells (PBMCs) by WGBS and produced the first Asian epigenome map, from the same Asian individual whose genome was decoded in the YH project. In contrast to the results of Lister et al. [27] above, the proportion of non-CG methylation in this study was minor, with <0.2% of non-CG sites methylated. In addition, this study revealed allele-specific methylation between the two haploid methylomes when combined with the previously generated whole-genome sequencing data. Taken together, the results of these methylome studies of different human cell types show that the epigenome is not static but is adjusted to regulate the differentiation state of each cell type.
This conclusion encourages exploration of the contribution of non-CG methylation to maintaining and inducing cellular development, and indicates that non-CG methylation is not restricted to embryonic stem cells. With its single-base resolution, WGBS is expected to become a powerful tool for exploring methylome differences among cells at various stages of differentiation and across tissue types.

• MeDIP-Seq: Like WGBS, methylated DNA immunoprecipitation sequencing (MeDIP-Seq) is a genome-wide method for detecting DNA methylation. In contrast to the sodium bisulfite treatment used in WGBS, however, MeDIP-Seq is based on enrichment of methylated DNA: an antibody that specifically recognizes methylated cytosine is used to immunoprecipitate methylated fragments genome-wide, and the purified fraction is then used as input for high-throughput detection methods such as NGS [33]. The method is therefore most sensitive in highly methylated regions of high CpG density. Although it has lower resolution and accuracy than WGBS, its speed and cost-effectiveness make it suitable for disease studies with large sample sizes across cells and tissues. For example, the largest epigenetics project to date, EpiTwin, was launched in 2010 as a collaboration between the Beijing Genomics Institute (BGI) and King's College London (TwinsUK). The EpiTwin project aims to capture subtle epigenetic differences between 5,000 twins through MeDIP-Seq and to explain why many identical twins do not develop the same diseases. Monozygotic twins are essentially identical in DNA sequence and are consequently well suited to investigating the influence of epigenetic modifications on human diseases [34], such as autoimmune diseases [35,37]. Beyond DNA methylation itself, MeDIP-Seq can be applied to related questions, such as demethylation and 5-methylcytosine (5mC). DNA demethylation is also crucial for understanding the epigenetic mechanisms of human diseases: only by considering both DNA methylation and demethylation can we fully understand how patterns of 5-methylcytosine are established and maintained. DNA demethylation is not as pervasive as methylation, and active DNA demethylation has been observed only during specific stages of development [38]; genome-wide DNA demethylation has been reported in germ cells and early embryos [39]. Although the mechanisms of demethylation remain to be elucidated, a few researchers have already begun to use MeDIP-Seq to study it. Chavez et al. [40] used MeDIP-Seq to analyze DNA methylation changes during differentiation of hESCs to definitive endoderm; after analyzing the interplay between DNA methylation, histone modifications and transcription factor binding, they found demethylation to be mainly associated with regions of low CpG density, in contrast to de novo methylation. Even though there are still few reports applying NGS to DNA demethylation, its importance is expected to become as widely recognized as that of DNA methylation. 5-hydroxymethylcytosine (5hmC) is a modified cytosine base present at low levels in various mammalian cell types, generated by addition of a hydroxymethyl group to cytosine [41]. The formation of 5hmC is catalysed by enzymes of the TET family [42,45]. Following the same principle as 5mC antibody enrichment in DNA methylation studies, MeDIP-Seq and similar NGS-based techniques can also be applied with 5hmC-specific antibodies to investigate the distribution and role of 5hmC in the genome.
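Because MeDIP-Seq (and its 5hmC analogue) yields relative enrichment rather than absolute methylation levels, the downstream analysis is essentially a comparison of immunoprecipitated (IP) read counts against an input control in genomic windows. The sketch below is a deliberately simplified illustration of that idea with made-up read positions; real analyses use dedicated tools and model CpG density explicitly.

```python
# Simplified MeDIP-Seq-style window enrichment: compare IP vs. input read counts
# in fixed genomic bins after library-size normalization. Illustrative only.
from collections import Counter

def window_enrichment(ip_reads, input_reads, bin_size=1000, pseudo=0.5):
    """ip_reads/input_reads: lists of read start coordinates on one chromosome."""
    ip_bins = Counter(pos // bin_size for pos in ip_reads)
    in_bins = Counter(pos // bin_size for pos in input_reads)
    ip_total, in_total = max(len(ip_reads), 1), max(len(input_reads), 1)
    scores = {}
    for b in set(ip_bins) | set(in_bins):
        ip_cpm = 1e6 * (ip_bins[b] + pseudo) / ip_total   # normalized IP signal
        in_cpm = 1e6 * (in_bins[b] + pseudo) / in_total   # normalized background
        scores[b * bin_size] = ip_cpm / in_cpm            # ratio > 1 suggests enrichment
    return scores

# Toy example: the bin starting at position 2000 is enriched in the IP library.
ip = [2100, 2300, 2350, 2800, 5200]
inp = [2150, 5100, 5300, 8000, 8100]
print({start: round(r, 2) for start, r in sorted(window_enrichment(ip, inp).items())})
```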
As an important and novel epigenetic mark, 5hmC was found in 2009 to exist in embryonic stem cells as well as in human and mouse brain [42,45]. Pastor et al. [41] then used NGS-based approaches to present a genome-wide map of 5hmC in mouse embryonic stem cells (ESCs) and found that 5hmC was strongly enriched in exons and near transcriptional start sites, suggesting that 5hmC may regulate transcription in ESCs but with a regulatory role different from that of 5mC. Ficz et al. [46] used MeDIP-Seq to confirm the existence of 5hmC in mouse ESCs and its role during differentiation, and to examine the relationship between 5mC and 5hmC. 5hmC was found to be mainly associated with euchromatin, while 5mC was enriched at gene promoters and CpG islands. 5hmC does not occur independently; it largely depends on the presence of 5mC in the genome. This indicates that 5hmC contributes to enhancing transcription, in contrast to the inhibitory role of methylation in gene expression. During differentiation, as TET levels decreased, hydroxymethylation at ESC-specific gene promoters declined while methylation increased, with consequent gene silencing. However, the balance between 5mC and 5hmC is not simple and differs among genomic regions; the study proposed that the balance between pluripotency and differentiation is associated with the balance between 5mC and 5hmC. Studies have reported the distribution of 5hmC in many tissue types, and its importance in ESCs is gradually being recognized, as described above. However, interest in this epigenetic mark is recent and much remains to be investigated; the biological roles of 5mC and 5hmC in ESCs and human disease will become clearer as more powerful methods are developed to distinguish them.

• RRBS: Reduced representation bisulfite sequencing (RRBS), developed in recent years, is a fast and cost-effective method for generating high-quality DNA methylation data [47,49]. The first step is digestion with the restriction enzyme MspI, which specifically cuts at CCGG sites; this is followed by bisulfite treatment as in WGBS. RRBS therefore covers only CpG-rich regions such as promoters and other regulatory elements rather than the whole genome, but it still achieves single-base resolution like WGBS [48,50]. It is thus well suited to investigating differentially methylated regions among samples in a broad range of studies, for example in medicine and biomarker discovery [49,51]. As a recently developed technique, relatively few RRBS studies have been published, though some have already applied it to biology and disease research [51,52]. For example, Wang et al. [51] applied RRBS to human PBMCs from the Asian individual of the YH project, whose genome and epigenome have been systematically deciphered [28,32]. More than half of CpG islands and promoter regions were covered at good depth, and the proportion of CpG sites covered reached 80-90%, demonstrating good reproducibility across biological replicates [28]. RRBS is therefore a good choice for focusing on CpG-rich regions in large sample sets to explore DNA methylation differences.
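To illustrate why MspI digestion "reduces" the genome to CpG-rich regions, the sketch below performs a toy in-silico digest and size selection on a made-up sequence; the 40-220 bp size window is a commonly used choice stated here as an assumption, not a figure taken from the studies above.

```python
# Toy in-silico RRBS fragment selection: MspI cuts C^CGG, so fragments start and end at
# CCGG sites; size selection (an assumed 40-220 bp window) retains CpG-dense fragments.
import re

def rrbs_fragments(seq: str, min_len: int = 40, max_len: int = 220):
    cut_sites = [m.start() + 1 for m in re.finditer("CCGG", seq)]  # MspI cuts after the first C
    fragments = []
    for start, end in zip(cut_sites, cut_sites[1:]):
        frag = seq[start:end]
        if min_len <= len(frag) <= max_len:
            fragments.append((start, len(frag), frag.count("CG")))  # position, length, CpG count
    return fragments

# Made-up sequence: a CpG-rich stretch flanked by CpG-poor DNA, bounded by CCGG sites.
genome = "AT" * 30 + "CCGG" + "ACGTTACG" * 10 + "CCGG" + "TTAA" * 100 + "CCGG"
for start, length, n_cpg in rrbs_fragments(genome):
    print(f"fragment at {start}: {length} bp, {n_cpg} CpGs retained")
# Only the short, CpG-rich fragment passes size selection; the long CpG-poor one is discarded.
```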
Human disease can also be investigated with RRBS. Gertz et al. [52] used it to study somatic DNA from six members of a three-generation family, and the results demonstrated a close relationship between genotype and DNA methylation: more than 92% of differential methylation between homologous chromosomes occurred on a particular haplotype, and 80% of DNA methylation differences could be explained by genotype. In addition, the study used transcriptional analysis to examine genes exhibiting genotype-dependent DNA methylation, 22% of which showed allele-specific differences in gene expression. Overall, this study highlighted the contribution of genotype to the pattern of the DNA methylome. As RRBS gains recognition through an increasing number of publications, it is likely to become a standard tool for DNA methylation research in many fields.

Histone modification: In addition to DNA methylation, histone modification is another type of epigenetic regulation, acting through changes in chromatin. DNA in eukaryotic chromatin is wrapped around histone octamers consisting of four highly conserved core histones, H2A, H2B, H3 and H4. Histones are subject to various posttranslational modifications, including but not limited to lysine and arginine methylation, serine and threonine phosphorylation, lysine acetylation, ubiquitination, sumoylation and ADP-ribosylation. These modifications occur mainly within the histone amino-terminal tails [53]. The state of the histone tails helps to alter chromatin structure and thereby determines the accessibility of DNA to the transcription machinery and other regulatory factors; histone tail modifications are therefore important regulators of chromatin condensation and gene expression [54]. Among the various histone modifications, acetylation and methylation of specific lysine residues on the N-terminal tails play a fundamental role in the formation of chromatin domains [53]. Acetylation is established and removed by histone acetyltransferases and deacetylases, respectively, and methylation is likewise regulated by histone methyltransferase and demethylase families; these enzymes act specifically on particular histone proteins and residues [55]. As a switch in the on-off regulation of gene expression, lysine acetylation on histones is associated with gene activation, whereas methylation of lysine residues can result in either activation or silencing of gene expression [56]. As an epigenetic mechanism, posttranslational modifications of histones are involved in the regulation of normal and disease-associated development. Owing to technical restrictions, most of these modifications remain poorly understood, but clear advances have been made in recent years through the application of NGS in ChIP-Seq approaches.

ChIP-Seq: Chromatin immunoprecipitation sequencing (ChIP-Seq) is used to identify the genome-wide association of proteins and histone modifications with specific DNA sequences [57]. In a ChIP experiment, chromatin is first fragmented by sonication or MNase digestion [58] and then enriched with a specific antibody; after immunoprecipitation, NGS can detect the binding sites of the targeted protein. Compared with ChIP-chip, ChIP-Seq offers higher resolution and greater coverage, and can detect more and narrower peaks with a better signal-to-noise ratio [57]. Its high-resolution, genome-wide identification of histone modifications makes it well suited to human biology research [59].
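The notion of a "peak" in ChIP-Seq data can be made concrete with a toy calculation: after counting IP and input reads in windows, a window is a candidate peak when its IP count is improbably high given the input-derived background. The sketch below uses a simple Poisson tail test, loosely in the spirit of common peak callers; it is illustrative only, with made-up counts, and real tools additionally handle local background, duplicate reads and multiple testing.

```python
# Toy ChIP-Seq window test: is the IP read count in a window surprisingly high
# relative to the expectation derived from the input (control) library?
from scipy.stats import poisson

def candidate_peak(ip_count: int, input_count: int,
                   ip_library_size: int, input_library_size: int,
                   alpha: float = 1e-5) -> bool:
    scale = ip_library_size / input_library_size      # library-size scaling factor
    expected = max(input_count * scale, 0.25)         # background expectation (floored)
    p_value = poisson.sf(ip_count - 1, expected)      # P(X >= ip_count) under background
    return p_value < alpha

# Hypothetical window: 85 IP reads vs. 12 input reads, libraries of 20M and 25M reads.
print(candidate_peak(85, 12, 20_000_000, 25_000_000))   # -> True  (strong enrichment)
print(candidate_peak(15, 12, 20_000_000, 25_000_000))   # -> False (consistent with background)
```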
For example, Terrenoire et al. [59] used ChIP-Seq to study the histone modifications H3K9ac, H3K27ac and H3K4me3 in the human metaphase epigenome. Comparison with histone modification levels across the interphase genome revealed a close correspondence for H3K4me3 and H3K27ac, whereas H3K27me3, an epigenetic mark associated with gene silencing, exhibited large differences. The study provided evidence for extensive epigenome remodeling at mitosis. In stem cell research, ChIP-Seq is also used for its powerful ability to characterize histone modifications genome-wide. Larson et al. [60] used ChIP-Seq to study five histone modification marks (H3K4me2, H3K4me3, H3K27me3, H3K9me3 and H3K36me3) in mouse embryonic stem cells (ESCs). Coupled with a hidden Markov model (HMM), these marks were used to assign the genome to active, non-active and null domains, each corresponding to distinct biological functions and chromatin structural changes during early cell differentiation. The study offered new insights into the role of epigenetics in long-range gene regulation. From these examples, ChIP-Seq is clearly an efficient and powerful approach for revealing the contribution of genome-wide histone modifications to epigenetic regulation, and it is expected to provide further insights into the role of histone modifications in stem cells and human disease.

Cancer epigenomics

In the 1980s, scientists first demonstrated that epigenetic changes could be involved in both oncogenes and tumour suppressors, laying the cornerstone for the present recognition of epigenetic marks as diagnostic and therapeutic biomarkers for cancer [61,62]. In 1983, Andrew Feinberg and Bert Vogelstein analysed DNA from several human primary tumours using methylation-sensitive restriction enzymes and found reduced DNA methylation of specific genes compared with DNA from adjacent normal tissue. At that time, the predominant theory of tumorigenesis was the activation of oncogenes, and Feinberg and Vogelstein's findings implied that altered DNA methylation could lead to oncogene activation [61]. Later in the 1980s, tumour suppressor genes became widely recognized, and it was therefore encouraging when relevant epigenetic changes were discovered in these genes as well. For example, Greger et al. [62] demonstrated that an unmethylated CpG island at the 5' end of the retinoblastoma gene became hypermethylated in tumour tissues from retinoblastoma patients, leading them to speculate that methylation could directly silence tumour suppressor genes. Later studies correlated the methylation of tumour suppressor genes with their actual silencing in cancer and showed that tumour suppressor genes could be reactivated by inhibiting DNA methylation [63].

Epigenetic modifications: DNA methylation, the most well-studied mechanism in cancer epigenomics, is only one of many epigenetic alterations contributing to tumorigenesis. Cancer epigenomics encompasses research on all kinds of epigenetic alterations in cancer genomes (Figure 2). Below, we summarize current advances in the main areas of cancer epigenomics, DNA methylation, histone modification and chromatin remodeling, to demonstrate the contribution of epigenomics to tumorigenesis.

• DNA methylation: Human disease is closely associated with abnormal DNA methylation patterns, and DNA methylation generally inhibits gene expression.
For example, global hypomethylation in the cancer genome usually results in genomic instability, while silencing of tumour suppressor genes is caused by hypermethylation of CpG islands in their promoter regions [14]. Methylated promoter regions may directly prevent transcription factors such as AP-2, c-Myc, E2F and NF-kB from binding to promoters, leading to gene silencing or low expression; methylated regulatory elements at the 5' end of genes may also bind methyl-CpG binding proteins (MBPs), indirectly inhibiting the formation of the transcriptional complex; in addition, DNA methylation can alter chromatin conformation to inactivate it. Conversely, lack of methylation usually correlates with gene activation, and demethylation is related to the reactivation of silenced genes [64]. Aberrant regulation of DNA methylation can therefore lead to tumorigenesis. DNA methylation changes in cancer cells include the loss of methylation at normally methylated sequences (hypomethylation) and the gain of methylation at sites that are usually unmethylated (hypermethylation) [65]. As two opposite forms of DNA methylation change, hypermethylation and hypomethylation play distinct roles in tumorigenesis. Hypermethylation of promoter CpG islands at the 5' end of cancer-related genes has been reported in human tumour cell lines, for example in the tumour suppressor gene p16 [66], the metastasis suppressor gene Nm23 [67], the DNA repair gene MLH1 [68] and angiogenesis suppressor genes [69]. Some genes, such as p16, are hypermethylated in many types of cancer [66], whereas others are associated with a specific cancer; GSTP1, for example, has been reported to be hypermethylated only in prostate cancer [70]. Hypomethylation has been reported in almost every human malignancy and preferentially affects repetitive sequences, transposable elements and proto-oncogenes in cancer; some studies indicate that hypomethylation can increase the expression of certain genes, such as RAS and c-myc. The overall decrease in 5-methylcytosine tends to be more pronounced as the tumour becomes more malignant [71]. Recent studies provide increasing evidence of the important role of DNA methylation in tumorigenesis. For example, Ummanni et al. [72] had previously reported significant downregulation of ubiquitin carboxyl-terminal hydrolase 1 (UCHL1) in prostate cancer and subsequently showed that the underlying mechanism of UCHL1 downregulation in PCa is linked to promoter hypermethylation; it was further suggested that UCHL1 downregulation via promoter hypermethylation plays an important role in various molecular aspects of PCa biology, such as morphological diversification and regulation of proliferation. Other experiments demonstrated that the methylation status of DNMT1 can influence the activity of several important tumour suppressor genes in cervical tumorigenesis and may be an effective target for the treatment of cervical cancer [73]. Similar findings have been made in hematological malignancies in addition to solid tumours: Deneberg et al. [74] observed a negative impact of DNA methylation on transcription in acute myeloid leukemia (AML), where genes targeted by Polycomb group (PcG) proteins and genes associated with bivalent histone marks in stem cells showed increased aberrant methylation (p<0.0001).
Furthermore, high methylation levels of PcG target genes were independently associated with better progression-free (OR 0.47, p=0.01) and overall survival (OR 0.36, p=0.001). Methylation-related factors in tumorigenesis are expected to remain a hotspot of cancer epigenome research.

• Histone modification: Histones are subject to posttranslational modification by enzymes, primarily on their N-terminal tails but also in their globular domains. Such modifications include methylation, citrullination, acetylation, phosphorylation, sumoylation, ubiquitination and ADP-ribosylation. Here we focus mainly on the relatively widespread methylation and acetylation. Histone acetylation is one of the most important modifications in cancer and regulates gene expression reversibly. Histone acetyltransferases (HATs) acetylate conserved lysine residues on histones to promote gene transcription (or the binding of transcription factors to regulatory elements), whereas histone deacetylases (HDACs) remove acetyl groups from ε-N-acetyl-lysine residues to inhibit transcription. HDACs, a major target for epigenetic therapy, are found to be overexpressed in different types of cancer. Histone acetylation is essential for maintaining protein function and gene transcription, and an imbalance of acetylation in cancer cells can change chromosome structure and gene expression levels, directly influencing the cell cycle, differentiation, apoptosis and tumorigenesis. Recent advances in NGS enable genome-wide profiling of chromatin changes during tumorigenesis. Fraga et al. [75] revealed that a global loss of acetylated H4 lysine 16 (H4K16ac) and H4 lysine 20 trimethylation (H4K20me3) leads to gene repression. Wang et al. [76] used ChIP-Seq and found that the fusion protein AML1-ETO, generated by the t(8;21) translocation, is acetylated by the transcriptional coactivator p300 in leukemia cells isolated from t(8;21) AML patients; follow-up animal studies indicated that lysine acetyltransferases represent a potential therapeutic target in AML. More recently, to investigate the epigenetic inactivation of the SFRP1 gene in esophageal squamous cell carcinoma (ESCC), Meng et al. [77] applied methylation-specific polymerase chain reaction (PCR), bisulfite sequencing, reverse-transcription (RT) PCR, immunohistochemistry and chromatin immunoprecipitation (ChIP) assays to detect SFRP1 promoter methylation, SFRP1 expression and histone modification in the SFRP1 promoter region. The SFRP1 promoter was found to be highly methylated in 95% (19/20) of the ESCC tissues and in nine ESCC cell lines, and complete methylation of the SFRP1 promoter correlated with greatly reduced expression. In cancer cells, promoter CpG island hypermethylation is also associated with a combination of histone marks: deacetylation of histones H3 and H4, loss of histone H3 lysine 4 (H3K4) trimethylation, and gain of H3K9 methylation and H3K27 trimethylation [78,80]. H3K9 methylation and H3K27 trimethylation are also associated with aberrant gene silencing in various types of cancer. Using ChIP, Ballestar et al. [79] found that gene-specific profiles of methyl-CpG binding proteins (MBDs) exist for hypermethylated promoters in breast cancer cells, which share a common pattern of histone modifications. Interestingly, Fujisawa et al.
[81] found that CpG sites in the IL-13Rα2 promoter region were unmethylated in all pancreatic cancer cell lines studied, including IL-13Rα2-positive and IL-13Rα2-negative lines, as well as in normal cells. On the other hand, histones at the IL-13Rα2 promoter region were highly acetylated in IL-13Rα2-positive but much less so in receptor-negative pancreatic cancer cell lines. When cells were treated with HDAC inhibitors, not only histone acetylation but also IL-13Rα2 expression was dramatically enhanced in receptor-negative pancreatic cancer cells, which makes HDAC inhibitors a new opportunity for targeted therapy. In addition to methylation and acetylation, there are other kinds of histone modifications, though they are not as widely distributed as those mentioned above. The various histone modifications do not act separately but are mutually linked in cancer cells, affecting histones in an integrated fashion; consequently, aberrant changes in histone modifications can result in tumorigenesis.

• Chromatin remodeling: Chromatin remodeling is the enzyme-driven movement of nucleosomes, performed by chromatin remodeling complexes such as SWI/SNF in humans. It enables proteins such as transcription factors to bind DNA wrapped around nucleosome cores. Genetic alterations of genes involved in chromatin remodeling have recently been reported in many types of tumours [82,86]. In one study, the protein-coding exome was sequenced in a series of primary clear cell renal cell carcinomas (ccRCC), and the SWI/SNF chromatin remodelling complex gene PBRM1 [4] was identified as a second major ccRCC cancer gene, with truncating mutations in 41% (92/227) of cases; these data demonstrate the marked contribution of aberrant chromatin biology [87]. In another study, the exomes of nine individuals with transitional cell carcinoma (TCC) were sequenced, and genetic aberrations of chromatin remodeling genes (UTX, MLL-MLL3, CREBBP-EP300, NCOR1, ARID1A and CHD6) were identified in 59% of 97 TCC subjects [82]. Dynamic chromatin remodeling underlies diverse biological processes, including gene transcription, DNA replication and repair, chromosome segregation and apoptosis. Together, these results suggest that aberrations of chromatin regulation may be a hallmark of cancer. Aberrant chromatin remodeling may directly lead to the dysregulation of multiple downstream effector genes, consequently promoting tumorigenesis [82]. For example, Nakazawa et al. [87] examined histone H3 status in benign and malignant colorectal tumours by immunohistochemistry and western blotting; their results suggested that aberration of the global H3K9me2 level is an important epigenetic event in colorectal tumorigenesis and carcinogenesis, involving gene regulation in neoplastic cells through chromatin remodeling. Different causes of aberrant chromatin remodeling may lead to different types of cancer, and much more research is needed to determine the exact causes and consequences.

Epigenetic marks as therapeutic targets: Epigenetic modifications are reversible, making them attractive therapeutic targets for cancer; in theory, a cancer could be treated if its causal epigenetic aberrations were corrected. On this principle, many epigenetic drugs targeting various epigenetic marks have been developed in recent decades.
DNA methylation and histone acetylation are the most extensively studied epigenetic marks and have been successfully exploited as therapeutic targets. First, hypermethylation of CpG islands is commonly found in many types of tumours, and DNA methylation inhibitors were the first epigenetic drugs expected to be available for cancer therapy. It was a remarkable discovery that treatment with the cytotoxic agents 5-azacytidine (5-aza-CR) and 5-aza-2'-deoxycytidine (5-aza-CdR) inhibits DNA methylation, inducing gene expression and causing differentiation in cultured cells [88]. 5-Aza-CR (azacitidine) and 5-aza-CdR (decitabine) have been approved by the FDA for use in the treatment of myelodysplastic syndromes, and promising results have also emerged in the treatment of hematological malignancies [89] and solid tumours [90]. Other possible DNA methylation inhibitors include zebularine, which is orally administered and currently under investigation in many types of cancer. However, demethylating drugs have serious toxic side effects, leaving the problem of finding suitable agents that act synergistically with them. Fortunately, clinical studies by Silverman et al. [91], Issa et al. [92] and others established a notable paradigm in oncology: therapeutic efficacy can be achieved at low drug doses. Such reduced doses were adopted in a large trial in patients with myelodysplastic syndrome (MDS), which can progress to leukaemia; the time to conversion from MDS to frank leukaemia increased, as did overall survival [93]. Two inhibitors, azacitidine (Vidaza; Celgene) and decitabine (Dacogen; Eisai), have now been approved by the FDA for MDS, and this supports the use of low-dose regimens not only for leukaemia but also for solid tumours [94]. Second, reversing histone acetylation patterns with HDAC inhibitors has been shown to have antitumorigenic effects, including growth arrest, apoptosis and the induction of differentiation [95]. The antiproliferative effects of HDAC inhibitors are mediated by their ability to reactivate silenced tumour suppressor genes [96]. The HDAC inhibitor suberoylanilide hydroxamic acid (SAHA) has been approved for clinical use in the treatment of cutaneous T cell lymphoma and has gained FDA approval as vorinostat (Zolinza; Merck) [97]. Romidepsin (Istodax; Celgene), with similarly remarkable efficacy in cutaneous T cell lymphoma, has also been approved by the FDA [98]. Although generally well tolerated with little toxicity, HDAC inhibitors do have side effects, including constitutional and gastrointestinal toxicity, cardiac problems and myelosuppression; the molecular mechanisms underlying drug response in these patients have not yet been determined. Several other HDAC inhibitors, such as depsipeptide and phenylbutyrate, are also in clinical trials [99].

Challenges and future of epigenome research

Major challenges: Benefiting from the advent of NGS technologies, epigenome research has expanded rapidly and, as described above, substantial advances have been achieved. However, two major challenges remain: sampling, and the integrated analysis of the various epigenetic modifications [10]. These two aspects are discussed in turn below. Epigenome research is expected to interpret the effects of epigenetic modifications caused by environmental factors.
Thus, most epigenetic modifications are somatic and tissue or stage specific. Because of this dynamic nature, sampling is the first and most critical step of epigenome research; to a large extent, mistakes in sample tissue selection will lead to failed studies or incorrect conclusions. Among human diseases, cancer is studied most intensively in epigenome research, largely because cancer tissue is easily accessible after biopsy or surgery. However, tissue heterogeneity, an obvious characteristic of cancer, remains a problem for sampling. Many complex diseases, such as hypertension, do not exhibit tissue-specific pathogenesis, and DNA samples from different tissues may not show significant differences; given our currently unclear understanding of pathogenesis, it is difficult to design epigenome studies well for such diseases. Second, since epigenome research on human disease is still at an early stage, study designs are not yet well established and the appropriate sample size is also unknown. Third, owing to tissue specificity, many types of tissue need to be collected to obtain a complete picture of the epigenome. In general, the challenge of sampling arises from specific tissue selection, sample size and the collection of multiple tissues. There are various types of epigenetic modifications, not limited to those described above in this review. First, it is necessary to explore every type of epigenetic modification in the human genome; it is possible that many of them remain to be discovered. Second, even if all epigenetic modifications had already been revealed, there would still be a long way to go, because epigenetic regulation follows a network pattern: individual modifications do not work separately but act mutually to regulate gene expression across the whole genome. Clearly understanding this subtle system of integrated regulation by epigenetic modifications is a large-scale undertaking.
Future directions: A decade ago, the Human Genome Project (HGP) was accomplished through the collaboration of scientists worldwide. The resulting human genome map is a milestone in the history of genome research, providing a strong foundation for countless subsequent sequencing studies. Similarly, a human epigenome map is essential to advance the field of epigenome research. Such a large-scale scientific project can only be achieved in the same way as the HGP: scientists worldwide must join global organizations for collaboration to achieve this significant goal. Fortunately, many consortia have been founded in recent years (Table 2), and the human epigenome map is expected to be constructed in the near future. Both the genome and the epigenome are intended to explain the mechanisms of complex life activities at the DNA level. Although recent achievements can explain many phenomena that were previously inexplicable, more unsolved problems remain to be explored. According to the central dogma, life is a systematic network of multidimensional activities: activities at the DNA level interact with those at the RNA and protein levels. Thus, research at the DNA level alone is not enough. With the various types of NGS technologies, it is possible to apply sequencing-based approaches at the DNA and RNA levels and, increasingly, to profile the protein level as well; the information from these levels is expected to be integrated by bioinformatics to reveal further discoveries in biology and human disease.
The rapid progress of sequencing technology has also contributed to the development of epigenome research. Third-generation sequencing (TGS) technologies are expected to become commercially available within the next few years. Compared with NGS, TGS offers many technical breakthroughs, such as smaller sample requirements, higher speed, shorter run times and single-cell sequencing. These characteristics make it feasible for TGS to reveal unknown epigenetic mechanisms and to speed up epigenome research. In particular, the ability to sequence single cells can largely overcome the obstacle of tissue specificity in epigenome research. Combined with large-scale collaborations and the latest sequencing technology, it is believed that epigenome research will help to explain one aspect of the complexity of nature and to improve human health.
Mitigation and adaptation in multifamily housing: overheating and climate justice
Can thermal retrofit measures also enhance summer heat resilience and climate justice? Two common building types of multifamily dwellings in Central Europe are investigated: the 'Gründerzeithaus' and post-war large-panel construction, along with their different inhabitant demographics. Thermal simulations and demographic surveys were undertaken for dwellings in both building types to evaluate the effectiveness of retrofit measures in reducing winter heat demand and to understand the impacts on summer overheating. Results indicate that standard retrofitting measures can reduce the overheating risks. The high summer temperatures on the top floor can be significantly lowered to values comparable with the ground floor. The remaining overheating in highly exposed rooms is reduced by additional selective adaptation measures. Adaptation requires more than technical interventions. Demographic surveys conducted for both building types show that different social groups are affected. The economics of retrofit requires policy clarity to avoid placing additional burdens on economically disadvantaged people. Inhabitants' active involvement in night-time ventilation is vital for avoiding overheating. Appropriate affordances and clear guidance for manual window opening/closing can reduce overheating. However, inhabitants who are unable to act (e.g. the elderly, immobile or those with chronic diseases) will be increasingly vulnerable and disadvantaged by increased exposure to overheating.
Practice relevance
The existing approaches for reducing heating demand and their impacts on overheating are examined for two common building types in Central Europe: the Gründerzeithaus and post-war large-panel multifamily housing. The evidence of physical effects and social interdependencies provides a basis both for decision-makers to select suitable measures, and for inhabitants to apply appropriate behavioural practices. Thermal retrofitting strategies for reducing winter heating demand can lead to enhanced resilience to hot summer weather, but also entail inhabitants' active involvement. Additional technical measures are needed to ensure reduced levels of overheating. Inhabitants' practices have a significant influence on resilience and the reduction of overheating. Therefore, technical interventions must be accompanied by clear strategies to empower inhabitants to control internal temperatures using natural ventilation. Elderly or ill inhabitants may not be able to perform these practices and, therefore, remain vulnerable. Increased rents caused by retrofits may displace socially disadvantaged inhabitants.
Background
The worldwide building sector accounts for 25% of global fossil fuel-related greenhouse gas (GHG) emissions, mainly from space heating and cooling (Fosas et al. 2018). Particularly in the European Union (EU), dwellings are responsible for more than one-quarter of the total primary energy demand (Fabi, Andersen, Corgnati, & Olesen 2012). As existing buildings in the EU and other developed countries are expected to form 70-80% of the whole built stock in 2050 (Vellei et al. 2016), the retrofitting of existing buildings to reduce their fossil fuel-related GHG emissions is a prime objective. In this context, an increasing overheating risk due to improved insulation and airtightness is predicted (Mavrogianni et al. 2015; Mulville & Stravoravdis 2016).
The frequency and severity of heatwave events are projected to increase in future (IPCC 2014). Hence, overheating risk analysis of such retrofitted buildings is essential. Two factors are of great importance in adapting residential buildings to climate change (CC):
• CC mitigation by reducing further GHG emissions of the building operation sector.
• CC adaptation of buildings to cope with the incremental impacts of CC, especially the risk of overheating.
For moderate and cold climates, GHG emissions can be reduced by improving the insulation of buildings and by shifting to heat generation with low CO2 emissions. However, CC adaptation of buildings entails reducing the overheating risk for residents during warm weather. This must avoid the use of mechanical cooling in order to comply with mitigation strategies. One of the central challenges is to implement such changes without exacerbating social injustice. It is essential not to adversely affect socially disadvantaged groups, such as elderly or low-income households, or to create differences in quality of life.
Study aims and objectives
This study investigates the complex interactions of CC mitigation and adaptation measures for residential buildings, focusing on aspects of overheating and climate justice. The reasons for this focus on representative multifamily houses (MFH) are as follows:
• By the mid-21st century, 66% of the world population is projected to live in cities (UN DESA 2015) and, thus, typically in MFH.
• The high degree of sealed surfaces and solar absorption in large cities leads to an additional heat burden for MFH inhabitants through urban heat islands (Oke 1988) within the city.
• The residents of MFH have limited possibilities to transform their dwelling into a CC-mitigated and -adapted home in comparison with owners of detached houses.
• With respect to the issue of climate justice, residents of MFH are strongly dependent on the actions of the building owner. Additionally, residents of these dwellings, especially tenants, usually have a lower income compared with those living in detached houses.
However, numerous residents can benefit as a group from a CC-mitigated and -adapted MFH. The specific focus of this investigation is two regionally prevalent housing types in the context of Central European climate conditions. The setting in central Germany is a representative European climate with a high heating demand in winter, where passive cooling measures are usually sufficient to provide adequate heat protection in summer (without the need for cooling devices). Two different building types were investigated, one from the turn of the 20th century, the so-called Gründerzeithaus (GZH), and the other from the 1960s-90s, the large-panel construction (LPC) MFH, to understand how CC mitigation measures interact with CC adaptation measures to reduce overheating for residents. The impacts of the chosen measures are analysed using thermal building simulations for both buildings to identify the most effective and efficient forms of retrofitting. The choice of the two building types is also grounded in the different social structures of the residents in the LPC and the GZH. While the LPC is often inhabited by socially disadvantaged groups (residents with lower incomes and a higher percentage of elderly people; see section 0), the GZH is a typical dwelling for residents with an average income and a more evenly distributed age structure.
A key question is whether thermal comfort and building operation costs, as measured by the heating and cooling demand, differ between the non-retrofitted and the retrofitted buildings. The objective is to determine how CC mitigation and adaptation measures change residents' well-being. The research is structured as follows. First, an architectural analysis of the two MFH is performed to create realistic building simulation models of the whole buildings, and the resident structure in both MFH is determined. The building simulation models are validated using temperature and CO2 measurements in dwellings on different floors during summer 2018 and 2019. The non-retrofitted actual state of both MFH is then simulated to gain information on heating demand and overheating risk. Individual CC mitigation measures are implemented in the building models to reduce heating demand and to check for possible changes in overheating, along with individual CC adaptation measures to reduce overheating.
Different studies have addressed the question of how retrofitting affects overheating risk (Fosas et al. 2018; Mavrogianni et al. 2012; Porritt et al. 2013). They indicate that overheating risk increases with floor level in high-rise structures, with the strongest heat load in the attics (Hamdy, Carlucci, Hoes, & Hensen 2016; Mavrogianni et al. 2012). Enhanced roof insulation and window retrofitting decrease overheating risk (Mavrogianni et al. 2012). There have been contradictory findings on the use of exterior wall insulation: a slight reduction (Fosas et al. 2018; Porritt et al. 2013) or a slight increase of overheating risk (Mavrogianni et al. 2012; Mulville & Stravoravdis 2016). Internal wall insulation increases the overheating risk (Fosas et al. 2018; Tink, Porritt, Allinson, & Loveday 2018). Finally, external shading has been found to reduce overheating strongly (Porritt, Cropper, Shao, & Goodier 2012), while appropriate user behaviour, especially window ventilation behaviour, significantly reduces heat stress (Mavrogianni et al. 2014; Porritt et al. 2013). However, this body of research on overheating in residential buildings has primarily been conducted in the UK. Approximately 40% of all publications related to the general topic of overheating in residential buildings (monitoring, simulation, health, surveys) originate from the UK (Chen 2019). This is surprising because the UK typically has relatively cool summers (except London, because of urban heat islands) compared with more southern countries in Europe. Residents in France, Spain or Germany are known to be affected by higher summer temperatures, and buildings there are typically not equipped with cooling devices. This is verified by Thomson, Simcock, Bouzarovski, & Petrova (2019), who compared the heat burden for different countries in Europe, with separate mention of the share of low-income residents. The present investigation contributes to our knowledge of these factors in a broader continental European context by undertaking an overheating risk analysis for MFH in Erfurt and Dresden in Germany. This will extend the detailed knowledge already available for UK climatic conditions and building typologies.
Climate justice issues
The concept of climate justice includes a set of ethical and political considerations for those made vulnerable or disadvantaged by CC. This can be within a society (a country, region or city) or between countries.
For the focus of this study, at least two main issues are discussed in relation to the residents of MFH: fuel poverty (financial disadvantage) and health risk/well-being in the context of living in MFH. Fuel poverty is related to the cost of heating and cooling the dwelling and particularly affects households with a low income (Hajat, Kovats, & Lachowycz 2007). In Germany, the focus of fuel poverty has been on heating, since mechanical cooling is uncommon in Germany. However, CC is expected to shift climate zones and bring hotter summers in future. If mechanical cooling were to be implemented in German dwellings, this would lead to negative environmental, financial and social consequences for households, and to summer fuel poverty (Mavrogianni et al. 2015). Heat stress-related health risk and other limitations on well-being are already major concerns in Germany. However, the investigation of heat stress is highly complex and dependent on multiple variables such as health, age, agency or social isolation (Hajat et al. 2007; Mavrogianni et al. 2015). Vandentorren et al. (2006) and Vardoulakis et al. (2015) investigated the vulnerability of residents to summer heat and showed that residents with the following characteristics have an enhanced heat-related health risk:
• Elderly people with lack of mobility or chronic diseases.
• Low social status.
• Living in dwellings that lack thermal insulation, have the bedroom directly beneath the roof or have a small dwelling area.
Elderly people aged over 65 years are generally affected by the overheating of their homes because they spend a majority of their time indoors (>90%) during the summer (Basu & Samet 2002). In addition, heat stress is more likely to have adverse effects, including fatalities, among residents at the lower end of the socioeconomic spectrum (Thomson et al. 2019). In terms of well-being, van Loenhout et al. (2016) found that an increase of 1°C in indoor temperature raised the risk of sleep disturbance by 24% (in the temperature range of 20.8-29.3°C). However, Head et al. (2018) found a lack of detailed epidemiological studies investigating the effect of indoor dwelling temperature on health outcomes.
Residential building types
Two representative examples of MFH with different designs and residents' social structures are the GZH and LPC types. GZH buildings (Figure 1a,b) were primarily constructed in Germany in the period 1870-1918 (categorised as MRG3 by Schinke et al. 2012) and are similar to those found in old town centres throughout Central Europe. Their thick brick walls are not insulated and, characteristically, they have a saddle roof as well as wooden beam ceilings in the individual flats. In contrast, LPC buildings (Figure 1c,d) were erected between 1960 and 1990 as a typical form of social housing (categorised as MR6 by Schinke et al.). The walls consist of large panels of reinforced concrete with a thin core of insulation, while the ceilings are made of reinforced concrete. MFH types represent 53% of the German residential building stock (see Appendix A). Both buildings are inhabited and located in city districts with other similar building types. The LPC is located in Dresden and the GZH is in Erfurt. No elevator is installed in either of the studied buildings.
MFH resident characteristics
The climate justice aspect involves understanding the demographics and vulnerability: the residents' age, income and lifestyle. Fortunately, user surveys were conducted by Baldin & Sinning (2019a, 2019b) for the neighbourhood types in which the two MFH are located.
As the majority of the residential buildings in the surveyed neighbourhoods consist of LPC (Dresden) and GZH (Erfurt) buildings, respectively, the survey by Baldin and Sinning can be used to draw conclusions about the residents of the two selected MFH and to compare them with each other. In the two studies, not only occupant data but also residents' behaviour during heatwaves were surveyed (n = 178 for Dresden, n = 203 for Erfurt). The most relevant data for the focus of the present study are summarised in Table 1. While almost half the residents in the LPC district are pensioners, in the GZH neighbourhood type there are fewer than one-fifth. Almost half the residents live alone in the LPC neighbourhood compared with one-quarter in the GZH neighbourhood. Household income is somewhat lower in the LPC than in the GZH neighbourhood, although the smaller household size with a high proportion of single residents is a likely factor. Block population data from the immediate surroundings of the two MFHs provided a basis for evaluating the residents' age structures for both representative buildings and the immediate vicinity (Figure 2) (Landeshauptstadt Dresden 2019; Landeshauptstadt Erfurt 2020). The share of older people in the LPC building is even higher than in the whole LPC district (Table 1). As both buildings have no elevators, it is assumed that elderly residents typically inhabit the lower floors in both types of buildings. However, there may be people with other vulnerabilities who live on the upper or top floors: people who are immobile, elderly or with chronic medical conditions. The survey did not identify people with these conditions. Owing to the age structure and high proportion of people living alone, the LPC district is more likely to be vulnerable to summer heat. Therefore, special attention should be paid to adaptation measures for these residents.
Building properties
The two chosen buildings of type GZH and LPC are very different in terms of their physical structure and architectural design. While the GZH building has a stucco facade and a saddle roof with dormers, the LPC has a clear cubic structure with no decorative elements on the facade and a ventilated cold roof (Figure 1a,c). The two buildings have similar footprints of 230 and 195 m², respectively. However, the LPC building has six full storeys in contrast to only four full storeys and a converted attic in the GZH building, giving total floor areas of 1070 and 725 m², respectively. The LPC has three dwellings on each level with floor areas between 55 and 65 m² (cf. Figure 1d), while each floor of the GZH building has only two flats, each with a larger floor area of about 80 m² (cf. Figure 1b). The LPC MFH has a total of 18 dwellings compared with the GZH with only 10 flats.
Table 1 (excerpt), household composition (LPC/Dresden vs GZH/Erfurt): with partner and child 12% vs 26%; flat-sharing community 3% vs 15%; other 2% vs 3%. Sources: Baldin & Sinning (2019a, 2019b).
The structural components of the buildings are also different (Table 2). The thick exterior and interior brick walls of the GZH building provide considerable thermal storage capacity, helping to mitigate extreme summer temperatures. In contrast, the large panel elements of the LPC building include a thin insulating layer in the exterior wall, ensuring a lower transmission heat loss (U-value) in winter than the brick walls of the GZH building, which lack such insulation. The buildings have different ceiling constructions and characteristics.
The GZH building has suspended wooden beam ceilings with a low U-value and low heat storage capacity; the LPC building has prestressed concrete ceilings, which offer considerable heat transmission from one level to the other as well as a high heat storage capacity. The top floors of the two buildings are also rather different. Originally, the attics of GZH MFHs were not used for residential purposes. However, today most attics have been converted into living space, often using drywall construction, which offers low heat storage capacity. The GZH building in the present study has such a converted attic. Its well-insulated roof contrasts with the ventilated cold roof above the top floor of the LPC. Although the LPC roof is not insulated, the top-floor ceiling has a thin insulating layer. Both buildings have cellars as well as balconies. The latter act as shading elements on the west-facing facades and, in the case of the LPC, also on the south-facing facade. The balconies of the GZH attic dwellings are equipped with awnings.
Note (Table 2): the frame ratio of all windows was calculated by measuring the glazing and frame areas; the g-value stands for the energy transmittance of the glazing; and Uw is the thermal transmittance coefficient of the whole window, including the glazing and frame.
Thermal building simulation
To estimate the heating demand in winter as well as room temperatures in summer, the MFHs were modelled using the thermal building simulation software IDA ICE 4.8 (EQUA 2018). The simulations were run for one year at a time step of less than one hour to allow a detailed analysis of the evolving operative room temperatures and heating demand for each room. Building components and material layers are entered as inputs to reproduce realistic heat storage capacities and transmission of each room to the building exterior as well as to neighbouring rooms. In Germany, inhabited rooms of residential buildings are typically heated by radiators or floor heating to comfortable temperatures of between 20 and 24°C in winter. Dwellings are cooled only by passive measures (i.e. opening the windows at night). This kind of cooling is highly effective because of the relatively cool nights during summer, when temperatures generally fall to <20°C. For the thermal building simulation, the following boundary conditions were applied:
• Chosen location of both buildings: Dresden, Germany.
• Air exchange between rooms: all room doors are closed.
• Shading: for the GZH building, the building on the opposite side of the street was taken into account; shading by trees is not considered.
• Cooling by window ventilation in summer based on the following:
• Window-opening control: windows are opened when the outside air is cooler than the room air and the room air is >23°C (depending on the window-opening schedule); a simple sketch of this rule is given at the end of the 'Realistic window ventilation' subsection below.
• Window-opening schedule: windows fully open from 1800 to 2200 and 0600 to 0700 hours; tilted at night from 2200 to 0600 hours.
• Wind- and temperature-gradient-driven air exchange through windows is taken into account in the building simulation tool IDA ICE.
• Degree of window opening: defined by the opening profiles of the installed windows.
• Minimum room air temperature (heating): 22°C (except for the unheated bedroom and corridor).
CC adaptation measures
Several alternatives were modelled:
• External sun protection systems to reduce solar irradiance. Simulations were conducted for different orientations.
• Sun-protection glazing with g < 0.4, where g is the energy transmittance of the glazing.
• Mechanical ventilation systems for improved air exchange: standard balanced domestic ventilation systems with a maximum rate of 1 h⁻¹ (air changes per hour) and systems with a higher air exchange rate of at least 2 h⁻¹. The simulation assumed the system is activated when the indoor air temperature is >23°C (and the outdoor air is cooler).
• Phase change materials (PCMs) (Hodzic, Pont, Tahmasebi, & Mahdavi 2019). However, because of the high cost of around €200/m² for PCM plasterboard (working out at €70,000 for one attic dwelling in the GZH building), much cheaper and equally effective sun protection and ventilation systems are generally preferred.
• Improved roof insulation or installing light-reflecting roof coverings.
Quality of overheating simulation
The quality of the modelling was validated using measured data during operation. Data loggers were installed in selected rooms in summer 2018 and 2019 to record room temperature and CO2 content. During the absence of the residents in the holiday season, verified by a constant CO2 content of approximately 400 ppm, the thermal qualities of the respective rooms could be assessed without the influence of residents. Baseline measures of thermal masses, transmission and window characteristics could be compared with the simulation model. This test validated the temperature simulations (Figure 3). Open room doors and internal transmission to adjacent rooms were found to have a non-negligible influence, indicating the importance of simulating the temperature course of particular rooms within the whole building.
Realistic window ventilation
A large number of studies have pointed out the influence of inhabitants' window opening on overheating (Baborska-Narożny, Stevenson, & Grudzińska 2016; Fabi et al. 2012; Fosas et al. 2018; Hamdy et al. 2016; Mavrogianni et al. 2014). However, most studies assumed a simplified air exchange rate when the window was opened. In reality, the air exchange through open windows depends on several variables: wind direction, wind speed, the temperature difference between the outside air and the room air, and the degree of opening of the window. The building simulation tool IDA ICE offers the possibility to include all these influences. Therefore, the wind and outdoor temperatures of the weather data set and the flow of air into and out of the building were taken into account by means of pressure coefficients and height-stratified wind speed profiles. This dynamic approach is reflected in the measured values of the buildings during the period of use. There is no standard procedure for simulating window ventilation behaviour (Fosas et al. 2018; Hamdy et al. 2016; Mavrogianni et al. 2016; Porritt et al. 2012). Therefore, we conducted a ventilation survey in the GZH and LPC districts to find out when and how residents open their windows (fully open or tilted) and use cross-ventilation on average summer days and on hot days. The survey (n = 36 for LPC, n = 43 for GZH) showed all possible ventilation behaviour, from residents who did not open their windows at all to residents who left their windows open during the day. However, the average ventilation behaviour showed the following characteristics: windows were opened briefly in the morning after getting up and closed during the day. They were typically completely open in the evening until 2300 hours. At night, half were tilted open and the other half were reported fully open.
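The window-opening rule used as a boundary condition in the simulations (open only if the room air is above 23°C and warmer than the outside air; fully open in the evening and early morning, tilted overnight) can be written as a simple control function. The following is a minimal illustrative sketch, not the IDA ICE implementation; in particular, the opening fraction assumed for a tilted window (0.1) is an assumption, not a value from the study.

```python
def window_opening_fraction(hour: int, t_room: float, t_out: float) -> float:
    """Simulated window opening fraction (0 = closed, 1 = fully open).

    Mirrors the boundary conditions described in the text:
    - open only if the room air is above 23 degC AND the outside air is
      cooler than the room air (otherwise there is no cooling potential);
    - 18:00-22:00 and 06:00-07:00: fully open;
    - 22:00-06:00: tilted (partial opening; 0.1 is an assumed fraction);
    - otherwise: closed.
    """
    if t_room <= 23.0 or t_out >= t_room:
        return 0.0                       # no cooling potential -> keep closed
    if 18 <= hour < 22 or hour == 6:
        return 1.0                       # fully open in the evening and morning
    if hour >= 22 or hour < 6:
        return 0.1                       # tilted at night (assumed fraction)
    return 0.0                           # closed during the day


# Example: a warm evening hour and a tilted-window night hour
print(window_opening_fraction(hour=20, t_room=27.0, t_out=22.0))  # -> 1.0
print(window_opening_fraction(hour=2, t_room=25.0, t_out=18.0))   # -> 0.1
```

The 'low ventilation' and 'high ventilation' variants discussed later modify only the schedule part of such a rule, while the temperature condition stays the same.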
Weather data
As Chen (2019) stated in his review, no consistent way of choosing weather data for overheating risk analysis by building simulation can be found in the literature; approaches range from simulated hot spells and recorded weather to TRY or DSY weather data (see Note 1). In the present study, localised data for the city of Dresden (including the urban heat island effect) for the test reference year, version 2015 (TRY 2015 Summer), provided by the German Meteorological Service, were used (DWD 2017). Based on long-term measurements and observation series, this data set includes several meteorological parameters for each hour of the year. Specifically, TRY 2015 Summer describes the climate conditions of a warm summer and a typical winter in Dresden. Figure 4 shows the outdoor temperature over the course of the year; Table 3 gives an overview of some meteorological parameters for 2015. To ensure that the chosen weather data are representative, we compared the heating demand and overheating of the buildings with DWD weather data from the last five years. In general, it should be noted that the focus of the study is not to determine exact values of overheating, but to compare existing and adapted buildings as well as different building types and to show the effect of user behaviour on overheating in the dwelling. For exact overheating figures, a variation of different weather data sets as well as synthetic weather data would be necessary.
Overheating criteria
No universally accepted definition and criteria for overheating in residential buildings are available (Chen 2019). Typically, the overheating criteria of DIN EN 15251:2012 and the derived criteria of the Chartered Institution of Building Services Engineers (CIBSE) are used. Both are based on a working environment and are therefore more or less limited to a healthy workforce (Anderson et al. 2013). In Germany, overheating assessment is commonly performed using the national standard DIN 4108-2 (2013), which shows some deviations from DIN EN 15251:2012-12 (procedure B) when the national annex for Germany is taken into account. As noted previously, the focus of the study is not to determine exact values for overheating but to compare overheating for different building types, retrofitting options and resident behaviour. The German standard DIN 4108-2 divides the country into three summer climate regions with different climatic conditions (DIN 2013). The city of Dresden is assigned to summer climate region C, which foresees a maximum acceptable indoor operative temperature of 27°C. This upper boundary is used to calculate the so-called 'overtemperature degree-hours' (Jenkins, Patidar, Banfill, & Gibson 2014) as a measure of overheating within the rooms of a house. These DH (overheating degree-hour) values are summed over a year. For residential buildings, an upper limit for DH of 1200 Kh/a (Kelvin hours per annum) is specified as the standard and must be complied with in every room of a new building for every hour of the day, independent of room usage. Note, however, that the overheating risk analysis here is only oriented on the standard DIN 4108-2:2013-02 in using the specified definition of DH and the given internal heat gains of the rooms; ventilation behaviour as well as weather data differ from this standard because of the more precise data required for the current investigation, as discussed above.
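As defined above, the overtemperature degree-hours are simply the sum, over all hours of the year, of the excess of the hourly operative room temperature above the 27°C limit of summer climate region C. A minimal sketch of this calculation from an hourly temperature series is shown below; the sample values are illustrative, not data from the study.

```python
def overtemperature_degree_hours(hourly_temps, limit_c=27.0):
    """Sum of (T_op - limit) over all hours in which T_op exceeds the limit.

    hourly_temps: hourly operative room temperatures in degC (8760 values
    for a full year). The result is in Kelvin hours per annum (Kh/a) and can
    be compared against the DIN 4108-2 reference value of 1200 Kh/a.
    """
    return sum(max(0.0, t - limit_c) for t in hourly_temps)


# Illustrative example: hours at 28.5, 29.0 and 27.5 degC contribute
# 1.5 + 2.0 + 0.5 = 4.0 Kh; hours at or below 27 degC contribute nothing.
sample = [26.0, 28.5, 29.0, 27.5, 25.0]
print(overtemperature_degree_hours(sample))  # -> 4.0
```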
Effective CC mitigation measures: reducing transmission losses
The first question to be answered is: Which retrofitting options for the LPC and GZH MFHs are most effective at reducing the heating demand and, thus, strengthening CC mitigation? For the LPC building, a standard retrofitting package for prefabricated buildings was considered. In fact, this package is currently being used by the housing association that owns the chosen LPC. These measures (described in Table 4) are to add an insulating layer to the walls, the top-floor ceiling and the basement ceiling, as well as to replace the windows. The thermal building simulation showed that this combination of measures is an efficient approach, resulting in U-values typical of new buildings and in line with Germany's energy-saving standard, version 2014 (BRD 2013). Of course, more ambitious retrofitting measures are possible, for example, to meet the standards for passive housing. However, these are not typical for such renovation work.
Impact of CC mitigation measures
The simulated heating demands for the GZH and the LPC buildings in the existing non-retrofitted state are depicted in Table 5. The annual heating demand per m² of floor space in the existing buildings is higher in the LPC than in the GZH. This can be attributed to the fact that the GZH building is part of a large group of buildings (a terrace); hence, heat losses by transmission can only occur on two sides. In contrast, the investigated LPC is freestanding. Furthermore, the roof of the existing GZH building is better insulated than the top-floor ceiling of the LPC, and the current windows of the GZH show lower transmission losses (Table 2). In the next step, the CC mitigation measures discussed in section 0 were implemented in the building simulation models (Table 4) to reduce the heating demand and, thus, the GHG emissions. The effect of the enhanced insulation and window replacement can be clearly seen in Figure 5 and Table 5. For both buildings, the heating demand drops to <40 kWh/m²a, that is, less than half the previous value for the LPC and nearly half the previous value for the GZH building. Thus, the two refurbished buildings show comparable energy efficiency as regards heat demand, as well as a clear reduction in GHG emissions. The reduction in heating demand for the LPC is more marked because of the retrofitting of the entire building envelope, whereas only the facade and windows were changed in the GZH simulation. Note that the given heating demands only consider the heating requirements for living space (useful energy) while ignoring the losses from heat distribution and generation.
CC adaptation measures: reduction of overheating risk in summer
Four questions arise in relation to the overheating in the investigated buildings:
• How pronounced is the overheating of the individual rooms and dwellings in the existing GZH and LPC buildings, and how do the buildings differ?
• Does thermal retrofitting (Table 4) weaken or improve heat resilience in the buildings?
• Are CC adaptation measures required to reduce overheating in the buildings? If so, which measures should be implemented?
• Most importantly: What influence do residents have on overheating in their dwelling?
Figure 6a shows the simulated room temperature in west-facing rooms on the first and top floors of the existing GZH and LPC buildings over 10 summer days. On hot days with outdoor air temperatures >30°C, thermal inertia ensures that temperatures are lower inside the buildings.
The significant effect of night-time cooling by window ventilation (fully opened and tilted; see section 0) due to the cooler outside temperature is also clearly visible. Comparing the changing temperatures in the rooms of the GZH and LPC buildings, there are three things to note. First, the rooms on the first floor show similar temperature patterns because of their similar thermal masses, identical orientation and a comparable level of balcony shading. Second, temperatures in the top-floor rooms are significantly higher than those on the first floor because of the solar heating of the roof, while the first-floor rooms are cooled by the underlying basement. This pattern of increased temperatures on higher floors can also be seen in Figure 7a,b. Third, there is a significant difference between the GZH and LPC buildings in the daily temperature pattern of the top-floor rooms. This can be attributed to the much lower thermal storage capacity of the drywall interior in the converted attic of the GZH building compared with the solid walls and ceilings of the LPC. These lower thermal masses lead to a faster increase and decrease in the room temperature over the course of the day. In addition to the detailed picture of the evolving room temperatures, the number of 'overtemperature degree-hours' gives a good indication of the level of overheating in each room during the whole summer. Figure 8 (variant 1) shows a significant increase in DH27 (overheating degree-hours above a maximum room temperature of 27°C, specified by DIN 4108-2 for the climate region of Dresden) on higher floors. The influence of balcony shading is also evident on the west facades of both the GZH and LPC buildings. Considering the threshold value for DH27 of 1200 Kh/a, the unshaded rooms show overheating above this critical value, indicating the need for heat adaptation measures.
Impact of CC mitigation measures (thermal retrofitting) on heat resilience
Figure 6b,c clearly shows lower room temperatures for both retrofitted buildings. In particular, the enhanced insulation of the top-floor ceiling in the LPC building contributes to the significant decrease in temperature. In addition, overheating is reduced by the lower g-value of the new triple glazing (compared with the existing double glazing), as well as by lower solar heat gain through the opaque building envelope (enhanced insulation). The same effect can be seen in Figure 8 (variant 2), which lists the DH27 values for the retrofitted buildings: DH27 is halved for almost all rooms of the GZH and LPC buildings in comparison with their non-retrofitted states. Only in the attic of the GZH building is there no significant reduction, again due to the low thermal masses of the drywall interior. The marked reduction in DH27 is an impressive confirmation that a typical thermal retrofit to reduce heating demand can also enhance the heat resilience of an uncooled building during hot summer months. In other words, CC mitigation measures (to reduce GHG emissions) can also act as CC adaptation measures (to reduce overheating).
Figure 7 (caption excerpt): existing (a, c) and retrofitted (b, d; including CC mitigation and adaptation measures) GZH and LPC buildings for two facade orientations. For the framed rooms, the DH27 (overheating degree-hours above a maximum room temperature of 27°C) values are also shown in Figure 8.
Additional CC adaptation measures
Although the thermal retrofitting of the buildings leads to a significant increase in simulated heat resilience, some rooms and dwellings still show high overheating.
In particular, the attic of the GZH building and rooms unshaded by balconies still have DH27 values around 1200 Kh/a (Figure 8). Clearly, these rooms require local adaptation measures. Various CC adaptation measures and combinations were simulated in IDA ICE for both buildings to determine the most effective and feasible options for reducing overheating in all dwellings. Table 6 gives an overview of the chosen measures, namely external sun protection elements for west-, east- and south-facing windows without balcony shading for both buildings, and an exhaust ventilation device in the attic bathrooms of the GZH. The impact of the heat-resilience adaptation measures in reducing room temperature can be clearly seen in Figure 6b,c. In particular, the combination of sun protection and an exhaust ventilation system lowers room temperatures in the attic of the GZH building by about 2 K. Figure 8 (variant 3) provides even more striking evidence of the positive effect of combining thermal retrofitting of the whole building with individual heat resilience measures. Here, the DH27 for all facade orientations and building levels is <500 Kh/a. Also note the substantially more uniform distribution of low overheating in the buildings (Figure 7b,d) compared with the unrefurbished case (Figure 7a,c).
Climate justice and overheating risk
Since the described retrofitting options are quite expensive, not all residents will be able to afford such an upgraded MFH, which is usually associated with higher rental prices. Therefore, it is necessary to consider options for residents to take action themselves to reduce the heat stress in their dwelling. In addition to installing an opaque, highly reflective interior sunshade, residents can lower the thermal load during heatwaves by avoiding cooking or running electrical devices, but this has a relatively low impact. It is more effective to open all windows fully at night (even if there is no discernible breeze) to allow an influx of cooler air from outside, and also to use cross-ventilation. On the other hand, the risk of burglary or outside noise and air pollution can make it impossible to open windows at night. To show the enormous relevance of the residents' window ventilation, this behaviour is varied in the thermal simulation of the GZH and the LPC buildings in two ways:
• Low ventilation: the window-opening schedule is changed so that windows are fully opened only from 0700 to 0730 and from 1800 to 2200 hours, if the outdoor air is cooler than inside.
In the results discussed in sections 3.3.1-3.3.3, windows are fully opened in the evening and morning and tilted at night (see section 0); the window doors to the balcony stay closed at night in all dwellings and no cross-ventilation is used. The results for the non-retrofitted MFH depicted in Figure 9 show the range of overheating risk as influenced by the residents' ventilation behaviour. For the existing GZH building, there is a considerable drop in DH27 to <700 Kh/a for all rooms, including the attic, with full use of night cooling (variant 1a). While the values for the LPC building drop much less significantly, they do become <1200 Kh/a. This is a remarkable impact considering the inferior insulation of the top-floor ceiling and the old windows (Table 2). In contrast, when residents leave the windows closed at night (variant 1b), the heat burden is enormous compared with the assumed standard ventilation, which underlines the effect of windows being at least tilted at night for night-time cooling (variant 1).
The effect of closed windows on severe overheating was also observed in other studies (Fabi et al. 2012; Vellei et al. 2016). Residents' lack of knowledge about the importance of night cooling is one reason. Other reasons why windows stay closed at night might be pollution, crime, noise, limited resident mobility or the fact that residents are not covered by their building and contents insurance if windows or doors are left open. Therefore, the question arises whether the effect of missing night ventilation might be less pronounced for the retrofitted MFH (including the CC mitigation and adaptation measures). In Figure 9, variant 3a exhibits significantly lower overheating than missing night ventilation in the non-retrofitted building (variant 1a), an effect that is more pronounced for the GZH building. However, compared with standard ventilation (variant 3), reduced night ventilation in the retrofitted building nevertheless clearly leads to an increase in DH27 to nearly 3000 Kh/a for the LPC building. For the retrofitted GZH, the installation of an exhaust ventilation system provides good night-time cooling and, thus, low overheating, even if no window ventilation is used in the attic dwellings at night.
Figure 9 (caption): Simulated annual overtemperature degree-hours >27°C (DH27) for different window ventilation behaviour depending on the building retrofitting: 1, existing state with standard ventilation; 1a, existing state with less ventilation; 1b, existing state with high ventilation; 3, a CC-mitigated and -adapted building with standard ventilation; and 3a, a CC-mitigated and -adapted building with less ventilation (depending on orientation and building level) for the GZH and LPC buildings.
Interplay of CC mitigation and adaptation measures
The simulations demonstrate that standard thermal retrofitting measures, such as enhanced insulation and window replacement in representative MFH, not only lower heating demand during the winter but also significantly decrease overheating risk in summer. Thus, conventional CC mitigation measures to reduce GHG emissions enhance the heat resilience of the considered buildings and, as a consequence, can simultaneously be seen as CC adaptation measures. Such measures are highly desirable as they can effectively deal with CC in two ways: by addressing both the origin and the impact of CC. However, additional complementary CC adaptation measures (both physical interventions and inhabitant practices) are needed to improve heat resilience further. Furthermore, the simulations show that simple individual retrofitting measures can ensure low overheating for all inhabitants, regardless of the location of their dwelling in both building types, thus avoiding the need for active cooling. However, for optimal heat resilience, passive protective measures against overheating are insufficient if they are not accompanied by window-opening behaviours to cool the dwelling at night. User behaviour sustaining a consistent ventilation regime has an enormous impact on overheating.
Heat resilience
The thermal building simulations enabled the identification of the most effective passive adaptation measures for the management of overheating (cf. section 0). The most important physical measure to lower the different levels of overheating found in the dwellings from the top to the ground floor is the reduction of solar heat gains through improved roof or top-floor ceiling insulation. Second, triple-glazed windows can significantly reduce the amount of solar heat radiation reaching the interior.
It is important to ensure that these windows can be completely opened for ventilation, thereby enabling effective night-time cooling. Third, an external shading system should be installed for rooms with large windows and high levels of solar exposure. Fourth, massive components with a high thermal storage capacity should be used for building renovations, especially attic conversions.
Climate justice
4.3.1. Merits of thermal retrofitting for summer heat reduction
Owing to thermal retrofitting, the heating demand of the dwellings in both building types can be halved on average (cf. Table 5). Assuming heating costs of €0.08/kWh, the monthly heating costs are reduced by about €0.20/m² of living space in the GZH and €0.40/m² in the LPC building. However, these savings in heating costs are more than offset by significantly higher rents in retrofitted MFHs, as buildings are usually completely renovated in addition to the energy-related retrofitting. This problem is clearly related both to fuel poverty (Heindl 2015) and to climate justice in the notion of 'a just distribution justly achieved' (Harvey 1973; Hughes 2013). At the building scale, poorer groups may be disadvantaged by the improvements made to their space. According to the present research, the monthly rent of a dwelling in a retrofitted LPC is around €1/m² higher, which is only partly compensated by the reduced heating costs. In both building types, uneven summer overheating of dwellings was found before retrofitting, with significantly higher heat loads for residents in upper floors and sun-exposed rooms (Figure 7), which is in accordance with findings from the literature (Thomson et al. 2019; Vandentorren et al. 2006). As long as no elevator is installed, the upper floors are usually not inhabited by elderly residents (who are particularly vulnerable to heat stress). In both existing buildings, no elevator was originally installed and the accessible living space for elderly people was usually limited to the lower and less overheated floors. As part of the retrofitting concepts for LPC, elevators are usually installed, and top-floor dwellings are also made accessible for elderly people. At the same time, the heat load on the top floor of the retrofitted LPC is significantly reduced by insulating the top-floor ceiling, replacing windows and partially installing sun protection. Thus, the upper floors are made bearable for the elderly. In general, all dwellings of the retrofitted LPC and GZH show similarly reduced overheating if night-time cooling through adequate window ventilation is used (Figure 8, variant 3), leading to improved climate justice in terms of heat resilience between all residents in both building types. Both building types show comparable overheating risk in the states before and after retrofitting (Figure 8). The LPC contains a much higher proportion of heat-vulnerable residents: two-thirds are older than 60 years and a significant proportion live alone (cf. Figure 2). Additionally, the rents of non-retrofitted LPC are usually lower than those of GZH for comparable locations in the city. These are mostly occupied by socially and/or economically disadvantaged groups. As a result of retrofitting, a higher number of vulnerable residents can benefit from improvements in the case of the LPC, which thus offers generally less costly dwellings at a comparable heat stress level compared with the GZH. There is significant potential for reducing heat stress for disadvantaged and elderly residents while still preserving comparably low rents.
This is because the renovation of LPC buildings is easier to realise than that of GZH buildings, given the prevalent ownership structure in Germany. While apartments in GZH buildings are usually owned by individual owner-occupiers, LPC buildings are usually operated by housing cooperatives or large proprietors. This allows the present findings to be fed into decision-making processes for the LPC, and this proved to be the case in this project, which involved cooperation with the housing association that owns the LPC. It was also shown that retrofitting the GZH is worthwhile. The challenge now is to reach the dispersed owners, for example, through professional associations or landlords' associations.
Living without thermal retrofit: behavioural options and communication challenges
Optimising window ventilation at night (cf. Figure 9) shows the enormous influence of residents on overheating, which has also been confirmed by previous studies (Bouchama et al. 2007; Ezratty & Ormandy 2015; Lomas & Porritt 2016; Toulemon & Barbieri 2008; Vandentorren et al. 2006). This result applies to both retrofitted and non-retrofitted buildings, confirming the 'occupancy overrides design' principle (Morgan, Foster, Poston, & Sharpe 2016). This opens further potential for improving climate justice, particularly for the most disadvantaged. This is important because the proposed retrofitting measures will typically drive up rents, forcing the most disadvantaged groups to move to other buildings with a low heat-protection standard. The results provide guidance for reducing the heat burden for this group, although admittedly a lower level of heat stress reduction is achieved. The key is optimised behaviour enabling effective passive ventilation by using night-time cooling. What sounds self-evident requires residents to implement a consistent ventilation regime. Often, optimised behaviour is impeded by residents' lack of knowledge, missing routines or more compelling reasons such as traffic noise, limited mobility or the risk of burglary, the latter applying mainly to the lower floors. An essential challenge is the education and motivation of those most affected to make use of night cooling. It is well known that this is a non-trivial task. LPC buildings are home to people who are often immobile, elderly or affected by chronic medical conditions and, thus, may not be able to implement appropriate night ventilation. Baborska-Narożny et al. (2016) trained residents in an overheating tower block in northern England to adapt their behaviour for summer heat reduction via night ventilation and the use of exhaust fans and curtains. However, they found that residents often could not adequately implement the recommendations. Some elderly people have a delayed and insufficient perception of heat stress (Head et al. 2018) and, thus, often underestimate the health risks from high temperatures. In an ageing society, this calls for new solutions. Climate justice for vulnerable or disadvantaged residents such as the elderly calls for innovative approaches beyond well-developed communication designs and education efforts. This may imply new responsibilities for a variety of actors, for example, building owners, local action groups and special task forces. The question arises whether building owners (or others) should provide and maintain cooled refuge rooms for vulnerable tenants who cannot control ventilation or are in special need of assistance. This is not merely a technical question, but raises several other social, financial and community issues.
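To illustrate the rent-versus-heating-cost trade-off described in section 4.3.1, the sketch below balances the quoted monthly heating-cost saving against the quoted rent increase for a retrofitted LPC dwelling (€0.40/m² saving and roughly €1/m² rent increase per month); the 60 m² dwelling area is an assumed example value, not a figure from the study.

```python
def monthly_net_change(area_m2, heating_saving_per_m2, rent_increase_per_m2):
    """Net monthly cost change for the resident after retrofit.

    A negative result means the resident pays more overall despite the
    reduced heating costs.
    """
    saving = area_m2 * heating_saving_per_m2      # reduced heating cost, EUR/month
    extra_rent = area_m2 * rent_increase_per_m2   # higher rent, EUR/month
    return saving - extra_rent


# Assumed example: a 60 m2 LPC dwelling with the per-m2 values quoted in the text
print(monthly_net_change(60.0, 0.40, 1.00))  # -> -36.0 EUR/month
```

Under these assumptions the heating-cost saving offsets less than half of the rent increase, which is the sense in which the reduced heating costs only partly compensate for the higher rent.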
Caveats and study limitations
Although numerous simulations were performed and validated, the results and their generalisability are limited for the following reasons:
• The heat exposure of inhabitants is strongly affected by their actions, not only by window ventilation, which was discussed, but also by their lifestyle, duration of presence and mobility.
• Inhabitants' use of ventilation has an enormous influence on the overheating risk of the dwelling. It was not possible to ascertain the typical individual ventilation behaviour in the dwellings (the survey results do not necessarily reflect real ventilation behaviour). Current international standards on overheating do not take window ventilation sufficiently into account. There is still an enormous knowledge gap in the evaluation of inhabitant behaviour.
• Depending on the selected boundary conditions, the degree of overheating in dwellings can be very different. The focus of the study was the comparison of heat loads before and after retrofitting measures, that is, not absolute values but their change. However, even this change is shown to be strongly dependent on user behaviour.
• The simulations used a test reference year as the weather data set, with a warm summer and average winter for the present decade, and did not consider future climatic conditions. The effect of increasingly hot summers in the coming decades was not investigated.
• The sensation of heat depends on the individual's perception and may also change over time due to global warming. In this study, the overheating risk was assessed on the basis of room temperatures exceeding 27°C. So far, however, not enough is known about which limit temperatures really lead to a higher health risk for people in a certain region depending on age, gender and other factors.
• The generalisability of the results to other GZH and LPC buildings is limited because of the orientation of the building, location and climate, and details such as the presence of balconies. Some conclusions concerning the top-floor dwellings and the impact of ventilation behaviour, enhanced roof insulation, window replacement or exhaust-air ventilation systems on overheating risk are generalisable to most residential buildings.
Conclusions
Some retrofit actions will have disproportionate negative financial consequences for residents, depending on the magnitude of the financial burden. Other factors are under the control of inhabitants themselves. However, due to a lack of technical understanding and access to information on window opening for summer night-time ventilation, residents often fail to exercise personal agency to mitigate summer overheating. Suitable CC mitigation and adaptation measures were examined for two common types of MFH, aimed at reducing heating demand in winter and reducing overheating in summer. The investigated LPC and GZH MFHs represent two typologies commonly found in Germany and throughout Central Europe. These buildings are generally more susceptible to overheating than (for example) single-family houses because of their construction and typical location within a dense urban fabric. This research identified two sets of actions. The first set requires retrofit and adaptation of the building fabric by building owners. The other set involves the inhabitants' agency. The significance of these findings for climate justice will involve measures taken to reduce energy demand (climate mitigation) and improved thermal adequacy (both in winter and summer).
The findings show that mitigation and adaptation measures require coordination of both social and technical aspects, and that passive technical measures alone are likely to be insufficient. Evidence suggests that the capital expenditure cost of retrofit (and other refurbishment measures that may accompany this) often results in higher rents. This can be prohibitively expensive for poorer sections of society-impacting their ability to pay or displacing them altogether. The economics of retrofit for mitigation and adaptation requires further policy clarity to avoid additional burdens being placed on people who are economically disadvantaged. The influence of the inhabitant is remarkable. Passive overheating measures are not sufficient if the resident cannot open some windows at night. Public education, training and communication are therefore needed to ensure inhabitants understand and implement appropriate ventilation practices. However, a segment of the population who cannot comply because of lack of agency are likely to become increasingly vulnerable-those who are immobile, elderly or with chronic medical conditions. Specific policies are needed to identify those who are vulnerable and provide them with additional forms of assistance to ensure they do not experience overheating and heat stress. Based on thermal simulations, measured data, demographic data and other evidence, several conclusions can be drawn: • Generally, the residents in unrefurbished LPC buildings with lower rents are not more burdened by summer heat than residents in typical (unrefurbished) GZH flats if sufficient window ventilation occurs. The simulation demonstrates that typical residents of the unrefurbished LPC with lower incomes and elderly people are not more affected by summer heat. • The standard thermal retrofitting of such building types aimed at enhancing their thermal insulation as well as the installation of triple glazing can halve the energy demand for heating and, thus, cut GHG emissions (see section 0). An unintended side effect is the considerably reduced overheating especially for the top floor to a level similar to the ground floor (see section 0). • For exposed rooms still suffering from uncomfortably high temperatures, further physical interventions for adaptation are needed. Such CC adaptation measures are external sun-protection systems and ventilation systems in attic dwellings of GZH buildings to enhance the influx of cool night-time air. A well-distributed low amount of overheating can be achieved in all MFH regardless of their orientation or floor level by combining thermal retrofitting measures with heat resilience measures (Figure 8, variant 3). • Inhabitants can implement effective behavioural measures by actively managing a ventilation regime of opening windows to facilitate night-time cooling. • Inhabitants can increase the overheating risk of their dwelling tremendously by unintentional practices. Insufficient window ventilation and night-time cooling occurs for immobile residents and inhabitants with outdoor noise or risk of burglary. Note 1 TRY is the test reference year provided by the German Meteorological Service; and DSY is the design summer year provided by the Chartered Institution of Building Services Engineers (CIBSE) (https://www.cibse.org/ weatherdata). available at the national level. A more precise classification can only be achieved by using regional data for the city of Dresden, the location of the investigated LPC building. 
As can be seen in Table A1, around 14% of Dresden's stock of residential buildings (highlighted in italics) are of the GZH type, based on the construction year (until 1918) and the standard number of dwellings (three to 12). In comparison, LPC buildings (highlighted in bold in Table A1) only make up around 2-6% of all residential buildings, classified by the period of construction (mainly 1970-90) and the number of dwellings (more than 12). However, the data are not entirely consistent: it can be assumed that almost half the LPC buildings were built before 1970 and that others have fewer than 12 dwellings. In total, around 44% of residential buildings in Dresden are MFHs. This figure is about 27% above the national average. Since it is not unusual for an LPC building to contain more than 20 dwellings, the share of dwellings located in LPC buildings is much greater than the 6-12% suggested by the comparison with dwellings in other residential buildings. Along with its representative nature, an additional reason for choosing an LPC building for investigation is that this type has been identified as particularly susceptible to overheating (Founda, Pierros, Katavoutas, & Keramitsoglou 2019).
v3-fos-license
2020-12-03T09:07:22.285Z
2020-01-01T00:00:00.000
228092949
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09274349.pdf", "pdf_hash": "3f7be2e6700fd65717300a65b18806c63417f814", "pdf_src": "IEEE", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2863", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "0978c043b11c9c2737449a4abdd253b7748ab5f0", "year": 2020 }
pes2o/s2orc
Spatially Variant Convolutional Autoencoder Based on Patch Division for Pill Defect Detection

Detecting pill defects remains challenging, despite recent extensive studies, because of the lack of defective data. In this paper, we propose a pipeline composed of a pill detection module and an autoencoder-based defect detection module to detect defective pills in pill packages. Furthermore, we created a new dataset to test our model. The pill detection module separates the pills in an aluminum-plastic package into individual pills. To segment the pills, we used a shallow segmentation network whose output is then divided into individual pills using the watershed algorithm. The defect detection module identifies defects in individual pills. It is trained only on the normal data. Thus, it is expected that the module will be unable to reconstruct defective data correctly. However, in reality, a conventional autoencoder reconstructs defective data better than expected, even if the network is trained only on normal data. Hence, we introduce a patch division method to prevent this problem. The patch division involves dividing the output of the convolutional encoder network into patch-wise features and then applying a patch-wise encoder layer. In this process, each latent patch has its own independent weight and bias. This can be interpreted as reconstructing the input image using multiple local autoencoders. The patch division makes the network concentrate only on reconstructing local regions, thereby reducing the overall capacity. This prevents the proposed network from reconstructing unseen data well. Experiments show that the proposed patch division technique indeed improves the defect detection performance and outperforms existing deep-learning-based anomaly detection methods. The ablation study shows the efficacy of patch division and of compression following the concatenation of patch-wise features.

I. INTRODUCTION

In the manufacturing process, there can be defects in products that should be detected before they are packaged. Defective products on the market can cause problems that can lead to human casualties. Until recently, many companies have relied on human inspectors, which incurs excessive labor costs. With the development of deep learning, some companies are attempting to replace human inspectors with automatic testing systems. Defect detection is the task of detecting defects in data. Defect detection can be applied to various products such as fabric, metallic surfaces, pills, and so on. This is quite different from object detection, which is the task of detecting and localizing predefined classes of objects. First of all, collecting defective data is very challenging, because we cannot anticipate all the types of defects in advance. Moreover, defects mostly appear as a part of the object of interest, which makes it hard to distinguish between a normal sample and a defective one. Accordingly, defect detection methods require an ability to handle the problem under the scarcity of annotated data. This paper studies a defect detection algorithm for pills using deep learning, to decrease the waste of human resources and increase the accuracy of defect detection. There are mostly three types of pill packages, i.e., bottle, bagged, and aluminum-plastic packages.
In this paper, we focus on pills in aluminum-plastic packages, which are easily accessible in pharmacies. Due to their complicated production process, defective pills are inevitably produced in aluminum-plastic packages [15]. There are various defect detection algorithms [15], [17], [35] based on the Bayes classifier, the support-vector machine, and mixtures of dynamic textures [5], [8], [9]. However, these are based on conventional machine learning techniques and their performance is rather limited. Recently, some deep-learning-based methods have been proposed for various products, such as detecting defects on fabric and metallic surfaces [19], [34], and in images and videos [4], [12], [23], [24], [30], [31], [37]. In particular, [19], [34] proposed autoencoder-based anomaly detection methods. An and Cho [2] introduced a variational autoencoder (VAE)-based anomaly detection method. Unlike the autoencoder-based anomaly detection method, which identifies anomalies with reconstruction errors, the VAE-based method identifies anomalies using reconstruction probabilities. There are also generative adversarial network (GAN)-based [3], [13], [25] anomaly detection methods [1], [18], [32]. Schlegl et al. introduced the anomaly GAN (AnoGAN), which uses anomaly scores to detect anomalies in medical images. Following AnoGAN [33], Zenati et al. [36] introduced a model based on BiGAN [10] that learns an encoder network, along with a generator and a discriminator, during training; the encoder maps the input sample to a latent representation, and the detection performance was evaluated. Furthermore, there are adversarial-learning-based anomaly detection methods that use the adversarial loss of a GAN to detect anomalies without a generator [28], [29]. The above methods propose new ways of detecting defects or anomalies in data; however, they are not appropriate for pill data. The main reason is that these methods are not suitable for training on low-variance data, which the pill data is. Furthermore, adversarial-learning-based methods need to train an additional network to detect and localize the anomalies and have too high a capacity for pill data. On the other hand, Du et al. have proposed a change detection algorithm for remote sensing images [11]. The problem that they deal with is somewhat similar to the anomaly/defect detection problem but has a fundamental difference: their problem is to find the difference between two input data samples, while ours is to distinguish defective samples from a set of non-defective samples. In other words, the non-defective samples are not unique, and these samples also form a distribution in the image space. Although various studies have been conducted on defect detection, detecting pill defects remains a challenge and has many issues to handle. As mentioned earlier, defective pills should be detected before they are released onto the market. In the process, each pill should be inspected. However, in the manufacturing system, several pills are packaged together in a single package. To detect defects in pills, a cropped image containing a single pill should be extracted from the image of a package. A naïve method would be to simply segment the pills by thresholding the pixel values and applying mathematical morphology to the result. However, this process might not be successful when the color of the pill is similar to that of the package.
To resolve this problem, we introduce a pill detection module in this paper. After separating the images of individual pills, a defect detection method must be applied to find defects. However, as mentioned earlier, existing autoencoder-based defect detection networks are not the best choice for this purpose. For more accurate results, we introduce the patch division method in this paper. If we align the cropped pill images accurately, the aligned images have low variance. However, most autoencoders are too complex for reconstructing such data and end up being capable of reconstructing unseen (defective) data as well. Hence, we propose a spatially variant convolutional autoencoder based on the newly introduced patch division method, which is designed to be trained only on normal data so that it can only reconstruct normal pills. We apply the patch division method to the patch-wise features extracted from the output of the convolutional encoder network. These features are encoded using the respective patch-wise encoders, which have independent weights and biases for different patches, hence the name spatially variant autoencoder. The proposed network can have lower capacity due to this patch-wise structure, which forces the network to focus largely on local information. Furthermore, the computational complexity of the patch division method is comparable to that of a regular convolution. Therefore, adding the patch division method to an autoencoder or a VAE is not too much of a burden. The spatially variant autoencoder learns the normal data with the patch division method and detects pill defects successfully. The overall structure of the proposed method consists of the pill detection module and the defect detection module, i.e., the spatially variant autoencoder. The pill detection module estimates the pill segments using a deep network. The distance transform [22] and the watershed algorithm [20] are then used to divide the package image into individual pill images. The defect detection module, i.e., the proposed spatially variant autoencoder, is based on conventional autoencoder-like structures but has additional layers for patch division, which make the network largely focus on reconstructing local regions. The proposed networks are based either on a convolutional autoencoder or on a variational autoencoder (VAE); however, we expect that other generative models can also be used here. Our pill defect detection method does not need annotated data. Moreover, the proposed network does not need any additional network to train the overall algorithm, unlike the adversarial methods. Figure 1 shows the overall pipeline of the proposed method. Furthermore, in this paper, we also introduce a new dataset to evaluate our networks. The experiments show that the proposed networks perform better than the other autoencoder-based baseline methods. Furthermore, we compared our method with existing deep-learning-based anomaly detection methods and show that the proposed methods outperform the anomaly detection methods. We also conducted ablation studies to demonstrate that the patch division method indeed improves the defect detection performance. Our contributions can be summarized as follows:
• We propose a pipeline that automates the pill defect detection process based on the pill detection module and the defect detection module.
• We propose the patch division method to lower the capacity of a defect detection network, because the low variance of the data poses challenges in learning only the information of the normal data. As a result, it improves the defect detection performance.

The remainder of this paper is organized as follows: In Section II, we present the background of this work, i.e., the autoencoder and the VAE. In Section III, we explain the pill detection module. In Section IV, we introduce the architecture of the defect detection module. Section V presents the implementation details of our model. Section VI and Section VII detail the experimental results and ablation studies, respectively. The conclusion follows in Section VIII.

II. BACKGROUND

The proposed method is based on autoencoders and VAEs. In Sections II-A and II-B, we explain the concepts of an autoencoder and a VAE, respectively.

A. AUTOENCODER
In this paper, we use autoencoders to differentiate defective data from normal data. The goal of using autoencoders is to train them so that they are only capable of reconstructing the normal data. Autoencoders are a particular structure of neural networks that are used in many problems such as image reconstruction and information encoding. They are often used to learn a meaningful latent representation from input data. An autoencoder consists of an encoder network and a decoder network. The encoder learns the mapping between the input data and a multi-dimensional latent space. On the other hand, the decoder network learns to reconstruct the original input image from the learned feature map of the encoder network. The loss function of an autoencoder is the difference between the original input data and the reconstructed data. Equation (1) maps an input vector x to a latent variable h using the encoder network f_enc:

    h = f_enc(x).  (1)

Equation (2) maps the latent variable h to the reconstructed vector x̂ using the decoder network f_dec:

    x̂ = f_dec(h).  (2)

An autoencoder is usually trained to minimize (3), which is called the reconstruction error:

    L(x, x̂) = ||x − x̂||².  (3)

No label is required in this learning process, so it is called unsupervised learning.

B. VARIATIONAL AUTOENCODER
A VAE is a generative model that approximates a posterior density using variational inference [16]. It is based on an autoencoder-like structure, and the latent variable is assumed to be a random variable. Let us consider a dataset of samples of some continuous or discrete random variable x. The marginal likelihood of the data to be maximized is the sum of the marginal likelihoods of the individual data points:

    log p_θ(x^(1), …, x^(N)) = Σ_i log p_θ(x^(i)).  (4)

Let us introduce a known model q_φ(z|x), i.e., an approximation to the unknown model p_θ(z|x). The marginal likelihood of each data point can be represented as follows:

    log p_θ(x^(i)) = D_KL(q_φ(z|x^(i)) || p_θ(z|x^(i))) + L(θ, φ; x^(i)).  (5)

Because the KL divergence term is greater than or equal to zero, equation (5) can be rewritten as follows:

    log p_θ(x^(i)) ≥ L(θ, φ; x^(i)) = −D_KL(q_φ(z|x^(i)) || p_θ(z)) + E_{q_φ(z|x^(i))}[log p_θ(x^(i)|z)].  (6)

Now, the optimal parameters can be found by solving the problem:

    (θ*, φ*) = argmax_{θ, φ} Σ_i L(θ, φ; x^(i)).  (7)

To train a VAE, the loss function should be differentiable. However, the last term on the right-hand side of equation (6) is not differentiable, because the sampling of z is not a differentiable operation. Kingma and Welling [16] introduced the reparameterization trick to make the sampling step deterministic given an auxiliary noise variable ε, i.e., z = µ + σ ⊙ ε with ε ∼ N(0, I). Finally, z becomes differentiable with respect to (w.r.t.) the parameters (µ, σ).
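For readers who want a concrete reference point, the following is a minimal PyTorch sketch of the two ideas above: a small convolutional autoencoder trained with a reconstruction loss on normal data only, and the VAE reparameterization step. It is an illustration under assumed layer sizes, not the authors' implementation (their full architecture is given in Section V and Table 1).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConvAutoencoder(nn.Module):
        """Minimal convolutional autoencoder: encoder f_enc maps x to h (eq. 1),
        decoder f_dec maps h back to x_hat (eq. 2)."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=7, stride=2, padding=3), nn.LeakyReLU(),
                nn.Conv2d(16, 32, kernel_size=7, stride=2, padding=3), nn.LeakyReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(),
                nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
            )
        def forward(self, x):
            h = self.encoder(x)        # eq. (1)
            return self.decoder(h)     # eq. (2)

    def reconstruction_loss(x, x_hat):
        # eq. (3) in spirit; the paper uses binary cross-entropy instead of MSE (Section V)
        return F.binary_cross_entropy(x_hat, x, reduction="mean")

    def reparameterize(mu, log_var):
        # VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * log_var) * eps

    # One training step on a placeholder batch of normal 160x160 RGB pill crops
    model = ConvAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
    x = torch.rand(5, 3, 160, 160)
    loss = reconstruction_loss(x, model(x))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()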
III. PILL DETECTION MODULE

In the manufacturing process, several pills are packaged in a single aluminum package. Many of these pill packages are then carried on conveyor belts before they are packed in paper boxes. Unfortunately, there can be defects in these products. Thus, before they are put on the market, each pill package should be examined for defective pills. Even if there is only one defective pill in an aluminum package, the entire package should be abandoned. The images of the packages on the conveyor belts can be easily obtained with a camera, so using computer vision and deep learning techniques to detect those defects can be a cost-efficient way of handling this problem. In order to realize this system, we first divide the image of a single package into cropped images containing individual pills so that we can concentrate on each pill. The pre-processing procedure for preparing the training data is three-fold. The pipeline of the data pre-processing part is shown in Figure 2. First, we align the pill package using the principal component analysis (PCA) algorithm (Section III-A). Second, we annotate the pills in the aligned results. Then, we train a segmentation network to segment the pills (Section III-B). Finally, we separate individual pills using the segmentation result (Section III-C).

A. PACKAGE ALIGNMENT WITH PCA
We used similarity transforms to align packages, because the manufacturing environment restricts the experimental conditions such as the camera angle and the location of the camera. The package alignment makes the pill segmentation easier. Applying PCA to the coordinates of positive points in mask images in order to find similarity transformations and align images has been a popular technique. For example, Mudrová and Procházka described two applications of PCA in image processing: one is image compression and the other is image rotation [21]. Recently, Rehman and Lee used PCA to align medical images [26]. We applied PCA to the coordinates of the edges detected by the Canny edge detector [7] to find the principal axes. Then, the input images are aligned based on the principal components of the coordinates. Furthermore, to obtain more accurate principal components, we applied a median filter to denoise the detected edges. Then, edges inside the package region are discarded by checking whether each edge pixel is the left-most, right-most, top-most, or bottom-most among the edge pixels in the same row or column.

B. SEGMENTATION NETWORK
We built a simple segmentation network to separate a package into individual pills. The training data are the aligned pill-package images, and the labels were annotated manually. The network is composed of two convolutional layers and a ReLU layer. The output of the segmentation network is the mask of the detected pills in the input image. Because the data are quite simple, two convolutional layers are sufficient to detect pills in package images. The dimensions of the convolutional layers are 128 and 1, respectively, and the kernel size is 3.

C. DISTANCE TRANSFORM AND WATERSHED ALGORITHM
To divide a package image into individual pill images, we applied the distance transform [22] and the watershed algorithm [20] to the mask. Then, the mask is separated for each pill. We again applied PCA to the coordinates of the mask of each pill to find its principal axes and align each pill image. From the center of each pill, we crop a 160×160×3 pill image, considering the principal axes. Figure 3 shows the result of the pill alignment. After the alignment, the normal pills are aligned with the center. However, defective pills with cracks or peeled surfaces are often not perfectly aligned, because the masks of the defective pills are usually unbalanced.
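The pill separation of Section III-C (distance transform, watershed, and PCA-based alignment and cropping) can be sketched with OpenCV and NumPy roughly as follows. This is an illustrative sketch, not the authors' code; the foreground threshold, dilation settings and border handling are assumed values.

    import cv2
    import numpy as np

    def separate_pills(mask):
        """Split a binary pill mask (uint8, 0/255) into labelled pills using the
        distance transform and the watershed algorithm, as in Section III-C."""
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
        _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
        sure_fg = sure_fg.astype(np.uint8)
        sure_bg = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=3)
        unknown = cv2.subtract(sure_bg, sure_fg)
        _, markers = cv2.connectedComponents(sure_fg)   # seed markers for the watershed
        markers = markers + 1
        markers[unknown == 255] = 0
        markers = cv2.watershed(cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR), markers)
        return markers   # one integer label (>1) per pill, -1 on boundaries

    def align_and_crop(image, pill_mask, size=160):
        """Rotate one pill so its principal axis (PCA on the mask pixel coordinates)
        is horizontal, then crop a size x size patch around the pill center."""
        ys, xs = np.nonzero(pill_mask)
        pts = np.column_stack([xs, ys]).astype(np.float32)
        mean = pts.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov((pts - mean).T))
        major = eigvecs[:, np.argmax(eigvals)]
        angle = float(np.degrees(np.arctan2(major[1], major[0])))
        M = cv2.getRotationMatrix2D((float(mean[0]), float(mean[1])), angle, 1.0)
        rotated = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
        cx, cy, half = int(mean[0]), int(mean[1]), size // 2
        return rotated[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]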
IV. DEFECT DETECTION MODULE BASED ON AN AUTOENCODER WITH PATCH-WISE FEATURES

The proposed spatially variant autoencoder is trained only on normal images. We assume that, because the network is only trained on normal data, it will be unable to reconstruct defective pills correctly. However, if the network has good generalization performance, the defective pills may be reconstructed correctly as well, even though the network is trained only on the normal images. This may be attributed to the high capacity of a deep neural network (DNN). The proposed method effectively minimizes the capacity of the network while maintaining its ability to represent the normal samples accurately, which makes it a good fit for unsupervised defect detection. In the proposed method, the convolved features are divided into patch-wise features, which are then fed to the patch-wise encoders. This method is hereafter referred to as the patch division method. The patch division method effectively suppresses the capacity of the network, compared to when only plain fully connected layers are used, because the patch division method reduces the number of parameters by approximately N_p² times, where N_p is the number of patches. The patch division method forces the network to concentrate on local areas rather than the global area, thereby effectively degrading the generalization ability of the linear layers in the network. Figure 4 shows our network's architecture. The network is basically an autoencoder. The encoder consists of three parts, i.e., the convolutional part, the patch-wise part, and the global part. Likewise, the decoder has corresponding parts that are inverted versions of the above parts. The input size of the proposed network is 160 × 160 × 3, the same as the pill image extracted in Section III. An input image passes through five convolution layers to become an 80 × 80 × 16 feature map. These convolution layers are the convolutional part mentioned above. The convolutional part learns the low-level features of the normal data. Then, we divide the convolved features into 400 disjoint patches. Let us consider a feature map F ∈ R^(W×H×C) that is divided into disjoint patches {P_ij ∈ R^(m×n)}. The pixels of P_ij can be derived from F as

    P_ij(a, b) = F(im + a, jn + b),  a = 0, …, m − 1,  b = 0, …, n − 1,

applied to each of the C channels. In the patch-wise encoding layer, each patch has its own weight and bias. This can be viewed as applying a fully connected layer to each patch. The encoded patch-wise features are then vectorized and concatenated. The concatenated feature vector passes through the global encoding layer to further compress the concatenated features into latent features. Note that, because the patch-wise features have already been compressed by the individual patch-wise encoding layers, this global encoding layer only has to deal with the dimension-reduced versions of the features, which can greatly reduce the capacity of the overall encoder structure. The decoder has a similar structure to the encoder. The latent features pass through the global decoding layer, and they are reshaped again into patch-wise features to which the patch-wise decoding layer is applied. The decoded patch-wise features are reshaped to form an 80 × 80 × 16 feature map and pass through five deconvolution layers, i.e., the convolutional decoding layers, and one sigmoid layer. Table 1 lists the kernel sizes and strides of the layers in the proposed autoencoder (the autoencoder with patch division). The VAE with patch division has a similar structure to the autoencoder except for an additional Gaussian inference step. We refer to the spatially variant fully connected layer as SVFC; SVFC 1 corresponds to the patch-wise encoder and SVFC 2 to the patch-wise decoder.
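A minimal PyTorch sketch of the patch division idea described above: the feature map is split into disjoint patches and each patch gets its own fully connected map (independent weights and biases), followed by a global linear layer over the concatenated patch codes. It is illustrative only and uses the layer sizes quoted in Section V (80 × 80 × 32 features, 4 × 4 patches, 256-dimensional patch codes, 64-dimensional latent), which are assumptions taken from the text rather than the authors' code.

    import torch
    import torch.nn as nn

    class SpatiallyVariantFC(nn.Module):
        """Patch-wise ('spatially variant') fully connected layer: each p x p patch of
        the C-channel feature map has its own weight matrix and bias vector."""
        def __init__(self, channels, height, width, patch, out_dim):
            super().__init__()
            assert height % patch == 0 and width % patch == 0
            self.patch = patch
            self.n_patches = (height // patch) * (width // patch)
            in_dim = channels * patch * patch
            # Independent parameters per patch, unlike a weight-shared convolution.
            self.weight = nn.Parameter(torch.randn(self.n_patches, in_dim, out_dim) * 0.01)
            self.bias = nn.Parameter(torch.zeros(self.n_patches, out_dim))

        def forward(self, x):                       # x: (B, C, H, W)
            b = x.shape[0]
            p = self.patch
            # Split into disjoint patches and flatten each patch to a vector of length C*p*p.
            x = x.unfold(2, p, p).unfold(3, p, p)   # (B, C, H/p, W/p, p, p)
            x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, self.n_patches, -1)
            # One independent linear map per patch: (B, N, in) x (N, in, out) -> (B, N, out)
            return torch.einsum('bni,nio->bno', x, self.weight) + self.bias

    # Example: encode an 80x80x32 feature map with 4x4 patches into 256-d codes per patch,
    # then compress the concatenated codes with a global linear layer (cf. Table 1).
    feat = torch.randn(5, 32, 80, 80)
    svfc1 = SpatiallyVariantFC(32, 80, 80, patch=4, out_dim=256)
    codes = svfc1(feat)                             # (5, 400, 256)
    global_enc = nn.Linear(400 * 256, 64)
    latent = global_enc(codes.reshape(5, -1))       # (5, 64)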
V. IMPLEMENTATION DETAILS

In this section, we explain the implementation details of our network and the experimental settings. We conducted experiments on an Nvidia Titan Xp GPU. We used the binary cross-entropy loss instead of the mean squared error, because the latter can cause blurry outputs. Further, we used the Adam optimizer, where the learning rate and the weight decay were both fixed at 10⁻⁴. All the activation functions were set to leaky ReLU. The size of the mini-batch was five. The number of training epochs was 200. Table 1 shows our network. The size of the input image is 160 × 160 × 3. By passing through the convolutional encoding layers, the dimension of the input features becomes 80 × 80 × 32. The kernel size was seven for all convolutional layers, and the strides were one from Conv 1 (Deconv 2) to Conv 4 (Deconv 5) and two for Conv 5 (Deconv 1). The number of patches was 400, and the width and height of each patch were both set to four, which achieved the best performance as shown in Figure 6. After vectorizing each patch-wise feature, there were 512 channels; these were then encoded into 256 channels by the patch-wise encoding layer. These 400 256-channel vectors were then concatenated and further encoded into a 64-channel vector. The decoder has a similar inverted structure to reconstruct a 160 × 160 × 3 image.

A. COMPUTATIONAL COMPLEXITY AND PROCESSING TIME
In this section, we compare the computational complexities of a regular convolution and the patch division method. Let us consider a feature map F ∈ R^(W×H×C). If we apply a convolution with a k × k kernel whose output channel size is C', the computational complexity of the convolution is O(WHCC'k²). On the other hand, if we apply the patch division method with patch size m × n, the number of patches becomes (WH)/(mn), and the fully connected operation for each patch takes O(mnCD), where D is the dimension of the output vectors. Accordingly, the computational complexity of the patch division method becomes O(WHCD). Depending on the value of D, the computational complexity and the output size of the patch-wise layer can differ. If D = C'k², the complexity is the same as that of the previous convolution; however, the size of the output becomes (WH)/(mn) × C'k² = WHC' × k²/(mn), compared with WHC' for the previous convolution, so the output size of the patch-wise layer is either larger or smaller depending on the ratio between k² and mn. If D = mnC', on the other hand, the output size becomes (WH)/(mn) × mnC' = WHC', which is the same as that of the convolution. In this case, the complexity becomes O(WHCC'mn), which is again either larger or smaller than that of the convolution depending on the ratio between k² and mn. Since k, m, and n are usually small positive integers of around three to seven, we can say that both operations have comparable complexities. Table 2 shows the processing time of the proposed methods. As shown in the table, all the training took around one hour. Here, we refer to the patch division as PD. Note that the plain autoencoder (or VAE) had a similar structure to that described in Table 1, replacing the SVFC layers with convolutional layers with the same output sizes. k was set to seven and both m and n were set to four, so the computational complexities of the SVFC layers were about three times smaller. However, the table also shows that the plain autoencoder (or VAE) was faster, and we conjecture that this has to do with the convolution operation being optimized on GPUs with CUDA.
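The complexity comparison above can be checked with a few lines of arithmetic; the sketch below uses the values quoted in the text (k = 7, 4 × 4 patches, 80 × 80 × 32 features) and the case D = mnC', in which the output sizes of the two layers match.

    # Rough operation counts for one layer, following Section V-A.
    def conv_ops(W, H, C, C_out, k):
        # Regular k x k convolution with C_out output channels: O(W*H*C*C_out*k^2)
        return W * H * C * C_out * k * k

    def patch_fc_ops(W, H, C, m, n, D):
        # Patch-wise fully connected layer: (W*H)/(m*n) patches, each O(m*n*C*D) -> O(W*H*C*D)
        return ((W * H) // (m * n)) * (m * n * C * D)

    W, H, C, C_out, k, m, n = 80, 80, 32, 32, 7, 4, 4
    D = m * n * C_out                     # case D = m*n*C': output size matches the convolution
    ratio = conv_ops(W, H, C, C_out, k) / patch_fc_ops(W, H, C, m, n, D)
    print(f"conv / patch-division op ratio ~ {ratio:.2f}")   # ~ k^2/(m*n) = 49/16, i.e. about 3x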
VI. EXPERIMENTAL RESULTS

A. DATASET AND EVALUATION METRIC

1) DATASET
We controlled the image acquisition process to have identical conditions for illumination, focal length, and the distance between the pills and the camera. For the convenience of segmentation, the background of the data was set to a black-colored paper. We selected three different types of pills to demonstrate the efficacy of our networks on diverse data. Data 1 was the easiest one. The difference between normal and defective data was very distinctive, because the pill was white inside and green outside, as shown in Figure 3. Data 2 was the hardest one in terms of defect detection, as shown in Figure 3, because there was major light reflection on the package, unlike the other datasets. Data 3 was the hardest data for the segmentation task, because the colors of the package and pills were very similar. To produce defective pills, we manually repackaged some pills after breaking them by hand. Each dataset had 2000 segmented normal pill images that were used to train the spatially variant autoencoder. An additional 500 normal and 500 defective images were used for validation and testing. These sets were randomly sampled and fixed, and cross-validation was not used in the experiments. We compared the performance of our networks with that of a plain autoencoder and a plain VAE. Although the pill dataset is quite small compared to what is usually used in deep learning, the variance in the pill data is also small due to the restricted data acquisition process. Therefore, it was enough to learn the features of the pill data using the patch division method.

2) EVALUATION METRICS
The receiver operating characteristic (ROC) curve and the area under the curve (AUC) were used for evaluation. To find the optimal parameters, such as the number of patches, the number of channels, and the kernel size, we conducted a random search over a few dozen cases. Following the random search, we chose a few parameters that had high detection performance and conducted narrow tuning around them to identify the best parameters. Table 3 shows the AUC values for each dataset; the experiments were conducted on AE, VAE, AE with PD, and VAE with PD. The values in the table are the average AUC values and standard deviations of five trials. Here, (A) indicates that the same hyper-parameters were used for all data, while (O) means that different optimal hyper-parameters were selected for each dataset. When we compared AE (A) with AE with PD (A), the smallest increase in the AUC value was 2.21% on Data 1, and the greatest increase in the AUC value was 4.1% on Data 2. Moreover, note that AE with PD (A) was not less effective than AE (O), even though AE (O) was much more finely tuned. Similarly, the AUC values of VAE with PD (A) were greater than those of VAE (A). The smallest increase in the AUC value was 0.69% on Data 1, and the greatest increase in the AUC value was 5.07% on Data 2. Furthermore, VAE with PD (A) demonstrated better performance than VAE (O), although VAE (O) was much more finely tuned. The table shows that the proposed method does not yield as much of an increase in performance for Data 1 as for the other datasets. This is because Data 1 is the easiest data for detecting defects, as mentioned in Section VI-A. Accordingly, defects in Data 1 can be easily detected even when patch division is not used.
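The AUC evaluation described above amounts to ranking test images by an anomaly score; a minimal sketch using scikit-learn, with the per-image reconstruction error as the score, could look like the following. It is illustrative only; model is assumed to be an autoencoder such as the sketch given for Section II.

    import torch
    import torch.nn.functional as F
    from sklearn.metrics import roc_auc_score, roc_curve

    @torch.no_grad()
    def reconstruction_errors(model, images):
        """Per-image anomaly score: mean binary cross-entropy between input and reconstruction."""
        model.eval()
        x = images                                   # (N, 3, 160, 160), values in [0, 1]
        x_hat = model(x)
        bce = F.binary_cross_entropy(x_hat, x, reduction="none")
        return bce.flatten(1).mean(dim=1).cpu().numpy()

    def evaluate(model, test_images, labels):
        """labels: 0 = normal, 1 = defective; higher score = more likely defective."""
        scores = reconstruction_errors(model, test_images)
        auc = roc_auc_score(labels, scores)
        fpr, tpr, _ = roc_curve(labels, scores)
        return auc, fpr, tpr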
Figure 5 shows the ROC curves for each dataset for all the proposed methods. The defectiveness of the result obtained using the plain VAE is measured either by the reconstruction error or by the norm of the latent features. If the norm of the latent features is close to 0, it can be interpreted to mean that the input image is normal, because the network is trained on normal data and the latent features of a VAE are supposed to be standard Gaussian. For all the other networks, the reconstruction error was used. Figure 5(a) and Figure 5(c) show that the detection performance of the proposed networks was better, compared to the other networks, on all datasets. The ROC curves on all the data show that using the norm of the latent variable in the VAE is also useful for detecting defects. When the latent variable (ẑ) of the whole training data follows N(μ̂, σ̂) and the latent variable (z) of a given test image follows N(µ, σ), we can translate the center of the latter distribution to zero by subtracting μ̂. As shown in Figure 5, measuring the norm of the VAE latent variable can be used for detecting defects on pills but shows lower detection performance than measuring the reconstruction error. We conjecture that the distribution of the latent variable may not be exactly Gaussian even if it is enforced during training, because it can be hard to produce a perfect Gaussian distribution. Figures 6 to 8 show the quantitative experiments of hyper-parameter tuning on the patch, channel, and kernel sizes. These experiments were conducted only on the autoencoder with the patch division method. The parameters that were finally selected are shown in bold. Figure 6 shows the performance of the proposed network for different patch sizes. We compared patch sizes from 2 × 2 to 8 × 8. The performance was similarly good when the patch size was either four or six; however, we chose four, because the average performance was at its best when the patch size was four. Figure 7 shows the performance of the proposed network for different numbers of channels. Note that the numbers of channels of Conv 5 and Deconv 1 were set to twice those of the other convolutional layers in all cases. As shown in the figure, the higher the number of channels, the better the performance becomes. We tested up to 16 channels due to the memory limitation. Figure 8 shows the performance for different kernel sizes, which were three, five, seven, and nine. Although the performances were similar for all the kernel sizes on Data 1 and Data 3, the network achieved the best performance on Data 2 when the kernel size was seven. (Figure 9, discussed in Section VII, shows the ablation study: the orange, green, and blue lines indicate the ROC curves of the proposed network, the network without the global encoding-decoding part, and the network without patch division and global encoding-decoding, respectively.) Table 4 shows a comparison with the existing anomaly detection networks. The defect detection performance of f-AnoGAN [32] is lower than 0.6, which indicates that it is not an appropriate algorithm for detecting defects in pills, since its detection performance is nearly random (0.5). The adversarially learned one-class classifier for novelty detection (ALOCC) [28] did somewhat better than f-AnoGAN but shows much lower detection performance than the proposed methods. These generative-model-based methods are designed for general data such as CIFAR-10, SVHN, and KDD99. Accordingly, these methods are not appropriate for learning the pill data, which has very low variance. On the other hand, the proposed methods with the patch division method learn spatially variant features from the pill data with minimal capacity and, as a result, succeed in detecting defective samples.
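The latent-norm score for the VAE discussed above (Figure 5) can be sketched in the same spirit. This is illustrative only; it assumes an encoder that returns the latent mean and centers the test latents with the mean latent of the training data.

    import torch

    @torch.no_grad()
    def latent_norm_scores(encoder, train_images, test_images):
        """Anomaly score = distance of a test latent from the center of the training latents."""
        mu_train = encoder(train_images)              # (N_train, latent_dim) latent means
        center = mu_train.mean(dim=0, keepdim=True)   # estimate of mu_hat from normal data
        mu_test = encoder(test_images)                # (N_test, latent_dim)
        return torch.norm(mu_test - center, dim=1)    # larger norm -> more likely defective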
Table 5 shows the data samples and the reconstruction result for each sample. As mentioned earlier, the proposed network was only trained on normal data; therefore, it was unable to reconstruct the defective data correctly. Because we trained only on normal data, our method yielded almost perfect reconstruction results when the input image was normal. However, when the input image was of a defective pill, the output was quite blurry and more similar to normal data.

VII. ABLATION STUDIES

The most important parts of the proposed method are the pill detection module and the defect detection module, and the most essential parts of each module are the segmentation network and the patch division method, respectively. In this section, we demonstrate the effects of the segmentation network and the patch division method. Thus, we conducted ablation studies to check the importance of the segmentation network and of each part of the spatially variant autoencoder, i.e., the global encoding layer, the global decoding layer, and the patch division layer.

A. SEGMENTATION NETWORK
In this paper, we used the segmentation network to detect pills in a package. Table 6 shows the defect detection performance both when the input packages were segmented with the segmentation network and when they were not. Without the segmentation network, the packages were simply segmented by thresholding the color values of the pills and then applying mathematical morphology to the result. The detection performance without the segmentation network on Data 1 was 51.51%. However, with the segmentation network, the detection performance was 97.40%. Similarly, the differences in the detection performance between using the segmentation network or not on Data 2 and Data 3 were 34.23% and 15.52%, respectively. The average improvement in the detection performance was 32.19%. Thresholding was not very effective, because it was too simple to detect pills. For example, for Data 3, it was hard to separate the pills from the package, because they had similar colors. Other segmentation methods, such as GrabCut [27] and GraphCut [6], [14], could be used instead of thresholding and mathematical morphology; however, they require user input during the test phase. Because the purpose of this paper is to build a complete system for pill defect detection, we consider these to be out of its scope.

B. PATCH DIVISION
We conducted an ablation study on three cases to verify the effectiveness of the patch division method. Figure 9 shows the pill defect detection performance under different conditions. Compared to our network, the performance of the network without patch division deteriorated. Furthermore, the performance of the network without global encoding-decoding also deteriorated. This experiment shows that global encoding-decoding is also important. With the global encoding-decoding part, our network can learn some additional global information that is helpful for detecting defects. Furthermore, patch division makes the network learn local information, which can reduce the overall capacity of the network.

VIII. CONCLUSION

In this paper, we introduced the pill detection module and the defect detection module based on the newly proposed spatially variant autoencoder.
We demonstrated that the proposed patch division method could improve the defect detection performance. We experimented with different parameters, such as the kernel, channel, and patch sizes, and selected the best hyper-parameters. Although we conjecture that a larger number of channels would result in better performance, due to the lack of memory space, the largest channel size we could afford was 16. We only tested the patch division method on an autoencoder and a VAE, but we expect that the patch division method can also improve the detection performance of GAN-based methods. We will explore this further in future work.
v3-fos-license
2020-08-27T09:03:38.132Z
2020-08-21T00:00:00.000
221491795
{ "extfieldsofstudy": [ "Materials Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1016/j.dib.2020.106215", "pdf_hash": "dd121ed45cfc643bb283a5812f84d07f3ee06518", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2867", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Environmental Science" ], "sha1": "6eb07adf2b2de23487fc6f539d54a4e08ed02589", "year": 2020 }
pes2o/s2orc
Data-set collected during turning operation of AISI 1045 alloy steel with green cutting fluids in near dry condition

This work explicates the data collected during the turning of AISI 1045 alloy steel components in near dry condition with emulsified cutting fluids prepared from cooking oils such as palm oil and peanut oil. The base oils were tested for their relative density, viscosity and flash point following ASTM standards. Highly influencing turning factors were identified, and the experiments were planned and arranged using Taguchi's L27(3^5) orthogonal array; the experiments were repeated to reduce errors. The quality aspect of the machined components and the machining interface temperature were observed as the outcomes. Prediction models were created for the experiments through regression analysis.

Specifications
The tool-work interface temperature was acquired with an infrared thermometer (BEETECH MT-4) at a distance of approximately 1 ft, and the quality aspect of the turned samples was tested using a surface roughness tester (Mitutoyo SURFTEST SJ201).
- Data format: Raw, analyzed.
- Parameters for data collection: Highly influencing turning factors were identified, namely spindle speed (rpm), feed (mm/rev), depth of cut (mm), tool corner radius (mm) and cutting fluid.
- Description of data collection: Machining of the AISI 1045 steel components was performed according to the above control factors in a high-speed CNC lathe; the interface temperature was measured during machining using an infrared thermometer, and the quality aspect of the turned steel components was measured using a surface roughness tester.

Value of the Data
• The data presented in this article convey the feasibility of using eco-friendly green cutting fluids in the machining of steel alloy components.
• The data presented in this article show how eco-friendly cutting fluids can be prepared and can give a lead to future researchers in this field.
• The data presented here can be used by researchers in the African continent and even the whole world to compare the machining characteristics of AISI 1045 alloy steel with other edible or non-edible vegetable-oil-based cutting fluids.
• The data can be used to study the machining characteristics of AISI 1045 alloy steel machined with eco-friendly cutting fluids in near dry conditions.

Data Description
Vegetable-oil-based emulsified cutting fluids would ultimately reduce the contribution of the cutting fluid to the total manufacturing cost and would eliminate the pollution caused by oil waste released into the environment [1-4]. The data explicated in this article concern the production of quality turned components in the turning of AISI 1045 alloy steel with emulsified cutting fluids prepared from eco-friendly natural oils such as palm oil and peanut oil. The properties of the oils, such as relative density, viscosity and flash point, were evaluated following ASTM standards. Servocut 'S' is the mineral-based cutting fluid used alongside the vegetable-based emulsions. The quality aspect of the machined components and the machining interface temperature are recorded and presented. The optimum cutting fluid, in terms of both the quality aspect and the control of the tool-work interface temperature, is presented in the form of charts. The palm oil and peanut oil were tested for their relative density, kinematic viscosity and flash point, following the test standards ASTM D5355, ASTM D445 and ASTM D92 respectively [1]. The properties of the oils are given in Table 1.
An anionic emulsifier was used as an additive for the preparation of the water-dispersible oil formulation [1]. The emulsifier is a pale-yellow to clear viscous liquid with a specific gravity of 0.96. The vegetable-based emulsions formed were homogeneous and stable and did not split during continuous usage. The cutting fluid compositions are given in Table 2. The highly influencing turning control factors, namely spindle speed (n), feed rate (f), depth of cut (d), tool corner radius (r) and cutting fluid (C), were selected for the trials, and their levels are indicated in Table 3. The quality attributes of the 'smaller-the-better' type [1,5] measured in this research work were the surface roughness (Ra) of the machined samples and the tool-work interface temperature (T) during machining. The signal-to-noise ratio (SNR) for the yield responses was computed by Eq. (1) for each machining condition, and the corresponding data are given in Table 4:

    SNR = -10 log10( (1/n) Σ_i y_i² ),  (1)

where i = 1, 2, …, n (here n = 5) and y_i is the measured response of the i-th repetition. The F-test and P-test were conducted based on the responses and the control factors [6]. Tables 5 and 6 show the outcomes of the ANOVA. The regression value is below 0.05 for both response factors, demonstrating that the created model is within the 95% confidence limit [1,5]. The P-value is determined at the 95% confidence limit; a P-value below 0.05 indicates a significant effect of the control factors on the responses.

Prediction Model
By means of regression analysis with the aid of the MINITAB 17 statistical software, the effect of the turning factors on the responses was modeled and presented in Eq. (2) and (3). For the above mathematical models, it was found that r² = 0.98 for surface roughness and r² = 0.99 for tool-work interface temperature, where 'r' is the correlation coefficient and the value of 'r²' should lie between 0.8 and 1 [7]. The predicted values are very close to the measured data. For reliable statistical analyses, error values must be smaller than 20% [8]. The regression model data are presented graphically for both outcomes, surface roughness and tool-work interface temperature, in Fig. 1 and Fig. 2 respectively.

Effectiveness of cutting fluids
The effectiveness of the cutting fluids used in this research work and the average data of surface roughness and tool-work interface temperature are depicted in Fig. 3 and Fig. 4. The raw data associated with Fig. 3 and Fig. 4 can be found in the supplementary file.

Design, materials, and methods
The experiments were arranged according to Taguchi's orthogonal array in a CNC turning center (LMW Smart Junior). The turning operation was performed on AISI 1045 cylindrical components (φ50 mm × 120 mm) using a PCLNR tool holder and CNMG (diamond-shaped) titanium nitride finishing inserts with three different tool corner radii: 0.4 mm, 0.8 mm and 1.2 mm. Emulsified cutting fluids prepared from mineral oil, palm oil and peanut oil were used as the coolants/lubricants in this research. Throughout the experimentation, a steady emulsified-cutting-fluid flow rate of 44.8 ml/h and a steady pressure of 5 bar were maintained for the near-dry cooling system. While turning the steel samples, the tool-work interface temperature was measured using an infrared thermometer (BEETECH MT-4) at a distance of approximately 1 ft. The quality aspect of the turned samples was tested using a surface roughness tester (Mitutoyo SURFTEST SJ201).
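As a small computational illustration of the smaller-the-better formula in Eq. (1), the following sketch computes the SNR for one machining condition; the response values shown are placeholders, not the measured data.

    import numpy as np

    def snr_smaller_the_better(y):
        """Taguchi smaller-the-better signal-to-noise ratio: SNR = -10 * log10(mean(y_i^2))."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    # Placeholder responses for one machining condition (n = 5 repetitions), e.g. Ra in micrometres
    ra_repeats = [1.82, 1.75, 1.90, 1.78, 1.85]
    print(f"SNR(Ra) = {snr_smaller_the_better(ra_repeats):.2f} dB")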
Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this article.
v3-fos-license
2019-07-22T22:31:08.118Z
2019-06-21T00:00:00.000
197962749
{ "extfieldsofstudy": [ "Physics", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1088/1755-1315/272/3/032042", "pdf_hash": "0526b612f0c268c5a809f790b5616f9c3856b3aa", "pdf_src": "IOP", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2868", "s2fieldsofstudy": [ "Geology" ], "sha1": "432369fa4f7dc6b5e6797991ab0ed47cb5cfebee", "year": 2019 }
pes2o/s2orc
Content of Microelements in Brown Coals of Transbaikal Region

Coal deposits containing a complex of major valuable microelements, studied to varying degrees, have been identified in the Transbaikal territory. The analysis of geological material on the coal deposits of the Transbaikal region demonstrates great prospects for the prospecting and surveying of germanium-bearing coals in this territory. Within the group of rare elements, it is expedient to consider gallium and beryllium, which occur in local coals in increased and mutually associated concentrations. The content of these elements in the coals is caused by the spatial proximity of coal and rare-metal deposits. The latter include commercial reserves of beryllium and form part of the Transbaikal beryllium-bearing province of Russia. It is noted that some coal deposits of the Transbaikal region have increased contents of rare-earth elements. Increased concentrations of noble metals have been established in many coal assays from various deposits, which is explained by the persistent gold content within the Transbaikal metallogenic province. Data are presented on the content of valuable ore elements in the coal deposits of the Transbaikal region, the regularities of their distribution depending on the geochemical features of the metallogenic provinces, the conditions of their accumulation and incorporation into various components of the coals, as well as information on the localization of ore elements and their associations. The analysis of these data provides a geological and technological characterization of metal-bearing coals as a promising complex mineral raw material. Rare and dispersed elements, non-ferrous, noble, rare-earth and radioactive metals with concentrations of less than 0.1% are considered microelements in coals. According to the International Organization for Standardization, 84 elements belonging to various groups of the periodic system have been detected in coals. Within the detection limits of mass analyses, 35-40 elements are consistently found in the coals of Russian deposits.

Introduction
The analysis of extensive material shows that the concentration of most microelements in coals and their waste is at the level of percent abundance. At the same time, it has been established that the distribution of microelements in coals is extremely uneven. The microelement content varies widely within a deposit, and even within one coal bed and coal-bed crossing, reaching very high concentrations of production value at certain local sites [9,10]. Coals are now the main source of germanium in the world. Uranium is recovered from coals and carbonaceous rocks on an industrial scale. Gallium and molybdenum are also recovered from coals profitably. At the contents established to date, it is possible to utilize beryllium, scandium, boron, lead and zinc. There are prerequisites for identifying in coals concentrations of selenium, vanadium, silver, gold, rhenium and other elements that are of production interest [7,8,12]. In general, microelements in coals are insufficiently studied. At present, no more than 5-10% of the elements of the whole complex have been estimated, and only 2-3% are being developed. Such low use can be explained by the weak geological and geochemical study of coals and the small volume of technological research. Therefore, many microelements are not used and are lost forever, contaminating the environment.
In the territory of Transbaikal region the coal deposits containing a complex of valuable microelements having various degree of study are indicated as a result of the analysis of materials of geological reports on exploration. Germanium-bearing coals deposits are held special position among them [1, 2]. Germanium-bearing coals Brown coals in Transbaikal region are characterized by the increased and high content of germanium. Tarbagatai germanium-coal deposit is the industrial installation on mining of germanium raw materials. The analysis of geological material on coal deposits of Transbaikal region demonstrates great prospect for search and surveying of germanium-bearing coals in this territory [23,24,25]. Inferred reserves and expected resources of germanium in coals in this region are distributed as follows: 1) The germanium reserves adopted by State Commission on Mineral Reserves -380 tones; 2) Reserves are counted but were not adopted -420 tones; 3) Expected resources -650 tones. Total reserves and expected resources of germanium in coals are now 1450 tones. They have very great density in relation to the active reserves of germanium in Russia and provide to Transbaikal region the leading position. The coals containing germanium in the territory of the region are propagated widely. The strip of germanium-bearing coals from the East on the West is stretched on 700 km, and from the South on the North on 600 km forming the extensive area about 20 thousand sq.km. The high and increased content of germanium in coals, a wide propagation of deposits in the territory of the region, larger prospects on opening of new local sites, blocks and beds allow to allocate the largest Transbaikal germanium-coal metallogeniс province in Russia [1, 2, 11]. Infrequent and diffuse elements Gallium and beryllium at infrequent elements' group is expedient to consider which are contained in local coals in the increased concentration. The average background content of gallium in coals is 10 g/t, local-high is 30 g/t and the limiting is 500 g /t of coal. The percent abundance of gallium in clay rocks is 30 g/t. Background coefficient of concentration to percent abundance is 0,3 g/t. Gallium can be considered as potentially valuable element in coal. It is the constant companion of germanium. Properties of germanium and gallium are very close. The minimum content of gallium is accepted 20 g/t counting on dry coal and in ash of coals is accepted 100 g/t for assessment of its use in the industry. Gallium it is propagate in coals extremely nonuniformly, on certain sites forming the local increased concentration also as other ore elements [13, 15,16]. Region's coals contain the increased concentration of gallium connected with a germanium. The average content of gallium in ashes of germanium-bearing coals is 77 g/t and in not germaniumbearing coals is 22,8 g/t. The largest content of gallium in coals belongs to the richest Tarbagatai germanium-coal deposit. It is established that the average ratio germanium-gallium to germaniumbearing coals is 1:0,2 and in not germanium-bearing coals is 1:3,5. Gallium differs in wide propagation in coals. The coefficient of occurrence is equal to 80-100% [1, 10, 18]. Brown coals of Transbaikal region have the increased content of beryllium. Presence of the last in coals isn't casual and caused by space exposure of coal and rare-metal deposits. The last include the production reserves of beryllium and they are a part of the Transbaikal beryllium-bearing province in Russia. 
Concentration of beryllium has practical value. Content of beryllium in ashes of germaniumbearing coals of the Tarbagatai and Mordoysky deposits is equal to 28 and 50 g/t. Content of beryllium in ashes of coals of certain local sites and beds of following deposits -Krasnochikoysky is 27 g/t, Zashulansky is 31-45 g/t, Pogranichny is 25-64 g/t, Badinsky is 38 g/t, Burtuysky is 21 g/t. High contents of beryllium are noted in coals of the Chitkandinsky deposit 300 g/t [25]. Uranium mineralization is typical for the Transbaikal metallogenic province. Industrial deposits and uranium manifestation are known in limits of the Transbaikal metallogenic province. As some large uranium deposits are closely connected with coals (the USA, Germany, Sweden, etc.), it is possible also in the Transbaikal region presence of high concentration of uranium in coal deposits. Urtuysky coal deposit located near Streltsovsky uranium unit includes coal reserves with the increased high content of germanium. The province has prospect for the uranium-coal deposits which are especially close industrial uranium deposits [14]. The increased high content of niobium, strontium, borum is noted on certain local sites of coal deposits in the region. However it is possible to draw any conclusions on their using only after conducting special researches because of low study of their distribution in coals, forms of stay and lack of technology solutions on extraction of these elements [3,14,20]. Rare-earth elements As a rule the average background content of rare-earth elements in coals, is less than their percent abundance in clay rocks. Background coefficient of concentration of the main rare-earth elements to percent abundance for scandium is 0,3 g/t for lanthanum is 0,04 g/t for yttrium is 0,3 g/t and for ytterbium is 0,3 g/t. Rare-earth elements are propagated widely in coals despite insignificant contents. Some coal deposits of Transbaikal region have the increased contents of rare-earth elements. Kharanorskoye and Urtuyskoye of brown coals deposits especially differ on the content of the basic rare-earth elements. The high contents of rare-earth elements are noted in ashes of coals of the Nerchugansky deposit and the sum of rare-earth elements is 800 g/t. It should be noted that rare-earth and other rare elements are still not revealed on many coal deposits. One of the reasons is the use not enough precise and sensible methods of the analysis [17,19]. Noble metals Au, Ag and platinoids are established in coals in group of noble metals. Data on determination their concentration and placement aren't enough and. The available analyses on noble metals confirm their propagation in coals extremely nonuniformly. It complicates identification of regularities of their placement, formation and the decision on the prospects of their use. The average background content of Au in coals is 0.01 g/t, local-high is 1-5 g/t and the limiting is 40 g /t of coal. The percent abundance of Au in clay rocks is 0.008 g/t., background coefficient of concentration to percent abundance is 0,3 g/t, Au can be considered as potentially valuable element in coal. Conditions for quantitative studying of aurum's propagation arise at his content in coals at 0,02 g/t and at 0,1 g/t in ashes of coals. The raised aurum-bearing is established in many assay of coal of various deposits that is caused by constant aurum content within the Transbaikal metallogenic province. 48 from 104 ore units have accurate geochemical specialization on Au. 
At the Mordoysky deposit, Au was determined by assay in 48 samples and was detected in 37 of them (an occurrence coefficient of about 75%), with gold contents in coal ash ranging from 0.04 to 0.6 g/t. At the Ureysky deposit, gold contents in coal ash of 0.04 to 0.15 g/t were found in 52 of 96 samples. Visible gold was found in core samples from a borehole in the Tigninsky coal bed on the southern flank of the Tarbagatai deposit. To determine Au and Ag in the coals of the region, samples were taken from 10 coal deposits. In total, 28 samples were collected; they showed contents ranging from traces of Au and Ag up to 34 g/t in coal ash, and only two samples were barren. Neutron-activation and assay analyses established high (0.1-0.85 g/t) concentrations of Au in composite coal samples associated with increased contents of rare elements at the Kharanorsky deposit. Here the gold concentration is 10-86 times higher than the average background values for the coals of the Transbaikal region. The average silver content in the coals of the Kharanorsky deposit is low, at 0.03 g/t. The level of concentration and the distribution of Au in coals make the Transbaikal region a candidate for verification and audit work on gold, with the aim of identifying locally enriched sites and assessing them for by-product mining [2,5,6].
Conclusion
The analysis of the data on the contents of valuable ore elements in the coal deposits of the Transbaikal region, of the regularities of their distribution depending on the geochemical features of the metallogenic provinces, of the conditions of their accumulation and incorporation into the various components of the coals, and of the information on the localization of ore elements and their associations provides a geological and technological characterization of metal-bearing coals as a prospective complex mineral raw material.
v3-fos-license
2018-12-20T14:03:11.199Z
2018-12-27T00:00:00.000
56477542
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/path.5201", "pdf_hash": "9b8e3485a05459121b57c050972bff6b2218fb0f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2869", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "9b8e3485a05459121b57c050972bff6b2218fb0f", "year": 2018 }
pes2o/s2orc
shRNA‐mediated PPARα knockdown in human glioma stem cells reduces in vitro proliferation and inhibits orthotopic xenograft tumour growth Abstract The overall survival for patients with primary glioblastoma is very poor. Glioblastoma contains a subpopulation of glioma stem cells (GSC) that are responsible for tumour initiation, treatment resistance and recurrence. PPARα is a transcription factor involved in the control of lipid, carbohydrate and amino acid metabolism. We have recently shown that PPARα gene and protein expression is increased in glioblastoma and has independent clinical prognostic significance in multivariate analyses. In this work, we report that PPARα is overexpressed in GSC compared to foetal neural stem cells. To investigate the role of PPARα in GSC, we knocked down its expression using lentiviral transduction with short hairpin RNA (shRNA). Transduced GSC were tagged with luciferase and stereotactically xenografted into the striatum of NOD‐SCID mice. Bioluminescent and magnetic resonance imaging showed that knockdown (KD) of PPARα reduced the tumourigenicity of GSC in vivo. PPARα‐expressing control GSC xenografts formed invasive histological phenocopies of human glioblastoma, whereas PPARα KD GSC xenografts failed to establish viable intracranial tumours. PPARα KD GSC showed significantly reduced proliferative capacity and clonogenic potential in vitro with an increase in cellular senescence. In addition, PPARα KD resulted in significant downregulation of the stem cell factors c‐Myc, nestin and SOX2. This was accompanied by downregulation of the PPARα‐target genes and key regulators of fatty acid oxygenation ACOX1 and CPT1A, with no compensatory increase in glycolytic flux. These data establish the aberrant overexpression of PPARα in GSC and demonstrate that this expression functions as an important regulator of tumourigenesis, linking self‐renewal and the malignant phenotype in this aggressive cancer stem cell subpopulation. We conclude that targeting GSC PPARα expression may be a therapeutically beneficial strategy with translational potential as an adjuvant treatment. © 2018 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of Pathological Society of Great Britain and Ireland. Introduction Gliomas form the most common group of primary central nervous system (CNS) tumours, with an incidence of 6.6 per 100 000 individuals/year [1]. A total of 50% of adult gliomas are glioblastomas, which are associated with poor clinical survival [2,3]. The median survival is 15 months in the setting of a clinical trial [4,5] and 12 months using current treatment regimens [1,6,7]. The peroxisome proliferator-activated receptors (PPARs) are ligand-activated transcription factors with diverse metabolic functions [8]. PPARα activates mitochondrial and peroxisomal fatty acid oxidation and ketogenesis and inhibits glycolysis and fatty acid synthesis [9][10][11]. Our previous work has shown that the PPARA gene and its protein product are significantly overexpressed in IDH-wild type primary glioblastomas and that high PPARA expression functions as an independent prognostic biomarker [12]. This finding has been independently cross-validated in the Chinese Glioma Genome Atlas [13]. Stem-like cells have been identified in glioma in vitro models [22,23] and glioma stem cells (GSC), with the defining properties of self-renewal, multi-potency and in vivo tumourigenicity being isolated from human glioblastoma samples [24][25][26]. 
GSC are considered responsible for tumour recurrence and treatment failure [27,28]. Karyotypically normal, untransformed (foetal) neural stem cells (NSC) share many features with patient-derived GSC [29] and are ideal experimental controls [30]. In order to improve our understanding of GSC biology, the key regulatory pathways driving the proliferation of this cancer stem cell population need to be understood. Identification of factors that distinguish NSC from transformed GSC may lead to new therapeutic agents designed to inhibit neoplastic growth with minimal toxicity to the (adult) NSC compartment [31]. Several studies to date suggest that PPARα signalling contributes to the proliferation of glioblastomas [12,32]. However, the role of PPARα expression in human GSC populations is unknown. In this study, we tested the hypothesis that PPARα expression contributes to the malignant phenotype of GSC. We used RNA interference approaches to establish the role of PPARα in maintaining the properties of GSC. Cell culture The human GSC (G144 and G26) and NSC (U5 and U3) cell lines (kind gifts from Dr Steve Pollard, University of Edinburgh) were cultured as monolayers in serum-free basal media [26,29]. HEK293T (human embryonal kidney) cells (Sigma, St. Louis, MO, USA) used for producing lentiviral particles were cultured in DMEM (10% FBS and 1× non-essential amino acids). All cell lines were cultured in 5% CO 2 at 37 ∘ C. Protein and RNA extraction Total protein was extracted from cell lines using Milliplex lysis buffer (Millipore, Burlington, MA, USA) and quantified using a Qubit ® Protein kit and fluorometer (Life Technologies, Carlsbad, CA, USA). RNA was extracted using an RNeasy ® Plus Mini Kit (Qiagen, Hilden, Germany) and the QIAcube ® platform. RNA was quantified using a NanoDrop1000 spectrophotometer (ThermoFisher Scientific, Waltham, MA, USA). Analysis of GSC and NSC accessioned microarray data Array data derived by Pollard et al (GSE15209) [26] was accessed from https://www.ncbi.nlm.nih.gov/geo/ query/acc.cgi. Data analysis was performed using Partek Genomics Suite v.6.16.0812 (Partek, St. Louis, MO, USA) and normalised using GC-RMA. Differentially expressed genes were analysed using an ANOVA. The false discovery rate was set at an FDR-corrected p value of <0.05 with a 1.5-fold expression change cut-off. In vitro cell proliferation studies Cells were plated at 420 cells/mm 2 and cultured for 72 h. The total cell number for each replicate for each line was counted. Cells were re-plated at 420 cells/mm 2 , and the experiment was repeated every 72 h for 15 days. The fold increase in cell number over day 0 was calculated using the mean value of each technical replicate for each cell line at each independent time point. Ki67 and caspase-3 fluorescence immunocytochemistry was carried out as described previously [36] using antibodies listed in supplementary material, Supplementary materials and methods. CellTrace™ Violet proliferation studies were carried out according to the manufacturer's instructions (Thermofisher). The proliferation control and experimental samples were acquired on a Novocyte 3000 Flow Cytometer (Acea Biosciences, San Diego, CA, USA). Data were analysed using ModFit LT v3.3 software (Verity Software House, Topsham, ME, USA). Cell cycle analysis was carried out on the platforms described above using 5 μM Draq5 nuclear stain (BioLegend, San Diego, CA, USA) (15 min incubation) and cells fixed in 4% paraformaldehyde (PFA). 
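The microarray analysis described above selects differentially expressed genes at an FDR-corrected p < 0.05 with a 1.5-fold expression change cut-off. A minimal sketch of that filtering step is shown below; the DataFrame and its column names (`log2_fc`, `fdr_p`) are assumptions for illustration, not the actual Partek Genomics Suite output, and only the PPARA fold change reported later in the paper (1.65-fold, p = 0.006) is taken from the text.

```python
# Minimal sketch of the differential-expression filter described above.
# Column names and the input DataFrame are assumptions, not the Partek output format.
import numpy as np
import pandas as pd

results = pd.DataFrame({
    "gene":    ["PPARA", "NES", "SOX2", "GAPDH"],
    "log2_fc": [np.log2(1.65), 1.2, 0.9, 0.05],   # GSC vs NSC, log2 scale (illustrative except PPARA)
    "fdr_p":   [0.006, 0.001, 0.004, 0.8],        # FDR-corrected p values (illustrative except PPARA)
})

fold_cutoff = 1.5
significant = results[
    (results["fdr_p"] < 0.05) &
    (results["log2_fc"].abs() >= np.log2(fold_cutoff))
]
print(significant["gene"].tolist())  # genes passing both thresholds
```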
Colony-forming unit assay Cells were plated at 16 cells/mm 2 and cultured for 12 days. The cells were fixed (4% PFA) and then stained with 1% crystal violet (Sigma). Calculation of colony-forming unit (CFU) efficiency was determined as described previously [37]. Senescence-associated β-galactosidase staining Cells were plated at 520 cells/mm 2 and cultured for 5 days. Cells were stained for 12 h using a Senescence β-galactosidase Staining Kit (Cell Signalling Technologies, Danvers, MA, USA). Ten high-power fields (hpf) were examined per well and positive (cytoplasmic and nuclear blue) staining recorded as a percentage of total live cells per hpf. Intracranial xenografting procedure, bioluminescent imagining and MRI All animal-handling procedures and experiments were performed in accordance with the UK Animal Scientific Procedures Act 1986 and covered by UK Home Office licenses (University of Leeds ethics committee project license:PA5C8BDBF). KD and SCR stably transduced cells were injected into 7-week-old female NOD-SCID (NOD.CB17-Prkdc scid /NcrCrl) mice (Charles River, Wilmington, MA, USA); 30 000 cells were engrafted per animal (10 animals per cell line). Intracranial injection co-ordinates were 1 mm rostral to bregma, 1.5 mm lateral (right) and 4 mm deep. Intracranial tumour growth was analysed every 30 days using the Xenogen IVIS Spectrum in vivo imaging system and 60 mg/kg intraperitoneal D-luciferin (Perkin Elmer, Waltham, MA, USA). MRI data were acquired using a 7 T MRI System (AspectImaging, Watford, UK). NIfTI format images were analysed using MANGO (Mango Software, University of Texas, TX, USA). Animals that had lost ≥20% of body weight or showed persistent neurological signs were terminated by pentobarbitone overdose followed by transcardial 4% PFA perfusion. The brain was removed and fixed in 4% PFA. The experiment ran for 25 weeks. Immunohistochemistry (IHC) and immunofluorescence (IF) Murine brain tissue was processed on a Leica Peloris II histological platform (Leica, Wetzlar, Germany) and H&E stained using a Leica Autostainer XL platform (Leica). PPARα, Ki67 and EGFR IHC was carried out using a Leica Bond III automated immunostainer (Leica). IDH1, ATRX and GFAP IHC were carried out using a Ventana BenchMark ULTRA platform (Roche, Basel, Switzerland). Antigen retrieval techniques and antibody concentrations are detailed in supplementary material, Supplementary materials and methods and Table S4. EGFP immunofluorescence was carried out as described previously [38]. Western blotting and RT-qPCR (reverse transcription-quantitative PCR) Western blotting was carried out as described previously [36] (primary antibodies are listed in supplementary material, Table S2). Extracted total RNA was reverse transcribed to cDNA for quantitative real-time PCR using a High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA, USA); qPCR was performed using a StepOne Plus Real-Time PCR system and StepOne software v2.1 (Applied Biosystems) with Taqman ® Fast Gene Expression Mastermix (Applied Biosystems), and Assay On Demand (AOD) products as listed in supplementary material, Table S3. Lactate and glucose assays Cells were plated at 1000 cells/mm 2 and cultured for 72-96 h. Adherent cells were counted, and the culture media was collected, centrifuged at 160 × g and the supernatant kept on ice. Lactate and glucose supernatant concentrations were determined using a Cobas 8000 automated analyser (Roche) (lactate oxidase and hexokinase methods, respectively). 
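The colony-forming unit efficiency and the senescence scoring described above are simple ratios. The sketch below shows one common way to compute them; the exact CFU formula of reference [37] is not reproduced here, so the plating-efficiency definition (colonies counted divided by cells seeded) and the example counts should be treated as assumptions.

```python
# Hedged sketch: common definitions, not necessarily the exact formulas of ref. [37].

def cfu_efficiency(colonies_counted: int, cells_seeded: int) -> float:
    """Colony-forming efficiency as the percentage of seeded cells that formed colonies."""
    return 100.0 * colonies_counted / cells_seeded

def senescence_index(positive_per_hpf: list[int], total_per_hpf: list[int]) -> float:
    """Senescence-associated beta-galactosidase positivity, pooled over the
    high-power fields examined per well."""
    return 100.0 * sum(positive_per_hpf) / sum(total_per_hpf)

# Illustrative counts only
print(cfu_efficiency(colonies_counted=46, cells_seeded=1000))   # 4.6 %
print(senescence_index([3, 5, 4], [50, 60, 55]))                # ~7.3 %
```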
Statistical analysis
The normality of the data distributions was tested using the Kolmogorov-Smirnov and D'Agostino and Pearson tests. A Wilcoxon matched pairs test or unpaired t-test was used as appropriate. A Friedman test with Dunn's multiple comparison test was used for paired non-parametric analysis of more than two groups. A two-way repeated-measures ANOVA was used to compare in vitro cellular growth rates. All statistical tests were two-tailed. Differences with p < 0.05 were considered statistically significant. Data are represented as mean ± SEM (geometric mean ± 95% CI for RT-qPCR data). Statistical tests were performed using GraphPad Prism v5 (GraphPad Inc., San Diego, CA, USA).
Results
PPARα protein and PPARA mRNA levels were greater in GSC
PPARα protein expression was examined in three independent passages of the U3 and U5 NSC lines and G144 and G26 GSC lines. There was a significant increase in PPARα protein level in the G26 cell line compared to both U3 (p = 0.032) and U5 cell lines (p = 0.048) (Figure 1A). Immunofluorescence microscopy showed a mixed nuclear/perinuclear and cytoplasmic expression of PPARα in the GSC (Figure 1B). RT-qPCR was performed for the U3, U5, G144 and G26 cell lines: there was a significant increase in PPARA mRNA levels in the G26 cell line compared to the U3 cell line (p = 0.039) and the U5 cell line (p = 0.049) when normalised to GAPDH or 18S expression (Figure 1C).
PPARA gene expression was increased in whole transcriptome analysis of GSC versus NSC
Whole transcriptome expression profile data (accession number GSE15209) were analysed. Using a 1.5-fold change cut-off value (FDR threshold of 0.05), analysis of PPARA expression showed that this transcript was significantly increased in GSC compared to NSC and normal adult brain tissue (p = 0.006, p = 0.001, respectively) (Figure 1D). Increased expression of PPARA was noted to be within the second quintile of all overexpressed transcripts within the GSC versus NSC comparison (p = 0.006, 1.65-fold change).
Figure 1 legend (in part): GSC lines (G166, G174, G179, G144, GliNS) versus NSC versus normal adult brain tissue. In the box plots, the upper and lower 'hinges' correspond to the 25th and 75th percentiles, respectively. The upper/lower whisker extends to the highest/lowest value that is within 1.5× the interquartile range (IQR). Data beyond the end of the whiskers are outliers. Normalised and log-transformed mRNA gene-level summaries are shown. The test statistic was a Friedman test with Dunn's multiple comparison test (A and C) or a one-way ANOVA (D). Error bars show SEM. *p < 0.05, **p < 0.01, ***p < 0.001; ns, non-significant; GAPDH, glyceraldehyde-3-phosphate dehydrogenase; 18S, 18S ribosomal RNA.
PPARα KD inhibited GSC proliferation and clonogenicity in vitro
To investigate the role of PPARα expression in GSC, we generated a stable PPARα KD GSC cell line from the G26 parent line. A control scrambled (SCR) shRNA lentiviral construct was utilised. shRNA-mediated KD of PPARα was confirmed by western blotting 60 days after lentiviral transduction (see supplementary material, Figure S1). The addition of a luciferase cassette had no effect on shRNA PPARα KD efficiency. PPARα KD led to a significant decrease in the PPARα KD cell population expansion compared to the SCR shRNA cell population (p = 0.021) (population doubling time: 2.3 days versus 1.3 days for KD shRNA and SCR shRNA, respectively) (Figure 2A).
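The population doubling times reported above (2.3 versus 1.3 days) follow directly from exponential growth over the serial re-plating interval. A minimal sketch of that calculation is shown below; the fold-change inputs are illustrative values of my own, not the study's raw cell counts.

```python
# Sketch: deriving a population doubling time from fold expansion over a culture interval.
# The fold-change values below are illustrative, not the study's raw cell counts.
import math

def doubling_time_days(fold_increase: float, interval_days: float) -> float:
    """Population doubling time assuming exponential growth:
    N(t) = N0 * 2**(t / Td)  =>  Td = t * ln(2) / ln(N(t) / N0)."""
    return interval_days * math.log(2) / math.log(fold_increase)

print(round(doubling_time_days(fold_increase=4.95, interval_days=3.0), 1))  # ~1.3 days (SCR-like growth)
print(round(doubling_time_days(fold_increase=2.47, interval_days=3.0), 1))  # ~2.3 days (KD-like growth)
```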
There was a significant decrease in Ki67 nuclear positivity between SCR shRNA-versus PPARα KD shRNA-transduced cells (30.0% versus 15.1%) (p = 0.003) ( Figure 2B). A CellTrace Violet (CTV) cell proliferation assay was used to monitor cell divisions (generations) in PPARα KD shRNA-and SCR shRNA-transduced cells. In keeping with the population doubling studies described above, PPARα KD shRNA-transduced cells showed a significant reduction in proliferation index compared to SCR shRNA-transduced cells (p = 0.002) ( Figure 2C). In addition, the proportion of cells in the G1 phase of the cell cycle was shown to be significantly increased in the PPARα KD shRNA cell line compared to the SCR shRNA cell line (p = 0.034) ( Figure 2D). We also studied the effect of stable PPARα KD on clonogenicity. The mean number of colonies formed by PPARα KD cells was reduced by 53.5% relative to SCR shRNA cells (p = 0.029) ( Figure 2E). There was also a significant increase in β-galactosidase (pH 6.0) positivity, a known characteristic of senescent cells, between SCR shRNAversus PPARα KD shRNA-transduced cells (6.8% versus 16.4%, p = 0.008) ( Figure 2F). In conjunction with this, PPARα KD shRNA-transduced cells were found to have aberrant cytonuclear features compared to SCR shRNA controls: the cells were notably larger and flattened with a frequent loss of the spindle morphology. Increased intracytoplasmic vacuolation and multi-nucleation was also noted with strong perinuclear β-galactosidase positivity ( Figure 2F). PPARα kD suppressed the tumourigenicity of GSC orthotopic xenografts SCR and PPARα KD shRNA-transduced G26 cells were stereotactically implanted in a NOD SCID murine model, and the effect on tumour initiation and progression was monitored. Fourteen days after xenografting, all animals showed detectable bioluminescence (BLI) signal. There was significantly less BLI signal in the PPARα KD group compared to the SCR shRNA control group at each time point during the course of the experiment ( Figure 3A). Remaining animals (n = 8) were terminated after 25 weeks ( Figure 3B). T2-weighted MRI was performed 2 h antemortem. The SCR shRNA group showed evidence of right-sided hemispheric T2-hyperintense lesions with mass effect ( Figure 3C). The PPARα KD experimental group showed no MRI signs of intracranial abnormality ( Figure 3C). Twenty-five weeks after the xenograft procedure, low power histological examination of the brains from the control SCR shRNA xenograft arm (n = 4) demonstrated extensive tumour formation ( Figure 3D). All SCR shRNA xenograft experiments produced tumour masses with histological (H&E) evidence of non-circumscribed cellular tumours consisting of pleomorphic cells ( Figure 3D) with frequent atypical mitotic figures. Ki67 IHC showed variable nuclear positivity across the tumour field (focal areas of >50% Ki67 positivity) and diffuse infiltration by Ki67-positive cells into the adjacent host parenchyma ( Figure 4A). PPARα IHC showed extensive cytoplasmic and nuclear positivity ( Figure 4A). IHC performed on SCR shRNA xenografts showed the tumour cells to be negative for the expression of the IDH1R132H-mutated protein product with strong nuclear ATRX expression and GFAP and EGFR immunopositivity ( Figure 4B). EGFP expression examined by immunofluorescence recapitulated the malignant infiltration into the host parenchyma described above ( Figure 4B). In contrast, the KD shRNA xenograft arm of the experiment showed no histological evidence of tumour formation ( Figure 4C). 
Immunofluorescence microscopy of brains from the KD shRNA xenograft arm of the experiment demonstrated single cells with EGFP immunopositivity (negative for human-specific Ki67; Figure 4C). These cells were scattered at the lateral aspect of the right anterior commissure, an area just medial to the stereotactic injection site ( Figure 4D). No EGFP-positive cells were observed in any other brain regions. PPARα shRNA KD altered the protein and gene expression of stem cell and mitogenic markers Transduced G26 cells were examined by western blotting to assess any effects on the protein expression of key signalling mediators that occurred concomitantly with the stable KD of PPARα. The expression of c-Myc (p = 0.029) and Cyclin D1 (p = 0.035) proteins were significantly reduced ( Figure 5A). The stem cell markers nestin and SOX2 showed similarly decreased protein expression (p = 0.037, p = 0.023, respectively) ( Figure 5A). The expression of the astrocytic differentiation marker GFAP was increased (p = 0.022) ( Figure 5A). The PPARα transcription target EGFR showed a reduced protein expression ( Figure 5A). Across multiple independent passages, no PARP cleavage was observed by western blot in the KD shRNA cell lines, establishing that the reduced proliferation rates described were not due to increased apoptosis. Indeed, no increase in active caspase 3 was observed by immunofluorescence in the KD shRNA cell line ( Figure 5B). There was a significant fold decrease in proliferation in the PPARα KD GSC population compared to the SCR shRNA GSC population (population doubling time: 2.3 days versus 1.3 days, KD shRNA and SCR shRNA, respectively). The test statistic was a two-way repeated-measures ANOVA and Bonferroni post hoc test. Data were analysed using nonlinear regression with y = 0 (constrained). Increase (fold-change) in cell number shown on a logarithmic scale (to base2). (B) There was a significant reduction of the Ki67 index in the PPARα KD GSC population compared to the SCR shRNA GSC population. The proportion of Ki67 nuclear positivity was quantified as the proportion of total nuclei per high-power field (×200). Ten high-power fields were examined per slide/technical replicate. Nuclei labelled with DAPI nuclear dye. n = 3, three technical replicates per independent experiment. Representative Ki67 IF images shown. Scale bar = 50 μm. (C) CTV cell proliferation assay; PPARα KD GSC showed a reduction in proliferation index (sum of the cells in all generations divided by the computed number of original parent cells theoretically present at the start of the experiment, where each daughter cell has half the CTV fluorescence intensity of its parental cell). Analysis was carried out using a Novocyte 3000 Flow Cytometer with 405 nm excitation laser and 445/45 nm Band Pass (BP) filter. n = 3, three technical replicates per independent experiment. (D) There was a significant increase in G1 phase cells with PPARα KD. Draq5 analysis was carried out using a Novocyte 3000 Flow Cytometer with 640 nm excitation laser and 780/60 nm BP filter. n = 3, three technical replicates per independent experiment. (E) There was a significant reduction in the number of colonies in the PPARα KD GSC population compared to the SCR shRNA cell population. Representative images of clonogenic assays are shown. (F) There was a significant increase in senescence-associated β-galactosidase staining in the PPARα KD GSC population compared to the SCR shRNA cell population. 
Representative high-power images of β-galactosidase staining are shown. n = 3, three technical replicates per independent passage. The test statistic was a Wilcoxon matched pair test, two-tailed p value (B-F). Error bars show SEM. SCR, scrambled control; *p < 0.05, **p < 0.01. Using RT-qPCR we found a significant reduction in PPARA mRNA levels in the KD shRNA cell lines compared to the SCR shRNA lines when normalised to GAPDH (p = 0.022) and 18S expression (p = 0.001) ( Figure 5C). In keeping with the western blotting analysis of protein, there was a significant reduction in the expression of the stem cell markers NES and SOX2 in the KD shRNA cell lines compared to the SCR shRNA lines when normalised to GAPDH (p = 0.001, p = 0.002, respectively) and 18S expression (p = 0.01, p = 0.002, respectively) ( Figure 5C). There was also a reduction in cMYC expression in the KD shRNA cell lines when normalised to GAPDH and 18S expression (p = 0.025, p = 0.027, respectively) ( Figure 5C). The PPARα-regulated fatty acid oxidation enzymes ACOX1 and CPT1a were also examined by RT-qPCR. A reduction in ACOX1 was seen when normalised to 18S expression (p = 0.027) ( Figure 5C). There was a reduction in the expression of CPT1A in the KD shRNA cell lines compared to the SCR shRNA lines when normalised to GAPDH (p = 0.0002) and 18S expression (p = 0.004) ( Figure 5C). PPARα shRNA KD had no significant effect on lactate production or glucose consumption in vitro Biochemical analysis was performed on media harvested from shRNA-transduced cells after 72 and 96 h expansion in vitro. There was no difference in lactate production between SCR shRNA cells and KD shRNA cells after 72 or 96 h (p = 0.103; p = 0.092, respectively) ( Figure 5D). There was no significant difference in relative glucose concentration in the harvested media between SCR shRNA cells and KD shRNA cells after 72 or 96 h (p = 0.172, p = 0.087, respectively) ( Figure 5E). Discussion A key area of investigation in the search for more effective treatments for glioblastoma is the molecular manipulation of self-renewal and proliferation pathways in GSC [39]. Direct targeting of GSC may also improve the efficacy of conventional chemoand radiotherapy [40]. Transcription factors overexpressed in GSC could provide effective treatment targets for novel therapeutic agents. In this study, GSC were shown to express increased levels of PPARα protein and PPARA transcript when compared to NSC controls. NSC share key functional and genetic similarities to GSC and are considered an ideal experimental control in this setting [30]. The analyses of PPARA expression in accessioned microarray data cross-validated the findings derived from our in vitro models. Indeed, the increased expression of PPARA was suggested in this work to be a significant finding shared across multiple GSC cell lines. The molecular mechanisms underlying this increased expression remain to be elucidated and are an important area of future investigation. We selected the well-validated IDH1-wildtype, non-CpG island methylated G26 GSC line as a target for our lentiviral transduction work to best recapitulate a primary glioblastoma GSC subpopulation [41]. Stable KD of PPARα protein expression resulted in a significantly reduced in vitro growth rate. This was confirmed using flow cytometric generational tracing, which showed a decrease in the number of cell divisions per unit time. PPARα KD additionally reduced the clonogenicity of the GSC line. 
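The flow cytometric generational tracing referred to above rests on the CTV proliferation index defined in the Figure 2 legend: the sum of cells across all generations divided by the computed number of original parent cells, each division halving the dye intensity. A minimal sketch of that computation from generation counts follows; the counts are invented for illustration, and in practice software such as ModFit performs this deconvolution on the measured fluorescence histograms.

```python
# Sketch: proliferation index from cells counted per CTV generation.
# Generation 0 = undivided cells; each later generation has halved dye intensity.
# The counts below are illustrative only.

def proliferation_index(cells_per_generation: list[int]) -> float:
    """Sum of cells in all generations divided by the number of original
    parent cells they derive from (a cell in generation g descends from 1 / 2**g of a parent)."""
    total_cells = sum(cells_per_generation)
    parent_cells = sum(n / 2 ** g for g, n in enumerate(cells_per_generation))
    return total_cells / parent_cells

scr_like = [50, 200, 800, 1600]   # faster-dividing population
kd_like  = [400, 800, 800, 200]   # slower-dividing population
print(round(proliferation_index(scr_like), 2))  # higher index
print(round(proliferation_index(kd_like), 2))   # lower index
```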
These results indicate that PPARα is required for, or plays a key role in, the maintenance of GSC proliferative capacity. Examination of the PPARα KD shRNA-transduced cells demonstrated a significant increase in senescence-associated β-gal staining in vitro, indicating the induction of senescence. Cellular senescence implies a stable and long-term loss of proliferation capacity with no loss of cellular viability or metabolic activity [42][43][44]. Long-term exit from the cell cycle has been suggested as a key marker of cellular senescence [42], and PPARα KD resulted in evidence of cell cycle arrest. Morphological changes consistent with a senescent phenotype were also observed [42]. It is noteworthy that this indicates that molecular senescence mechanisms may remain latently functional even in aggressive GSC populations. A defining functional characteristic of GSC is their ability to initiate and propagate histological phenocopies of human glioblastoma when xenografted intra-cranially in immunocompromised animals [45,46]. We used orthotopic xenotransplantation to investigate the functional requirement of PPARα to maintain the tumourigenic potential of human GSC. The xenograft brains in the control SCR shRNA experimental arm showed the key histological features of a human glioblastoma. Immunophenotyping demonstrated IDH1 mutation-negative tumour cells with strong nuclear ATRX expression [47,48] and EGFR overexpression [49,50], confirming an expression profile consistent with IDH-wildtype primary glioblastoma [51]. Conversely, radiological and histological examination showed that PPARα KD xenografts did not form significant tumour masses in vivo, indicating that GSC lacking PPARα expression have markedly reduced tumour-initiating capacity. Nevertheless, the immunofluorescence examination of PPARα KD GSC-engrafted brains demonstrated EGFP-positive cells at the injection sites, confirming successful cell engraftment. We concluded that these EGFP-positive cells have a significantly reduced proliferation rate but remain viable over an extended time course in vivo, in keeping with the hallmarks of senescent cells. Such scattered EGFP-positive cells may provide sufficient signal for BLI detection in the absence of an observed tumour mass, as has been previously reported [52]. It has been shown that both PPARα pharmacological antagonism and siRNA-mediated PPARα KD reduce the expression of c-Myc, cyclin D1 and CDK4 in renal cell carcinoma (RCC) in vitro models [53]. The PPARα agonist Wy-14643 has also been shown to decrease the expression of the let-7C miRNA in wild-type mice, with no similar repression seen in PPARα-null animals [54]. let-7C miRNA targets and represses c-Myc expression [54]. c-Myc plays a role in the initiation and proliferation of glial brain tumours, and there is evidence of deregulation of the c-Myc pathway in glioblastoma [55][56][57]. The full transcriptional functions of c-Myc remain to be elucidated [58], but the induction of cyclin D1 [59] and the repression of p21 WAF1/CIP1 expression have been previously reported [60,61]. We investigated a putative PPARα/c-Myc interaction in our PPARα KD in vitro model: c-Myc protein expression was found to be decreased in shRNA-mediated PPARα KD GSC. This was accompanied by a significant decrease in cyclin D1 expression and a concomitant G1 phase cell cycle arrest. PPARα has also been reported to play a role in EGFR phosphorylation and activation [62,63].
PPARα-LXRα/RXRα heterodimers positively regulate EGFR promoter activity, and a putative PPARα DNA response element has been described upstream of the EGFR promoter [63]. We have previously reported that EGFR mRNA expression significantly correlates with high PPARA mRNA expression in the TCGA primary glioblastoma dataset [12]. In keeping with these findings in surgical tumour specimens, PPARα KD in GSC was found to significantly reduce the protein expression of EGFR in vitro. EGFR activation and subsequent receptor dimerisation promote cellular proliferation via activation of the MAPK and PI3K-Akt pathways [64], and this reduction of EGFR expression may be an additional factor in the decreased expression of c-Myc, which is an immediate early-response gene downstream of many ligand-membrane receptor complexes [58]. PPARα KD also resulted in reduced expression of nestin and SOX2 proteins with an increase in GFAP protein expression. GFAP is a commonly used astrocyte maturation marker [65][66][67]. GSC populations are known to upregulate GFAP along with other astrocyte differentiation markers (AQP4 and ALDH1A1) following the induction of a differentiated and cell cycle-arrested state [26,68]. The altered expression of this differentiation marker was therefore in keeping with a reduction in GSC proliferative capacity and a senescent (post-mitotic) state. Whether this PPARα KD-driven cellular state is reversible or represents terminal differentiation warrants further investigation (Figure 6) [69]. PPARα drives the transcription of key fatty acid oxidation (FAO) enzymes, including carnitine palmitoyltransferase 1 alpha (CPT1α; CPT1A) and acyl-coenzyme A oxidase 1 (ACOX1) [8]. Both murine sub-ventricular zone NSC and human GSC have been reported as being dependent on FAO [70,71]. In this study, PPARα KD reduced the gene expression of CPT1A and ACOX1, with a concomitant reduction in proliferation and clonogenic potential. PPARα antagonism in RCC models decreases FAO and enhances glycolysis [53]. We assayed in vitro lactate and glucose concentrations and showed that a compensatory increase in glycolysis (pyruvate to lactate conversion; the Warburg effect [72]) did not occur in GSC. This may be due to the reduction in c-Myc expression, which has been associated with decreased glycolytic rates [73][74][75]. In addition, we propose that FAO-dependent GSC have only a small requirement for glucose oxidation [70,76], and PPARα KD, through effects on FAO enzyme expression, may deplete GSC populations of their prime FAO bioenergetic source with no compensatory glycolytic flux, resulting in the anti-proliferative phenotype described. Interestingly, the unique metabolic requirements of GSC compared to the aberrantly differentiated cells of the tumour mass [40] may explain the paradox of increased PPARA expression in mediating prolonged clinical survival [12] versus KD of PPARα in GSC inhibiting tumour growth. We hypothesise that high PPARA exerts an inhibitory effect on glioblastoma glycolysis [77], an effect not seen in the GSC population. The differing roles of molecular mediators of malignancy in disparate GSC and tumour mass cell populations are a key area for future investigation and have crucial implications when designing adjuvant treatment strategies to inhibit tumour recurrence.
Figure 5. PPARα KD reduced the protein and gene expression of stemness markers in vitro with no effect on glycolytic flux. (A) Protein expression was examined at three independent passages, n = 3. Protein expression values were determined using densitometric analysis, with protein-integrated area density values expressed relative to the loading control β-actin values. Expression values were calculated relative to the PPARα expression values. Representative western blot shown. (B) There was no significant reduction of the active caspase 3 index in the PPARα KD GSC population compared to the SCR shRNA GSC population. The proportion of active caspase 3 cellular positivity was quantified as a proportion of total nuclei per high-power field (×200). Ten high-power fields were examined per slide/technical replicate. Nuclei were labelled with DAPI nuclear dye. n = 3, three technical replicates per independent experiment. Representative active caspase 3 IF images shown. (C) mRNA expression was examined in the PPARα KD GSC population compared to the WT and SCR shRNA GSC populations by RT-qPCR, normalised to the reference genes 18S and GAPDH. Relative gene expression (expressed as a fold-difference compared to control samples) was calculated using the 2^−ΔΔCt method, and expression values were calculated relative to the WT control samples. The geometric mean and 95% confidence interval are shown on a logarithmic scale (to base 2). n = 3 independent experiments; all samples analysed in triplicate. (D) Culture growth media lactate and glucose concentrations were examined in three independent passages. Lactate/glucose concentrations were normalised to cell number at the time of media harvest. The concentration of analyte in blank control wells was subtracted from each assay output, which was then normalised to the total cell number in each well. The test statistic was a Wilcoxon matched pair test, two-tailed p value (A) or a Friedman test with Dunn's multiple comparison test (C-E). Error bars show SEM. WT, wild type; SCR, scrambled; GAPDH, glyceraldehyde-3-phosphate dehydrogenase; 18S, 18S ribosomal RNA; *p < 0.05, **p < 0.01, ***p < 0.001; ns, non-significant.
In summary, our study establishes the expression of PPARα in GSC. The stable KD of PPARα in GSC completely abolished intracranial tumour formation. This was associated with the induction of cellular senescence in vitro, driven by the reduced expression of mitogenic and stemness factors. These data provide evidence of the role of PPARα in GSC as an important molecular regulator, linking proliferation and self-renewal with a critical role in maintaining the malignant phenotype. Targeting PPARα in GSC populations may therefore have translational potential as a novel adjuvant therapeutic approach to abrogate the contribution of GSC to the poor overall clinical survival for glioblastoma patients.
Table S1. PPARA shRNA primer sequences. Table S2. Details of primary antibodies used for western blotting. Table S3. Primer sets used for RT-qPCR assays. Table S4. Details of primary antibodies and antigen retrieval used for immunohistochemistry.
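The Figure 5C legend above cites the 2^−ΔΔCt method for relative gene expression. A minimal worked sketch of that calculation follows; the Ct values are invented purely for illustration and are not the study's measurements.

```python
# Sketch of the 2^-ΔΔCt relative-expression calculation.
# Ct values below are invented for illustration only.

def relative_expression(ct_target_sample: float, ct_ref_sample: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene in a sample versus a control condition,
    each normalised to a reference gene (e.g. GAPDH or 18S)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: PPARA in KD cells versus WT cells, normalised to GAPDH
fold = relative_expression(ct_target_sample=27.5, ct_ref_sample=18.0,
                           ct_target_control=25.0, ct_ref_control=18.2)
print(round(fold, 2))  # < 1 indicates reduced expression in the KD sample
```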
v3-fos-license
2023-06-14T13:04:41.417Z
2023-01-01T00:00:00.000
259149184
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1515/biol-2022-0592", "pdf_hash": "799706f62766af1637cff0dda099d612727241b8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2871", "s2fieldsofstudy": [ "Medicine" ], "sha1": "a6243482823a0a6d839133804984fe1d424172ef", "year": 2023 }
pes2o/s2orc
Case report of hepatic retiform hemangioendothelioma: A rare tumor treated with ultrasound-guided microwave ablation
Abstract
Retiform hemangioendothelioma (RH) is a type of low-grade malignant angiosarcoma. It commonly involves the skin and subcutaneous tissue of the lower extremities, but a few cases have been reported in the gut. However, hepatic RH has not been previously reported. This report presents the case of RH of the liver in a 61-year-old woman who was admitted to the hospital having presented with liver space-occupying lesions of 2 months' evolution. The patient underwent an abdominal ultrasound examination, which indicated a hemangioma, but abdominal computed tomography diagnosed a liver abscess. In order to determine the nature of the lesion, an ultrasound-guided liver biopsy was performed, after which a pathological diagnosis confirmed the presence of RH in the liver. The patient underwent ultrasound-guided microwave ablation three times and has been followed up for 8 years with no tumor recurrence or metastasis. Surgical excision is still the first choice for the treatment of hepatic RH. As shown in this case, however, for patients who refuse to undergo surgery or have surgical contraindications, ultrasound-guided microwave ablation is an alternative treatment option. The report of this case expands the scope of liver tumors to a certain extent and provides a reference for clinical diagnosis and treatment.
Introduction
Retiform hemangioendothelioma (RH) is a rare vascular tumor of low-grade malignancy first described by Calonje in 1994, with fewer than 40 cases reported throughout the world [1]. Its prognosis is unpredictable because it can affect multiple organs [2]. Unfortunately, a consensus has not yet been reached regarding the effective treatment of this type of tumor. This article presents the first reported case of hepatic RH. This case is also the first patient with hepatic RH to have been treated successfully with ultrasound-guided microwave ablation.
Case presentation
A 61-year-old female patient was admitted to our hospital having presented with liver space-occupying lesions of 2 months' evolution. The patient had undergone a plain abdominal computed tomography (CT) scan and enhanced examination 2 months earlier, which identified a round low-density shadow with fuzzy boundaries and uneven density, measuring 43 × 45 mm, in the right lobe of the liver. After contrast enhancement, the edge of the lesion in the arterial phase showed spot-like enhancement, and the contrast medium filled the lesion with time in the venous phase and the delayed phase, which indicated a hemangioma. Because the clinical manifestations were non-specific, the patient was given no treatment. Before admission, the patient underwent another abdominal ultrasound examination in a different hospital, which also identified a hemangioma, but an abdominal CT scan diagnosed a liver abscess. The patient was admitted to our hospital for further diagnosis and treatment. The patient had no history of hepatitis or cirrhosis and no history of alcohol consumption. She also denied any history of drug use or exposure to chemical poisons. Physical and laboratory examination revealed nothing of note other than slightly elevated serum levels of alkaline phosphatase and γ-glutamyl transpeptidase. A contrast-enhanced ultrasound showed that there was no enhancement of the contrast medium at any time in the lesion, so it was considered to be focal necrosis with partial liquefaction (Figure 1).
In order to further determine the nature of the lesion, an ultrasound-guided liver biopsy was performed. Pathological examination showed a proliferation of interlobular fibrous tissue and infiltration of small focal lymphocytes, in which there were vascular lymphatics of irregular size. The endothelial cells were arranged in the shape of a shoe nail. Immunohistochemistry with a result of Cluster of Differentiation 31 (CD31) (+++), CD34 (+++), Factor VIII-Related Antigen (FVIII−RAg) (+), and Ki-67 (antigen identified by monoclonal antibody Ki-67) <1% led to a pathological diagnosis of hepatic RH ( Figure 2). After refusing surgical treatment, the patient underwent ultrasound-guided microwave ablation therapy and two supplementary treatments after 2 and 4 months. A biopsy was taken again after 6 months, and the pathological examination showed only an infiltration of focal lymphocytes and monocytes in the portal area ( Figure 3). The patient has been followed up for 8 years with no tumor recurrence or metastasis. Informed consent: Informed consent has been obtained from all individuals included in this study. Ethical approval: The research related to human use has been complied with all the relevant national regulations, institutional policies, and in accordance with the tenets of the Helsinki Declaration and was approved by the Ethics Committee of The 940 Hospital of Joint Logistic Support Force of People's Liberation Army. Discussion RH is a type of low-grade malignant angiosarcoma [3], and its causes are unknown. Although metastasis and malignancy are rare, RH is known to recur in approximately 50% of cases [4]. RH most commonly affects adult female patients and usually involves the skin and subcutaneous tissue of the lower extremities, but a few cases have been reported in the postauricular and gluteal regions, mandible, spleen, and jejunum [5][6][7][8][9]. The features of RH vary among patients, expressing as hyperhidrosis or erosion masses [4,10,11]. In the reported case, the lesions were surrounded by poor circumscription as a presentation of their histological character. The main clinical manifestations of RH are single, slowgrowing, and unclear local skin plaques. Its histopathology mainly manifests as unclear tumor tissue boundaries, the reticular distribution of blood vessels, a single layer of endothelial cells in the shape of shoe nails, and surrounding lymphocytes. In the present case, immunohistochemical staining showed that D2-40 (carcinoembryonic antigen M2A), CD31, CD34, and Ki-67 were positive [12]. The diagnosis of RH mainly relies on histopathological morphology and immunohistochemical markers. RH also needs to be differentiated from benign and malignant tumors with hobnail morphology, such as angiosarcoma and hobnail hemangioma. Angiosarcoma is a highly malignant tumor with high recurrence and mortality rates. Its pathological manifestations are vascular disorder, an irregularly shaped vascular cavity, poor differentiation of tumor cells, obvious heteromorphism, visible mitosis, and tumor cells of varying sizes with less cytoplasm, slight eosinophilia, light staining, and unclear boundaries. Hobnail hemangioma is a benign tumor often seen in children and teenagers that can be treated with surgical resection. Its pathological manifestation is expanding vessels, similar to a dermal tumor. 
In hobnail hemangioma, the papillary process can be seen in the lumen, the endothelial cells are shaped like protruding nails, an irregular and narrow vascular space can be seen in the deep layer, and it is surrounded by lymphocyte infiltration [13]. At present, the treatment of RH is a wide surgical excision with histopathologically confirmed tumor-free margins; long-term follow-up is essential. Radiotherapy, chemotherapy, pulsed dye laser therapy, and local corticosteroid injections have also been reported to be effective [14,15]. However, there have been no previous reports of the treatment of RH using ultrasound-guided microwave ablation. Microwave ablation is a local treatment method that uses the heat generated by high-frequency microwave electromagnetic energy to cause coagulation and necrosis of the pathological tissue, after which the necrotic tissue is absorbed by the body, thereby removing the local pathological tissue. This treatment has the advantages of minimal tissue damage and fast recovery [16]. It can be used to treat liver cancer, non-small-cell lung cancer, renal cancer, and other tumors [17]. For the treatment of small liver cancer, the five-year survival rate is similar to that of surgical resection, a radical operation. The patient in the reported case was a middle-aged woman, and the final diagnosis was confirmed by pathology. As the patient refused to undergo surgery, she received ultrasound-guided microwave ablation. Two and four months after the first course of treatment, ultrasound-guided microwave ablation was performed again to ensure the complete necrosis of the tumor tissue. A biopsy was taken for pathological examination six months after the initial treatment, showing only an infiltration of focal lymphocytes and monocytes in the portal area. The patient has been followed up for eight years and is generally in good health, with no tumor recurrence or metastasis. Conclusion This was a case of hepatic RH exhibiting peculiar pathological features. Relevant imaging examinations, such as abdominal CT and magnetic resonance imaging, did not lead to a clear diagnosis, but an ultrasound-guided liver biopsy combined with immunohistochemistry confirmed the final diagnosis. This is the first reported case of hepatic RH and the only case of a patient with hepatic RH being treated successfully with ultrasound-guided microwave ablation. Surgical excision is still the first choice for the treatment of hepatic RH. As shown in this case, however, for patients who refuse to undergo surgery or have surgical contraindications, ultrasoundguided microwave ablation is an alternative treatment option. This case can enrich clinicians' understanding of hepatic angiosarcoma to a certain extent and can provide a reference for their next treatment options. However, this is only one case, and it still needs the support of evidence-based medicine to prove its effectiveness.
v3-fos-license
2018-06-08T13:47:45.720Z
2018-06-05T00:00:00.000
46945197
{ "extfieldsofstudy": [ "Medicine", "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-018-26887-3.pdf", "pdf_hash": "d63485a30a7ff5deb851bb389ab7f23acecdc00d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2872", "s2fieldsofstudy": [ "Medicine" ], "sha1": "d63485a30a7ff5deb851bb389ab7f23acecdc00d", "year": 2018 }
pes2o/s2orc
Reinforcement magnitudes modulate subthalamic beta band activity in patients with Parkinson’s disease We set out to investigate whether beta oscillations in the human basal ganglia are modulated during reinforcement learning. Based on previous research, we assumed that beta activity might either reflect the magnitudes of individuals’ received reinforcements (reinforcement hypothesis), their reinforcement prediction errors (dopamine hypothesis) or their tendencies to repeat versus adapt responses based upon reinforcements (status-quo hypothesis). We tested these hypotheses by recording local field potentials (LFPs) from the subthalamic nuclei of 19 Parkinson’s disease patients engaged in a reinforcement-learning paradigm. We then correlated patients’ reinforcement magnitudes, reinforcement prediction errors and response repetition tendencies with task-related power changes in their LFP oscillations. During feedback presentation, activity in the frequency range of 14 to 27 Hz (beta spectrum) correlated positively with reinforcement magnitudes. During responding, alpha and low beta activity (6 to 18 Hz) was negatively correlated with previous reinforcement magnitudes. Reinforcement prediction errors and response repetition tendencies did not correlate significantly with LFP oscillations. These results suggest that alpha and beta oscillations during reinforcement learning reflect patients’ observed reinforcement magnitudes, rather than their reinforcement prediction errors or their tendencies to repeat versus adapt their responses, arguing both against an involvement of phasic dopamine and against applicability of the status-quo theory. Thirdly, beta oscillations co-vary with motor performance, both in the motor cortex and in the basal ganglia 2,[12][13][14][15] . Beta activity decreases between approximately one second before to one second after movement onset and rebounds afterwards for several seconds 16 . In Parkinson's disease, moreover, tonic dopamine loss goes along both with increased beta activity and with the motor-inhibitory symptoms of rigidity and bradykinesia [1][2][3] . Based on these findings, it has been hypothesized that beta activity signals the motor system's propensity to maintain (as opposed to adapt) its current state 17 . In reinforcement learning, there is a similar function involved: with each new response choice, subjects have to decide whether they maintain or adapt their response strategies based on previous reinforcements. To the best of our knowledge, it has not been previously shown whether the status quo theory 17 is applicable to the context of reinforcement learning. If it is, however, it will imply that during reinforcement learning, beta activity increases when subjects maintain their response strategies based on observed reinforcements, but decreases when they adapt these strategies. We here investigated this hypothesis. Taken together, therefore, we set out to test the three above-described hypotheses that beta activity is modulated by reinforcement prediction errors, by reinforcement magnitudes and/or by response maintenance versus adaptation. Methodically, we recorded intracranial LFPs from the STN in human Parkinson's disease patients who performed a reinforcement learning paradigm. Results Patients performed a reinforcement-based learning paradigm in which they were asked to maximize reinforcements by choosing appropriate responses (Fig. 1A). 
In each trial of this paradigm, they had to move a joystick to either the left, right or front based on their own decision. Afterwards, a reinforcement stimulus was presented (number between zero and ten). Reinforcement magnitudes were drawn from Gaussian probability curves, where each joystick movement was associated to a particular probability curve. Each 20 trials on average (SD: 3), probability curves were interchanged randomly among directions without prior notice to patients. Curves differed in means, but had equal standard deviations of one (see Fig. 1B). Behavioral findings. Patients reliably learned our task. Within episodes of constant response-reinforcement mappings, average obtained reinforcements increased in magnitude across trials ( Fig. 2A). Moreover, patients clearly based their response strategies on previous reinforcements: reinforcement magnitudes obtained in a given trial correlated significantly with patients' probabilities of repeating that same response in the following trial ( Fig. 2B; Pearson's r = 0.90, as averaged across patients' individual correlation values, p < 0.001, computed with a non-parametric sign permutation test across patients). Reinforcement magnitudes did not correlate significantly with the following trial's response latencies (r = −0.14, p = 0.09; Fig. 2C) or with response durations (r = −0.13, p = 0.22; Fig. 2D). At the beginning of each trial, a red fixation square was presented to patients until the joystick had not been moved for at least 1500 ms. This time period served as a baseline in all analyses. The fixation square then turned green, prompting patients to decide for a response and move the joystick accordingly. 500 ms after the decision, the red square was again presented until the joystick had not been moved for 1,500 ms (keep the upcoming period of feedback presentation uncontaminated by post-movement artefacts). Afterwards, the feedback stimulus (number between 0 and 10) was presented for 1,200 ms, followed by an inter-trial interval. (B) Feedback probability curves. Each movement direction was mapped onto a Gaussian feedback probability curve that defined the likelihood of different reinforcement magnitudes. Mappings between responses and probability curves remained constant for an average of 20 trials (SD: 3). Grand-average LFP findings. Response-and feedback-related changes in oscillatory power, relative to the baseline period, were analyzed using Morlet wavelets. Time-frequency analyses were performed with a time resolution of 50 ms and a frequency resolution of 1 Hz throughout our analyses. Figure 3 plots the grand-average results of these analyses, computed across patients (for corresponding t-maps see Supplementary Fig. S1). Our response-locked results are in line with previous findings. We observed a significant movement-related reduction in beta activity during patients' joystick movements (i.e., between approximately 1000 ms before and 800 ms after response onset; Fig. 3A), p = 0.008 (cluster-based statistic, family-wise-error-rate (FWER) corrected) 18 . Moreover, a significant post-movement increase in beta activity starting approximately 800 ms after the movement became apparent, p < 0.001. Finally, a significant increase in gamma activity around movement onset was observed, p = 0.006. 
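The reinforcement schedule described above, with one Gaussian feedback curve per response and mappings reshuffled roughly every 20 trials, is straightforward to simulate. The sketch below illustrates it; the curve means and the random-choice policy are my own assumptions, while the standard deviation of 1 and the 20-trial (SD 3) block length come from the text.

```python
# Sketch of the reinforcement schedule described above. Curve means are illustrative;
# the SD of 1 and the ~20-trial (SD 3) block length are taken from the text.
import random

directions = ["left", "right", "front"]
curve_means = [3.0, 5.0, 7.0]          # illustrative means, one per probability curve
random.shuffle(curve_means)            # initial random assignment to directions

def feedback(direction: str) -> int:
    """Draw a reinforcement magnitude (0-10) from the direction's Gaussian curve."""
    mean = curve_means[directions.index(direction)]
    value = random.gauss(mean, 1.0)    # curves differ in mean, SD = 1
    return int(round(min(max(value, 0), 10)))

trial, next_switch = 0, max(1, round(random.gauss(20, 3)))
for _ in range(60):
    choice = random.choice(directions)          # a real participant learns; this is a placeholder policy
    print(trial, choice, feedback(choice))
    trial += 1
    if trial >= next_switch:                    # reshuffle response-outcome mappings every ~20 trials
        random.shuffle(curve_means)
        next_switch += max(1, round(random.gauss(20, 3)))
```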
Locking LFP data to response termination (instead of response onset) confirmed significance of all three clusters and moreover showed response termination to be approximately in line with the end of the beta decrease and the beginning of the beta increase ( Supplementary Fig. S2). In feedback-locked analyses, we again found the prominent post-movement increase in beta activity that stretched until approximately 2000 ms after feedback onset, p < 0.001 (Fig. 3B). Upon visual inspection, it appeared to consist of both the actual post-movement beta increase and of another, partly overlapping, but less powerful beta increase during feedback presentation that differed in spectral frequency. Correlations of feedback-locked LFPs with behavioral parameters. To investigate whether reinforcements modulate beta activity, we computed correlations between reinforcement magnitudes and baseline-corrected wavelet energy. For each reinforcement magnitude between the values of two and eight, we first performed a separate wavelet analysis; values below two and above eight (which were relatively rare due to Gaussian feedback probability curves) were included into the categories of two and eight, respectively. Across the resulting seven time-frequency plots, we correlated reinforcement magnitudes with LFP (wavelet) energy, separately for each patient. In a second step, we searched for significant clusters of correlations within time-frequency space across patients 18 . We observed a cluster of significant positive correlations between 500 and 1500 ms after feedback onset in the frequency range of 14 to 27 Hz (Fig. 4A), p = 0.049. The average correlation within this cluster (i.e., the mean of all individual correlation values within the cluster) was r = 0.30. Plotting average LFP power changes within this cluster separately for different reinforcement magnitudes, we observed increases in beta power relative to baseline in large-feedback trials, but no deviation from baseline in small-feedback trials (Fig. 4B). Moreover, we analyzed LFP power changes across trials within blocks of constant response-outcome mappings. That is, we cut our overall trial series into several sub-series starting after each switch in stimulus-outcome mappings. Within each sub-series, we numbered each trial in ascending order and then binned together all trials with the same number across sub-series. For each trial number, we performed a separate wavelet analysis and then averaged LFP power changes across all time-frequency data points that fell into the significant cluster of Fig. 4A (Supplementary Fig. S3; please compare to Fig. 2A). No significant correlation was found between trial number and beta activity, r = 0.12, p = 0.11, suggesting that average beta activity does not change significantly across learning. To rule out the possibility that our significant correlations between reinforcement magnitudes and LFP oscillations were confounded by movement parameters, we correlated our LFPs with response latencies and durations in equivalent ways. Neither response durations, p = 0.54 ( Supplementary Fig. S4A), nor response latencies, p = 0.26 ( Supplementary Fig. S4B) correlated significantly with baseline-corrected LFP oscillations, excluding these parameters as potential confounds. To investigate whether reinforcement prediction errors (which well reflect phasic dopamine signals) modulate STN oscillations, we correlated these prediction errors with baseline-corrected wavelet energy. 
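The time-frequency decomposition used throughout these analyses is a Morlet-wavelet convolution followed by baseline normalisation at a 1 Hz and 50 ms resolution. A minimal numpy sketch of that step is given below; the sampling rate, number of wavelet cycles and the pre-event baseline window are assumptions rather than the authors' exact settings, and the signal is a random placeholder.

```python
# Minimal Morlet wavelet power sketch. Sampling rate, cycle count and baseline
# window are assumptions; only the 1 Hz frequency grid mirrors the text.
import numpy as np

fs = 500                                   # assumed sampling rate (Hz)
t = np.arange(-1.0, 2.0, 1 / fs)           # one epoch, -1 to 2 s around an event
lfp = np.random.randn(t.size)              # placeholder single-trial LFP

def morlet_power(signal: np.ndarray, freq: float, fs: float, n_cycles: float = 7.0) -> np.ndarray:
    """Power over time at one frequency via convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    wt = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))          # energy normalisation
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

freqs = np.arange(4, 41, 1)                                   # 1 Hz steps, 4-40 Hz
tfr = np.vstack([morlet_power(lfp, f, fs) for f in freqs])    # frequency x time power

baseline = tfr[:, t < 0].mean(axis=1, keepdims=True)          # assumed pre-event baseline
tfr_change = 100 * (tfr - baseline) / baseline                # percent change from baseline
print(tfr_change.shape)                                       # (n_freqs, n_samples)
```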
A reinforcement learning model as detailed in section 4.5 was fitted to patients' individual behavioral performance, resulting in a separate estimation of the reinforcement prediction error for each trial and patient. For each patient, trials were then sorted into one of seven bins according to the magnitudes of the prediction errors. For each of these bins, a separate wavelet analysis was performed. Across the resulting seven power plots, reinforcement prediction errors were correlated with LFP energy, separately for each time-frequency bin (resolution of 1 Hz and 50 ms). Afterwards, we searched for significant clusters of correlations across patients (second-level analysis) 18 . In the resulting correlation plot (Fig. 5A), we did not observe any significant cluster of correlations in time-frequency space, p = 0.52, arguing against the assumption that reinforcement prediction errors, and therefore phasic changes in dopamine, modulate beta activity. To investigate whether STN oscillations were modulated by patients' tendencies to maintain versus adapt their responses (status quo theory), we compared LFP oscillations of trials in which responses were switched to those in which responses were repeated (Fig. 6A). When looking for significant differences between these conditions 18 , we did not find any significant cluster, p = 0.09. Correlations of response-locked LFPs with behavioral parameters. Next, we investigated whether reinforcement magnitudes modulated STN oscillatory activity during subsequent joystick movements (Fig. 7A). We observed a significant negative correlation in the alpha/low beta spectrum in the frequency range of 6 to 18 Hz between response onset and approximately 1200 ms afterwards, p = 0.02. The larger the reinforcement obtained in a given trial, the lower the alpha/low beta activity during the following trial's joystick movement. The average correlation within this cluster was r = −0.22. By plotting average LFP power changes within this significant cluster for different reinforcement magnitudes, we observed a decrease in alpha/low beta power for highly reinforced trials relative to baseline, and an increase for trials with small reinforcements (Fig. 7B). Computing separate time-frequency plots for the different reinforcement magnitudes ( Supplementary Fig. S5), the significant correlation appeared to result from the peri-movement beta decrease stretching into lower frequencies between response onset and 1,200 ms afterwards for large, but not for small reinforcement magnitudes. Again, we investigated whether these results could be explained by response latencies, durations or choices. For response durations, we observed a significant correlation with baseline-corrected LFP power between 500 and 1,200 ms after response onset in the frequency range of 11 to 27 Hz, p = 0.01 ( Supplementary Fig. S6A). The average correlation within this cluster was r = −0.32. Though significant, however, this cluster does not overlap in time-frequency space with the cluster related to reinforcement magnitudes. Response durations, therefore, did not likely impact on these results. For response latencies, we indeed observed a significant correlation with beta oscillations in the time interval prior to 200 ms before response onset in the frequency range of 9 to 31 Hz, p < 0.001 ( Supplementary Fig. S6B). The average correlation within this significant cluster was r = −0.28. 
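The trial-binning step mentioned above (seven bins of increasing prediction-error magnitude, with one wavelet analysis per bin) can be sketched as follows; the equal-count binning rule mirrors the description given later in the Methods, and the function name and defaults are illustrative assumptions.

```python
# Sketch: sort trials into seven equally populated bins of a continuous behavioural
# parameter (e.g. model-derived reinforcement prediction errors).
import numpy as np

def septile_bins(values, n_bins=7):
    """Return (bin index per trial, mean parameter value per bin)."""
    values = np.asarray(values, dtype=float)
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, values, side="right") - 1, 0, n_bins - 1)
    bin_means = np.array([values[idx == b].mean() for b in range(n_bins)])
    return idx, bin_means
```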
Again, the cluster differed in time-frequency space from our cluster related to reinforcement magnitudes, arguing against response latencies having impacted on these results. Next, we investigated correlations between reinforcement prediction errors (which well reflect phasic dopamine levels) and subsequent response-locked LFP oscillations (Fig. 5B). These analyses did not produce a significant cluster in time-frequency space, p = 0.10. To investigate whether LFP oscillations were modulated by patients' tendencies to maintain versus adapt their response strategies (status quo theory), finally, we contrasted LFP power for trials in which patients switched versus repeated the previous trial's response (Fig. 6B). We did not find a significant cluster of differences, p = 0.06. Discussion We showed a task-related modulation of response- and feedback-locked STN oscillations by reinforcement magnitudes in human Parkinson's disease patients during reinforcement learning. We did not, in contrast, find a modulation of these oscillations by reinforcement prediction errors (related to phasic dopamine signals) or by patients' propensities to repeat versus adapt their responses based upon reinforcements (status quo theory). Effects of reinforcement magnitudes on LFP oscillations. During feedback presentation, the power of oscillations in the frequency range of 14 to 27 Hz was positively correlated with reinforcement magnitudes. These results were not due to confounding effects by response latencies or durations. During responding, moreover, the power of oscillations in the frequency range of 6 to 18 Hz (alpha and low beta spectrum) was negatively correlated with previous reinforcement magnitudes. Although for these response-locked results we observed significant correlations of LFP oscillations with response latencies and response durations, these did not overlap in time-frequency space with our significant clusters, arguing against confounding (or mediating) effects by these movement parameters. Still, however, our significant correlations might have been related to the adaptation of other types of response parameters that we did not record in our study. In fact, correlations between response-locked LFPs and previous reinforcement magnitudes might rather favor such an interpretation. Response parameters that might be of relevance here, but that we did not record, are the balance between response speed and accuracy 19,20 , motor effort 21 and gripping force 22 , where the time interval of our significant correlations particularly favors the latter. A role of beta activity in feedback processing had been previously suggested based upon EEG data 6,23 . Large reinforcements were shown to go along with phasic increases in beta activity at frontocentral EEG electrodes in a gambling task, while large losses were accompanied by decreases in theta power at frontocentral sites 6 . These EEG effects agree with our intracranial results that large reinforcements cause phasic increases in beta activity in a reinforcement learning paradigm, while small reinforcements do not cause deviations of beta from baseline. Our findings extend these previous results by showing a reinforcement-based modulation of beta activity subcortically, i.e. in the STN, and by showing that reinforcements modulate alpha and low beta activity during subsequent responses.
In a previous LFP study in human Parkinson's disease patients, reinforcements modulated oscillations in the frequency range below 10 Hz in the STN, thus not including beta activity 7 . In this study, an effort-based decision task was used, rather than a reinforcement learning paradigm as in our study. A role of STN beta activity in reinforcement learning has not been shown previous to our results. However, reinforcements have been repeatedly observed to modulate gamma oscillations in the ventral striatum of rats [8][9][10][11] . Because of a different target nucleus, however, these results cannot be easily compared to our findings. Potential effects of dopamine. Although we did not directly measure dopamine in this study due to obvious technical difficulties, phasic dopamine levels are well reflected by reinforcement prediction errors 4 . Jenkinson and Brown 5 had hypothesized that the effects of phasic dopamine on beta activity would most likely be equivalent to known effects of tonic dopamine 3,[24][25][26] . This assumption implies that large reinforcements which phasically increase dopamine emission in the basal ganglia for several seconds [27][28][29] should decrease beta activity, while small reinforcements which phasically decrease dopamine should increase beta activity 5 . These predictions, however, do not match with our results which instead suggest the opposite relationship: large reinforcements increased beta activity, while small reinforcements did not cause deviations of beta from baseline. Either, our results therefore argue for opposite effects of tonic and phasic dopamine levels on beta activity or they are unrelated to dopamine. Maintenance of the status quo. Increases in beta activity have been implicated with neuronal commands to maintain the status quo, i.e. the current sensorimotor or cognitive state of the brain 17 . This theory is based on evidence that phasic decreases in STN beta activity occur during movements (Fig. 3) 12 , while phasic increases in beta activity can be observed directly after movement execution (see also Fig. 3) and under circumstances where intended movements are withheld 12,30 . Applied to the context of reinforcement learning, this theory would predict that beta activity is higher in trials in which patients repeat previous responses (i.e., maintain the status quo) than in trials in which responses are adapted. We could not confirm this prediction based on our results. Overall therefore, our results do not provide support for the status quo theory. Limitations. Our LFPs were recorded from Parkinson's disease patients who are known to suffer not only from motor, but also from cognitive and motivational dysfunctions [e.g. [31][32][33][34]). Of particular interest to the interpretation of our results, they are known to be impaired in the evaluation of feedback 35 , learning more easily from negative, but less easily from positive outcomes than healthy control subjects 36 . Therefore, it remains speculative whether our findings generalize to healthy participants from whom such intracranial LFPs cannot be recorded. In favor of generalizability, however, we want to point out that our patients readily learned the paradigm without any observable impairments and that our results are in line with previous EEG findings from healthy participants as discussed above. Similarly, the effects of dopamine medication on phasic beta activity remain unknown. 
We cannot exclude the possibility that our results would have turned out different with unmedicated patients. We chose to record our patients on medication both, because our paradigm was easier to grasp and perform for patients in that state and because tonic beta activity in this state is thought to resemble healthy subjects' beta activity more closely 24 . In comparison to unmedicated patients, however, tonic beta activity is suppressed 1-3 . It would thus be important to study reinforcement-related beta band modulation in unmedicated patients in the future. Moreover, the reinforcement learning model used in our analyses is not specifically tuned to the reinforcement characteristics of our task, i.e., the fact that reinforcement probability curves have fixed means of 2.5, 5.0 and 7.5 and standard deviations of 1.0. Our patients probably found out these characteristics as they became familiar with the task and made use of this knowledge when adapting to contingency changes. The model, in contrast, is incapable of learning (and remembering) such task characteristics. Also, the model has fixed learning rates across all trials (one learning rate for trials with positive reinforcement prediction errors and another for trials with negative prediction errors). Empirically, however, there has been evidence arguing for a dynamic adaptation of learning rates according to the volatility of reinforcement contingencies in humans 37 . Finally, all reinforcement learning paradigms are inherently non-experimental in nature (i.e., they do not allow for an active manipulation of variables independent of subjects' behavior). As a consequence, all observed correlations between LFP oscillations and behavioral parameters could have in principle been confounded by other variables that covary with these parameters. In our analyses, we tested whether response latencies and durations had confounded our results, but did not find any such evidence. These analyses do not exclude other potential confounds. Conclusions. Our results suggest that in Parkinson's disease patients, STN alpha and beta oscillations during reinforcement learning reflect these patients' evaluation of reinforcement magnitudes and their subsequent adaptation of response parameters based on this evaluation. We did not find evidence for a modulation of beta activity by reinforcement prediction errors or by patients' tendencies to repeat versus adapt their response choices. We therefore conclude that alpha and beta activity in reinforcement learning truly reflects patients' processing of reinforcement magnitudes, but does not reflect the effects of phasic dopamine signals or patients' tendencies to maintain the status quo. Material and Methods The experimental protocol was approved by the local ethics committee (Charité -University Hospital Berlin). The study was carried out in accordance with all relevant guidelines and regulations. Informed consent was obtained from all patients. Patients and surgery. 19 patients suffering from idiopathic Parkinson's disease were included in our analyses (mean age: 59.7 years, SD: 9.2 years; mean onset age: 48.7 years, SD: 8.8 years). Detailed patient characteristics are given in Table 1. Two additional patients had quit the investigation due to difficulties in concentrating after performing only a few trials; these were excluded from all analyses. All patients had undergone surgery for implantation of DBS electrodes into the STN between one and four days before participating in our study. 
The pulse generator, however, had not yet been implanted and electrode leads were still externalized, giving us the opportunity to record LFPs. Twelve patients were operated at Charité -University Medicine Berlin, seven at Medical University Hanover. Patients were implanted with macro-electrodes model 3389 (Medtronic Neurological Division, MN, USA). This electrode contains four cylindrically shaped platinum-iridium contacts (diameter: 1.27 mm, length: 1.5 mm) with a distance between contacts of 0.5 mm. 18 patients were implanted bilaterally, a single patient unilaterally in the left hemisphere. Electrode positions of 17 patients were localized post-operatively using LEAD-DBS software (www.lead-dbs.org) 37 . Electrode localization and mapping of electrophysiological values on MNI space. Electrode leads were localized using Lead-DBS software 38 . Postoperative stereotactic CT images (Hanover patients) or MRI images (Berlin patients) were co-registered to preoperative MRI images using SPM12 (MR modality) and BRAINSFit software (CT modality) with an affine transform. Images were then nonlinearly warped into standard stereotactic (MNI; ICBM 2009 non-linear) space using a fast diffeomorphic image registration algorithm (DARTEL) 39 . Finally, electrode trajectories were automatically pre-localized and results were manually refined in MNI space using Lead-DBS. All electrodes were confirmed to correctly lie within the STN. Figure 8 depicts the spatial locations of all individual channels from which our data were recorded. Each recording channel was localized at the center of the two contacts from which bipolar recordings were taken. For comparison, motor, associative and limbic STN sub-regions are shown as well, as based on an atlas by Accolla 40 . Recordings channels can be seen to cluster around the STN's motor-associative border region. Recording setup and procedure. Patients were seated in a comfortable chair in a well-lit recording room. A joystick was placed on a desk in front of participants such that they could comfortably move it with the hand of their choice. 13 patients used their right hand, four patients their left hand and two patients alternated between left and right hands. Stimuli were presented on a laptop computer screen that was placed behind the joystick, approximately 100 cm away from patients' eyes. Patients gave informed consent prior to participation. Behavioral paradigm. Patients performed a computer-based reinforcement learning game that involved frequent reinforcement reversals. They were asked to maximize reinforcements by choosing appropriate responses (Fig. 1A). In each trial, a red square was presented first. The square remained on the screen until patients had not moved the joystick for a continuous interval of 1,500 ms, allowing us to obtain an uninterrupted 1,500 ms interval without motor artefacts. This interval served as a baseline period for LFP analyses. As soon as the 1,500 ms were complete, the fixation square changed its color to green, prompting patients to move the joystick into one of three directions (left, right or front). Afterwards, the red square appeared again and remained on the screen until the joystick had not been moved from its center position for another 1,500 ms. In other words, the red square was presented for at least 1,500 ms, but if patients had moved the joystick within this interval (against instructions), the interval was extended until 1,500 ms without motor artefacts had been obtained. 
This ensured that the subsequent interval of feedback presentation was unaffected by any movement-related or post-movement changes in beta activity. Patients were then presented with a number between 0 and 10 (reinforcement magnitude) for 1,200 ms, followed by an inter-trial-interval (ITI) with an average duration of 1,500 ms (SD: 250 ms, min: 1,000 ms). Reinforcement magnitudes presented to patients were determined via Gaussian probability distributions, where each movement direction was associated with one particular distribution (Fig. 1B). Based on the chosen direction, a reinforcement magnitude was randomly drawn from the corresponding distribution in each trial. The three distributions differed in means (2.5, 5 and 7.5), but had equal standard deviations of 1. Every 20 trials on average (SD: 3), probability distributions were randomly interchanged between movement directions without notice to patients. Behavioral analyses and reinforcement-learning model. Patients' response latencies and durations were recorded and analyzed. Response latencies were defined as the interval between the 'Go' signal prompting patients to perform their response (green fixation square) and response onset (joystick deflection to at least 80% of its maximal value). Response durations were defined as the time interval between response onset and the joystick's return to its center position. To estimate reinforcement prediction errors from patients' behavior, we fitted a canonical reinforcement-learning model to each patient's individual response timeline [see [41][42][43][44]]. The model was based on the following equations:

err_t = rew_t − pred_(r,t)

pred_(r,t+1) = pred_(r,t) + ∂_pos · err_t, if err_t ≥ 0
pred_(r,t+1) = pred_(r,t) + ∂_neg · err_t, if err_t < 0

In these equations, err_t signifies the prediction error at time t, rew_t the actual reward at time t, pred_(r,t) denotes the reinforcement prediction for the selected response r at time t and ∂_pos and ∂_neg are the learning rates for trials with positive and negative reinforcement prediction errors, respectively. Reinforcement predictions for unselected responses • were changed in opposite ways via:

pred_(•,t+1) = pred_(•,t) − ∂_pos · err_t, if err_t ≥ 0
pred_(•,t+1) = pred_(•,t) − ∂_neg · err_t, if err_t < 0

In each new trial of the paradigm, reinforcement predictions and reinforcement prediction errors were updated according to patients' selected responses and received reinforcements. The parameters ∂_pos and ∂_neg were fitted to patients' performance across trials such that the model correctly predicted patients' selected responses in the largest possible number of trials (where the model was assumed to predict the response which was associated with the largest reward prediction value of all response options at the relevant time point). For both ∂_pos and ∂_neg , a full search in the parameter space between 0.01 and 1.00 with a step size of 0.01 was performed. If there was more than one combination of ∂_pos and ∂_neg that produced equally good fits, that combination was selected whose distribution of reinforcement prediction values differed the least from a uniform distribution over the interval [2.5, 7.5]. Prediction errors were then correlated with STN activity as detailed in the following sub-section. Recording and analyses of LFP data. LFPs were recorded bipolarly (online) from all adjacent contact pairs of each electrode. No offline re-referencing was done. Sampling frequency was 5,000 Hz. LFPs were amplified with a factor of 50,000 and bandpass filtered between 0.5 and 1,000 Hz using a Digitimer D360 (Digitimer Ltd., Welwyn Garden City, Hertfordshire, UK). All recordings were initially saved in Spike 2 (Cambridge Electronic Design).
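A minimal sketch of this model fit is given below. It implements the delta rule with separate learning rates and the grid search over ∂_pos and ∂_neg described above; the exact form of the opposite-direction update for unselected responses (here, the error split equally across the two unselected options) and the omission of the tie-breaking rule are simplifying assumptions.

```python
# Sketch of the reinforcement-learning model fit: delta rule with separate learning
# rates for positive and negative prediction errors, fitted by grid search so that
# the model predicts the chosen response in as many trials as possible.
import numpy as np

def run_model(choices, rewards, lr_pos, lr_neg, n_options=3, init=5.0):
    """choices: chosen option index per trial; rewards: obtained magnitudes.
    Returns (number of correctly predicted choices, prediction error per trial)."""
    pred = np.full(n_options, init)
    n_correct, errs = 0, np.empty(len(choices))
    for t, (c, r) in enumerate(zip(choices, rewards)):
        if c == int(np.argmax(pred)):        # model 'predicts' the highest-valued option
            n_correct += 1
        err = r - pred[c]
        errs[t] = err
        lr = lr_pos if err >= 0 else lr_neg
        pred[c] += lr * err
        others = [o for o in range(n_options) if o != c]
        pred[others] = pred[others] - lr * err / len(others)  # assumed opposite update
    return n_correct, errs

def fit_learning_rates(choices, rewards):
    """Full grid search over both learning rates between 0.01 and 1.00 (step 0.01)."""
    grid = np.round(np.arange(0.01, 1.005, 0.01), 2)
    best_fit, best_params = -1, (None, None)
    for lr_pos in grid:
        for lr_neg in grid:
            n_correct, _ = run_model(choices, rewards, lr_pos, lr_neg)
            if n_correct > best_fit:
                best_fit, best_params = n_correct, (lr_pos, lr_neg)
    return best_params, best_fit
```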
Off-line, they were filtered with a 50 Hz notch filter to remove powerline noise and then exported to Matlab ® (The Mathworks) for all analyses. LFP data were pre-processed for artefacts in a two-step procedure. First, channels in which voltages repeatedly reached the recording boundaries of ±100 µV were completely excluded from all analyses based upon visual inspection. Secondly, we excluded all trials in which the voltage in one of the remaining channels exceeded ±90 µV. Artefact trials were excluded from analyses of both LFP and behavioral data. LFP data were then cut into trial-related epochs relative to response and feedback onsets. LFP data were analyzed with regard to task-related changes in spectral power. Using the FieldTrip toolbox 45 in Matlab ® (The Mathworks), we computed wavelet energy using Morlet wavelets with seven wavelet cycles. The wavelets' length was chosen as three times the standard deviation of the implicit Gaussian kernel. Frequencies were sampled between 5 and 80 Hz with a step-size of 1 Hz. To compute grand-average time-frequency plots (as presented in Fig. 3), wavelet energy was computed separately for the response-locked time window of interest, the feedback-locked time window of interest and for the baseline interval, each at a step-size of 50 ms and separately for each contact pair. Task-related changes in wavelet energy within the time windows of interest were then computed relative to the average baseline energy (i.e., the mean energy across all time-points of the baseline interval, separately for each frequency bin). Next, task-related changes in energy were averaged across contact pairs within patients. Clusters of power changes were then tested for significance across patients with the non-parametric, permutation-based technique described by Maris and Oostenveld 18 . Grand-average time-frequency plots were computed by averaging individual time-frequency plots across patients. Correlations between task-related changes in wavelet energy and behavioral parameters (e.g., reinforcement magnitudes, reinforcement prediction errors, response latencies or response durations) were computed with the following procedure. In a first step, separate wavelet analyses were performed for each patient, recording channel and behavioral parameter value of interest, as outlined in the preceding paragraph. For reinforcement magnitudes, parameter values of interest were [<=2, 3, 4, 5, 6, 7, > = 8], while reinforcement prediction errors, response latencies and response durations were binned into 7 bins of increasing parameter values (the lowest 14% of values went into the first bin, the next-lowest 14% into the second bin, etc.). For each combination of patient, recording channel and parameter value of interest, this resulted in a separate time-frequency matrix of changes in LFP power relative to the baseline period. After averaging across recording channels, behavioral parameter values (or, for binned data, average values of the different bins) were correlated with baseline-corrected LFP power separately for each time-frequency bin and for each patient (Pearson coefficient). Finally, the resulting time-frequency specific correlation values were tested for significance across patients (second-level analysis) with the non-parametric, permutation-based technique described by Maris and Oostenveld 18 . Statistics. For inference testing of time-frequency data, we used the cluster-based permutation statistics developed by Maris and Oostenveld 18 . 
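The following sketch illustrates the core of this time-frequency pipeline: complex Morlet wavelets with seven cycles and power expressed relative to the mean baseline power per frequency. The wavelet support (plus or minus three standard deviations of the Gaussian envelope), the energy normalisation and the relative-change formula are assumptions for illustration rather than an exact reproduction of the FieldTrip settings.

```python
# Sketch: Morlet wavelet power and baseline correction for one LFP channel.
import numpy as np

def morlet_power(signal_1d, fs, freqs, n_cycles=7):
    """Time-frequency power via complex Morlet wavelets; returns (n_freqs, n_samples)."""
    power = np.empty((len(freqs), len(signal_1d)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)                 # std of the Gaussian envelope
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / fs)     # wavelet support: +/- 3 SD
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))       # unit-energy normalisation
        power[i] = np.abs(np.convolve(signal_1d, wavelet, mode="same")) ** 2
    return power

def relative_to_baseline(power, baseline_power):
    """Express power as proportional change from the mean baseline power per frequency."""
    base = baseline_power.mean(axis=1, keepdims=True)
    return (power - base) / base

# Illustrative call: frequencies 5-80 Hz in 1 Hz steps on a downsampled trace.
freqs = np.arange(5, 81)
tf = morlet_power(np.random.default_rng(2).normal(size=2000), fs=1000, freqs=freqs)
```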
The approach makes no assumptions about the distribution of the underlying data and offers full control of the family-wise error rate (FWER) at the relevant alpha level of 0.05. In brief, the approach involves the following steps. First, we processed the original data by computing a separate t-test (18 degrees of freedom) for each value in time-frequency space across patients. As outlined in the preceding sub-section, dependent values within the time-frequency space were either LFP power (see Fig. 3) or correlations between LFP power and behavioral parameters (see Figs 4, 5 and 7). Afterwards, all t-values above a threshold of t = 2.10, corresponding to a probability of 0.05 with 18 degrees of freedom, were identified. Please note that this probability is not related to the critical alpha level of the hypothesis test (also set to 0.05 in our analyses), but that it defines the sensitivity of the cluster threshold, i.e., the threshold that defines cluster boundaries for subsequent cluster analyses. For all clusters of neighboring above-threshold t-values, subsequently, the t-values within the respective cluster were summed up and this sum served as a test statistic for that cluster in subsequent analyses. Now, the original time-frequency data were permuted 20,000 times to establish a distribution of data that the original data's test statistic could be compared against. For each of the 20,000 permutations, each patient's dependent values in time-frequency space were, randomly, either left unchanged or multiplied by −1 (uniformly across all dependent values of that patient). Afterwards, the across-patient t statistic was computed again, exactly as for the original data. For each permutation, only the most powerful cluster, i.e., the largest sum of neighboring t values was identified and saved, resulting in a distribution of 20,000 values. For each cluster of the original data set, finally, the rank of its sum of t-values within the distribution of summed t-values from the permuted data sets was established. This rank defined the p value for that cluster (see 18 ). For inference tests of all non-time-frequency data (i.e. all data that comprised one single dependent value per patient), we used a sign permutation test [see 46 ]: to establish a p value, we computed the mean dependent value of the original data across patients and ranked it within the distribution of mean dependent values derived from a large number of permuted data sets. For each permutation, each of the 19 dependent values (one per patient) was either left unchanged or multiplied by −1. We evaluated all 524,288 possible permutations, since this procedure is not overly computationally intensive.
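A compact sketch of this cluster-based sign-flip permutation procedure is given below, assuming a stack of per-patient time-frequency maps (power changes or correlation coefficients) as input. Cluster mass is summarised here as the sum of absolute t-values and the cluster-forming threshold is derived from the two-sided 0.05 quantile of the t distribution (approximately 2.10 for 18 degrees of freedom); these choices follow the description above but simplify some details.

```python
# Sketch of the cluster-based sign-flip permutation test (after Maris & Oostenveld).
import numpy as np
from scipy.ndimage import label
from scipy.stats import t as t_dist

def cluster_permutation_test(data, n_perm=20000, cluster_alpha=0.05, rng=None):
    """data: array (n_patients, n_freqs, n_times). Returns (cluster masses, p-values)."""
    rng = np.random.default_rng(rng)
    n = data.shape[0]
    thresh = t_dist.ppf(1.0 - cluster_alpha / 2.0, df=n - 1)   # ~2.10 for 19 patients

    def cluster_masses(x):
        tvals = x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n))    # one-sample t-map
        masses = []
        for sign in (1, -1):                                   # positive and negative clusters
            labels, n_clusters = label(sign * tvals > thresh)
            masses += [np.abs(tvals[labels == k]).sum() for k in range(1, n_clusters + 1)]
        return masses

    observed = cluster_masses(data)
    null_max = np.empty(n_perm)
    for p in range(n_perm):                                    # flip signs per patient
        flips = rng.choice([-1.0, 1.0], size=n)[:, None, None]
        masses = cluster_masses(data * flips)
        null_max[p] = max(masses) if masses else 0.0
    pvals = [float((null_max >= m).mean()) for m in observed]
    return observed, pvals
```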
v3-fos-license
2021-10-15T00:09:37.370Z
2021-07-01T00:00:00.000
238777524
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "https://ijma.journals.ekb.eg/article_180982_e278a8190759584db0c7d37b954c7e05.pdf", "pdf_hash": "f28d31901b940fe74a8f3b7b51c500b8de9f758d", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2874", "s2fieldsofstudy": [ "Medicine" ], "sha1": "b2b254fed6afde49eaa0039e0b6ba9b9c6c905fe", "year": 2021 }
pes2o/s2orc
Role of Magnetic Resonance Imaging and Ultrasonography in Evaluation of Chronic Non-osseous Shoulder Pain Background: Chronic shoulder pain is a common clinical presentation. It is of osseous or non-osseous origin. In non-osseous shoulder pain, proper diagnosis is critical. Magnetic resonance imaging [MRI] is the standard diagnostic modality. However, it is expensive and not available in many medical centers. Thus, the availability of a cheap alternative is crucial. Aim of the work: The current research aimed to assess the diagnostic performance of ultrasonography versus conventional MRI in different causes of chronic non-osseous shoulder pain. Patients and Methods: Forty patients with chronic shoulder pain due to different causes participated in the current work. They were selected from Al-Azhar University Hospital [Damietta]. All were assessed on a clinical basis [history, physical examination and laboratory investigations]. Then, all underwent radiological investigations [plain X-ray, shoulder ultrasound, and magnetic resonance imaging]. The diagnostic value of ultrasound was estimated versus that of magnetic resonance imaging. Results: By ultrasound, tendinosis was reported in 55.0%, partial thickness tear in 27.5%, articular surface tear in 20.0%, full thickness tear in 12.5%, bursal surface tear in 7.5%, neoplastic lesions in 2.5% and infraspinatus tendon full thickness tear in 2.5%. Ultrasound was able to diagnose supraspinatus tendinopathy [91.7%], full thickness complete tear [83.3%], supraspinatus impingement [85.3%], subacromial subdeltoid bursitis [92.0%] and long head biceps tenosynovitis [84.2%]. In contrast, ultrasound's specificity exceeded its sensitivity for partial thickness tears on the articular [80.0%] or bursal surfaces [85.3%], full thickness complete tear [94.1%], shoulder joint effusion [92.3%], LHB tenosynovitis [85.7%] and labral tears [100.0%]. Conclusion: Shoulder ultrasound could be considered a reasonable alternative to magnetic resonance imaging in the diagnosis of different causes of chronic shoulder pain. However, its value differs widely from one condition to another. Thus, it could be used as a rapid screening tool, and the use of MRI could be reserved for specific conditions [cases with lower ultrasound sensitivity]. INTRODUCTION Among musculoskeletal disorders, shoulder pain is a common complaint [representing about 20.0%] and is usually associated with disability [1]. Shoulder pain is described as a chronic condition when it lasts for more than six months, irrespective of whether previous treatment was sought [2]. Shoulder pain is of osseous and non-osseous origins. Rotator cuff, acromioclavicular joint [ACJ] and glenohumeral joint [GHJ] conditions are among the commonest non-osseous causes of shoulder pain [3]. Causes of shoulder pain are usually affected by patient age. Younger patients usually present with shoulder instability or mild rotator cuff disease [impingement, tendinopathy], whereas older patients usually present with advanced chronic rotator cuff conditions [partial or complete tear], adhesive capsulitis, or glenohumeral osteoarthritis. The age of 40 years is the cut-off point for distinguishing younger from older subjects [4]. Imaging studies for shoulder disorders generally include plain radiographs, ultrasonography, computed tomography scans and magnetic resonance imaging. Plain radiographs may help diagnose shoulder instability and shoulder arthritis [5].
Once satisfactory radiographs have been gained to exclude bone disorders, high-resolution ultrasound [HRUS] should be the first modality in the evaluation of shoulder disorders [6] . Ultrasonography is a cheap, fast, and provides dynamic abilities to examine the patient in multiple scanning planes without specific positions or movements of the arm. In addition, ultrasound had the ability to focus the examination on the accurate region with a maximum discomfort [7] . Therefore, Ultrasound should be the primary diagnostic and screening modality of shoulder pain. It is cost-effective and fast [8] . Magnetic resonance imaging [MRI] is currently the reference standard imaging modality for shoulder disorders. MRI had the potential to assess areas not accessed by ultrasound such as the bone marrow, labral cartilage, and deep parts of various ligaments, capsule, and areas masked by bone [9] . MRI is an ideal modality for different shoulder pathologies and significantly influences the clinician's diagnostic decisions for shoulder lesions. MRI permits free access to the different imaging planes. It also suppresses the fat signal and increase imaging speed, sensitivity and specificity of the shoulder [10] . In cases of non-osseous shoulder pain, the definite diagnosis is of utmost importance. Early diagnosis usually leads to a better outcome. However, there is no consensus on the ideal diagnostic modality [other than MRI, which is expensive and not available in all medical centers] in such cases. Here, we intended to investigate the role of two imaging modalities; the ultrasonography and magnetic resonance imaging. We propose that, if ultrasound could perform like or near MRI, it may represent a reasonable, rapid, readily available alternative, which could help in good prognosis of cases with non-osseous shoulder pain. AIM OF THE WORK The aim of this study is to evaluate the role of ultrasonography versus conventional magnetic resonance imaging in the diagnosis of different causes of chronic nonosseous shoulder pain. PATIENTS AND METHODS The current work was designed as a prospective, cross sectional study, where 40 patients with chronic shoulder pain of non-osseous causes were recruited. They were refereed from the orthopedic or rheumatology outpatient clinics, Al-Azhar University Hospital [Damietta]. Patients were selected from March 2020 to February 2021. All patients, of both sexes were eligible for participation in the current work if they had a clinical suspicious chronic non-osseous shoulder pain. On the other side, patients with osseous causes, previous surgery at shoulder joint, shoulder pain duration less than 6 months and patients who were known to have contraindication for MRI [e.g., implanted magnetic device, pacemakers, etc..] were excluded from the study. After the approval of the institutional review board [IRB] [IRP number: #00012367-20-02-010], and obtaining patient consent, all participants were inquired about their medical history in full details. The results of the clinical examination by referring physician and results of necessary investigations were reviewed. Then, all patients were examined by plain-X ray [anteroposterior, lateral and axial views to exclude osseous origin of should pain]. After that, ultrasound examination of the shoulder had been performed by ultrasound machine using superficial 7-10 MHz transducer [GE Voluson 6], according to the protocol described by Jacobson [11] . 
Ultrasound [US] assessment of the rotator cuff was completed by an experienced radiologist [a general radiologist with more than 15 years of experience] using a high-frequency small-parts probe. Finally, MRI examination was performed after the removal of all metallic objects from the patient. The machine used was a Philips Achieva 1.5 Tesla XR [Netherlands, 2010] with a surface coil. The procedure was completed as described by Farber et al. [12]. The MRI was evaluated by the same radiologist after concealment of the patient's name and any data referring to his/her identity. For the first patient [Figure 1], the diagnosis was acromioclavicular joint arthropathy with supraspinatus impingement, supraspinatus tendinopathy, and subacromial-subdeltoid bursitis. The second patient [Figure 2] was a 62-year-old male diabetic patient who complained of chronic right shoulder pain and inability to fully abduct his right arm for two years. The diagnosis was a full thickness tear of the supraspinatus with tendon retraction, no muscle atrophy, ACJ osteoarthritis, mild shoulder joint effusion, and subcoracoid bursitis. The third patient [Figure 3] was a 68-year-old female who complained of chronic right shoulder pain for 7 months. She had left breast cancer and was on chemotherapy. The diagnosis was bone marrow post-chemotherapy changes, glenohumeral synovitis with joint effusion, and ACJ osteoarthritis. DISCUSSION Tendinopathy of the supraspinatus muscle was the commonest diagnosis [n = 24, 60%], followed by partial thickness tears of the same muscle [n = 21; 52%] and full thickness tears of the supraspinatus [n = 6; 15%]. Among other diagnoses, subacromial-subdeltoid bursitis and effusion of the biceps tendon sheath [BTS] were the most common, as will be discussed below. The commonest cause of referral for radiological investigation in the current study was rotator cuff pathology, which agrees with Vijayan et al. [13] and Singh et al. [14]. The supraspinatus was the most commonly affected tendon in the current work. Concordant with our study, studies by Vijayan et al. [13], Singh et al. [14], and Netam et al. [15] have also demonstrated the supraspinatus to be the most commonly involved tendon and the teres minor tendon the least. In the current trial, MRI was used as the reference standard, and supraspinatus partial thickness tears were found to be more common than full thickness tears. This agrees with Vijayan et al. [13], Netam et al. [15] and Thakker et al. [17]. In our study, among the 21 patients with supraspinatus partial thickness tears diagnosed by MRI, ultrasound correctly picked up 11 cases [eight with articular surface tears and three with bursal surface tears]. This shows that articular surface tears were more common than bursal surface tears. Similar results were reported by Vijayan et al. [13] and Netam et al. [15]. For articular surface partial thickness tears, our results show a sensitivity of 53.3% and specificity of 80%, with 61.5% PPV, 74.1% NPV and 70% accuracy in diagnosing partial tendon tears of the articular surface. Vijayan et al. [13] report that the sensitivity, specificity, positive predictive value [PPV] and negative predictive value [NPV] of US in evaluating partial rotator cuff tears are 64.5%, 95.8%, 66.6% and 96.4%, respectively. On US there were five false positive cases that were normal on MRI, probably due to anisotropy-related artefacts.
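The diagnostic indices quoted in this section all derive from 2x2 cross-tabulations of ultrasound findings against MRI as the reference standard. The short sketch below shows the calculation; the example counts for articular surface partial thickness tears are inferred from the percentages reported above (8 true positives, 5 false positives, 7 false negatives, 20 true negatives in 40 patients) and are given for illustration only.

```python
# Sketch: diagnostic indices of ultrasound against MRI as reference standard.
def diagnostic_indices(tp, fp, fn, tn):
    """tp/fp/fn/tn: true/false positives and negatives of ultrasound versus MRI."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Articular surface partial thickness tears (illustrative counts inferred from the text):
print(diagnostic_indices(tp=8, fp=5, fn=7, tn=20))
# -> sensitivity 0.533, specificity 0.80, PPV 0.615, NPV 0.741, accuracy 0.70
```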
A total of five patients had complete tendon tear on US where another one had complete tendon tear on MRI. Thus, US was 83.3% sensitive, 94.1% specific, had 71.4% PPV, 96.9% NPV, and was 92.5% accurate in the diagnosis of complete tear which in line with studies performed by Vijayan et al. [13] who show for complete tears 70.4% Sensitivity, 100% Specificity, 100% PPV and 97.2% NPV. It is also closely agreed with Singh et al. [14] , who reported results of US having a sensitivity of 88.9%, specificity of 100%, PPV of 100% and NPV of 98.07% in recognition of full thickness tears. The results of our study were in correspondence to the meta-analysis done by Netam et al. [15] who observed that US showed a sensitivity of 91% and specificity of 93 % for full thickness tears. On US there were two false positive cases that were normal on MRI probably due to anisotropy related artefacts. The overall accuracy of US in the identification of any tear was above 80%. Ultrasound is accurate when used for the identification of full thickness tears; although sensitivity is lower 53.6 % for the diagnosis of partial thickness tear, specificity remained high in both conditions, being above 80%. These were concordant with Khanduri et al. [16] . Relation of US and MRI with clinical diagnosis revealed that clinical diagnosis failed to identify the tears, especially supraspinatus impingement which was later identified as full/partial thickness tear and tendinosis by MRI and ultrasound. Thus, these imaging modalities helped to recognize the underlying pathologies in a clearer way. However, MRI diagnosed shoulder pathologies in relatively a greater number as compared to ultrasound. Ultimately, MRI was more sensitive and specific for most underlying pathologies than ultrasound. The specificity and sensitivity for supraspinatus impingement by ultrasound was 58.3% and 78.6 % respectively, NPV 81.5% and PPV of 53.9% with accuracy 72.5% which in line with Biswas et al. [3] who showed that, ultrasound had sensitivity of 66.67%, specificity of 94.12%, positive predictive value of 50% and negative predictive value of 88.89%. Six cases of our study show false positive due to inaccurate measuring of distance by posterior shadow of acromioclavicular joint which is seen clearly by MRI, however in some cases, ultrasound has clear advantages over MRI about dynamic imaging. These include situations where a specific maneuver or position is needed to provoke symptoms. Many such abnormalities are not seen with static MRI. With ultrasound, virtually any dynamic maneuver can be assessed in real time as tolerated by the patient. A total of 23 patients had fluid in the subacromialsubdeltoid [SASD] bursa on ultrasound whereas 25 were confirmed to have fluid in SASD bursa on MRI. This showed that, ultrasound had 92% sensitivity, 80% specificity and 88.5% PPV, 85.7% NPV, and 87.5 % accuracy in identification of SADB fluid in comparison to MRI; which is in line with Singh et al. [14] who reported that US showed a sensitivity of 90% and specificity of 88 % in the diagnosis of SASD. Hence, MRI proved to be a better modality in detection of bursal effusion. Joint effusion was seen in 14 cases in MRI, with 3 cases only diagnosed by US thus, 11 cases false negative cases could not be assessed by ultrasound because the patients could not maintain the position for examination. Our study revealed that, the sensitivity and specificity for the detection of joint effusion were lower being 21.4% and 92.3% respectively. 
Our results are in agreement with Bruyn et al. [18] study that was performed on 10 patients examined by 11 observers to compare ultrasound and MRI while the sensitivity and specificity for the detection of joint effusion were lower being 35% and 92% respectively. Our results were not concordant with those of Maravi et al. [19] study in assessing sonography vs. MRI in detection of glenohumeral effusion, the study documented effusion at the glenohumeral area in 26 cases on MRI, of them 19 were identified on ultra-sound. The sensitivity, specificity, PPV, and NPV of US were 73.1%, 100%, 100%, and 26.9% respectively. The study shows glenohumeral joint effusion can be identified by a reliable ultrasound but only in a few places. The most reliable site to identify effusion was the posterior recess of the glenohumeral joint space with external rotation of the upper arm. The recognition rate of effusion by ultrasound in this study shows sensitivity of 73.1% compared to only 21.4 % in our study which could be attributed to most of the patients were unable to attain the position for examination. Tendon sheath effusion along the bicep's tendon was the second most common imaging finding in association with the rotator cuff tears on US. This was pertaining to the synovial sheath of the biceps as an extension of the glenohumeral synovial membrane. Out of 19 patients who were detected to have fluid in BTS on MRI, 16 patients were correctly detected to have fluid in BTS on US. In our results showing the specificity and sensitivity for BTS effusion by US as sensitivity and specificity 84.2 % and 85.7% respectively, negative predictive value of 85.7 % and positive predictive value of 84.2 % with 85 % accuracy. Our results are agreed with Singh et al. [14] reported that US had 90% sensitivity, 83 % specificity and 55 % PPV, 100% NPV. Yet, Maravi et al. [19] who reported Sensitivity, specificity, PPV, and NPV of US for identification of biceps tendon sheath effusion were 37.5%, 100%, 100%, and 92% respectively with accuracy 92%. Six cases of suspected labral derangements were diagnosed by MRI but were not detected by US as they were involving the anterior labrum and to evaluate the glenoid labrum, which is not located superficially and is surrounded by the rotator cuff musculature, and to diagnose anterior labral tears, experience in shoulder US is required. MRI is superior in detection of labral tears with accuracy as high as 85 % which is well correlated with the study conducted by Netam et al. [15] which stated an accuracy of 98 %. The value of the current study: the results of the current work reflected the ability of ultrasound to diagnose with high sensitivity different non-osseous conditions of chronic shoulder pain [e.g., supraspinatus tendinopathy, full thickness complete tear, supra-spinatus impingement, subacromial subdeltoid and long head biceps tenosynovitis]. In conditions with lower sensitivity, it provides high specificity, where it could be used to exclude such conditions [e.g., partial thickness tear on articular [ The current trial is unique in its nature as it addressed many non-osseous causes of chronic shoulder pain, where previous studies addressed a single condition. However, the small number of included subjects represented a limiting step of the current study. Thus, future wide scale studies are recommended. Financial and Non-financial Relationships and Activities of Interest None
v3-fos-license
2016-03-14T22:51:50.573Z
2015-05-11T00:00:00.000
8520221
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1016/j.respol.2015.10.001", "pdf_hash": "364f733df1cc3cde855f3dba56ee4ddabb73e179", "pdf_src": "ElsevierPush", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2876", "s2fieldsofstudy": [ "Business", "Economics" ], "sha1": "e504ec0cefd9122e3a0f2781cd3c401b0bcccc80", "year": 2015 }
pes2o/s2orc
Venture Capital Investments and the Technological Performance of Portfolio Firms What is the relationship between venture capitalists’ selection of investment targets and the effects of these investments on the patenting performance of portfolio companies? In this paper, we set out a modelling and estimation framework designed to discover whether venture capital (VC) increases the patenting performance of firms or whether this effect is a consequence of prior investment selection based on firms’ patent output. We develop simultaneous models predicting the likelihood that firms attract VC financing, the likelihood that they patent, and the number of patents applied for and granted. Fully accounting for the endogeneity of investment, we find that the effect of VC on patenting is insignificant or negative, in contrast to the results generated by simpler models with independent equations. Our findings show that venture capitalists follow patent signals to invest in companies with commercially viable know-how and suggest that they are more likely to rationalise, rather than increase, the patenting output of portfolio firms. Introduction New firms can rarely rely on internal cash flows in their pursuit of entrepreneurial opportunities. Among the sources of external finance available to entrepreneurs, venture capital (VC) can provide not only the financial resources they require, but also assistance to enhance the design, development, and performance of portfolio companies (Lerner, 1995;Bergemann and Hege, 1998;Gompers and Lerner, 2001;De Clercq et al., 2006;Schwienbacher, 2008;Cumming, 2010). Among the different dimensions of entrepreneurial growth that the literature has noted, a strong association has been identified between VC investments and innovation, often measured by the firm's patenting output. A prominent thesis is that venture capitalists improve investee firms' innovative performance through their ability to 'coach' new businesses and to nurture them to produce greater technological output (Kortum and Lerner, 2000;Popov and Roosenboom, 2012). An alternative argument has received relatively less attention as yet, although its validity may lead to a different conclusion: that venture capitalists are exceptionally good at identifying new firms with superior technological capabilities, which they see as the best investment opportunities. Seen from this angle, the most distinctive trait of VC, and therefore the most salient explanation for the stronger technological performance of VC-backed firms relative to other firms, would be the venture capitalists' superior selection capabilities (Baum and Silverman, 2004). Venture capitalists face a resource allocation problem characterised by high risk and strong information asymmetries. In order to decrease these information asymmetries -given that potential investees have little or no track records of market performance -investors have to rely on other signals of firm quality. These include the ex ante patenting performance of potential investees (Häussler et al., 2012;Conti et al., 2013b;Hsu and Ziedonis, 2013), so patenting can be seen as an antecedent of VC investment decisions, as well as a likely consequence. Disentangling the relationship between VC investment and firms' technological performance involves a significant theoretical as well as empirical challenge because of endogeneity and reverse causation between the investment and innovation processes. 
This is an important problem, not only from a scholarly perspective but also from a policy viewpoint. Even though the VC sector finances only a minority of new firms, it plays a very prominent role in policies designed to overcome finance gaps and to grow entrepreneurial, innovation-driven economies (OECD, 2014). This role has not gone unquestioned: critical issues have been raised about scale and skills in the demand and supply of venture finance (Nightingale et al. 2009), governance (Lerner, 2009), cyclicality and stage distribution of investments (Kaplan and Schoar, 2005;Cumming et al., 2005;Lahr and Mina, 2014), and the overall returns and long-term sustainability of the VC investment model (Mason, 2009;Lerner, 2011;Mulcahy et al. 2012). These make it even more important to gain a clear and accurate understanding of the VC-innovation nexus. In this paper we model the relation between VC and patenting using simultaneous equations to consider both the determinants of VC investments, including patents as signals of firm quality, and the effect of VC on firms' post-investment patenting performance, controlling for their prior performance. We use data from an original survey of 3,669 US and UK companies. We extract information on the 940 firms that sought finance between the years 2002 and 2004 and match these records with patent data extracted from the European Patent Office's Worldwide Patent Statistical Database (PatStat) for the periods concurrent to and following the survey years. Controlling for other firm characteristics (e.g. size, age, R&D expenditure, and market size), we estimate simultaneous models for 1) the likelihood that firms' patenting activities predict VC investments and 2) the likelihood that such investments lead to patenting in the following period. We employ a bivariate recursive probit model and develop a simultaneous zero-inflated Poisson model for count data, using both to control for the endogenous nature of the selection and coaching processes. We demonstrate that, once we account for endogeneity, the effect of VC on the subsequent patenting output of portfolio companies is either negative or insignificant. These results indicate that, while venture capitalists positively react to patents as signals of companies with potentially valuable knowledge, confirming the 'selection' hypothesis, there is no evidence of a positive effect of VC investment on firms' subsequent patenting performance. It is plausible that VC will positively influence other aspects of new business growth (i.e. commercialisation, marketing, scaling up, etc.), but the contribution of VC does not seem to involve increasing investee firms' technological outputs. Importantly, the fact that the technological productivity of a firm may slow down after VC investment does not imply that the firm would be better off without VC: on the contrary, an insignificant or negative effect of VC on firm patenting suggests that venture capitalists rationalise technological searches and focus the firm's finite resources, including managerial attention, on the exploitation of existing intellectual property (IP) rather than further technological exploration. This paper advances our understanding of the financing of innovative firms by modelling the determinants of investment choices by VC and the patenting output of their portfolio companies at the time of and after VC investment. 
In so doing, the paper also introduces an original methodology that can disentangle the endogenous relationship between VC and patenting efficiently, and has the potential for further uses in treating analogous theoretical structures. VC investments and patenting: Theory and evidence Investments in small and medium-sized businesses, and in particular new technology-based firms, pose specific challenges to capital markets because they involve high risks and strong information asymmetries (Lerner, 1995;Hall, 2002). From an investor's viewpoint, the economic potential of these firms is difficult to assess given their short history and the lack of external signals about their quality (e.g. audited financial statements, credit ratings), or of market feedback about new products and services at the time of investment. Only few investors are able and willing to back these businesses. They do so with the expectation of satisfactory returns by applying a specific set of capabilities, and often sector-specific business knowledge, that enable them to make better choices relative to competing investors, handle technological and market uncertainty, and actively influence the outcome of their investments (Sahlman, 1990, Gompers, 1995Hellmann, 1998;Gompers andLerner, 1999, 2001;Kaplan andStrömberg, 2003, 2004). In the extant studies that have addressed the links between VC and innovation, one stream has focused on the ability of venture capitalists to assist portfolio companies by giving them formal and informal advice, thus adding value in excess of their financial contributions (Gorman and Sahlman, 1989;Sapienza, 1992;Busenitz et al., 2004;Park and Steensma, 2012). A second and more recent stream has instead emphasised the ability of VCs to use patents as signals of firm quality and to make superior choices, relative to other investors, among the investment options that are available to them. If what matters for the subsequent performance of portfolio companies is the quality of the initial investment decision, the source of venture capitalists' competitive advantage rests on their selection capabilities, defined as their ability to identify the investee companies with the greatest growth potential (Dimov et al., 2007;Yang et al., 2009;Fitza et al., 2009;Park and Steensma, 2012). In the following two sections we review the arguments and evidence behind these two perspectives. The effects of VC on patenting The proposition that venture capitalists are able to increase firm value beyond the provision of financial resources has gained considerable support in the literature (Gorman and Sahlman, 1989;Sahlman, 1990;Bygrave and Timmons, 1992;Lerner, 1995;Keuschnigg and Nielsen, 6 2004;Croce et al., 2013), and is especially clear when they are compared, for example, to banks in the supply of external financing to small and medium-sized enterprises (Ueda, 2004). Venture capitalists can take active roles in many aspects of the strategic and operational conduct of their portfolio firms, including the recruitment of key personnel, business plan development, and networking with other firms, clients and investors, often on the basis of in-depth knowledge of the industry (Florida and Kenney, 1988;Hellmann andPuri, 2000, 2002;Hsu, 2004;Sørensen, 2007). Several studies find links between VC investments and firms' patenting performance, and generally interpret a positive association between the two as a result of the 'value-adding' or 'coaching' effects of VC. 
One of the most prominent studies on this topic is Kortum and Lerner's (2000) paper, in which the authors model and estimate a patent production function in an investment framework. Aggregating patent numbers by industry, they find a positive and significant effect of VC financing on (log) patent grants. 1 Ueda and Hirukawa (2008) show that these findings become even more significant during the venture capital boom in the late 1990s. However, estimations of total factor productivity (TFP) growth reveal that this was not affected by VC investment, a result that contrasts with Chemmanur et al.'s (2011) study, which reveals a positive effect of VC on TFP. Popov and Roosenboom (2012) also find similar positive, although weaker, results for such effects in European countries and industries. Further estimations of autoregressive models for TFP growth and patent counts by industry seem to suggest that TFP growth is positively related to future VC investment, but there is weaker evidence that VC investments precede an increase in patenting at the industry level, and there are indications that lagged VC investments are often negatively related to 1 Both patenting and venture funding could be related to unobserved technological opportunities, thereby causing an upward bias in the coefficient on venture capital, but regressions that use information about policy shifts in venture fund legislation to construct an instrumental variable also show positive impacts of VC investments on patenting (Kortum and Lerner, 2000). both TFP growth and patent counts . Empirical firm-level studies on venture capital investments tend to confirm the existence of a positive relation between VC and patenting performance (Arqué-Castells, 2012;Bertoni et al., 2010;Zhang, 2009). This pattern is not only found for independent but also for corporate venture capital (Alvarez-Garrido and Dushnitsky, 2012;Park and Steensma, 2012). Lerner et al. (2011) estimate various models, including Poisson and negative binomial models, for patents granted and patent citations in firms that experienced private equity-backed leveraged buyouts (LBOs). They find an increased number of citations for patent applications post-LBO and no decrease in patent originality and generality after such investments. Patent counts do not seem to vary in a uniform direction. A study by Engel and Keilbach (2007) found that VC-backed firms apply for ten times as many patents as matched non-VC backed firms: the authors use propensity and balanced score matching to compare venture-funded to non-VC funded German firms in terms of their technological outputs and growth, although this difference was only weakly significant. Caselli et al. (2009) use a similar matching procedure to assess the difference in the patenting and growth performances in the venture-backed IPOs of Italian firms. Their results show a higher average number of patents in the venture-backed firms than in their control group. Importantly, however, none of these studies provides solutions to the fundamental problem of the endogeneity of investment relative to firms' technological performance. 2 Investment selection The second hypothesis that might explain the correlation between VC investment and firms' technological performance is that venture capitalists have distinctive selection capabilities. 2 To the best of our knowledge, Croce et al. (2013) is the closest attempt to date to address sample selection problems in the context of the value-added hypothesis. 
However, this interesting study does not consider any innovation indicators and its analysis of portfolio companies' productivity growth is limited by the use of only a small set of basic firm characteristics. Our paper does not focus on TFP estimates -instead we explore in some detail the technological output of firms in relation to entrepreneurial finance decisions. This implies a modelling framework in which the innovative profiles of potential investee firms affect the probability that they receive VC investment. From this perspective, patents can function as signals to investors about firm quality (Baum and Silverman, 2004;Mann and Sager, 2007;Häussler et al., 2012;Audretsch et al., 2012;Conti et al. 2013aConti et al. , 2013bHsu and Ziedonis, 2013). There are several dimensions to the investment evaluation process employed by venture capitalists (Shepherd 1999), and there is growing interest in the technological determinants of venture financing. Baum and Silverman (2004) explored the links between VC financing, patent applications, and patents granted. Their findings suggest that the amount of VC finance obtained depends on lagged patents granted and applied for, R&D expenditures, R&D employees, government research assistance, the amount of sector-specific venture capital, horizontal and vertical alliances, and the investee firm being a university spin-off. Age is negatively related to venture capital, as are net cash flows, diversification, and industry concentration. Mann and Sager (2007) Spence's (2002) signalling theory to argue that the founders of entrepreneurial firms are better informed about the quality of the venture than are potential investors, and that they use patents as communication devices to bridge this information gap. Patents are effective signals of quality on the grounds that, as they are produced at a cost (in this case the fees associated with the patenting process), low-quality agents will tend to be weeded out. Their empirical analysis confirms that patent applications have a positive effect on the hazard rate of VC funding in a sample of British and German biotech companies. These findings resonate with prior results presented by Engel and Keilbach (2007), whose probit modelling of VC investment reveals a positive association with patents and the founder's human capital. Along a similar line of enquiry -albeit set in a broader Penrosian framework than that used by Häussler et al. (2012) -Hsu andZiedonis (2013) analyse VC-financed start-ups in the US semiconductor sector. They show that, by bridging information asymmetries, patents increase the likelihood of obtaining initial capital from a prominent VC, and have positive effects on fundraising and IPO pricing (conditional on IPO exit). By bringing these streams of contributions together, we aim to answer the question: Does venture capital positively contribute to the patenting performance of firms or is it a consequence of venture capitalists' ability to identify the best companies at the time of investment? Results based on firm-level information are mixed, which suggests that positive findings could be at least partially driven by VC's selection of companies on the basis of their current patent output. We can only shed light on the effects of coaching vis-à-vis the selection function of VC if we take into account the endogeneity of the relation between VC investment and the technological performance of firms. 
Our research strategy is therefore to model, test, and evaluate in a simultaneous setting 1) the effect of VC on the patenting performance of portfolio firms post investment and 2) the effect of firms' patenting performance on the probability of attracting VC. Data This paper builds on a unique comparative survey of U.K. and U.S. businesses carried out jointly by the Centre for Business Research at the University of Cambridge and the Industrial Performance Center at MIT in [2004][2005]. The basis for the sampling was the Dun & Bradsheet (D&B) database, which contains company-specific information drawn from various sources, including Companies House, Thomson Financial, and press and trade journals. 3 The sample covered all manufacturing and business service sectors, and was stratified by sector and employment size (10-19; 20-49; 50-99; 100-499; 500-999; 1,000-2,999; and 3,000+), with larger proportions taken in the smaller size bands, as in both countries the vast majority (over 98%) of firms employ fewer than 100 people. The data were collected via telephone surveys between March and November 2004 (response rates: 18.7 percent for the U.S. and 17.5 percent for the U.K.), followed by a postal survey of large firms in spring 2005 leading to a total sample of 1,540 U.S. firms and 2,129 U.K. firms. We restrict our sample to firms that actively sought finance during the two years prior to being interviewed, which produces a working sample of 940 firms (513 in the U.S. and 427 in the U.K.). The survey lists VC funds and business angels as possible sources of external finance. Despite some differences in stages and sizes of investments, geographic proximity, and motivation for investments (Ehrlich et al. 1994), both venture capitalists and angels spend considerable time with firms' management teams, and make substantial nonfinancial contributions in addition to their financial commitments, including working handson in their day-today operations (Kerr et al., 2014;Haines et al., 2003;Harrison and Mason, 2000). Since both formal and informal venture capitalists can perform similar functions from our study's viewpoint, we pool observations for these two classes of active investors, although when we perform robustness tests with separate samples, the results are consistent with our main analyses (section 5.3). Information about the event of a VC investment enters our models as an endogenous binary variable. Firms answered the survey questions almost completely, although minor gaps in the data would have prevented us from using about 10 percent of the survey responses. In order to avoid the loss of observations due to missing values, we use random regression imputation to approximate them (Gelman and Hill, 2006). The number of such imputations is generally very low -always less than 2 percent per variable. Where dependent variable values are missing, we drop those observations. Patent data are taken from the European Patent Office's (EPO) Worldwide Patent Statistical Database (PatStat), which contains information on 68.5 million patent applications by 17.3 million assignees and inventors from 1790 to 2010. Since there are no firm identifiers available in PatStat, we match patent information to our survey data by firm name. We consider firms' global patent portfolios, and count all applications to different patent authorities for similar or overlapping know-how as multiple patenting events. 
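To make the imputation step described above concrete, the following is a minimal sketch (not the authors' code) of random regression imputation in the sense of Gelman and Hill (2006): a variable with a few missing values is regressed on complete covariates and the fitted value plus a random residual draw replaces each gap. The variable names and the single linear predictor are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

def random_regression_impute(df, target, predictors, seed=0):
    """Fill missing values of `target` with a regression prediction plus a
    random draw from the estimated residual distribution."""
    rng = np.random.default_rng(seed)
    observed = df[target].notna()

    # Fit OLS on the complete cases only.
    X_obs = sm.add_constant(df.loc[observed, predictors])
    fit = sm.OLS(df.loc[observed, target], X_obs).fit()

    # Predict the missing cases and add residual noise (fit.scale = sigma^2).
    X_mis = sm.add_constant(df.loc[~observed, predictors], has_constant="add")
    point = fit.predict(X_mis)
    noise = rng.normal(0.0, np.sqrt(fit.scale), size=len(point))

    out = df.copy()
    out.loc[~observed, target] = point + noise
    return out

# Hypothetical usage on the survey data:
# survey = random_regression_impute(survey, "rd_staff_share",
#                                   ["log_employment", "log_age", "us_dummy"])
```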
To align patent data with the three year period addressed in the survey, we count the number of patents applied for and granted within a three-year period prior to the interview (calculated from exact survey response dates), and determine each firm's patenting status from this number. More specifically, we use application filing and publications dates for the first grant of an application to determine the timings of patenting events. For our dependent variables, we count applications and grants for the whole post-survey period in order to capture the longterm effects of VC investment. Finally, we include a dummy variable for the firm being based in the US or the UK to control for different propensities to patent -and likelihood of grant -in different domestic institutional environments. We abstain from using forward citation-weighted indicators for patents, a control for patent quality that is especially useful in studies of performance, because such citations may be affected by the likelihood of investment, and may thus introduce a further source of endogeneity into this analytical context that would be important to avoid. Table 1 here Table 1 shows descriptive statistics for our sample firms' patenting activities and our independent variables. 146 firms from our sample applied for patents during the three-year survey period (t), and 168 filed patent applications in the next period (t+1). We identified patent grants in 115 and 141 firms in these respective periods. 96 firms gained venture capital or business angel financing in about equal proportions in the two years prior to the survey. A simple cross-tabulation of indicators for VC financing and for patenting activity at t (see Table 2) highlights the strong link between venture capital and patenting. It shows that 56.2 percent of VC-financed firms applied for patents in any of the periods, whereas only 18.2 percent of those without VC funding did so. But this picture begins to look different when we consider changes in the patenting status across periods. In the non-VC financed group, firms seem to start patenting at time t+1 more often than they stop applying after patenting at time t. In contrast, the numbers of firms in the VC-financed group that start patenting at time t+1 balance those that discontinue their patenting activities after period t. 4 Including additional control variables in our multivariate analyses gives us a much more precise assessment of these state transitions. Insert Table 2 here The inclusion of explanatory variables builds on prior studies into the relationship between VC and patenting, which have often used a very limited number of co-determinants, sometimes only R&D expenditures. We extend the scope of the relevant predictors for the propensity to patent, of which R&D intensity is the preferred choice according to standard practice in the literature (Scherer, 1965(Scherer, , 1983Pakes and Griliches, 1980;Pakes, 1981;Hausman et al., 1984). Since prior research has used various measures for this intensity, including the log of R&D expenditures, R&D expenditures scaled by size variables, or the number of R&D employees, we choose a suitable combination of these indicators. We proxy for size by taking the logarithm of employment and control for R&D intensity by including the percentage of R&D staff and a dummy indicating the presence of R&D expenditures. This allows us to avoid the use of multiple size-dependent measures, since variables enter the expected mean in Poisson specifications multiplicatively. 
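The patent-window construction and the Table 2 style cross-tabulation described above can be illustrated with a small pandas sketch. The data here are synthetic and the column names hypothetical; in the study the application records would come from PatStat after name matching and the firm table from the survey.

```python
import pandas as pd

# Tiny synthetic example: patent applications per firm and a firm-level table.
apps = pd.DataFrame({
    "firm_id": [1, 1, 2, 3],
    "filing_date": ["2002-06-01", "2003-11-15", "1999-01-20", "2004-02-10"],
})
firms = pd.DataFrame({
    "firm_id": [1, 2, 3, 4],
    "interview_date": ["2004-05-01", "2004-06-01", "2004-07-01", "2004-08-01"],
    "vc": [1, 0, 1, 0],
})
apps["filing_date"] = pd.to_datetime(apps["filing_date"])
firms["interview_date"] = pd.to_datetime(firms["interview_date"])

# Count applications filed in the three years before each firm's interview.
merged = apps.merge(firms[["firm_id", "interview_date"]], on="firm_id")
in_window = (merged["filing_date"] <= merged["interview_date"]) & \
            (merged["filing_date"] > merged["interview_date"] - pd.DateOffset(years=3))
counts = merged[in_window].groupby("firm_id").size().rename("apps_t")

firms = firms.merge(counts, on="firm_id", how="left").fillna({"apps_t": 0})
firms["patenting_t"] = (firms["apps_t"] > 0).astype(int)

# Share of patent-active firms among VC-backed and non-VC-backed firms.
print(pd.crosstab(firms["vc"], firms["patenting_t"], normalize="index"))
```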
Further variables control for age, country and industry. Following Scherer (1983), we use the amount of international sales to measure market size and control for industry concentration by the number of competitors. We measure CEO education by a dummy variable indicating whether the CEO has a university degree or not. The length of the average product development time in the firms' principal product market is also controlled for, since it arguably plays a role in attracting investment (Hellmann and Puri, 2000). Finally, given the highly cumulative nature of technical change (Dosi, 1988) we include lagged patent applications and grants as proxies for the firm's knowledge stocks that it uses to produce new patents. 5 Models and estimation The structure of firms' patenting decisions presents several econometric challenges. Previous research shows that the vast majority of firms do not patent, which causes observations of zero patents in a large proportion of firms leading in turn to model instability and error distributions that do not meet the model's assumptions if these excess zeroes are not properly addressed (Bound et al., 1984;Hausman et al., 1984). At the same time, unobservable heterogeneity is highly likely to be correlated between VC investment and patenting performance: for example, firms might disclose patenting activities to prospective investors, which increases the likelihood that we observe VC investments in combination with more patenting in the future. When using VC investment to explain patenting, this endogeneity 14 complicates model estimation and may make it analytically intractable. 6 We suggest that patenting involves a two-step process, in which firms first decide whether to use patenting as a suitable IP protection strategy and then produce patents according to a Poisson or similar distribution (see Figure 1). Following this logic, we model patenting activity as a binary variable that depends on firm and industry characteristics and augment our models with an endogenous binary variable that indicates whether or not a firm receives venture capital financing. Instead of relying on propensity score matching or comparable algorithms to identify a control group of non-VC backed firms, we have the advantage that we can work with 'treatment' and 'control' data that are generated contemporaneously by the survey. 7 Our data allow us to identify firms that sought external finance, and those of them that obtained venture finance. The explicit consideration of finance-seeking behaviours, usually neglected in the literature, strengthens the quality of our sample and the precision of our results. Insert Figure 1 here We estimate two sets of simultaneous equations: In the first set, which contains two probit equations for patenting and venture capital investments, we ignore information about the number of patents and treat firms' patenting behaviours as binary outcomes. In the second set we introduce the number of patents in a zero-inflated Poisson model. The patenting equation in the recursive bivariate system is: where Patit is a dummy variable indicating whether firm i applied for one or more patents or, depending on context, was granted at least one patent period t. PatNitdenotes the number of patent applications or patents granted. The indicator function I(·) equals one if the condition in parentheses holds and zero otherwise. 
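The displayed equations did not survive extraction here, so the following is a hedged reconstruction of the recursive bivariate system, written in standard notation consistent with the verbal description in this and the following paragraph; the exact timing subscripts and the inclusion of the lagged patenting controls are assumptions based on the surrounding text rather than a verbatim restatement of equations (1) and (2).

$$
\begin{aligned}
Pat_{i,t+1} &= I\big(\alpha\, VC_{it} + \beta' X_{it} + \lambda\, Pat_{it} + \delta \ln(PatN_{it}) + \varepsilon_{it} > 0\big), &&(1)\\
VC_{it} &= I\big(\gamma' Z_{it} + \nu_{it} > 0\big), &&(2)\\
(\varepsilon_{it}, \nu_{it}) &\sim \mathcal{N}\!\left(\begin{pmatrix}0\\0\end{pmatrix},\ \begin{pmatrix}1 & \rho\\ \rho & 1\end{pmatrix}\right).
\end{aligned}
$$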
Since patent applications and grants can be zero -in which case the natural logarithm would not exist -we set ln(PatNit) to zero and use a dummy variable (Patit) to indicate patenting status. Endogenous venture capital investment is captured by an indicator variable (VCit), and Xit represents exogenous variables. The simultaneously determined venture capital investment is: where Zit is a vector of exogenous explanatory variables which can contain some or all of the elements inXit. Endogeneity of venture capital financing is accounted for by allowing arbitrary correlation between the error terms. Since variance of error terms is not identified in binary models, the error terms εit and νit are normalised to have a variance of unity. A similar simultaneous model structure can be used to predict the number of patents. Since patent data show a large number of non-patenting firms, we model this empirical regularity using a zero-inflated Poisson distribution. In this model, firms self-select into the patenting regime, and a third equation models the number of patent applications or grants produced according to a Poisson distribution. As in Lambert's (1992) zero-inflated Poisson model, the number of patents is distributed as: The likelihood that a firm chooses not to patent in the next period is: while the conditional mean of the Poisson process in the patenting state is: A novel feature of our model is that a firm's likelihood of obtaining venture capital is determined by an additional equation: as in the bivariate Probit case above. We allow for arbitrary contemporaneous correlation between νit and εit, as well as between νit and ωit, which are assumed to follow bivariate normal distributions. Specifying the model in this way allows for correlation between heterogeneity in expected means of patent counts, the decision to patent and VC financing. The variance of individual-level errors (ωit) introduces a free parameter that accounts for over-dispersion in Poisson models (Miranda and Rabe-Hesketh, 2006). Identification in semiparametric models of binary choice variables often relies on exclusion restrictions (Heckman, 1990;Taber, 2000) -in our parametric case, however, the functional form is sufficient for identification. In fact, imposing additional restrictions on our model could cause spurious results, since variables included in the VC equation but excluded from the patenting equations would affect the outcome equation through VCit if those variables were not truly independent from patenting. We therefore choose the exogenous variables to be identical in all equations (Xit = Zit). Section 5.4 describes additional results obtained from robustness tests that use exclusion restrictions in the patenting equation(s). In the following section we present the results of bivariate recursive probit models for VC financing and patenting (i.e., results for the simultaneous estimation of equations (1) and (2) and then the results we obtain from the system of equations complete with the zeroinflated Poisson model (i.e., equations (3) to (6)), estimated by maximum simulated likelihood (Gouriéroux and Monfort, 1996;Train, 2009). 8 We also report results of zeroinflated Poisson models that include information about the number of patents, but exclude simultaneous VC investment as our baseline results for the complete system of equations (see Table 5). 
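For the binary part of the system, a recursive bivariate probit of this kind has a closed-form likelihood that can be maximised directly. The sketch below shows one way such a likelihood could be coded; it is not the authors' implementation, and the parameterisation (correlation mapped through tanh), the starting values, and the use of scipy are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def neg_loglik(params, y_pat, y_vc, X, Z):
    """Negative log-likelihood of a recursive bivariate probit:
    y_vc* = Z @ gamma + nu,  y_pat* = alpha * y_vc + X @ beta + eps,
    with corr(eps, nu) = rho."""
    kx, kz = X.shape[1], Z.shape[1]
    beta  = params[:kx]
    alpha = params[kx]
    gamma = params[kx + 1:kx + 1 + kz]
    rho   = np.tanh(params[-1])          # keeps the correlation in (-1, 1)

    xb = X @ beta + alpha * y_vc
    zg = Z @ gamma
    q_vc, q_pat = 2 * y_vc - 1, 2 * y_pat - 1

    ll = 0.0
    for a, b, r in zip(q_vc * zg, q_pat * xb, q_vc * q_pat * rho):
        # Bivariate normal CDF evaluated at the signed linear indices.
        p = multivariate_normal.cdf([a, b], mean=[0.0, 0.0],
                                    cov=[[1.0, r], [r, 1.0]])
        ll += np.log(max(p, 1e-300))
    return -ll

def fit_recursive_biprobit(y_pat, y_vc, X, Z):
    start = np.zeros(X.shape[1] + 1 + Z.shape[1] + 1)
    res = minimize(neg_loglik, start, args=(y_pat, y_vc, X, Z), method="BFGS")
    return res  # res.x stacks beta, alpha, gamma and atanh(rho)
```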
As terms of comparisons and to show how key results differ when endogeneity is not taken into account, we include results of independent (single-equation) probit models as robustness checks (Table 6). Results We find that the correlations between venture capital investment and subsequent patenting are substantial and highly significant, ranging between 0.21 for (log) patent applications and 0.26 for a dummy variable measuring whether a firm was granted any patents after receiving VC investment. This positive link could be due to technological coaching or selection. As we construct increasingly complete models for the relation between VC investment and patenting, the coaching effect disappears. Table 3 presents the results from our simultaneous model that jointly predicts patenting and venture capital investment. Insert Table 3 here Patenting Venture capital does not increase patenting activity (Table 3 - We find strong persistence in patenting, for both applications and grants. If firms patent in one period, it tends to do so in the next. An indicator for prior-period patenting is significant in all specifications, while applying for or receiving a large number of patents in one period increases the likelihood of observing at least one patent in the next. These effects can be interpreted in two ways: On the one hand, prior patenting can be a proxy for unobserved heterogeneity between firms in their ability to produce innovations (other variables in our models might not capture all aspects of firms' internal processes and external market characteristics that lead to patenting behaviour). On the other hand, knowledge -in the form of existing patents -is an input for new patents. Existing patents can signal the size of firms' knowledge stocks, which are otherwise difficult to measure. As these productive capacity stocks depreciate over time, it is reasonable to assume that recent additions to the patent stock are the best predictors of present and future patenting activities, which is essentially what we find. Strong evidence of the productivity effects of R&D expenditures is consistent with prior studies (Cohen, 2010). The percentage of R&D staff weakly predicts patenting activity, only showing positive coefficients in model 2. Human capital -as measured by the CEO's education -does not appear to increase the likelihood of patent applications or grants. Firm age does not seem to affect patenting, 9 while firm size has a positive effect on future applications, but no effect on grants, as Bound et al. (1984) found. Other variables do not explain the variations in patenting that size would explain if they were excluded. Collinearity is low in our models (variance inflation factors are well below 5), and dropping significant variables from the models does not significantly change the effect of size. Industry effects collectively explain patenting, but on their own are only weak predictors. Significant Wald tests confirm the importance of controlling for industry effects. However, individual effects are rarely significant in our patenting models, as might be expected, since our estimations include detailed firm-level variables such as R&D and human capital. Unsurprisingly, firms categorised as medium-high technology manufacturing tend to apply for patents more often and obtain grants more frequently than low-tech manufacturing and service firms. Firms based in the U.S. 
exhibit a higher chance of success (in terms of their applications being granted) than those located in the U.K., an indication of known institutional differences between the two countries' patenting regimes. 10 As expected, patenting activity is strongly associated with product market characteristics. Firms that operate nationally or internationally are more likely to engage in formal IP protection than local or regional firms. There is little difference between models for future applications and grants. Products that need long lead development times are more often protected by patents than those with a short time to market. Again, this is reasonable from the viewpoint of a firm that needs more protection over longer R&D cycles. Finally, protection from imitation should be more prominent in industries characterised by intense competition, although it is possible that firms in concentrated markets try to deter entry through the strategic use of patenting (Scherer, 1983). While Scherer (1983) only finds evidence for a link between industry concentration and the number of patents in models that do not control for sectors, Baum and Silverman (2004) find fewer patents in concentrated industries. In contrast, the effect of high 10 The non-obviousness standard in U.S. patent law at the application stage has been weakened, leading to the grant of patents on increasing numbers of trivial inventions (Barton, 2003;Gallini, 2002). Structural differences in patenting processes also affect patent opposition, re-examination and revocation rates, which are significantly higher for European and U.K. patents than for U.S. patents (Harhoff and Reitzig, 2004;Graham et al., 2002). competition on patenting is negative in our models. 11 VC investment A firm's knowledge stock is a good predictor of venture capital investment (see Table 3). Patenting attracts VC investmentsmore specifically, it is the fact that a company is patentactive, not the number of applications or grants, that predicts VC investment. Results are particularly strong for the application indicator, which signals strong innovation potential in portfolio companies. R&D expenditures and R&D staff levels are both strong predictors of VC investments, as is the CEO's education level. VC involvement is more likely to be found in young firms, echoing the findings of prior research. Interestingly, however, venture capital funds appear to invest in larger firms more often than in smaller ones. This finding can be explained by interpreting size as a measure of investment risk, with very small firms typically being more opaque than larger ones. But it is also important to bear in mind that our sample includes firms with 10 to 1000 employees, and is therefore a sample of SMEs, as demanded for a study of VC investment. Industry effects point to a preference among venture investors for R&D services or software. Firms operating in larger (international) markets seem to be attractive investments, while coefficients for the intensity of competition are insignificant. Firms with long product development times are neither more nor less likely to gain venture capital. 12 4.3. Two-stage patenting -Patent counts, patenting, and venture capital 11 We also test the hypothesis that competition is more relevant if the firm operates internationally, but do not find that this interaction effect is significant. Prior studies have found conflicting evidence on the impact of profitability on patenting: Bertoni et al. 
(2010) show a positive relation between net cash flow and patents, whereas Baum and Silverman (2004) report a negative one. We also tried using a proxy for profitability constructed from pre-tax profits scaled by assets, but did not find significant results. Consequently, we decided to drop this variable from our models, due to the large amount of missing values in survey responses on profits. The large number of zeroes in patent counts suggests that patenting is a two-stage process, consisting of the binary decision whether to use patenting as an IP protection strategy and a decision about how many patents to apply for. Two popular methods used to model the number of patents produced by such processes are based on a zero-inflated Poisson distribution and a zero-inflated negative binomial distribution. In order to further refine our findings we integrate a zero-inflated Poisson process in our system of equations, which now includes an equation for patent counts, one for patenting, and one for venture capital investment. Table 4 presents the results for these estimations. For completeness, we include results from a model with only the patent counts and patenting equations as Table 5, which provides the baseline for the full (three-equation) model discussed in this section. Insert Tables 4 and 5 here The positive effect on firms' latent patenting states, which would be expected if venture capitalists added to their patenting performance, disappears across all models, while the negative effect on the number of granted patents remains. Moreover -and as in the simultaneous binary patenting models -the number of patent applications drops after VC investments. If we look at the number of patents being applied for or granted, our results support the view that VC finance follows patent signals to invest in companies with existing commercially viable know-how. While the effect of VC investment on both the use of patents and on their numbers seems negligible, VC has a negative impact on patent grants and applications in some models. Venture capitalists are attracted to firms that produce patents, but do not contribute to the expansion of firms' knowledge stocks -instead, they are likely to shift firm resources from producing new patent applications to exploiting existing knowledge. Control variables for future patent counts behave mostly as expected, and give further insights into firms' patenting decisions. While manufacturing firms and service firms appear -perhaps counterintuitively -equally likely to patent (see model 1 in Table 3), we find that being a manufacturing or R&D firm increases the number of patent applications relative to other service firms (see model 1 in Table 4). The estimation algorithm for three simultaneous equations picks the relevant equations for our two R&D variables: The existence of R&D programmes mainly predicts patenting in general, while the proportion of R&D staff explains the number of applications and grants awarded. In line with Baum and Silverman's (2004) results, we find that competition has a negative impact on the decision to patent (in bivariate models in Table 3) and the number of patents applied for or granted (in trivariate models in Table 4). However, firms tend to protect their position in the market by choosing to patent if their markets are large or have long product development times. Estimated model parameters provide support for modelling VC investment, patenting, and the number of patents simultaneously. 
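As a point of reference for the count models discussed here, an off-the-shelf zero-inflated Poisson regression (without the simultaneous, correlated VC equation, in the spirit of the baseline in Table 5) can be fitted with statsmodels. The data below are synthetic and the regressor names hypothetical; the full three-equation system with correlated errors would require a custom simulated-likelihood routine rather than this shortcut.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "log_employment": rng.normal(3.5, 1.0, n),
    "rd_dummy": rng.integers(0, 2, n),
    "vc": rng.integers(0, 2, n),
    "patented_t": rng.integers(0, 2, n),
})
# Synthetic outcome: many structural zeros plus a Poisson count for the rest.
active = rng.random(n) < 0.3 + 0.2 * df["rd_dummy"]
lam = np.exp(0.2 * df["log_employment"] + 0.5 * df["patented_t"])
df["patents_t1"] = np.where(active, rng.poisson(lam), 0)

X = sm.add_constant(df[["log_employment", "rd_dummy", "vc", "patented_t"]])
X_infl = sm.add_constant(df[["rd_dummy", "log_employment"]])

zip_model = ZeroInflatedPoisson(df["patents_t1"], exog=X,
                                exog_infl=X_infl, inflation="probit")
zip_res = zip_model.fit(method="bfgs", maxiter=500)
print(zip_res.summary())
```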
In most of the models tested for patent applications and grants, error correlations between the first (VC) equation and the second and third are substantial and significant. External shocks leading to VC investment correlate with the likelihood to patent with the expected positive sign (and negative sign for not patenting). Estimated error correlations between VC investment and patent numbers are again large and significant. We also test model stability by checking influential observations and crosstabulations for firms that start or stop their patenting activities depending on VC investment, but find no abnormalities. Independent equations Our first robustness check shows the advantages of our estimation strategy over simpler alternatives. We compare the results of our simultaneous estimations with those obtained from independent regressions for VC investment and patenting (see Table 6). While there is no change in the VC equation, there is a striking difference in the patenting equations (Table 6, columns 1-3): Without controlling for endogeneity, venture capital appears to increase the likelihood of patents being granted. This arguably biased result not only disappears in the simultaneous estimation strategy, but the finding emerges that VC investment can reduce the likelihood that firms apply for new patents in the period immediately after the investment. We can rule out that reduced effects of venture capital are caused by estimation uncertainty due to the additional parameter for error correlation between equations. Wald tests of the joint significance of this correlation and the coefficient of venture capital on patenting are significant at the five percent level in all models in Table 3. The impact of positive error correlations can be seen in the coefficients for venture capital, which change considerably when estimated simultaneously. Introducing cross-equation correlation also harmonises coefficients for some variables across models and causes no major changes in the results for control variables. Insert Table 6 here Whether or not a firm obtains finance could have an impact on its ability to start or sustain patenting activities. Since we perform our regressions on a sample of firms that sought external finance, rather than only on those that obtained it, we add a set of robustness tests for this subsample. Results of separate regressions (not reported here) confirm our findings in Table 3. However, two small changes appear in the patenting equations. First, the effect size of development time decreases slightly and loses its significance. Second, coefficients on product market competition all increase in magnitude, and the one predicting future applications becomes slightly significant. Estimating our models on the full dataset (including those 96 firms that did not obtain external finance) has two advantages over the smaller sample. First, adding these observations increases the statistical precision of our results. Second, our findings are conservative, that is, the effect of obtaining venture capital on patenting can be upward biased if it includes a (positive) effect of obtaining any kind of finance, which would be ignored if firms gaining no finance were excluded. In this sense, the negative or insignificant coefficients for venture capital represent upper bounds for the 'true' effect. Sample attrition bias Sample attrition can be a problem if firms disappearing in period t+1 are systematically those for which venture capital investment had a positive effect on patenting. 
To limit the risk of attrition bias we investigate the merger and acquisition history of the firms in our sample by retrieving the relevant records from Thomson Reuters' SDC Platinum database. We use this information to estimate the probability that firms are acquired depending on whether they patent and whether they receive VC funding. Table 7 shows that firms are more likely to be acquired if they patent in period t and if they experience VC investment. This result is not surprising since acquirers may follow the same patent signals that cause venture capitalists to invest -or may even be venture capitalists themselves. Note, however, that the interaction effects for VC investment and patenting are not significant. Sample attrition due to mergers and acquisitions, if these firms actually leave the sample, is thus unlikely to systematically reduce the measured effect of VC on patenting. Insert Table 7 here After inspecting the M&A records in the SDC database, if a firm is the target of any kind of transaction, we correct the number of patent applications and grants in the following way. If a firm retains its identity and remains active under its name in the sample, we keep the original number of patents. An example of this type of transaction is a leveraged buyout, in which the firm's management acquires the firm, but the operating business remains unchanged. If, instead, the firm is merged into the acquirer, we check whether it is still patenting at the original location, but under the acquirer's name, and if this is the case we add these patents to the sample. If the firm disappears as a legal entity after the merger, we add all the acquirer's patents to the original firm's patents, which then establishes an upper boundary for the firms' patenting activity in t+1. Of 128 transactions we identified, we adjusted the patent application or grant numbers for 15 firms. The results of the binary models presented in Table 8 are unchanged, while those of the zero-inflated Poisson models presented in Table 9 show an insignificant effect of VC on patenting in models 1 and 3, as one might confidently expect after attributing merged firms' technological outputs into those of acquiring firms. Formal vs. informal venture capital Pooling formal and informal VC investments in our model may affect results if the two classes of investors behave differently in relation to the technological profiles of their investee firms. Out of 96 firms that obtain VC financing in our sample, 66 firms receive formal VC funding, while 41 firms attract VC investors. We therefore disaggregate formal VC investments from those of informal venture capitalists (such as business angels) in Table 10, but find the results for the two sub-samples to be very similar. Effects of VC involvement are negative in both cases, albeit insignificant, while the effect of formal VC on patenting seems to be more negative than that of informal VC. While these effects do not seem to differ between types of investors, their lack of significance is an expected consequence of the reduced number of observations in each binary model estimation. It should also be pointed out that the effects observed in Table 10 may be due to the fact that the control groups for firms that gained formal (or informal) venture capital include firms financed by informal (or formal) venture capital. 
Hence, we would (to some extent) be comparing firms receiving formal and informal venture capital and not, for example, firms gaining formal venture capital with those receiving neither type of investment. If we exclude firms funded by informal VC from the control group for the formal VC models, and vice versa, we find that these effects remain insignificant and qualitatively unchanged from those presented in Table 10. These robustness checks, which take into account the type of VC (2012), whose estimations of probit models for the probability of receiving formal or informal VC investments show no substantial differences between the reactions of the two to patent signals. At this stage, this remains an interesting question for further investigation. Identification and alternative control variables Estimation results for the patenting equations in our bivariate and trivariate models depend on the correct specification of the venture capital equation describing the selection of investments by VC investors. We test two alternative specifications to address potential model misspecification. First, we replace logarithmic age and size with quadratic specifications, as is customary in some of the literature on small and medium-sized enterprises. Second, we test our main models with two extra regressors in the venture capital equation to strengthen identification of the model. When we re-run the models in Table 3 and Table 4, all results for the effect of VC investment on patent applications and grants continue to hold. Adding four extra terms to the bivariate model (squared age and size in both equations) and six new terms to the trivariate models may be a cause for concern about over-specification, and about whether these extra terms add explanatory power. A direct comparison of quadratic specifications against the baseline models using Akaike's information criterion (AIC) suggests that quadratic specifications perform either as well as (in models with patent grants) or worse (in models with patent applications) than the baseline model with logarithmic controls. Models identified by functional form, such as the simultaneous models tested in this paper, may be sensitive to error terms' deviations from normality. An exclusion restriction in the patenting equation(s) can help to identify the coefficients in the model, if a variable can be found that explains VC investment but not patenting outcomes. The survey dataset used in this paper includes two such variables. Respondents are asked on a five-point Likert scale whether they expect the firm's turnover will be smaller or larger in ten years' time, and a similar question is asked about the firm's market value. When we add both variables measuring expectations about future growth to the VC equations in Table 3 and Table 4, they jointly explain VC at the 5 percent significance level, which is plausible given that highgrowth firms are likely investment targets of VC funds. A firm's patenting behaviour is not expected to be driven by growth options, but rather by the appropriability of its technology and the indicators for market structure that we use in our models. Results including these two regressors in the VC equation are qualitatively identical to our main results. Almost all coefficients that were originally significant at the 5 percent level remain significant at that level. 
Only two differences can be found: market size in the patenting equation in model 1 in Table 3 and product development time in model 1 in Table 4 are both now significant at the 10 percent level. We conclude that our results are robust against the two risks of model misspecification tested here. 13 13 The results from all these robustness tests are available from the authors on request. Conclusion The mechanisms by which firms signal their quality to investors through patents and how venture capital funds influence these firms' patenting behaviours have been studied in the literature, but have rarely been linked to one another. We argue that, as firms' patenting activities might depend on venture capitalists' decisions to invest based on patent signals, these two decisions should be investigated simultaneously instead of separately. In this paper, we model firms' patenting behaviours explicitly allowing for the endogeneity of VC investments. Incorporating investors' decisions into a simultaneous model is necessary to disentangle investment selection from technological value-adding (or coaching) effects. In contrast to the findings of studies on aggregate patenting and VC investment, we find that the causal link between VC and patenting is weak, at best. A positive effect can only be found if the potential endogeneity of VC financing is ignored. Instead, we find that VC even exerts a negative influence on investee firms' future patent applications and grants. This suggests that, by limiting the dispersion of inventive efforts that often characterise inexperienced firms, venture capitalists help portfolio companies to rationalise their technology searches and focus on the opportunities with the highest commercial potential. This result is plausible and compatible with the expectations of behavioural theories of the firm that take into account the cognitive limitations of economic agents and stress the importance of the allocation of resources -such as managerial attention -to selected aspect of the business (Simon, 1947;Weick, 1979;Ocasio, 1997Ocasio, , 2011 When we consider the co-determinants of patenting, we find that firm size is positively related to future patent applications. R&D efforts measured by the existence of R&D expenses and the percentage of R&D staff are highly significant. Where Baum and Silverman (2004) find mixed evidence for an age effect on applications and grants, we decompose this effect into a non-significant one on the likelihood to patent, and a potentially negative one on the number of grants obtained. We find that having an R&D programme determines whether a firm patents at all, while the proportion of scientific staff explains the number of patent applications and grants. Finally, the effect of industry competition on the intensity of patenting is negative, and product development times and market size are both positive 14 We are grateful to an anonymous referee for pressing us on this issue and for suggesting the alternative explanation. The problem of VC as 'impatient capital' has been recently discussed by Mazzucato (2013) and Crafts and Hughes (2013). The stage distribution of VC investments is analysed in some detail by Lahr and Mina (2014). predictors of patenting activity. VC funds select portfolio companies based on the signalling function of patents. 
Interestingly, while such investors are attracted to patent-active firms, they show only weak sensitivity to the number of patents, which might again indicate a preference for focus rather than (possibly over-dispersed) search activities. By modelling the venture capitalists' decision to invest and the portfolio company's patenting activity simultaneously, we find that patenting has much sharper effects on VC investments than the other way round. The coaching function of VC concerns the commercialisation of a firm's existing patents or contributes to the rationalisation of its patenting activities. This also indicates that, in this context, innovation - interpreted as the Schumpeterian application of invention to market need - may be promoted not by more, but by less patenting after external investment. From a technical viewpoint, our models greatly reduce the chances that selection by venture capitalists might drive a change in observed patenting behaviours, because estimating the correlation between the error terms in both equations controls for unobserved simultaneous variance in VC financing and patenting. If VC reacts to some unobserved company characteristic that can be subsumed within the error term of the switching equation, this unobserved heterogeneity is taken into account when estimating the outcome model for patenting activity. Error correlations between the venture capital and patenting equations are significant and substantial, which supports our estimation strategy and further strengthens the case for this study's methodological approach. Further research - possibly with larger samples, while controlling for selection effects - could generate additional quantitative evidence on the effect of a VC's coaching function on different aspects - or stages - of the technology life cycle. This may also include identifying implications for short and long-run firm performance, conditional on the joint dynamics of patenting and financing.
Figure 1. Model framework. Notes: Dependent variables are venture capital investment at time t and the number of patent applications or grants at time t+1. In the binary bivariate case, "Patents (yes/no)" measures whether we observe any number of patents for the firm at time t+1. In zero-inflated Poisson models that also include the number of patents at t+1, this variable indicates firms' latent patenting status.
Notes to the zero-inflated Poisson tables: This table presents zero-inflated Poisson models for patent applications and patent grants during the period after the survey. When comparing coefficients from the patenting equation with prior models for the likelihood to patent, all signs must be reversed as the "patenting" equation in this table predicts the likelihood of not patenting. As a robustness test, we tried zero-inflated negative binomial models. Tests for overdispersion are all insignificant in these models, while Vuong tests against the alternative hypothesis of a standard Poisson process are highly significant. Robust standard errors are in parentheses. Significance levels: *** p<0.01; ** p<0.05; * p<0.1.
Notes to the probit tables: This table presents probit models for the likelihood of observing any number of patent applications or grants, respectively, in columns 1-3 and probit models for the likelihood of observing venture capital investments in columns 4-6. Robust standard errors are in parentheses. Significance levels: *** p<0.01; ** p<0.05; * p<0.1.
Notes to the bivariate recursive probit tables: This table presents bivariate recursive probit models for patent applications, patent grants and for the likelihood of observing venture capital or business angel investments. Robust standard errors are in parentheses. Significance levels: *** p<0.01; ** p<0.05; * p<0.1.
v3-fos-license
2019-09-08T13:05:55.062Z
2019-08-27T00:00:00.000
201868366
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1016/j.ijpddr.2019.08.007", "pdf_hash": "78d24a496d00bd24f06af13b5f4a89df74e1ce76", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2877", "s2fieldsofstudy": [ "Medicine" ], "sha1": "6d6e05c1305293cb9639a5e550d2eea65c00c042", "year": 2019 }
pes2o/s2orc
Successful treatment by adding thalidomide to meglumine antimoniate in a case of refractory anthroponotic mucocutaneous leishmaniasis Mucosal leishmaniasis (ML) is mostly associated with Leishmania braziliensis; however, a few cases of Leishmania tropica induced mucocutaneous leishmaniasis have been reported. The standard treatment for leishmaniasis is pentavalent antimonials, but several other drugs for resistant cases have been proposed including amphotericin and miltefosine. Here we present a case of multiple treatment resistant mucocutaneous leishmaniasis with nasal involvement caused by L. tropica; cure was not achieved by multiple treatments and was eventually improved by adding thalidomide to Meglumine Antimoniate (Glucantime). To the best of our knowledge use of thalidomide in humans for leishmaniasis treatment is reported here for the first time. Summary Most common causes of leishmaniasis in Iran are Leishmania major and Leishmania tropica that usually cause cutaneous leishmaniasis, however a few cases of L. tropica induced mucocutaneous leishmaniasis with oro-mucosal lesions have been reported that was treated by intravenous infusion of amphotericin B. Here we present a case of refractory mucocutaneous leishmaniasis with nasal involvement caused by L. tropica, which was eventually improved with thalidomide. Case presentation A 20-year-old man, accounting student, presented to cutaneous leishmaniasis clinic in Imam-Reza hospital Mashhad, Iran with an edematous mass in the right nasal nare. The patient had a history of cutaneous leishmaniasis 9 years ago (2009) on his chin and left forearm that was improved as a local outpatient treatment (with intralesional injection of meglumine antimoniate). However, after a short time, indurated mass on his left nasal nare developed. The new lesion showed leishmania parasite in the direct smear and was treated with one course of intramuscular meglumine antimoniate with remission of this lesion. Nevertheless, after one month, an indurated right nasal nare mass plus dyspnea and nocturnal snoring appeared. Nasal endoscopy sampling confirmed mucosal involvement by leishmaniasis. This bothersome mucocutaneous lesion (Fig. 1) did not cure over the last few years in spite of size reduction and temporary remission during various treatments, including a few course of intramuscular meglumine antimoniate (glucantime), amphotericin B deoxycholate and amphotericin B liposomal. His Immune competency laboratory test included the nitroblue tetrazolium (NBT), the dihydrorhodamine (DHR) test, blood flow cytometry T cell subtype and serum hemolytic complement (CH50) activity were within normal limit except serum Immunoglobulin E, which was more than 500. Direct smears of the lesions were repeatedly positive and the polymerase chain reaction (PCR) reported the responsible parasite to be L. tropica. He also had anti-leishmania antibodies in his serum, but abdominal Ultrasound and bone marrow aspiration were negative and ruled out visceral leishmaniasis. Considering the location of the lesion, sinuses and nasal coronal CT scans reported no pathologies. In 2016, he underwent surgical excision of the lesion and received oral miltefosine 150 mg daily for two months but the lesion did not resolve completely and recurred after 3 month. Finally, in March 2018 the patient was readmitted to the dermatology department of Imam-Reza hospital with the diagnosis of multiple treatment resistant, L. 
tropica-induced mucocutaneous leishmaniasis. At the time of admission, the patient weighed 66 kg and the presence of Leishman bodies was confirmed in the direct smear of the lesion. He was treated with a combination of intramuscular glucantime (850 mg daily for 28 days) and oral thalidomide (100 mg daily for two months). Given that this was the first experience with this combination therapy, the minimum therapeutic dose of these drugs was used. During the first month of treatment the size of the lesion decreased significantly, and during the second month the induration of the lesion disappeared, leaving an atrophic and somewhat retractive scar. No adverse effects were observed during this combination therapy, and the direct smears from the lesion were negative after discharge and at the third and sixth months after the beginning of the treatment. No signs or symptoms of recurrence have been observed to date (February 2019) during the one-year follow-up (Fig. 2). Discussion Leishmaniasis in Iran is mainly caused by three species: L. major, L. tropica and L. infantum. L. major and L. tropica are associated with cutaneous leishmaniasis (CL), whereas L. infantum can cause visceral leishmaniasis (VL) (Cincurá et al., 2017). Mucosal leishmaniasis (ML) is mostly associated with L. braziliensis (Cincurá et al., 2017). However, several cases have been reported with other species. L. tropica has also been linked to mucosal leishmaniasis in a few case reports. It was first described in Saudi Arabia (Morsy et al., 1995), and two cases of mucosal leishmaniasis caused by L. tropica have been reported in Iran. These cases of mucosal leishmaniasis responded to intravenous amphotericin and resolved completely (Shirian et al., 2013). Mucosal leishmaniasis in the New World is a devastating form of leishmaniasis that commonly affects the nasal and oral cavity and may lead to nasal deformity and even destruction of the nasal septum (Frischtak et al., 2018). Moreover, it may extend further and involve the epiglottis, vocal cords and even the trachea and bronchi, leading to respiratory failure and death (Carvalho et al., 2018). It may be preceded by cutaneous lesions or occur as a primary lesion. The standard treatment for ML is pentavalent antimonials; however, its success rate is limited. Other proposed therapies include amphotericin B and miltefosine (Ventin et al., 2018). Here we report a case of L. tropica-induced mucocutaneous leishmaniasis resistant to the already known treatments (meglumine antimoniate, amphotericin and miltefosine). Thalidomide, as an immunomodulator, was added to intramuscular meglumine antimoniate (glucantime) therapy and resulted in complete remission. No recurrence was observed during the one-year follow-up.
Owing to its immunomodulatory and anti-TNF-α activity, adding pentoxifylline to meglumine antimoniate is currently recommended in the treatment of refractory cases of mucosal leishmaniasis (Lessa et al., 2001). However, in our patient, a course of combination therapy with oral pentoxifylline and intramuscular meglumine antimoniate was no more effective than monotherapy with intramuscular meglumine antimoniate. Thalidomide, once a well-known drug, was abandoned because of a series of adverse effects related to its teratogenicity. However, in recent years it has begun to attract attention, especially in the field of dermatology, for its immunomodulatory, anti-inflammatory, and antiangiogenic properties (Franks et al., 2004). Levels of TNF-α are increased in serum and cultures from patients with mucosal disease, which leads to an intense inflammatory reaction, so thalidomide can be useful as a TNF-α-inhibiting drug (Lessa et al., 2001). Verbon et al. showed that ingestion of thalidomide did not change the level of TNF-α in healthy people. Regarding the effects of thalidomide on interferon gamma (IFN-γ), an important cytokine in leishmaniasis pathogenesis, some studies demonstrated that an increase in IFN-γ can be observed in response to thalidomide (Partida-Sanchez et al., 1998). Another study showed that a single oral dose of thalidomide in healthy people enhances the ability of peripheral blood mononuclear cells (PBMCs) to secrete IFN-γ (Verbon et al., 2000). In the case of leishmaniasis, no report exists on the use of thalidomide in humans; however, an animal study showed the therapeutic potential of thalidomide in leishmaniasis. In this study, BALB/c mice were infected with L. major and were later treated with a combination of glucantime and thalidomide. The results showed a superior efficacy of the combination therapy compared with glucantime, thalidomide or carrier alone. They also showed that the effects of thalidomide were mainly due to up-regulation of IFN-γ and down-regulation of IL-10 (Solgi et al., 2006). Considering the contradictory roles of thalidomide regarding IFN-γ and TNF-α, the results of related studies, and the observations from our study, more research is needed to further understand the role of thalidomide in leishmaniasis and its effects on IFN-γ, TNF-α and the immune responses of patients with leishmaniasis. Despite the various applications proposed for thalidomide in dermatology, its use is limited by an infamous profile of side effects, mainly its teratogenic effects, which led to its withdrawal in 1962 (Franks et al., 2004). However, many of these adverse effects can be avoided by adhering to guidelines designed to decrease its risks, especially with respect to teratogenicity. To the best of our knowledge, this is the first report of the use of thalidomide in the treatment of leishmaniasis in humans. Although it showed promising results in our patient, further studies are required to establish its efficacy in the treatment of leishmaniasis. Conflicts of interest No conflicts of interest. Funding source No financial support.
v3-fos-license
2024-05-11T15:10:55.976Z
2024-05-09T00:00:00.000
269675352
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1333677/pdf", "pdf_hash": "33ed12780f93c24cf02dde29af76581520c7e75c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2878", "s2fieldsofstudy": [ "Psychology", "Education" ], "sha1": "c6364527b4e7e8511e88867348807bf3a4495679", "year": 2024 }
pes2o/s2orc
Effect of proactive personality on career adaptability of higher vocational college students: the mediating role of college experience For higher vocational students, the college stage is an important period in their career development, and the college experience plays an important role in the relationship between their proactive personality and career adaptability, which in turn has a significant impact on their future career development. From the perspective of social cognitive career theory and taking 476 vocational students as samples, this paper explores the mediating role of college experience between proactive personality and career adaptability of vocational college students. The college experience scale is revised for higher vocational students, and it is verified to have good reliability and validity. SPSS and Amos were used to conduct correlation analysis, and the PROCESS macro was used for mediating effect analysis. The results show that the college experience of vocational students plays a partial mediating role in the effect of proactive personality on career adaptability. This work innovatively uses social cognitive career theory to explore the role of college experience in the relationship between proactive personality and career adaptability among vocational students. The theoretical models are established and empirical verification is conducted, confirming that higher vocational students' college experience can affect their career adaptability. These results provide empirical evidence for vocational colleges to improve the career guidance of college students, and intervention measures are proposed to enhance students' career adaptability during school years, thus promoting their sustainable development. Introduction Because of the highly complex environment caused by informatization and globalization, today's career development is highly unpredictable and uncertain (Bright and Pryor, 2005), leading to uncertainty in future career development for college students (Lechner et al., 2016). Regarding career development, a college student cannot guarantee that they will find a job that matches their major after graduation, and even if they can find such a job, they will not necessarily have it for life. Faced with this changing social environment, it is difficult to succeed in the workplace without strong adaptability. As the founder of career construction theory, Savickas proposed that its core element is career adaptability (CA), which refers to individuals' readiness and resources to deal with current and upcoming career development tasks, career transformation, and career-related trauma (Savickas and Porfeli, 2012; Savickas, 2020). As an important concept in the career education of college students, CA is a physical and mental quality that students should have (Li et al., 2021) and is closely related to their future career development (Jiang, 2016; Rudolph et al., 2017). CA can not only increase the opportunities for individuals to find suitable jobs, but also help them to adapt successfully to all aspects of the workplace (Su et al., 2016). Overall, CA is considered to be an important prerequisite for career success (Koen et al., 2012), and can help college students to transition successfully into professionals (Maggiori et al., 2017).
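The abstract above mentions that the PROCESS macro was used for the mediating effect analysis. For illustration only, the following is a minimal Python sketch of the same idea - the indirect effect a·b with a percentile bootstrap confidence interval; it is not the authors' analysis, and the variable names (PP, CE, CA) are hypothetical column labels.

```python
import numpy as np
import statsmodels.api as sm

def bootstrap_indirect_effect(df, x, m, y, n_boot=5000, seed=0):
    """Simple mediation: indirect effect = a*b, where a is the X->M path and
    b is the M->Y path controlling for X; CI from a percentile bootstrap."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        s = df.sample(len(df), replace=True,
                      random_state=int(rng.integers(1_000_000_000)))
        a = sm.OLS(s[m], sm.add_constant(s[[x]])).fit().params[x]
        b = sm.OLS(s[y], sm.add_constant(s[[x, m]])).fit().params[m]
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return np.mean(estimates), (lo, hi)

# Hypothetical usage with proactive personality (PP), college experience (CE)
# and career adaptability (CA) scale scores:
# effect, ci = bootstrap_indirect_effect(data, x="PP", m="CE", y="CA")
```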
With the advent of boundaryless careers, dynamic and continuous environmental changes have led to a more complex career environment, and researchers have also paid increasing attention to proactive personality (PP). Individuals in boundaryless careers should be more active in both career management and continuous lifelong learning (Jackson, 1996). Bateman and Crant (1993) proposed PP as a stable tendency and trend whereby individuals act to influence their external environment; i.e., individuals with PP are not easily affected by situational resistance but rather act actively to change their external environment. Some scholars have studied the relationship between PP and CA. Brown et al. (2006) found that individuals with PP are not only more successful in their career but also more able to adapt to their external environment. Li et al. (2013) showed that the higher the level of PP, the more actively individuals pay attention to their career development and the more they explore and attempt, and thus develop higher CA. McArdle et al. (2007) found that PP was significantly positively correlated with CA, and Cai et al. (2015) showed that self-esteem and PP positively predicted CA. Individuals with PP are more successful in developing their own CA resources than are individuals with inactive personality (Tolentino et al., 2014). In the social cognitive career theory (SCCT) model of personal, environmental, and empirical factors that affect career-related choice behavior, learning experience is an important intermediate variable, one that plays a mediating role between personal and environmental variables and self-efficacy expectations and outcome expectations (Lent et al., 2013). Although the learning experience in SCCT differs in concept and perspective from the college experience (CE) in this study, it still has some similarities. In SCCT, learning experience is generally regarded by researchers as an important variable in individual career development. Although CE is only a part of learning experience, it is CE that colleges can pay attention to and intervene in. Teenagers are at an important stage of career exploration (Jiang et al., 2019), which helps individuals to clarify their future career development goals and make more-proactive career adaptation behaviors (Kaminsky and Behrend, 2015). Empirical research also indicates that career exploration positively predicts college students' CA (Guan et al., 2015; Yang et al., 2021). Hu et al. (2021) noted that the college stage is the key period for students' professional learning and the cultivation and development of various qualities and abilities, as well as being an important stage for career exploration. Hirschi (2009) showed that teenagers improve their CA through career exploration during school. Some research results have shown that there are significant differences in CA among groups of college students with different extents of participation in social practice activities: the more that college students participate in social practice activities, the higher their CA (Zhao, 2015).
The above literature review shows that CA and PP are attracting increasing research attention. However, although previous studies have shown that CE has a significant impact on career development (Akos et al., 2021), there has been little research to date on the relationship between CE and CA, especially the impact of specific dimensions of CE (e.g., professional learning experience, employment practice experience, and project learning experience) on CA. In particular, there is still a need for in-depth research, from theoretical models to CE scales to empirical verification, on the relationship among PP, CE, and career development abilities (such as CA). The study reported herein is aimed at higher vocational students, who differ significantly from ordinary undergraduate students in that most vocational students begin working in society as soon as they graduate and have shorter school hours compared with ordinary undergraduates. They are relatively lacking in active self-planning consciousness, and most lack long-term goals and corresponding learning and development plans. Therefore, it is even more important to guide higher vocational college students to set goals, take positive actions, and use their limited school time and the resources around them efficiently in order to enhance their CA. From the perspective of SCCT, this study explores further the mediating role of higher vocational students' CE in the relationship between their PP and CA by establishing theoretical models and conducting empirical verification, based on exploring the impact of their PP on CA. In this study, because there is currently no readily available CE scale for vocational college students, the existing CE scale is revised and validated to support the present research. Based on these research results, this paper proposes intervention measures for vocational colleges on vocational students during their school years, thus better leveraging the role of CE in their career development and enhancing their CA. Theoretical framework Developed based on Bandura's social cognitive theory, SCCT (Lent, 2013; Lent and Brown, 2019) is now applied mainly in the field of career development, being used to complement the basic theoretical methods of career development and to establish links among them. SCCT inherits the success of existing career theories, combines their highly similar elements as much as possible, and takes social cognitive theory as its main idea to integrate other relevant research results. SCCT combines personal characteristics, social background, and learning experience by virtue of submodels such as career interest, career choice, and job performance, and it discusses the process of career choice, adaptation, and development. It emphasizes the two-way and complex interaction among individuals, behaviors, and environment, and it points out that an individual's unique learning experience plays an important role in determining and planning their career development path. SCCT is a relatively open theory, reasoning that the process from the formation of individual internal learning experience to the choice of careers is affected by various factors, which also provides various possibilities for effective intervention of career education in higher vocational colleges (Liu and Yao, 2015). Based on SCCT (see Figure 1 for the specific theoretical model) (Lent and Brown, 2019), this study mainly explores the mediating role of CE in the relationship between PP and CA. Research hypotheses Tolentino et al.
(2014) showed that individuals with PP are not only more successful in their career but also more able to adapt to their environment. Some studies have found that college students' PP can positively predict their CA level (Hou et al., 2014). Ling et al. (2022) showed that teenagers' PP has a positive predictive effect on the CA of Chinese adolescents, i.e., their PP affects their future work self-significance and then affects their future time perspective and CA. Zhao et al. (2022) showed that PP has a positive impact on CA. According to these findings, Hypothesis 1 is proposed as follows. Hypothesis 1: There is a significant positive correlation between the proactive personality and career adaptability of vocational college students. Brown et al. (2006) showed that PP has a positive impact on career success, and individuals with strong CA are more likely to succeed in their career. Hirschi (2009) showed that the perceived social support of adolescents can enhance their CA during school. Hu et al. (2021) showed that college students with more PP can perceive more emotional support and work harder, thus improving their professional identity and CA. Based on these findings, Hypothesis 2 is proposed as follows. Hypothesis 2: Higher vocational students' college experience plays a mediating role in the effect of proactive personality on career adaptability. Based on Hypotheses 1 and 2, the proposed hypothetical model is shown in Figure 2. Regarding college students in mainland China, Zhao (2015) showed that their CA differed significantly with gender and grade but not with place of origin. Fu et al. (2022) showed that there are significant individual differences in the initial level of college students' CA. Regarding high school students in Hong Kong, Leung et al. (2022) found that their CA differed significantly with gender and grade. As a further test of those dissimilarities in higher vocational students, Hypothesis 3 is proposed as follows. Hypothesis 3: Career adaptability differs significantly with gender and grade. Restubog et al. (2010) showed that the support of classmates and parents and the amount of career counseling are related to career self-efficacy and career decision-making. Zhao (2015) showed that in terms of the extent to which students participate in social practice activities, there are very significant differences in CA and its various dimensions among college students: the more that students participate in social practice activities, the higher their CA. College students are about to face the transition from school to work, which is also their first career transition. Active career exploration is an extremely important coping behavior that can help individuals to obtain relevant career information, accumulate more adaptive psychological resources, and finally achieve their career goals (Sonnentag et al., 2017). Other studies have found that training college students in vocational interests and skills can significantly promote individual career exploration (Owens et al., 2016). In view of this, Hypothesis 4 is proposed as follows. Hypothesis 4: College experience differs significantly with grade.
2 Materials and methods Participants and procedures An a priori power analysis was performed for sample size estimation using G*Power 3.1. A sample size of 305 participants was needed to achieve a power of 0.95 to detect a medium effect size of f = 0.25 at an α level of 0.05. A convenience sampling method was used to select 570 vocational students from three ordinary colleges in Hangzhou and Shaoxing, which are convenient for research and are representative in Zhejiang Province. We had obtained permission from the participants before they completed the questionnaire during class. SPSS 26.0 and Amos 24.0 were used for correlation analysis, and the PROCESS macro (Hayes and Scharkow, 2013; Hayes, 2018) was used for mediating effect analysis. We applied the criteria proposed by Hu and Bentler (1999) to evaluate the model fit: a combination of CFI > 0.90, TLI > 0.90, RMSEA < 0.08, and SRMR < 0.08 indicates a good model fit. Selection of career adaptability and proactive personality scales We used the Career Adapt-Abilities Scale-China Form (CAAS-China) revised by Hou et al. (2012). This scale involves 24 items, each of which describes an ability from "not strong" to "very strong" and is scored on a five-point Likert scale; the higher the score, the stronger the degree of development of the ability. The items cover four dimensions: (i) career concern, (ii) career control, (iii) career curiosity, and (iv) career confidence. Each dimension involves six questions scored on a five-point Likert scale; the higher the score, the stronger the CA. After testing for internal consistency, the overall value of Cronbach's alpha is 0.89, and the individual values for the four dimensions are between 0.64 and 0.79, indicating good reliability. We also used the Proactive Personality Scale (PPS) developed by Bateman and Crant (1993) and revised by Shang and Gan (2009). This scale involves 11 items, each of which is scored on a seven-point Likert scale; 1 represents "strongly disagree" and 7 represents "strongly agree," and the higher the score, the higher the PP. The retest reliability of the original scale is 0.72, and the internal consistency reliability is between 0.87 and 0.89. The internal consistency of the revised scale is 0.87. Selection and revision of college experience scale As for the CE scale, there is no ready-made CE scale for higher vocational students, so we referred to the CE scale compiled by Jin et al.
(2013) and revised it into one suitable for higher vocational students. First, we interviewed 10 vocational students (including graduates and students in school) and 10 vocational college teachers engaged in student employment, and we used the first stage of the questionnaire to solicit their opinions on the language expression of the questionnaire. After comprehensive consideration, the original question bank of the scale was formed, which we divided into five dimensions. Second, to improve the content validity of the items of the scale, we invited experts in psychology to discuss together, interpret each item, and delete items with duplicate content. Third, based on the actual CE of higher vocational students during the school period, we added items such as "obtaining vocational qualification certificates related to your current major during the school period," "participating in practical training (short for practical training of vocational skills)," and "participating in the employment practice (including post placement practice) provided by the school," among others. Finally, through comprehensive interviews and previous studies, we determined 35 items, which are divided into five dimensions of the typical CE of vocational college students, including professional learning experience, social work experience, employment practice experience, social internship experience, and project learning experience. The specific explanations for each dimension are as follows. 1) Professional learning experience: refers to academic performance, performance of acquiring scholarship, and participation in disciplinary competitions such as vocational skills competitions and career planning competitions. 2) Social work experience: refers to experience mainly involving participation in student organizations and clubs, including experience in club activities, student cadres, class assistance, and volunteering. 3) Employment practice experience: refers to participation in employment practice organized by the school. 4) Social internship experience: refers to individual participation in paid social internships and part-time work. 5) Project learning experience: refers to involvement in employment and entrepreneurship platforms, projects, or activities organized or provided by the school.
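Internal-consistency coefficients like those quoted above for the CAAS-China and the PPS (and reported for the CE scale in the next subsection) are straightforward to compute. The sketch below is illustrative only: the authors report using SPSS, and the simulated responses here are hypothetical stand-ins for real item data, so the resulting coefficients carry no substantive meaning.

# Minimal sketch: Cronbach's alpha for a Likert-type scale and one subscale
# via the psych package; replace the simulated `caas` with real item data.
library(psych)

set.seed(1)
caas <- as.data.frame(matrix(sample(1:5, 476 * 24, replace = TRUE), ncol = 24))
names(caas) <- paste0("caas_", 1:24)

alpha_total   <- psych::alpha(caas)           # full 24-item scale
alpha_concern <- psych::alpha(caas[, 1:6])    # e.g. a six-item subscale
alpha_total$total$raw_alpha
alpha_concern$total$raw_alpha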
2.2.2.2 Reliability and validity of college experience scale A total of 243 students from three vocational colleges were selected for the preliminary test. We conducted confirmatory factor analysis (CFA) on the CE scale. Because the scale was revised based on the relevant theories and studies mentioned above and has a theoretical structure, we conducted CFA directly on the scale with the Amos software (ver. 24.0) to analyze whether the items used conform to the theoretical construction. The fitting index demonstrated by this model (CFI = 0.88, TLI = 0.87, RMSEA = 0.08, SRMR = 0.06) does not meet the criteria proposed by Hu and Bentler (1999). To improve the model fit, we addressed the modification output of Amos by allowing a correlation between the residuals of item 1 (after hard study, all course grades ranked in the top 10 of the class) and item 2 [excellent in professional courses with a score of 85 points or above (maximum score is 100 points)], a correlation between the residuals of item 1 (after hard study, all course grades ranked in the top 10 of the class) and item 3 [excellent in elective courses with a score of 85 points or above (maximum score is 100 points)], between item 4 (obtaining comprehensive scholarships or scholarships during school) and item 5 (obtaining academic excellence scholarship during school), between item 8 (obtaining vocational qualification certificates related to your current major during school) and item 9 (vocational qualification certificates unrelated to your current major but related to future employment obtained during school), as well as between items 21 (participation in summer social practice activities) and 22 (participation in social research activities). These correlations are reasonable, as items 1 and 2, as well as items 1 and 3, all involve course grades during school; items 4 and 5 both involve scholarships during school years; items 8 and 9 both involve professional qualification certificates; items 21 and 22 both involve social practice activities. According to Byrne et al. (1989), it is frequently necessary to account for correlated errors to achieve a good model fit in structural equation modeling. Such adjustments are reasonable because they commonly reflect nonrandom measurement error stemming from factors like item format similarity within a subscale. The results are given in Table 1. The model demonstrates improved and acceptable model fit indices. The analysis results show that the model's fit is good, indicating that the structural validity of the scale is also good. Finally, we conducted reliability analysis on the CE scale. The scale involves 5 dimensions and a total of 35 items, and each item is scored on a five-point Likert scale; the higher the score, the richer the CE. As given in Table 2, the value of Cronbach's alpha for the scale is 0.97, indicating that the overall consistency reliability is relatively high. The values of Cronbach's alpha for the various factors are 0.91, 0.88, 0.86, 0.86, and 0.96, respectively. After 2 weeks, 88 people were retested, and 52 valid copies of the questionnaire were received. The reliability analysis results show that the retest correlation coefficients of the five dimensions are 0.56, 0.70, 0.29, 0.63, and 0.61, respectively (see Table 2 for details), indicating that the retest reliability of the questionnaire is good.
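For readers who prefer an open-source route, the five-factor CFA with correlated residuals described above can be expressed in lavaan rather than Amos. This is a sketch under stated assumptions: the assignment of items to factors and the simulated responses are hypothetical placeholders for the real 35-item data, while the five residual correlations mirror the item pairs named in the text.

# Illustrative lavaan CFA sketch (not the authors' Amos model)
library(lavaan)

set.seed(1)
n <- 243
block <- function(k) {                 # k items driven by one shared factor
  f <- rnorm(n)
  sapply(seq_len(k), function(i) as.numeric(cut(0.7 * f + rnorm(n), 5)))
}
ce_data <- as.data.frame(cbind(block(9), block(8), block(6), block(6), block(6)))
names(ce_data) <- paste0("item", 1:35)

ce_model <- '
  professional =~ item1 + item2 + item3 + item4 + item5 + item6 + item7 + item8 + item9
  social_work  =~ item10 + item11 + item12 + item13 + item14 + item15 + item16 + item17
  employment   =~ item18 + item19 + item20 + item21 + item22 + item23
  internship   =~ item24 + item25 + item26 + item27 + item28 + item29
  project      =~ item30 + item31 + item32 + item33 + item34 + item35
  # correlated residuals mirroring the modifications described in the text
  item1  ~~ item2
  item1  ~~ item3
  item4  ~~ item5
  item8  ~~ item9
  item21 ~~ item22
'

fit <- cfa(ce_model, data = ce_data)
fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))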
Career adaptability The CA of higher vocational students and their scores in various dimensions are given in Table 3. As can be seen, the CA of the surveyed higher vocational students was generally at the upper middle level, with the highest score in the dimension of "career control" and the lowest in "career concern." Independent-samples t-tests and analysis of variance (ANOVA) showed that the total score of CA did not differ significantly with gender, grade, and place of origin. Proactive personality Analyzing the data for the PP of the surveyed higher vocational students showed that their PP status was generally in the upper middle level (M = 5.40, SD = 1.06), and independent-samples t-tests and ANOVA showed that their PP did not differ significantly with gender, grade, and place of origin. College experience The CE scores of the surveyed higher vocational students are given in Table 4. As can be seen, their CE status was generally above the middle level, with the highest score in the dimension of "professional learning experience" and the lowest score in the dimension of "social work experience." Independent-samples t-tests and ANOVA showed that their CE did not differ significantly with gender and place of origin but did differ significantly with grade. The LSD analysis of various dimensions showed that there were significant differences in professional learning between freshmen and sophomores (p < 0.01) as well as between freshmen and juniors (p < 0.01), in social work between freshmen and sophomores (p < 0.01) as well as between freshmen and juniors (p < 0.05), and in project learning between freshmen and sophomores (p < 0.01) as well as between freshmen and juniors (p < 0.01), while there were no significant differences in these dimensions between sophomores and juniors; however, employment practice differed significantly between any two of freshmen, sophomores and juniors (p < 0.01, p < 0.05), and social internship differed significantly between any two of freshmen, sophomores and juniors (p < 0.01). As shown in Table 5, gender differences in social work, employment practice, and project learning were significant, with males scoring higher than females. Relationships among proactive personality, college experience, and career adaptability To study further the relationships among the PP, CE, and CA of vocational college students, correlation analysis was conducted for each variable (see Table 6 for details). The results show that the CA of vocational college students is significantly positively correlated with their PP, which is as hypothesized and consistent with previous research results; furthermore, PP, CA, and CE are all significantly positively correlated. Partial mediating role of college experience We explored the mediating effect of CE in the relationship between PP and CA (the analysis results are shown in Tables 7 and 8). The test was conducted controlling for gender, grade, and place of origin, with PP as the independent variable, CA as the dependent variable, and CE as the mediating variable. The result showed that the indirect effect of CE [0.03, 95% CI = (0.02, 0.05)] explains 9.60% of the total effect [0.30, 95% CI = (0.26, 0.35)]. This indicates that CE plays a mediating role between PP and CA, which supports Hypothesis 2 (see Figure 3).
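The bootstrap test of the indirect effect reported above was run with the SPSS PROCESS macro; a roughly equivalent specification can be sketched in lavaan. All variable names and the simulated data below are hypothetical placeholders standing in for the study variables and controls, not the authors' actual code.

# Sketch of a simple mediation (indirect effect) test with bootstrap CIs
library(lavaan)

set.seed(1)
n <- 476
pp <- rnorm(n)                                  # proactive personality
ce <- 0.30 * pp + rnorm(n)                      # college experience
ca <- 0.30 * ce + 0.25 * pp + rnorm(n)          # career adaptability
study_data <- data.frame(pp, ce, ca,
                         gender = rbinom(n, 1, 0.5),
                         grade  = sample(1:3, n, replace = TRUE),
                         origin = rbinom(n, 1, 0.5))

med_model <- '
  ce ~ a*pp + gender + grade + origin            # path a (plus controls)
  ca ~ b*ce + cp*pp + gender + grade + origin    # paths b and c-prime
  indirect := a*b                                # mediated effect
  total    := cp + a*b                           # total effect
'

fit <- sem(med_model, data = study_data,
           se = "bootstrap", bootstrap = 1000)   # the paper reports a
                                                 # bias-corrected percentile bootstrap
pe <- parameterEstimates(fit, boot.ci.type = "bca.simple")
subset(pe, label %in% c("a", "b", "cp", "indirect", "total"))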
Findings Main findings of this study are summarized as follows. First, the CA and PP of the vocational college students were generally at the upper middle level. The research results showed that, as hypothesized, their PP and CA had a significant positive correlation. Second, in terms of CA, "career control" scored the highest, followed by "career curiosity," "career confidence," and "career concern." CA did not differ significantly with gender, grade, or place of origin, which is not as hypothesized. Third, the research results showed that, as hypothesized, CE plays a partial mediating role between the PP and CA of vocational college students. Fourth, it was found that the CE of vocational college students is generally at the upper middle level, with the highest score in the dimension of "professional learning experience" and the lowest score in the dimension of "social work experience." CE does not differ significantly with gender or place of origin. The effect of grade is reflected mainly in the differences between freshmen and sophomores/juniors, and the differences between sophomores and juniors are not significant, which is not completely as hypothesized. College experience and its dimensions and demographic variables This study showed that the level of the CE of the surveyed vocational college students was generally above the middle level, with the highest score in the dimension of "professional learning experience," followed by "social internship experience," "employment practice experience," "project learning experience," and "social work experience." The low score of social work experience is due to the limited opportunities and platforms for students to participate in social work, as well as due to the lack of relevant systematic guidance. Regarding demographic variables, CE did not differ significantly with gender and place of origin. In terms of grade, the differences between freshmen and sophomores and between freshmen and juniors were significant, while the differences between sophomores and juniors were not, which is not entirely consistent with our hypothesis. When students enter the college as freshmen, they have just come into contact with college studies. Therefore, CE differs significantly with grade. On the other hand, some majors in higher vocational education are two-year ones, or students face graduation internships and other related matters in their junior year and the related activities in school are mainly completed before their sophomore year, resulting in an insignificant difference between sophomores and juniors.
Mediating role of college experience We found that the mediating effect was significant, but the mediating effect value was low. We believe that, first, CE is likely to rely mainly on opportunities provided by higher vocational colleges. For example, people with high PP may have very little CE, not because they do not want to have these experiences, but because they do not have the opportunity to have these experiences. This also requires higher vocational colleges to provide more platforms and opportunities for vocational college students to enhance their CA through their CE. Second, we believe that PP is not the only influencing factor for CE, which may also be influenced by other factors such as career social support, but the results of this study at least partially explain the effect. Implications The theoretical significance of this study lies in revealing the mediating role of CE in the influence of PP on CA, and deepening the mechanism by which PP influences CA. This study innovatively used SCCT to explore the mediating role of CE in the relationship between PP and CA among vocational college students. The theoretical models were established and empirical verification was conducted, confirming that higher vocational students' CE can affect their CA. In practice, our findings provide an effective reference for career education in higher vocational colleges, and we make the following suggestions. First, the research results show that there is room for vocational college students to improve their CA. The score of the "career concern" dimension is the lowest, which indicates that vocational college students are relatively lacking in the ability of independent learning and the awareness of independent self-planning, and most of them have neither goals nor corresponding academic planning and development planning. On one hand, this requires higher vocational colleges to integrate the content of CA into their career education to inspire students to establish life design thinking and improve their CA. On the other hand, this requires higher vocational colleges to establish a three-level progressive-goal education model of "goal setting-goal implementation-goal assessment and evaluation" to guide students to independently plan their studies and establish a scientific and flexible goal management mechanism. Furthermore, we also need to build an integrated career education system for primary school, junior high school, senior high school and college. Second, the results of this study show that higher vocational students lack social work experience. Therefore, vocational colleges should deepen the collaborative education of "college, government, administration, and enterprise" and improve the practicality of career guidance. For instance, vocational colleges could implement career education models such as "one-on-one guidance of career mentors" and "career mentors entering the classroom." On one hand, vocational colleges should encourage teachers to connect with industry enterprises, make full use of school and government resources, introduce real enterprise projects into the school, lead student teams to participate in enterprise projects, and promote students' growth in the process of "learning to do, learning by doing."
On the other hand, vocational colleges should arrange practical training and participation in enterprise projects and other activities in a planned, step-by-step, hierarchical, and classified way. Finally, the research results show that vocational college students have differences in their CE and its various dimensions, indicating that vocational colleges need to strengthen personalized career guidance. Higher vocational colleges should not only strive to balance the needs of enterprises and students and change the main position of teaching from public compulsory courses to public elective courses, but also set up group counseling courses with the themes of "improving CA," "improving self-efficacy," and "improving job skills" that students focus on. In addition, vocational colleges should integrate the resources of "schools, governments, industries, and enterprises," create career group counseling courses and individual counseling platforms, and work together to pay attention to and effectively help the groups with employment difficulties. Limitations and future research directions This study had the following limitations. First, social desirability may have affected the subjects' reports. They were asked to report their awards, professional achievements, etc., and their reports were easily disturbed by social expectations. Therefore, in data investigation, we should try to reduce this impact, for example by repeatedly emphasizing anonymity and academic significance and ensuring that their personal information will not be disclosed. Second, this study was a correlational study and so could not fully explain the causal relationship between variables. In the future, a longitudinal study design could be conducted to clarify the causal relationship between variables. Our future research directions include the following. First, we will expand the representativeness of the sample. In this study, the vocational college students were all from Zhejiang Province, and their homogeneity was relatively strong. In the future, we will expand the sample to other provinces to enhance the heterogeneity. Second, we will establish appropriate intervention programs to enhance the CA of vocational college students. Third, we can further optimize the dimensions of the CE scale, for instance adding dimensions such as club activity experience based on the original 5 dimensions. Conclusion This study explores the role of vocational college students' CE in the relationship between their PP and CA from the perspective of social cognitive career theory. The results indicate that CE plays a mediating role between PP and CA. Based on these research results, this paper proposes intervention measures for vocational colleges on vocational students during their school years, thus better leveraging the role of CE in their career development and enhancing their CA. FIGURE 1 Theoretical model based on social cognitive career theory (SCCT). Solid lines correspond to direct relations between variables, and dashed lines correspond to moderator effects (where a given variable strengthens or weakens the relation between two other variables). Copyright 1993 by R.W. Lent, S.D. Brown, and G. Hackett. Reprinted by permission. TABLE 1 Fitting indices of confirmatory factor analysis of CE scale. TABLE 2 Reliability analysis of CE scale. TABLE 3 Descriptive statistical results for CA.
TABLE 7 Test of mediating model of CE (N = 476). *p < 0.05; ***p < 0.001. Using proactive personality as the independent variable, college experience as the mediating variable, career adaptability as the dependent variable, and gender, grade, and place of origin as control variables, SPSS with the PROCESS macro was used to test the mediating effect, and the mediating effect was significant. TABLE 8 Decomposition table of total effect, direct effect, and mediating effect. The boot standard error, boot CI lower limit, and boot CI upper limit refer to the standard error, 95% confidence interval lower limit, and upper limit of the indirect effects estimated using the bias-corrected percentile bootstrap method, respectively; all values are rounded to 2 decimal digits.
v3-fos-license
2022-04-30T06:24:41.142Z
2022-04-28T00:00:00.000
248431952
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "e15c9d9456aa68527ea621fac3755d0c48d66dff", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2879", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "sha1": "b89d0e701e165b1c9a74cc8a4ab29bfd4f4659c3", "year": 2022 }
pes2o/s2orc
Chronic inflammatory arthritis drives systemic changes in circadian energy metabolism Significance Rheumatoid arthritis (RA) is a debilitating chronic inflammatory disease in which symptoms exhibit a strong time-of-day rhythmicity. RA is commonly associated with metabolic disturbance and increased incidence of diabetes and cardiovascular disease, yet the mechanisms underlying this metabolic dysregulation remain unclear. Here, we demonstrate that rhythmic inflammation drives reorganization of metabolic programs in distal liver and muscle tissues. Chronic inflammation leads to mitochondrial dysfunction and dysregulation of fatty acid metabolism, including accumulation of inflammation-associated ceramide species in a time-of-day–dependent manner. These findings reveal multiple points for therapeutic intervention centered on the circadian clock, metabolic dysregulation, and inflammatory signaling. joint) (1). Tissue collection occurred at ZT0, 4, 8, 12, 16 or 20 on day 7 after the development of disease symptoms in CIA mice, or at matched time points in naive control mice. Terminal blood was collected in MiniCollect K3EDTA tubes (Greiner Bio-One) and stored on ice prior to centrifugation at 3000 x g for 10 minutes at 4ºC. Plasma was transferred to a cryovial and flash frozen in liquid nitrogen. Tissue was transferred to cryovials and flash frozen in liquid nitrogen immediately after dissection. Lipopolysaccharide (LPS) challenge Female C57BL6 Kmt2c flox mice (aged 12-22 weeks) were housed in light controlled cages and exposed to 12h:12h L:D cycles for three weeks prior to the experiment. In vivo phenotyping Body composition was measured using an EchoMRI Body Composition Analyzer E26-258-MT machine (Echo Medical Systems). Activity and body temperature measurements were made following surgical implantation of radio telemetry devices (TA-F10, Data Sciences International) i.p. 7 to 10 days after the first CIA immunisation. Mice recovered for at least 4 days before being singly housed for telemetry recording. Measurements were recorded from day 14 after initial CIA immunisation for an asymptomatic period of up to one week, until booster immunisation on day 21. Recording was then resumed upon the development of symptoms. Food intake was monitored from day 14 by weighing remaining food pellets of singly housed mice at the start and end of each light phase. Blood samples for measurement of fasting insulin and glucose were collected eight days after the development of CIA symptoms. Food was withdrawn from the mice at ZT0. Blood was collected following removal of the tail tip at ZT8. Glucose concentration was measured immediately using an Aviva Accu-Chek meter (Roche). Remaining blood was centrifuged at 3000 x g for 10 minutes at 4ºC then stored at -80ºC for later analysis. Insulin level was measured by ELISA (Merck Millipore, EZRMI-13K) according to the manufacturer's instructions. Histology Paws for histological analysis were skinned, fixed overnight in formalin and then decalcified by incubation in Osteosoft Mild Decalcifier solution (VWR) for two weeks. Tissue was then paraffin embedded, sectioned and stained with either H&E (for cellular structures) or Safranin O (for cartilage) according to standard protocols. Slides were imaged by the University of Manchester Bioimaging Core Facility using a Pannoramic 250 Flash slide scanner (3DHistech) with a 20x/0.80 Plan Apochromat objective (Zeiss).
Cytokine analysis For corticosterone measurements, serial tail blood samples were obtained over 48 hours. Blood was immediately centrifuged at 3000 x g for 10 minutes at 4ºC, then plasma was diluted 100-fold in PBS and frozen for later analysis using the Corticosterone ELISA kit (Abcam, ab108821). Terminal plasma samples were analysed using the BioPlex Pro Mouse Chemokine 33-plex panel (BioRad, catalogue reference 12002231). Plasma was diluted 1 in 4 in standard diluent buffer before mixing with assay beads. Samples were analysed on a BioPlex 200 machine. IL6 level was measured using the mouse IL6 ELISA kit (Abcam, ab100712). Plasma was diluted 1 in 4 in dilution buffer prior to application to the ELISA plate. Absorbance was measured using a GloMax Multi Detection System plate reader (Promega). RNA extraction Joint tissue was ground with liquid nitrogen using a pestle and mortar, then transferred to a Lysing Matrix D tube containing Trizol. Tissue was homogenised using a BeadMill homogeniser (3 x 4 m/s for 40s). RNA was extracted using chloroform then precipitated using isopropanol. After washing with 75% ethanol, the RNA pellet was resuspended in RNase-free water. RNA was purified from liver samples using the SV Total RNA kit (Promega), according to the manufacturer's instructions. Tissue was homogenised using Lysing Matrix D tubes loaded into a BeadMill homogeniser (4 m/s for 20s). RNA was eluted in RNase-free water. RNA was purified from muscle samples using the ReliaPrep Tissue kit (Promega), following the manufacturer's instructions for the purification of RNA from fibrous tissue. Muscle tissue was homogenised using the same protocol as for joint, and eluted in RNase-free water. RNA-seq Sequencing library preparation and sequencing was performed by the University of Manchester Genomic Technologies Core Facility. Sample quality was determined using a 2200 TapeStation (Agilent Technologies). Libraries were generated using the TruSeq Stranded mRNA assay (Illumina, Inc.) according to the manufacturer's protocol. The multiplexed libraries were analysed by paired-end sequencing on a HiSeq 4000 instrument (76 + 76 cycles, plus indices), then demultiplexed and converted using bcl2fastq software (v2.17.1.14, Illumina). Data analysis Differential expression analysis was run in R (2) using edgeR (v3.30.3). Genes were considered to be differentially expressed (DE) if the false discovery rate (FDR) was less than 1 x 10^-20 (for joint) or less than 0.001 (for muscle and liver). Differential rhythmicity analysis was performed using the compareRhythms R package (v0.99.0, (3)). A model selection approach was used, with genes being assigned to either arrhythmic, gain of rhythm, loss of rhythm, same rhythm in both, or a change in rhythm. A probability of being in a category of at least 0.6 was required for assignment. To avoid losing genes that were clearly differentially rhythmic, an extra category was used where the probability of either being a gain, loss or change in rhythm was greater than 0.6. Additional rhythmicity analysis was run using the JTK-cycle functionality of MetaCycle (v1.2.0), with period length fixed to 24 hours. Genes were considered to oscillate in naïve and/or CIA if the JTK-cycle adjP < 0.05 for one or both conditions. For comparison of JTK-cycle and compareRhythms analysis of genes classified as losing or gaining rhythmicity with CIA, raw counts were normalised by subtracting the mean of each treatment (naïve or CIA), and dividing by the standard deviation across both treatments.
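As a rough illustration of the two analysis strands just described (edgeR differential expression and fixed-period JTK-cycle rhythm detection via MetaCycle), the following R sketch shows one plausible way to wire them together. The simulated count matrix, group labels and Zeitgeber times are hypothetical stand-ins for the processed RNA-seq data; the thresholds follow the text, but this is not the authors' pipeline.

library(edgeR)
library(MetaCycle)

set.seed(1)
zt_one <- rep(seq(0, 20, by = 4), each = 5)              # 6 time points x 5 mice
zt     <- c(zt_one, zt_one)                              # naive then CIA
group  <- factor(rep(c("naive", "CIA"), each = length(zt_one)),
                 levels = c("naive", "CIA"))
counts <- matrix(rnbinom(2000 * length(zt), mu = 50, size = 10), nrow = 2000,
                 dimnames = list(paste0("gene", 1:2000),
                                 paste0("s", seq_along(zt))))

## edgeR: naive vs CIA differential expression
y      <- DGEList(counts = counts, group = group)
y      <- calcNormFactors(y)
design <- model.matrix(~ group)
y      <- estimateDisp(y, design)
qlf    <- glmQLFTest(glmQLFit(y, design), coef = 2)
tab    <- topTags(qlf, n = Inf)$table
sum(tab$FDR < 0.001)                                     # DE threshold used for liver/muscle

## MetaCycle / JTK_CYCLE: rhythm detection in one condition, period fixed at 24 h
## (in practice normalised expression, not raw counts, would be supplied)
naive_df <- data.frame(geneID = rownames(counts), counts[, group == "naive"])
write.csv(naive_df, "naive_expr.csv", row.names = FALSE)
dir.create("metacycle_out", showWarnings = FALSE)
meta2d(infile = "naive_expr.csv", filestyle = "csv", outdir = "metacycle_out",
       timepoints = zt_one, cycMethod = "JTK", minper = 24, maxper = 24)
# JTK results, including BH-adjusted p-values, are written to metacycle_out/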
Acrophase was calculated for each gene by fitting a sine wave (period constrained to 24 hours) to the normalised counts from the rhythmic group, and genes were aligned to this acrophase. Pathway analysis of the gene lists defined above used the Enrichr web tool (4,5) to detect significantly enriched pathways within the WikiPathways Mouse database (Supplementary Table S2). Phosphoenrichment Phosphoenrichment was done on an Agilent Bravo AssayMAP robot using Fe(III)-NTA cartridges (7) with slight adaptations. Cartridges were primed in ACN with 0.1% TFA and equilibrated with 80% ACN in 0.1% TFA. Peptides were loaded onto the cartridges followed by a wash with 80% ACN 0.1% TFA. Phosphopeptides were eluted with 1% NH3 and dried down in a vacuum centrifuge. Mass spectrometry Peptides were injected into a liquid chromatography-mass spectrometry (LC-MS) system comprised of a Dionex Ultimate 3000 nano LC and a Thermo Fusion Lumos. Peptides were separated on a 50-cm-long EasySpray column (ES803; Thermo Fisher) with a 75-µm inner diameter and a 60 minute gradient of 2 to 35% acetonitrile in 0.1% formic acid and 5% DMSO at a flow rate of 250 nL/min. Data was acquired with the APD peak picking algorithm at a resolution of 120,000 and AGC target of 4e5 ions for a maximum injection time of 50 ms for MS1 spectra. The most abundant peaks were fragmented after isolation with a mass window of 1.6 Th with normalized collision energy 28% (HCD). MS2 spectra were acquired in the ion trap in rapid scan mode for a maximum injection time of 35 ms. Phosphoproteome data analysis The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD032723 (8). RAW files were processed in MaxQuant. Identified phosphosites (phospho(STY).txt) were initially viewed and filtered using the Perseus framework. Potential contaminants and reverse peptides were removed. Phosphosites were filtered using a localisation probability > 0.75, log2 transformed and further filtered to remove missing values, where sites with fewer than 15 valid (not N/A) values in a group (either CIA or naïve) were excluded. Missing values were imputed using random numbers drawn from a normal distribution with a width of 0.3 and a down shift of 1.8. Ion intensities of identified phosphopeptides were normalized between samples using the trimmed mean of M-values (TMM) function from the edgeR (v3.30.3) R package. Differential phosphorylation analyses between groups were conducted using edgeR with a 5% FDR (9). Protein kinase analysis was performed using the kinswingR package (10), using the curated mouse kinase-substrate sequence dataset from PhosphoSitePlus (11). Phosphopeptide differential rhythmicity analysis was performed using compareRhythms (v0.99.0, (3)) in the same way as previously described for the RNA-seq data. Western blotting Mouse livers were lysed with western blot lysis buffer (NaCl 150 mM; Sigma). Metabolomic analysis Global metabolite profiling of liver, muscle and plasma samples was performed by Metabolon (Durham, NC, USA). Samples were analysed using the HD4 platform, which uses ultrahigh performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) methods for metabolite detection. Peaks were quantified by area under the curve, and normalised to set each metabolite's median value to one. Analysis by two-way ANOVA was used to identify metabolites showing altered level with treatment and/or time.
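The acrophase estimation described at the start of this data-analysis passage (a sine wave with the period constrained to 24 h fitted to normalised counts) is a standard cosinor fit, which is linear in sine and cosine terms. A minimal, self-contained sketch with hypothetical data standing in for one rhythmic gene:

cosinor_acrophase <- function(expr, zt, period = 24) {
  s   <- sin(2 * pi * zt / period)
  c_  <- cos(2 * pi * zt / period)
  fit <- lm(expr ~ s + c_)
  phase <- atan2(coef(fit)["s"], coef(fit)["c_"])        # radians at which the fit peaks
  as.numeric((phase %% (2 * pi)) * period / (2 * pi))    # convert to hours
}

set.seed(1)
zt   <- rep(seq(0, 20, by = 4), each = 5)                # ZT0-ZT20, 5 replicates
expr <- cos(2 * pi * (zt - 8) / 24) + rnorm(length(zt), sd = 0.2)
cosinor_acrophase(expr, zt)                              # recovers a peak near ZT8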
Metabolites were considered to be significantly altered at a time point if the one-way ANOVA contrast false discovery rate (q-value) was < 0.05. Metabolomic summary data and details of statistical tests are provided in Supplementary Dataset S2. Metabolite differential rhythmicity analysis was performed using CompareRhythms (v0.99.0, (3)) in the same way as previously described for the RNA-seq data. For inter- and intra-tissue metabolite correlation plots, Pearson's linear correlation was calculated for data averaged at each time point for every pair of metabolites. Using a stringent threshold (P < 0.001), every metabolite in a given family that correlated with at least one metabolite in a target family contributed to ribbon width between the two families. Segment size reflects the number of detected metabolites in each tissue. Ribbons with width <10% of family size at both ends were omitted for clarity. Data was processed in MATLAB R2019a (Mathworks, US) and plots made using the Circlize visualisation package in R (12). Statistics Statistical tests and sample numbers are specified in figure legends where appropriate. Full details of statistical tests are provided in Supplementary Dataset S1. Statistical tests were conducted in GraphPad Prism. Throughout, * denotes p<0.05, ** denotes p<0.01, *** denotes p<0.001 and **** denotes p<0.0001. Plots were produced in GraphPad Prism, using the R package ggplot2 (13), or using Matlab. SUPPLEMENTAL DISCUSSION AND STUDY LIMITATIONS Characterising rhythmic processes is complex with numerous methods available, each with strengths and caveats. Here we use a comparative analytical approach (CompareRhythms, (3)), which has an improved implementation of cosinor regression for comparing rhythmicity directly between conditions. Utilising cosinor analysis and model selection, as done within CompareRhythms, is a much-needed development in differential rhythmicity analysis. JTK-cycle (14) and similar methods such as RAIN (15) are still commonly used for this kind of comparison. However, these methods are not built for identifying differences in rhythmicity between conditions. JTK-cycle, RAIN and similar methods are built for rhythm detection based on samples from one condition. Using them for differential rhythm detection by comparing 'rhythmic' gene lists from each condition (analysed separately) leads to a high rate of false discovery (16); for example, just achieving significance in one sample, and just failing to reach significance in another, results in the incorrect inference that they are different from each other. Model selection methods such as CompareRhythms (3) and dryR (17) classify paired gene profiles into distinct categories: arrhythmic, loss of rhythm, gain of rhythm, same rhythm or changed rhythm. The gene profiles are then modelled jointly across the two conditions and probability scores for membership of each category obtained. Whilst we have predominantly used this comparative analysis approach, we have also included analyses with more traditional methods (JTK cycle) to make our study fully comparable with previous work. Both methods estimate similar numbers of genes to be rhythmic overall, and identify similar changes in rhythmicity, functional pathway enrichment and potential upstream regulators. This is robust across different expression and probability thresholds. Nevertheless, we must acknowledge that using different thresholds and/or analysis approaches will change the number of rhythmic genes detected within a tissue and given condition.
Indeed, drawing conclusions based on absolute gene numbers assigned to any given category (rhythmic, gain, loss etc.) should be avoided. Importantly, our gene ontology analyses and upstream regulator analyses were consistent across multiple methods, expression level cut-offs and probability thresholds. In this study, we analyse rhythmic changes in a complex disease model which can show considerable differences between individual mice. Therefore, variability in disease state between mice could influence the assessment of differential expression and rhythmicity between naïve and CIA mice. To mitigate this, we implemented strict criteria regarding disease severity for sample inclusion (Figure 1A), and include robust numbers of replicate samples collected over multiple independent experimental runs for each time point. Our transcriptomic and metabolomic samples were collected from four independent experimental mouse cohorts, and standardised to ensure disease severity was evenly distributed across time points and replicates. We characterised five samples at each time point to minimise the risk of false positive detection due to noise and biological variability (18). We cannot rule out that behavioural effects associated with CIA (such as reduced locomotor activity) may have contributed to transcriptional and/or metabolic differences between naïve and CIA mice. Our telemetric assessment of activity and body temperature suggests that overall levels of activity are reduced in CIA mice once they develop symptomatic disease; however, there remains a significant difference in activity between the dark and light phases. Due to the method of activity measure (radio telemetry using DSI TA-F10 remotes), we cannot determine absolute activity levels in the animals (as the activity counts generated are not directly proportional to distance travelled). Importantly, we show that physiological measures (body temperature) and peripheral entraining signals (corticosterone) remain robustly rhythmic in these animals even during symptomatic disease. Previous studies have found that exercise can entrain the mouse circadian clock (15), and can alter rhythmic physiology and clock gene expression in peripheral tissues (16), including skeletal muscle (17). In light of these findings, it is notable that we do not observe changes in rhythmicity of the core clock genes in muscle tissue with disease (Supplementary Figure S3C). We therefore consider it to be much more likely that the changes we observe in clock gene expression in the joint, the primary site of inflammation, are attributable to disease processes rather than any loss of activity-related entrainment. Venn diagram demonstrating the overlap between genes differentially expressed in CIA mice and targets of STAT3 binding, as determined by chromatin immunoprecipitation (19). Cistrome data was extracted from CistromeDB (20). STAT3 targets were defined as genes for which the cistromeDB score was greater than zero in at least one IL6-treated hepatocyte sample.
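Returning to the inter- and intra-tissue metabolite correlation ribbons described in the metabolomics methods above, a compact R sketch of that summary is given below. Here `met` (a time-averaged metabolite matrix) and `family` (a vector assigning each metabolite to a family) are hypothetical inputs, the P < 0.001 criterion follows the text, and the ribbons are drawn with circlize::chordDiagram; this illustrates the approach rather than the authors' MATLAB processing.

library(circlize)

set.seed(1)
n_met  <- 40
base   <- sin(2 * pi * seq(0, 20, by = 4) / 24)          # shared 24-h time course
met    <- sapply(seq_len(n_met),
                 function(i) base * runif(1, 0.8, 1.2) + rnorm(6, sd = 0.05))
family <- rep_len(c("lipid", "amino acid", "carbohydrate", "nucleotide"), n_met)

# Flag metabolite pairs with significant Pearson correlation (P < 0.001)
sig <- matrix(FALSE, n_met, n_met)
for (i in 1:(n_met - 1)) for (j in (i + 1):n_met) {
  p <- cor.test(met[, i], met[, j])$p.value
  sig[i, j] <- sig[j, i] <- p < 0.001
}

# For each family pair, count metabolites in family a correlated with at least
# one metabolite in family b; these counts set the ribbon widths
fams <- unique(family)
link <- matrix(0, length(fams), length(fams), dimnames = list(fams, fams))
for (a in fams) for (b in fams) {
  if (a == b) next
  link[a, b] <- sum(apply(sig[family == a, family == b, drop = FALSE], 1, any))
}

chordDiagram(link)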
v3-fos-license
2022-06-01T09:41:18.771Z
2017-10-30T00:00:00.000
149302572
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.lectitopublishing.nl/download/SYL88YK9.pdf", "pdf_hash": "23071c9ca745146bb2788086f312d980da3ef7fa", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2880", "s2fieldsofstudy": [ "Philosophy" ], "sha1": "23071c9ca745146bb2788086f312d980da3ef7fa", "year": 2017 }
pes2o/s2orc
Feminist Encounters: A Journal of Critical Studies Mood, Method and Affect: Current Shifts in Feminist Theory Epistemic habits in feminist research are constantly changing in scope and emphasis. One of the most striking ruptures that we can observe these days, at least in the humanities, is a renewed epistemic interest among feminists in the question of mood, where both positive and negative affects come into play. Mood figures in a number of theoretical traditions, ranging from the hermeneutics of Heidegger, Gadamer and Ricoeur to phenomenology, psychoanalytic theories of affect and Deleuzian affect theory. In this article I want to explore two different approaches to the question of mood in feminist theory. In the first part, I will investigate Rita Felski's treatment of mood in her recent attack on 'critique' as well as in her proposed alternative, her 'post-critical' approach to reading and interpretation. In so doing, I will formulate some questions that have emerged in my attempt to grapple with Felski's post-critical approach. In the second part of this essay, I will delve into another understanding of the concept of mood, namely Deleuzian affect, and more specifically, as it has been embraced by feminist theorists such as Rosi Braidotti and Elizabeth Grosz in their respective theoretical works. In the concluding part of this article, I will discuss some of the implications of the different takes on mood for feminist epistemic habits. rhetoric and form, affect and argument. She concludes that, as a distinct mode of interpretation, critique revels in dis-enchantment and ironic distance; it favours the archeological method, which requires that the critic 'delves deep' and 'stands back,' while mistrusting the text's surface level in order to wrest from the text that which it allegedly attempts to withhold from its readers. Hence, Felski is able to identify, through her descriptive approach, what turns out to be 'a quite stable repertoire of stories, similes, tropes, verbal gambits and rhetorical ploys' (Felski 2015: 7). Critique is thus approached as a genre and an ethos, and Felski regroups a vast range of interpretive strategies within this particular style of thought: symptomatic reading in general, be it Marxist ideology critique, psychoanalysis, deconstruction, Foucault's historicism, or any other reading approach which scans texts for signs of transgression or resistance; in short, every school of thought that has been embraced by feminists during the last three decades. Felski also includes a practice that she identifies as 'critique of critique,' referring to recent texts, such as Robyn Wiegman and Elizabeth Wilson's essay on 'anti-normativity's queer conventions' (Felski 2015: 146). Despite using a broad brush in her attempt to conjure up a picture of critique, Felski by and large gives a persuasive account of the practices of critique. She thus succeeds in showing not only the main characteristics in mood and method of critique, but in addition, that it is only one style of thought used in literary and cultural studies among many others, irrespective of its own attempt to appear as all-encompassing and without limit. Felski convincingly reveals that -contrary to its own ethos of impersonality, detachment and distance -critique has its own distinct affects, even if these are for the most part steeped in a negative mood of disenchantment, mistrust, melancholy and gloom.
She is overtly skeptical towards critique's inflated belief in and claims to being inherently progressive or emancipatory, and she has grown weary of critical thought's claim to inherent radicalism. Likewise, she is critical of the inability of certain actors to acknowledge how feminist academics engaging in critique are implicated -and may well serve as complicit actors -in the workings of contemporary capitalism, both within and outside the academic institutions. Felski's Post-Critical Approach While aiming at challenging the hegemonic position of critique, Felski also proposes a distinct alternative to this style of thought. In so doing, she makes a concerted effort not to revert to critique herself in her own treatment of critique. Instead, she makes use of the phenomenological method of 'thick' description of critique as a distinct interpretive strategy. In part, she bases her account of critique on Eve Kosofsky Sedgwick's early intervention, 'Paranoid Reading and Reparative Reading' (Sedgwick 1997), where Sedgwick takes issue with the claims and mood of critique in an attack that was primarily aimed at Judith Butler. Sedgwick lists five characteristic traits of critique and labels them an instance of 'paranoid reading.' Felski both reiterates and elaborates these prominent traits in her own five-point description of critique in her book, where she lays out its rhetoric of 'standing back' and 'digging deep': 1. Critique is secondary 2. Critique is negative 3. Critique is intellectual 4. Critique comes from below 5. Critique does not tolerate rivals (Felski 2015: 121-150). As an alternative, Felski calls for a reorientation and a re-description of the critical practices associated with the 'hermeneutics of suspicion,' in accordance with Paul Ricoeur's understanding when he coined the term, hailing Nietzsche, Freud and Marx as initiators of a new mode of interpretation (Ricoeur 1970). But according to Felski, Ricoeur paid close attention to the question of mood or attunement and insisted on the need to adopt a wide range of methodologies in the interpretation of works of art. Thus, in addition to a 'hermeneutics of suspicion,' he also called for a 'hermeneutics of trust or restoration.' This attention to diversity in moods and methods was not adopted or sanctioned by the feminist practitioners of critique, who according to Felski gave priority to disenchantment and suspicion (Felski 2015: 30-39). Her approach to the study of literature is a set of reading practices, which draw on a number of different philosophical and theoretical traditions and movements. While arguing for a need for a reorientation and a redescription of what actually happens in critical reading practices, Felski wants to move away from negative aesthetics and instead orients herself towards what she calls relational ontologies. In this move, she merges diverse and, one might suggest, perhaps incompatible human and non-human actors and practices. She coins a hybridity that she names a 'post-critical' approach, where we discern the following preferential attachments discussed below. The first attachment that Felski makes in sketching out an alternative to critique is to the tradition of philosophical hermeneutics that pays heed to the notion of 'mood' as the tacit foundation of all interpretive acts. In her book (2015: 20), she makes an explicit reference to Martin Heidegger's thinking on 'Being-in-the-world' as 'Being-with,' notably as a mode of attunement, Stimmung (Heidegger 1962).
In addition, Felski wants to retrieve that which the practitioners of critique have forgotten or overlooked in their appropriation of Paul Ricoeur's understanding of a 'hermeneutics of suspicion,' namely a 'hermeneutics of trust or of restoration' (Felski 2015: 32), which speaks of the more positive affects involved in the act of interpretation. The second theoretical attachment that Felski establishes is to phenomenology, which among other things attempts to establish the parameters of a first-person subject position. She distances herself from the Husserlian notion of radical reduction to a 'transcendental ego' (Husserl 2006), which is thought of as a disembodied entity, and prefers instead the casting of the first person as an embodied subject position, as it is formulated in the phenomenological and existential philosophy of Simone de Beauvoir (Beauvoir 1997) and later elaborated by Toril Moi (2001). According to Moi, Beauvoir's notion of existential situation is both a dimension of facticity and of freedom, projected onto the future, and can therefore never be reduced to a fixed essence or identity (2001: 65-66). Felski aligns herself with Moi's reading of Beauvoir, and understands the first person position as an embodied situation; one that accounts for the idiosyncratic, embodied style of the subject and for the sociality inherent in the horizon in which the situated subject dwells. Felski's call for a return to phenomenology and Moi's appropriation of Simone de Beauvoir is partly supplemented by other theorists, such as Chantal Mouffe, Marielle Macé and Yves Citton, who embrace aspects of pragmatism and ordinary life philosophy in order to formulate a positive theory of subjective reading that is grounded in lived existence and in embodied affects. Felski also advocates for reviving the 'thick' descriptions that the method and practice of phenomenology provides when she calls for a more thorough and complex description of the phenomena of identification and attachment in reading practices. She supplements this phenomenological description with insights that the film critic Murray Smith (Smith 1995) introduced in his treatment of identification in film theory. Smith argues that the term identification ought to be divided into four different issues and experiences, namely alignment, allegiance, recognition and empathy. According to Felski, these four aspects are 'analytically and experientially distinct, though they can of course be combined' (Felski 2015: 8). A third positive attachment that Felski makes in her delineation of an alternative to critique is to the work of Sedgwick, who introduced not only queer theory (together with figures such as Teresa de Lauretis and Judith Butler), but also the importance of affect in the act of reading literature. Sedgwick launched the notion of 'reparative' as opposed to 'paranoid' reading, terms that she retrieved from Melanie Klein (Klein 1998) and which she marshalled as an alternative to critique's symptomatic reading practices. Felski repeatedly refers to Sedgwick's contribution and values her emphasis on affect; she thus aligns herself with Sedgwick as well as queer theorists like Heather Love and Elizabeth Weed, who both welcome the inclusion of affect in interpretive practices. These queer theorists, as opposed to for example Judith Butler (Butler 1990), attest to the fact that not every scholar engaged in queer theory categorically embraces the mood of negativity, disenchantment and soupçon, but may also embrace love and joyful attachment.
Accordingly, Felski includes them in her group of allies, in opposition to queer theorists such as Butler, Weigman and Wilson. Felski's fourth attachment in fleshing out her 'post-critical' approach to literary and cultural studies is to the work of Bruno Latour and his Actor-Network-Theory, or ANT (Latour 2005). ANT alludes to the intricate network active in the ant hill, where every ant engages in a complex network connected to other ants, forming a special kind of sociality. Felski's interest in Latour is in part motivated by the fact that his network theory allows her to detect how critique as an ethos of reading is based on shared habits of thinking and established idioms. Hence, critique has succeeded in the building of an intellectual community and in establishing its own network of attachments, which has attained a hegemonic position in academe, especially in the U.S. Furthermore, Latour's Actor-Network-Theory allows Felski to elaborate her own thinking on the practice of literary and cultural studies as a process that assembles a vast number of human and non-human actors, who through interaction and mutual negotiations form complex networks: 'scholars, computers, email-messages, journals, conferences syllabi; seminar rooms; monographs' (Felski 2015: 12). Felski agrees with Latour in that any kind of knowing is a 'mode of existence,' with its own 'vectors, orientations, chains of action and experience ' (2015: 12). For her, reading literature is fundamentally a social enterprise, involving a complex network of actors, and can never be reduced to the reader-object dyad. Mood, Networks and Affect I find much of Felski's intervention both insightful and timely. That being said, I feel the need to question and challenge certain aspects of her proposed post-critical stance. First of all, I take issue with Felski's use of Heidegger and his ontological thinking on 'mood' as the basis for her re-thinking of affective identification. In Being and Time (Heidegger 1962), Heidegger carefully lays out, through his own version of a radical hermeneutics, the disclosure of Dasein's 'Being-in-the-world' as 'Being-with.' Heidegger's existential analytic is situated within his thinking on the hermeneutic circle, as the horizon into which we are all thrown, together with other beings. Dasein as a 'Being-Towards-Death,' experiences its ownmost Being in authenticity and fundamental precariousness in the loss of self, which causes fear and angst. This mood, the Stimmung of fearfulness and angst, occurs when Dasein falls out of the category of 'das Mann,' that is, a falling out of the subject position in ordinary language. Heidegger's Stimmung, which reveals itself in its authenticity when cast in the negative, constitutes the ontological experience of a loss of ground, when Dasein is most attuned towards Being. Attunement or mood, according to Heidegger, occurs prior to and is constitutive of interpretation, which is always thought as secondary or derivative (1962: 172-182). Interpretation, as an act performed by the subject, happens in an ontic position, that is, in Dasein's experience of inauthenticity. Ontic interpretation is thus dependent on and derivative of Stimmung, thought as an ontological mode of Being. My question to Felski is therefore: How can identification, which she understands as an affective orientation in the positive, find its basis in Heidegger's thinking on mood, attunement or Stimmung? 
In Heidegger, Stimmung as an authentic ontological experience is an event through which Dasein finds its ownmost Being in the loss of ground, one that is steeped in angst and negativity. Furthermore, it is Heidegger who - prior to Ricoeur's coining of a 'hermeneutics of suspicion' - was perhaps the practitioner par excellence of critique and who laid the ground for the later hermeneutic thinking on soupçon, be it in Gadamer or Ricoeur. For his symptomatic readings of the history of philosophy (in part inspired by, and in part opposed to, Nietzsche's thinking on interpretation), it suffices to refer to his work on the Pre-Socratics (Heidegger 1975) as well as to his work on Kant (Heidegger 1997), where Heidegger claims to unveil that which is silenced and hidden in Kant's thinking. In Felski's own words, this is the essence of what the practice of critique entails, namely to unveil that which the text presumably withholds. I agree with Felski that Stimmung occurs prior to and constitutes the basis for interpretation, but in Felski's recasting of identification as an affective orientation of the first person towards its object, Stimmung becomes a positive affect that is lodged within the subject-object parameters of ordinary language, which for Heidegger is nothing but 'idle talk.' Such an appropriation is far removed from Heidegger's thinking on authentic attunement or Stimmung, which he understands as an ontological mode of existence, in attunement to the groundless ground of Being. When Felski translates Stimmung into a positive affect that secures the subject's affective attachment and identification to an object, it amounts to a misappropriation of Heidegger's thinking, and may serve as yet another instance of metaphysical, ontic appropriation of his ontological thought. My next point of contention with Felski pertains to her advocacy for Latour's Actor-Network-Theory (ANT), which she wants to import into the practices of literary and cultural studies. Latour's theory of the workings of complex networks - real or virtual - in contemporary Western societies is both relevant and useful. As with many other social theories that have been adopted by literary studies, I truly believe that an attempt to integrate some of Latour's methods of detecting and mapping the networks at work in the reception and interpretation of literature - involving human as well as non-human, real and imaginary actors - could prove fruitful for literary studies. However, Felski remains quite vague as to how such studies should actually be conducted, and what methodologies should be used. Her accounts of the salience of ANT, in both her paper and her book, are at best vague and underdeveloped. It remains unclear in Felski's account how this theory may function as a reading strategy and a mode of literary analysis in literary studies. Felski's understanding of the aesthetic experience pertains on the one hand to the first person singular; this position is marked by the affective idiosyncrasies of the subject. But in addition, the subject is a product of sociality, i.e. fabricated and co-produced, involving an intricate network of actors. A valid question in this context is whether or not affective identification in the first person, as it is understood in phenomenology, can be merged with Latour's understanding of agency within the actor-network.
In my assessment, Latour's notion of agency, which includes human as well as non-human entities, is far removed from Beauvoir's phenomenological, embodied notion of the subject, not to mention Heidegger's thinking on attunement in the ontological disclosure of Dasein. Felski herself seems to ignore any possible problems in lumping these various theories together to form a first person foundation for her alternative reading strategy of 'positive identification.' Can we just infer from this account that identification will function in the same way when applied to collectives or networks of human and non-human actors, and how is this solved methodologically? These critical questions aside, I commend her intervention into the problem of critique by way of mood and positive affective identification in feminist theory. Felski raises important methodological and theoretical questions pertaining to our epistemic habits, and her call for a redirection in literary studies is both valid and persuasive. II Deleuze and Guattari on Affect Even though Felski's intervention into the shortcomings of critique might be instructive and valid, there is another school of feminist thinking on affirmative affect that I find even more compelling, namely one which is nourished by Deleuze and Guattari and the philosophical legacy of Spinoza, Nietzsche and Bergson. Felski by and large overlooks this theoretical branch of thinking on mood. Symptomatically, I have found but one brief reference in her book to Deleuze and Guattari, in a subordinate sentence where she disclaims the two French philosophers and their attack on what they call 'intepretosis' (Felski 2015: 10). Unlike Felski, feminist theorists such as Rosi Braidotti and Elizabeth Grosz embrace Deleuzian philosophy, which takes a radically different view on affect. Neither Braidotti nor Grosz can be subsumed under Felski's rubric of critique in the negative mood, since they both emphasize affirmation and positive affects in their critical practices. Even though they read Deleuze and Guattari in different ways, both Braidotti and Grosz import the French philosophers' understanding of affect into their own feminist theories. In Deleuze and Guattari's philosophy of life and becoming, they attempt to circumvent the idealist and rational philosophical tradition from Plato through Hegel. Thus, they seek -in part through the legacy of Spinoza, Nietzsche and Bergson -to produce ways of doing philosophy that values life; not as stable identity, but life in its becoming. According to Deleuze and Guattari, philosophy and art may serve life in as much as philosophy creates life-enhancing concepts and art produces affects. Affect is for them not a subjective emotion or a personal inclination, but connotes instead 'the incredible feeling of an unknown Nature,' and they write: For the affect is not a personal feeling, nor is it a characteristic; it is the effectuation of power of the pack that throws the self into upheaval and makes it reel. Who has not known the violence of these animal sequences, which uproot one from humanity, if only for an instant, making one scrape at one's bread like a rodent or giving one the yellow eyes of the feline? A fearsome involution calling us toward unheard-of becomings. (Deleuze and Guattari 1987: 240). 
For Deleuze and Guattari, when speaking of affect, it is a question of a capacity to act and to create movement, marked by different levels and qualities of intensity (of power), as opposed to a specific entity that is quantitatively measured through representational language, as in traditional metaphysics. Thus, affect in Deleuzian-Guattarian terms is (following Spinoza) above all 'the capacity to affect and to be affected' (Deleuze and Guattari 1987: 261): To every relation of movement and rest, speed and slowness grouping together an infinity of parts, there corresponds a degree of power. To the relations composing, decomposing, or modifying an individual there correspond intensities that affect it, augmenting or diminishing its power to act: these intensities come from external parts or from the individual's own parts. Affects are becomings (Deleuze and Guattari 1987: 256). In the process of 'becoming intense,' which marks a process of deterritorialization in different stages, Deleuze and Guattari include the notion of 'becoming-woman' as a first and necessary stage. They thus acknowledge that the molar identity of 'man' is being invested with oppressive power, notably through the patriarchal-capitalist assemblage. Molar, masculine identity must therefore be de-territorialized on the plane of organization in order for life forces to be liberated, allowing for affects to circulate freely, which again might allow new becomings on the plane of consistency or immanence. Affects, as intensive forces, are crucial for the capacity to act in order to undermine the stability of molar identities and their oppressive assemblages. Deleuze and Guattari thus propose diverse strategies of deterritorialization in order to undo the existing patriarchal gender and sexuality regime, which serves the interest of man in capitalist societies. In their material ontology, 'becoming-woman' is the first stage, and one that all processes of deterritorialization have to pass through. They ask: Why are there so many becomings of woman, but no becoming-man? First, because man is majoritarian par excellence, whereas becomings are minoritarian. (…) Majority implies a state of domination, not the reverse. (…) In this sense, women, children, but also animals, plants, and molecules, are minoritarian. It is perhaps the special situation of women in relation to the man-standard that accounts for the fact that becomings, being minoritarian, always pass through a becoming-woman (1987: 291). Affects mobilize all further processes of deterritorialization, which start with 'becoming-woman' and move via 'becoming-animal,' 'becoming-molecular' and finally reach, 'becoming-imperceptible,' which constitutes the final stage of deterritorialization on the plane of consistency or immanence, from whence new becomings may spring forth. Deleuze and Guattari advocate in this context for a continuous dynamic self-production of multiple sexes and poly-sexualities in the war machine against the tyranny of the molar identities of patriarchal-capitalist assemblages. Feminism, for Deleuze and Guattari, is an ideology of fixed gender and sexuality categories. As such, feminism is implicated in this power dynamic, and may attribute to solidify power within this oppressive state of affairs. Feminist critique may therefore function as reactionary and reactive forces in capitalist society. 
Furthermore, when Deleuze and Guattari propose ways of undermining patriarchal, molar power regimes on the plane of organization, they advocate for a dynamic self-production of multiple, un-natural, perverse assemblages, where the human body enters into productive relations with other affective entities - human, animal, machinic or artificial - in order to create 'unheard-of becomings.' Art and literature constitute 'blocks of affect' for Deleuze and Guattari, and in their view artistic practices are intensive, i.e. affective, processes of deterritorialization. Mimetic art and literature written in representational language are accordingly deemed inferior forms of artistic production; only when art produces 'unheard-of becomings,' affects with a capacity to create new becomings, can art and literature be said to be successful. In What Is Philosophy?, Deleuze and Guattari (1994) ponder the specific transformative function of literature as affect: The affect goes beyond affections no less than the percept goes beyond perceptions. The affect is not the passage from one lived state to another but man's nonhuman becoming. (…) [B]ecoming is an extreme contiguity within a coupling of two sensations without resemblance or, on the contrary, in the distance of a light that captures both of them in a single reflection. André Dhôtel knew how to place his characters in strange plant-becomings. Becoming tree or aster: this is not the transformation of one into the other, he says, but something passing from one to the other. This something can be specified only as sensation. It is a zone of indetermination, of indiscernibility, as if things, beasts and persons (…) endlessly reach that point that immediately precedes their natural differentiation. This is what is called an affect (1994: 173). Affect in Deleuze and Guattari is therefore not connected to a subjective feeling or a personal inclination, but is rather a sensation that involves not only a subjectivity but an affective field connecting the subject's body with other living and non-living force-fields, thus involving forces with cosmic implications. Rosi Braidotti's Affirmative Feminist Critique In her thinking, be it on the 'nomadic' (Braidotti 1994), 'monstrosity' (1996), the 'ethics of affirmation' (2006) or 'the posthuman' (2013), Rosi Braidotti embraces Deleuze and Guattari and their thinking on affect. Throughout the last three decades, she has consistently argued for an affirmative approach to feminist inquiries and for feminism's ethical responses to the challenges of our times: This sort of turning of the tide of negativity is the transformative process of achieving freedom of understanding through the awareness of our limits, of our bondage. This results in the freedom to affirm one's essence as joy, through encounters and minglings with other bodies, entities, beings, and forces. Ethics means faithfulness to this potentia, or the desire to become. Deleuze defines the latter with reference to Bergson's concept of 'duration,' thus proposing the notion of the subject as an entity that lasts, that endures sustainable changes and transformation and enacts them around him/herself in a community or collectivity. Affirmative ethics rests on the idea of sustainability as a principle of containment and tolerable development of a subject's resources, understood environmentally, affectively and cognitively (2006: 246).
Braidotti's repeated attempts to circumvent the pitfalls of negativity in her theorizing is founded on an ardent commitment to affirmative practices, where she pays heed to embodiment, sexual and racial difference, multicultural and post-secular citizenship, issues linked to globalization, network societies, contemporary art and technoscience. In Metamorphoses, Braidotti accordingly argues that Deleuze might prove productive for feminist thinking because he 'takes the plunge into the ruins of representation and the sensibility of the post-human ' (2002: 97), and she writes: He wants us to confront the kaleidoscope of affects and desires that one is deliberately not socialized into becoming. As a consequence, Deleuze's nomadology is not only conceptually charged, but also culturally very rich. In as much as he invests creativity with nomadic force, Deleuze raises issues of sensibility, affectivity and ultimately, desire. It is on this field, therefore, that his encounter with feminist allies is the most resoundingly vocal (2002: 97). 7 Braidotti here attests to the fact that Deleuzian affect forms an integral part of her own feminist theoretical project, which attempts to align his affective thinking with a host of other feminist thinkers, above all Irigaray and Haraway, in order to vitalize the field of feminist theory. She writes: Deleuze redefines the practice of theory-making in terms of flows of affects, and the capacity to draw connections. Accordingly, Deleuze describes the subject as an affective or intensive entity and ideas as events, active states which open up unexpected possibilities of life. The truth of an idea, in other words, is in the kind of affects and the level of intensity that it releases. (…) Affectivity governs the truth-value of an idea. In juxtaposition with the linear, self-reflexive mode of thought that is favored by phallogocentrism, Deleuze defines this new style of thought as 'rhizomatic' or 'molecular.' These new figurations of the activity of thinking are chosen for their capacity to suggest web-like interactions and interconnectedness, as opposed to vertical distinctions. Deleuze defends this view of the subject as a flux of successive becomings by positing the notion of a 'minority' consciousness, of which the 'becoming-woman' is somehow emblematic (2002: 70). In making active use of Deleuze and Guattari's notion of affect as a vital and creative force in feminist theorymaking, Braidotti usurps feminist theory's traditional reliance on rational and representational, reflexive models of thought. As such, Braidotti's understanding of affirmative affects differs radically from Felski's in that she does not approach the question of mood through the framework of phenomenology -such as an embodied individual subject or a social collective of agents -but rather as an ontological, collective flow or force at work in and through the subject or the collective. Furthermore, whereas Felski relies on a mimetic and a realist, descriptive mode of representation in her affective theorizing, Braidotti stresses the rhizomatic, deterritorializing, unpredictable and 'monstrous' couplings that may emerge in affective assemblages, effects of which by far exceed the control of the interpretative subject or its social network. Elizabeth Grosz: Art as Affect Another feminist theorist who finds great inspiration in Deleuze and Guattari's thinking on affect is Elizabeth Grosz. 
Throughout most of her work from the last thirty years, she delves into the thinking of Deleuze and Guattari and its relevance for feminist thought. Likewise, in her book Chaos, Territory, Art (Grosz 2008) Grosz explores, among others, the interconnections between art and affect. Citing Deleuze, Grosz claims that art 'is of the animal,' and she writes: Art, according to Deleuze, does not produce concepts, though it does address problems and provocations. It produces sensations, affects, intensities as its mode of addressing problems, which sometimes aligns with and link to concepts, the object of philosophical production, which are how philosophy deals with problems. (Grosz 2008: 10) Grosz follows Deleuze in his defense of art as affirmative and life-enhancing and like him, she denounces the representational aspects of art. Instead, she emphasizes art as 'the art of affect,' that is, 'a system of dynamized and impacting forces,' with a capacity to produce and generate intensity, 'which directly impacts the nervous system and intensifies sensation ' (2008: 12). Art is connected to the forces in nature, which are at work in human and animal bodies, in the earth and in the universe at large. In a footnote, Grosz clarifies her understanding of affects as opposed to the phenomenological notion of lived experience: Sensations, affects, intensities, while not readily identifiable, are clearly closely connected with forces, and particularly bodily forces, and their qualitative transformations. What differentiates them from experience, or from any phenomenological framework, is that they link the lived or phenomenological body with cosmological forces, forces of the outside, that the body itself can never experience directly. Affects and intensities attest to the body's immersion and participation in nature, chaos, materiality (2008: 12). Art as affect or intensities is thus cosmically connected to forces that exceed human beings, individual bodies or human subjectivities. There is something inhuman in art that is produced through human creativity, and it is this affective aspect of art that is of interest to Grosz, as it is to Deleuze and Guattari. Art, through the plane of composition, which it casts, is 'the way that the universe most intensifies life, enervates organs, mobilizes forces (2008: 33). And according to Grosz, what philosophy and art have in common is 'their capacity to enlarge the universe by enabling its potential to be otherwise, to be framed through concepts and affects ' (2008, 33). These capacities are to her among 'the most forceful ways in which culture generates a small space of chaos within chaos where chaos can be elaborated, felt, thought ' (2008: 33). CONCLUSION In this article, I have tried to account for the way that the question of mood constitutes a central concern, not only in Felski's attack on critique as a genre and ethos, but also that the concept plays a vital part in her proposed 'post-critical' approach, which she launches as an alternative to critique in the practice of interpretation. And, having formulated some critical questions and comments, above all in regards to her appropriation of Heidegger's understanding of Stimmung or attunement, I have, in the second part of this article, introduced another theoretical approach to the concept of mood, thought as affect, as it appears in the philosophy of Deleuze and Guattari. Subsequently, I have shown how this notion of affect has been embraced by both Braidotti and Grosz in their feminist theories. 
As I have noted, there are marked differences in the way Felski, Braidotti and Grosz approach and use the concept of mood in their respective theorizing, and accordingly, these different approaches generate different methodological and epistemological questions and implications for feminist epistemic habits. Despite the problems and questions that I have articulated in relation to Felski's denunciation of critique, I remain sympathetic to her theoretical intervention and her advocacy for positive identification in her post-critical approach. I value her effort to create a debate, not only in gender studies, but also in literary studies and cultural studies more broadly. Felski's polemical position forces us to reflect on our moods and methods of interpretive scholarship and on the affects that govern our orientations. Her expressed aim is to re-describe and reorient critical and interpretive practices in such a way that interpretation no longer exclusively pursues negative inquiries in the mode of a 'hermeneutics of suspicion.' In her view, the most salient approach to interpretation must also include a 'hermeneutics of trust or restoration.' As such, she not only pays heed to Ricoeur's and Sedgwick's demand for greater complexity in the appropriation of mood, method and affect in the interpretive enterprise, but also acknowledges that parts of the critical heritage of critique may well be restored through her new 'post-critical' approach. The practice of reading and interpretation remains a major concern in any feminist scholarship, and the methods and affects that inform these practices ought to be continually questioned. For this effort Felski should be thanked, whether or not we support her arguments or question her assumptions and conclusions. However, Felski rarely, if ever, mentions Deleuzian affect theory, nor does she include any references to the host of theorists - non-feminist, feminist and queer theorists alike - who find great inspiration in Deleuze and Guattari's work on affect. Even if she does not agree with them, it would have been of interest to many had she engaged with some of these scholars of affect studies, since they have already made a great impact on literary as well as cultural studies at most institutions in the U.S. and Europe. Neither Braidotti's nor Grosz' approach to the question of mood, using Deleuze and Guattari's notion of affect, can be subsumed under Felski's understanding of critique, performed in the negative mood. Their Deleuzian theorizing represents important theoretical practices within feminist theory today. In addition to these two, suffice it to mention some of the most influential feminist theorists who likewise mobilize a Deleuzian framework in their respective works: Tamsin Lorraine (1999), Jasbir Puar (2007), Claire Colebrook (2012), Patricia Ticineto Clough (2010) and Dorothea Olkowski (2014). Rosi Braidotti, for her part, practices an affirmative approach to feminist theorizing and, in so doing, provides important correctives to Felski's somewhat categorical denunciation of critique. Braidotti's nomadic understanding of the subject furnishes new insights into the complexities of agency and into the differentiated, affective assemblages and networks in which the feminist subject is implicated in the age of techno-science. Elizabeth Grosz' Deleuzian approach to art as affect introduces yet another perspective, radically different from Felski's phenomenological approach to the aesthetic experience.
Braidotti's and Grosz' respective Deleuzian takes on mood as affect raise different questions, with important implications for feminist epistemic practices. For one, they raise the question of whether it is sufficient to cast the subject in terms of a phenomenological framework (be it within the mood of soupçon or of restoration) when accounting for the complex webs of interconnections and assemblages by which the subject is affected in the age of technology. We may also ask whether it is possible, without difficulty, to merge phenomenological methods and concepts with Heidegger's existential hermeneutics and Latour's Actor-Network-Theory. I would also be curious as to how the theories of Latour and Deleuze could be productively thought together. Even if they are engaged in different projects using different methods, would it not be interesting to explore Deleuze's thinking on affect and the emergence of rhizomatic networks in relation to Latour's theories of orientation and the actor-network? Furthermore, I would be interested in pursuing an exploration of how identification might be thought in terms of affirmative affect in a Nietzschean-Deleuzian sense. The feminist treatment of mood, method and affect is, needless to say, to be continued. New and unpredictable interventions will undoubtedly occur, hopefully bringing new questions and challenges into the debate, which will affect the epistemic habits of feminist theory - be it affirmatively or negatively - in the time to come.
v3-fos-license
2020-04-21T14:33:06.831Z
2020-04-20T00:00:00.000
216032804
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10854-020-03385-9.pdf", "pdf_hash": "7e7fbb5fef092568d311220f71edf083acf20346", "pdf_src": "Springer", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2882", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Physics" ], "sha1": "7e7fbb5fef092568d311220f71edf083acf20346", "year": 2020 }
pes2o/s2orc
Thermal neutron irradiation effects on structural and electrical properties of n-type 4H-SiC
In this article, the thermal neutron irradiation (NI) effects on the structural properties of n-4H-SiC and on the electrical properties of Al/n-4H-SiC Schottky contacts are reported. The noticeable modifications observed in the irradiated samples were studied using different techniques. The X-ray diffraction studies revealed a decrease in the lattice parameter of the irradiated samples due to isotopic modifications and irradiation-induced defects in the material. As a result, the energy band gap, the Urbach energy, the longitudinal optical phonon-plasmon coupled mode, the free carrier concentration, the defect-related photoluminescence and the nitrogen bound exciton photoluminescence bands were prominently affected in the irradiated samples. The current-voltage characteristics of the neutron-irradiated Al/n-4H-SiC Schottky contacts were also strikingly affected, in terms of a zero-bias offset as well as a decrease in the forward current. These modifications, along with the increase in the Schottky junction parameters (such as the ideality factor, the Schottky barrier height and the series resistance), were attributed to neutron-induced isotopic effects and to the decrease in the free carrier concentration caused by induced defect states.
Introduction
In the last two decades, irradiation studies on silicon carbide (SiC), particularly the 4H-SiC polytype and its electronic devices, have greatly attracted the research community. Owing to their superior physical properties [1] and high displacement threshold energy [2], 4H-SiC based electronic devices [3-10] have been studied, and are still being examined, under different irradiation environments, with different irradiation parameters as well as under different temperature conditions. The performance of such devices is known to be affected by grown-in defects, irradiation-induced defects and issues with the metal contacts. Compared to other radiations such as electrons, gamma-rays and heavy ions, neutrons are known to cause significant modifications in material properties. Such modifications depend on the nature of the material as well as on irradiation parameters such as neutron energy, neutron flux and irradiation temperature. In general, fast neutrons strongly disturb the periodic structure of a crystalline material through displacement damage effects, while low-energy (thermal) neutrons can move through the crystalline material without displacing any host atoms. In the end, however, the capture of thermal neutrons by atomic nuclei leads to the formation of different isotopes and/or transmutation into new elements in the exposed material. Moreover, the fast neutrons present in a reactor flux of thermal neutrons can cause displacement effects in the crystalline material and therefore generate defects such as vacancies, clusters of vacancies and interstitials, or defects involving impurities present in the material [11]. Consequently, disentangling the thermal neutron-induced modifications in the material properties is a non-trivial question. The examination of material properties under nuclear reactor environments is of both fundamental and technological importance. In the past, Brink et al. [12] reported reactor neutron flux irradiation of SiC polytypes in the fluence range of 10^15–10^16 n cm^-2.
The results showed noticeable modifications in the IR transmission and Raman spectral features due to the formation of point defects (acting as traps for free carriers). Wendler et al. [13] reported Urbach-type absorption in the range 3.2-3.3 eV and a decrease of the Raman intensity in the fluence range of 10^17–10^19 n cm^-2; no linear dependence of the LO and TO modes was found with respect to the neutron fluence or defect concentration. Several new PL bands related to extended defects were detected in the 3C-SiC polytype [14]. A few deep level transient spectroscopy (DLTS) reports are also available on neutron-induced defects in 4H-SiC [9,15]. Most of these studies are limited either to the structural properties or to the electrical properties independently. The present article is intended to report on the thermal neutron irradiation (NI) effects on n-type 4H-SiC by studying both structural and electrical properties. The structural modifications were accounted for by using X-ray diffraction (XRD), UV-Vis absorption, photoluminescence (PL) and Raman spectroscopy techniques. The practical consequences of the NI-induced modifications were assessed by studying the current-voltage (I-V) characteristics of Al/n-4H-SiC Schottky contacts. The combined structural and electrical studies were correlated with bulk information on the material under study.
Experimental
Commercially available standard two-inch n-type 4H-SiC ⟨0001⟩ wafers were procured from Semiconductor Wafer Inc., Taiwan. The resistivity of the wafer was 0.012-0.03 Ω cm. The density and thickness of the wafer were 3.21 g cm^-3 and 330 ± 25 μm, respectively. The wafer was diced into approximately 0.5 × 0.5 and 1 × 1 cm squares using a diamond-tip scriber. The diced samples were then cleaned according to the standard procedures described in our previous studies [16,17]. Al Schottky contacts were formed on the cleaned 1 × 1 cm samples by the thermal evaporation technique. A prepared shadow mask with circular openings of 0.2 cm diameter was employed to obtain the Schottky contacts. The Al deposition was carried out at a deposition rate of 3 Å/s under a vacuum pressure of 8 × 10^-6 mbar. The thickness of the Al was kept at ~50 nm by monitoring through a digital thickness monitor (DTM). The neutron irradiation (NI) of n-4H-SiC and of the Al/n-4H-SiC Schottky contacts was carried out at the Dhruva research nuclear reactor, Bhabha Atomic Research Centre (BARC), Trombay, India. The samples were packed in Al foil and kept for irradiation in the reactor environment. The thermal neutron flux at the sample position was ~1.5 × 10^13 n cm^-2 s^-1. All the samples were irradiated up to a fluence of ~7.5 × 10^16 n cm^-2. After NI, the samples were kept under a cooling period of ~24 h. The XRD patterns of the samples were collected in the 2θ range of 20°-80° using a bench-top Rigaku MiniFlex 600 powder diffractometer (λ = 1.5402 Å, 40 kV, 15 mA) under the θ-2θ geometry condition. The Raman spectra of the samples were recorded in the back-scattering geometry using a Horiba LABRAM-HR Evolution visible Raman spectrometer. The spectra were measured by exciting the samples with a 633 nm laser focused to a spot of 2 μm diameter. The spectrometer was calibrated using silicon, which has a Raman mode at 520.6 cm^-1. The absorption spectra of the samples were collected using a Shimadzu UV-1800 spectrophotometer.
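As a quick consistency check on the irradiation conditions quoted above, the exposure time follows directly from the ratio of fluence to flux. The short Python sketch below is illustrative only; the actual reactor exposure schedule is not stated in the text, so the computed time is simply what the quoted numbers imply.

```python
# Irradiation time implied by the quoted thermal-neutron flux and fluence.
flux = 1.5e13     # n cm^-2 s^-1, thermal neutron flux at the sample position (from the text)
fluence = 7.5e16  # n cm^-2, total fluence delivered to the samples (from the text)

t_seconds = fluence / flux
print(f"exposure time ~ {t_seconds:.0f} s ~ {t_seconds / 3600:.1f} h")
# -> exposure time ~ 5000 s ~ 1.4 h
```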
The optical absorption coefficient (α) after irradiation was calculated by assuming uniform damage over the whole thickness, which allows a direct calculation based on the Beer-Lambert law; in all cases the refractive index was assumed to be unchanged [13]. The photoluminescence (PL) spectra were measured using a Horiba Jobin Yvon 450 W illuminator. The spectra were measured at an excitation wavelength of 325 nm, at room temperature, in the wavelength range of 350-620 nm. The current-voltage (I-V) characteristics of the Al/n-4H-SiC Schottky contacts were measured using a Keithley 2450 source meter. Spring-loaded Al pressure contacts were used as back ohmic contacts during the I-V characterization [17].
Figure 1 shows the (0008) reflection of n-4H-SiC before and after NI at the irradiation fluence of ~7.5 × 10^16 n cm^-2. As noticed, the irradiated sample shows a peak shift towards the higher-angle side, suggesting a decrease in the c-axis lattice constant of the material; c can be evaluated from the (0008) peak position using the standard relation between the Bragg angle and the (000l) plane spacing [18], with λ = 1.5402 Å. The relative change in the lattice constant (Δc/c) was found to be 1.36 × 10^-3. Such a decrease is mainly attributed to neutron-induced isotopic modifications and to the formation of defects in the material. A similar decrease in the lattice parameter has been observed in diamond with an increase of the 13C isotope fraction [11,19].
The absorption spectra of the samples before and after NI are shown in Fig. 2. The conductivity type (n- or p-type) can be seen from the presence (or absence) of the free-electron absorption band or the Biedermann absorption bands (c). These bands are known to be responsible for the green-brownish colour of the polytypes [18,20,21]. After NI, marginal variations in the absorption spectra were observed, such as a shift in the absorption band edge (a), increased tailing (or widening) of the absorption edge (b), and a narrowing of the absorption bands (c) and (d). By fitting Tauc's equation [22] for indirect allowed transitions in region (a), the band gap E_g of the sample was estimated; a decrease in E_g of ~0.2 eV was noticed in the irradiated sample. Well below E_g, the estimation of the Urbach energy E_U resulted in a decrease of ~0.01 eV. Thus, as seen from Fig. 2, the overall decrease in the absorption coefficient, E_g and E_U of the irradiated sample is attributed to structural modifications induced by thermal neutrons in the material.
Figure 3 shows the first-order Raman spectra (FORS) of n-4H-SiC before and after NI, with the different phonon modes labelled in the spectra [23]. In the irradiated sample, no new bands were observed in the phonon energy range of 100-1800 cm^-1. Also, no homonuclear Si-Si and C-C bonds were detected in the irradiated sample, indicating no irradiation-induced amorphization effects in the material [22,24]. However, a marginal decrease in the intensity and an increase in the width of the E_2 (FTO) mode were noticed, which could be due to neutron-induced isotopic effects in the material. In addition, the LOPC (longitudinal optical phonon-plasmon coupled) mode was found to shift towards lower phonon energies after irradiation (inset, Fig. 3); a similar shift of up to 3 cm^-1 has been reported previously by Brink et al. [12]. Such a shift towards lower phonon energies suggests a decrease in the free carrier concentration (n_e). By considering the empirical relation n_e = 1.23 × 10^17 × (Δω)^1.0 cm^-3 deduced by Nakashima et al. [25], with Δω the LOPC shift in cm^-1, it is estimated that n_e of the unirradiated and neutron-irradiated samples was about 4.2 × 10^18 and 3.5 × 10^18 cm^-3, respectively.
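The two quantitative results quoted above, the c-axis contraction from the (0008) peak shift and the free-carrier densities from the LOPC shift, can be reproduced with standard textbook relations. The sketch below is illustrative only: the paper's own expression [18] is not reproduced in the extracted text, so the Bragg relation for a (000l) reflection is assumed; the unscreened LO reference position (~964 cm^-1) and the peak positions used here are assumed values chosen to be consistent with the reported numbers; and the Nakashima et al. relation is taken in the reconstructed form n_e ≈ 1.23 × 10^17 Δω cm^-3.

```python
import numpy as np

WAVELENGTH = 1.5402    # angstrom, Cu K-alpha line quoted in the experimental section
LO_UNSCREENED = 964.0  # cm^-1, approximate A1(LO) position of low-doped 4H-SiC (assumed reference)

def c_axis_from_000l(two_theta_deg, l=8, wl=WAVELENGTH):
    """c-axis lattice constant from a (000l) reflection.
    For a hexagonal cell d(000l) = c/l, so Bragg's law gives c = l*wl / (2 sin(theta))."""
    theta = np.radians(two_theta_deg / 2.0)
    return l * wl / (2.0 * np.sin(theta))

def n_e_from_lopc(lopc_cm1, lo_ref=LO_UNSCREENED):
    """Free-carrier density from the LOPC upshift, using the empirical relation attributed
    to Nakashima et al., reconstructed as n_e [cm^-3] ~ 1.23e17 * delta_omega [cm^-1]."""
    return 1.23e17 * (lopc_cm1 - lo_ref)

# Hypothetical (0008) peak positions: a shift of ~0.12 deg in 2-theta reproduces
# the reported relative contraction of ~1.36e-3.
c_before = c_axis_from_000l(75.60)
c_after = c_axis_from_000l(75.72)
print(f"dc/c ~ {abs(c_after - c_before) / c_before:.2e}")

# Hypothetical LOPC peak positions consistent with the reported carrier densities.
print(f"n_e (unirradiated) ~ {n_e_from_lopc(998.1):.1e} cm^-3")  # ~4.2e18
print(f"n_e (irradiated)   ~ {n_e_from_lopc(992.5):.1e} cm^-3")  # ~3.5e18
```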
Such a decrease in n_e of the neutron-irradiated sample indicates the capture of free charge carriers by native or irradiation-induced defects in the material (compensation effects).
PL spectra analysis of n-4H-SiC
Figure 4 shows the PL spectra of n-4H-SiC before and after NI, measured at ~300 K under 325 nm excitation. As noticed, the distribution of PL intensity varies remarkably after NI. A broad defect-related PL band (DPL) around ~2.35 eV is observed in both samples. The main radiative recombination path of the DPL occurs via donor-acceptor pairs (DAP) of the N impurities and of N impurities associated with intrinsic defects such as V_C (carbon vacancies) and C_i (carbon interstitials) [18,26]. On the other hand, a broad but less intense PL band observed near the band edge is attributed to the recombination of nitrogen bound excitons (NBE) [10,27]. The NBE PL signal, however, was not detectable in the neutron-irradiated sample. Similar quenching effects have been reported earlier in high-energy neutron-irradiated 4H-SiC, where the authors attribute them to lattice damage caused by induced defects in the material [10]. In the present study, however, no significant lattice damage was noticed. The PL quenching could instead be caused by the passivation of dopants (N or NI-induced P dopants) by grown-in defects, which may in turn lead to charge carrier compensation effects, i.e. a decrease in n_e. This results in an increase of the DPL intensity as opposed to that of the NBE band. The Z_1/2 defects are usually much anticipated in such compensation effects [28], due to which n_e was found to decrease in the irradiated sample (Sect. 3.3).
Figure 5 shows the current-voltage (I-V) characteristics of the Al/n-4H-SiC Schottky contacts before and after NI at the fluence of ~7.5 × 10^16 n cm^-2. As expected, the decrease in the forward and reverse currents is caused by the decrease in n_e due to compensation effects in n-4H-SiC (inset of Fig. 3, and Fig. 4). In addition, an important characteristic feature observed in the irradiated Schottky contacts is the zero-bias offset and the resemblance of a double switch-on feature. This type of behaviour was also previously observed in electron- and gamma-irradiated Al/n-4H-SiC Schottky contacts [17]. Such behaviour is mainly attributed to the influence of irradiation-induced defects in the n-4H-SiC bulk and to their role in the tunnelling mechanism, rather than to a modification of the interface chemistry of the junction [17]. The Schottky contact parameters, the ideality factor (η) and the Schottky barrier height (Φ_B), were evaluated with and without considering the effects of the series resistance (R_S) of the junction, in a manner similar to our previous studies on Al/n-4H-SiC Schottky contacts [17]. The semi-log and Cheung plots are shown in Fig. 6a and b, respectively, and the values obtained from these plots are reported in Table 1. The deviation of η from the ideal value indicates a departure from purely thermionic-emission behaviour [29,30]. In the present case such a deviation is attributed to the lattice mismatch between n-4H-SiC and Al and to the presence of impurity states (process- and irradiation-induced) at the interface as well as in the bulk of the crystal. In fact, the obtained Φ_B value of 0.91 eV indicates pinning of the Fermi level (E_F) due to the presence of a defect level at E_C − 2.35 eV (Fig. 4) [17]. On the other hand, the increase in η, Φ_B and R_S of the neutron-irradiated Schottky contacts is attributed to neutron-induced modifications in n-4H-SiC, i.e. the decrease in the lattice parameter due to isotopic effects (Fig. 1) and the decrease in n_e (inset, Fig. 3) due to the formation of defects (Fig. 4).
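For context on how η and Φ_B are typically obtained from the semi-log forward I-V data referred to above, the sketch below applies the standard thermionic-emission analysis with series resistance neglected. It is not the paper's own fitting code: the Richardson constant commonly used for 4H-SiC (~146 A cm^-2 K^-2), the 300 K temperature, the fit window and the synthetic test data are all assumptions, while the contact area follows from the 0.2 cm contact diameter quoted in the experimental section.

```python
import numpy as np

K_B = 8.617e-5            # Boltzmann constant, eV/K
T = 300.0                 # K, room temperature (assumed)
A_STAR = 146.0            # A cm^-2 K^-2, Richardson constant often used for 4H-SiC (assumed)
AREA = np.pi * 0.1 ** 2   # cm^2, from the 0.2 cm diameter circular contacts

def schottky_parameters(V, I, fit_window=(0.10, 0.30)):
    """Ideality factor and barrier height from forward ln(I) vs V data, assuming pure
    thermionic emission and negligible series resistance:
    I = I_s * exp(qV / (eta*k*T)),  I_s = A * A* * T^2 * exp(-q*phi_B / (k*T))."""
    V, I = np.asarray(V), np.asarray(I)
    sel = (V >= fit_window[0]) & (V <= fit_window[1])
    slope, intercept = np.polyfit(V[sel], np.log(I[sel]), 1)
    eta = 1.0 / (K_B * T * slope)                            # ideality factor
    I_s = np.exp(intercept)                                  # saturation current, A
    phi_b = K_B * T * np.log(A_STAR * AREA * T ** 2 / I_s)   # barrier height, eV
    return eta, phi_b

# Synthetic forward characteristic (illustration only), generated with eta = 1.8, phi_B = 0.9 eV.
V = np.linspace(0.05, 0.40, 50)
I_s_true = A_STAR * AREA * T ** 2 * np.exp(-0.90 / (K_B * T))
I = I_s_true * np.exp(V / (1.8 * K_B * T))
print(schottky_parameters(V, I))   # recovers approximately (1.8, 0.90)
```

When the series resistance is not negligible, the simple semi-log fit underestimates the parameters, which is why a Cheung-type analysis, as used in the paper, is commonly applied as a cross-check.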
Thus, noticeable modifications were observed in the neutron-irradiated Schottky contacts.
Conclusion
The thermal neutron irradiation effects on the structural and electrical properties of n-4H-SiC and of Al/n-4H-SiC Schottky contacts were studied. XRD studies revealed a decrease in the lattice parameter of the irradiated samples due to isotopic modifications and irradiation-induced defects in the material. The E_g, E_U, n_e and PL bands of n-4H-SiC were noticeably affected by the isotopic modification and the accumulation of defects in the material. As a consequence, the I-V characteristics of the Al/n-4H-SiC Schottky contacts were substantially affected in terms of an irradiation-induced zero-bias offset as well as an increase in the contact parameters Φ_B, η and R_S of the junction.
Fig. 6 a Semi-log plot, b Cheung plots of unirradiated and neutron-irradiated Al/n-4H-SiC Schottky contacts.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
v3-fos-license
2019-05-03T13:09:55.153Z
2015-10-12T00:00:00.000
54668513
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ijll.s.2015030601.24.pdf", "pdf_hash": "fed3e21904b5def86c713e916f8febbfec86b619", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2886", "s2fieldsofstudy": [ "Linguistics" ], "sha1": "e439b6f95715b80b4654f6d04479ae64b4efb517", "year": 2015 }
pes2o/s2orc
Language and Thought Convergence (Poetic Grammar) The dichotomy between thought and language is resolved in the productive act of knowledge. Language is a creative product of cognitive function according to the development of the human brain. It happens while maintaining a modal resonance of its constitution in sensitive and perceptive world contact. Each of its units gives access to this phenomenon. Introduction Language is an object that requires immersion in the energy producing and containing it. That's why it involves its own method of study, which gives it a specific character when considering it scientifically. The speech act presupposes a deep and immemorial saying or dictum never to be achieved (Jaspers, Levinas) though it renews every word in the language. That's why thinkers such as Herder, Humboldt, Gustav Gerber or Nietzsche, consider that speech contacts activate the energy contained in words or give them a new energy if they lost theirs in the so-called entropic process caused by their continuous utterance. It happens so, for example, on repeating the once original metaphors as a cliché. A singular use of its value, such as the poetic one, renews metaphors. Such energy is intersubjective and transcendent since it appears mentally in an existential environment common to every man. A singular relationship with things, life circumstances and action developing the human faculty of knowledge is thus established. We call this relationship objective thus placing the mental phenomena so-produced in the senses and in the intellectual perception and judgment that they induce in Reason. Its rational foundation is also singular intuition formerly non-existent and endowed with significant intention by the fact of naming beings and turning them into objects of sense. This phenomenon is a linguistic sign. The intuition originating it is also complex. It reverses itself on thought discovering in it a creative capacity. This is why we should consider language, as Amor Ruibal says, the "point of convergence" between Man and the world [1]. Things are now phenomena turning into objects of knowledge in the ontological saying entailed in such an act. This intuition is specifically verbal. It overcomes the contrast established by Kant between intuitive understanding (intellectus ectypus) and reflective one (intellectus archetypus) [2] since it delimits as a objectified word the discursive thought function thus determining its original indeterminacy. Poietic Energy These assumptions lead us to study language in its genesis and formation, in nuce or in fieri, and as something already made, as Amor Ruibal proposes [3]. And this allows us to consider the word footprint or mental gramma of that special intuition already endowed with saying and objective intention. In-between both phenomena some time passes in human evolution, according to anthropological research. In this process we discover, and in the words of Amor Ruibal, "a truly prodigious stream of psychic life between the speaker and the listener" [4]. This is the convergent background of thought and language. The verbal act is retroprojective. It progresses by replicating itself and always saying something new to the thing previously said but never in its perfection. It evolves, differs, varies, but the internal relationship of elements and basic functions are constant, appropriate, adequate and opportune, although they are also mutable, or plastic. 
It is, as it is usually said now from a neurological point of view, the epigenic perspective of language. Amor Ruibal at the beginning of the 20th century (some years before Saussure) systematized these characters to describe its overall structure. They are still the scientific "dawn" of linguistic study. To say "word" is to surround thought by deploying it. Words limit Space (S) articulated in Time (T) and Mode (M) of existence. The merge of Space-Time (S-T) on articulatory Mode (M)-the discreet character of the fission or the implicit, atomic, or so-to-say the quantum of language (it is on an ubi at the time it is projected into another and it covers itself holding the poles and the levels so correlated)-allows us to analyze its constitution. In this you can discover a three-phase process of stress (diátasis), separation (diaíresis, diástase) and distribution or diátaxis [5]. They are homologous of the enzyme catalysis by organic fermenting phases, a type of enzyme or vital yeast, as indicated by the Greek word enzume. If we consider, therefore, all verbal Term (Tm) as a product of a catalysis in the coordinates of internal Movement, Actions or expansive Mutation-relational speed included even in the stillness-, we can assist at its evolutionary process as Name (N) constituted in Relationship (R) and modally correlated effect: N-Tm (R) > Noun (Adjective) or Tm (R) > Verb (Adverb). The Name, or the nominal donation (Husserl's Sinngebung), implies a relationship of Mode (wie), Noun, Verb, Determinant, Adjacent, Complement categories. This Relationship presupposes, in turn, ontological inscription of being as an individual in the World, a beating and stressing inscription. The Name has a surrounded and, at the same time, expanded action effect. There is a pronominal instant in the thought, a PRO factor, anaphoric and cataphoric from left and right, up and down, horizontal and vertical. It creates a phonemic volume. The nucleus S-T of the word proceeds qualitatively and happens in existential Mode. Its resonance is the tone. Therefore, the articulated quantity is based on a point and principle that is present resounding in all acts of language. It is the poetic gramma or pre-scientific germ of language [6]. We do not know the origin of language, but it manifests itself in the basis of any speech act. And with this you can see that a phenomenological Linguistics or a linguistic Phenomenology can be made [7]. Language and thought require each other in the cognizant act: "Think and talk are simultaneous acts. The development of either of both is the one with the other", writes Karl Jaspers [8]. And knowledge contains poietic energy or an internal saying that is already an intentional act and the transit of one element into the other. That's why the word is, according to Gerber, Michel Bréal and Amor Ruibal, a trope. A Singular Inherency The Name is to be considered as a term affected in its root by means of a modal relative Function (F) which is explicitly reverberating in an environment, situation or circumstance, that is: N-Tm (R) F. The nominal value of the lexeme is specified as a noun or verb according to discursive understanding perception just as this takes the object of knowledge itself in a process in which the action is perceived or objectified. This explains the reason why Chomsky considers a "functional head" (v*) in the "full argument structure" of the verbal transitivity or element that determines the value of the root [9]. 
Noun and Verb are nominal roots on PRO tension (diátasis) that differs and separates (diástasis) distributing itself (diátaxis) by paradigms according to a lexical Category (Clx). The process can be represented in the following way: Language is a vital act in a modally relative tension of knowledge. It processes the dynamism of Being, its science. The motion of knowledge transforms the human sound into voice or phonemic unit. The sound action establishes a unit in which the sound waves become relational function of object-subject or ontological principle of knowledge. It is something new that reverses on the faculty allowing it and affecting its original energy. Its unity establishes an irradiation here-there-PRO-by which sound acquires a phenomenological rhythmic depth, a topological and indeterminate principle of inherent and multiple correlation. The correlation of sound equivalences induced by the ontological contains an implied, constant and durable association in which it is permanently inherent. It changes, mutates, but remains correlated in change and equivalence. In language there is an ongoing "relationship of origin", writes Amor Ruibal [10], meaning with origin the outbreak or articulated product and the direction to element enabling and conditioning it, even if we don't know their beginning. "Origin without origin", adds Jocelyn Benoist expounding Husserl's theory of meaning [11], but expounding it as well with a principle whose highlight is a molten unity of thought and transformed sound. This constitutes the significant and typological unit or the phonetic quantum (phono-phonological and iconic amalgam of voice) of linguistic motion. Another element of body action-articulated sound-displays other active factors of human behavior. With this a horizon of completeness is open. All terms are integrated in it with a certain tension (phrase, clause, proposition: German Satz 'leap') and forming a new unit thus giving meaning. The characters and terms integrated in this way possess the quality that, when analyzed, discloses its constituent principle. They have sense orientation. And this orientation is its name. It occurs in an intentional relationship of something unspecified being determined by integrating itself into a new unit. It is an internal and interpretative (i) factor or nominal (n) constitution of clause, sentence, proposition (p). We represent it as i p/n. The name displays a horizon of correlative integration [12]. The articulated motion acquired at a particular moment involves significant inherency. Its presence executes the ontological principle as tropic and existential framework of knowledge: Bild, entfreming, sheme, Ge-stel, etc. Such a phenomenon cannot be reduced to a simple abstraction. Nor can its dynamism be suspended, because such an act is the deployment of inherence thus remaining its attribute. It involves a certain vital color which in some way or another goes as a predicate in some of its determinations. In the verbal quantum thus elucidated you can find, therefore, the qualitative tension of its foundation, the theme as such. Theory is its praxis, and vice versa. Amor Ruibal insinuates therefore that language is text because of its objective constitution in the phonetic type. And all tension is due to rhythm. It is the fusion and fission point of the articulated S-T according to its Mode of existence. Each language contains a fundamental thoroughly resonance that constitutes it. It is cosmic energy. 
This resonance responds to the tropic character of language. Language is a trope. It always means more than what is said, as Nietzsche, Bréal, Amor Ruibal, Gerber, George Santayana, Jaspers, Ortega y Gasset and others would say. Conclusion Grammar is thus revealed as the tropology of thought. Language, considered in the way stated above, founds a new mental orb in the universe and guides its study towards the cognizant germ. It is poetic grammar.
v3-fos-license
2019-05-10T13:09:17.005Z
2009-12-01T00:00:00.000
151665202
{ "extfieldsofstudy": [ "Sociology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.7454/mssh.v13i2.771", "pdf_hash": "72828592061d33a57dadf0646119e55311b5de6d", "pdf_src": "Neliti", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2887", "s2fieldsofstudy": [ "Business" ], "sha1": "6617c57ad5a7db81a5a5fd4edca64f02af897a8c", "year": 2009 }
pes2o/s2orc
Subsidiary Perspective of Coordination Mechanisms on Localization Decisions, Working Environment, Marketing Engagement and New Product Performance New product launching (NPL) to the local market by subsidiary managers is a strategic activity that requires organizational support from the MNC's global network. The NPL activity is marked by high levels of uncertainty, risk, and market failure. Thus, a headquarter needs to integrate the subsidiary NPL into the global strategy. At the same time, subsidiary managers need a certain level of autonomy to ensure that the launching program is adapted to local specificities. These two pressures force subsidiary managers to take up the role of 'boundary spanners'. A good working environment between subsidiary managers and the headquarter is believed to be a determinant factor for new product performance. However, a good working environment between headquarter and subsidiary does not arise automatically. The type of coordination developed by the headquarter influences the working environment between the headquarter and subsidiary managers, and hence determines new product success. This research emphasizes that negotiation coordination is more suitable than hierarchical coordination for building a good working environment during the NPL process, which in turn determines the commercial performance of new products. Introduction A long research tradition on the factors that contribute to new product success started in the early 1960s. Studies by Burns and Stalker (1961), followed by Lawrence and Lorsch (1967), examined the effects of organizational structure on innovation success. This domain of research was continued between the 1970s and the beginning of the 1980s by prominent authors including Cooper (1979, 1984) and Calantone and Cooper (1981). Thereafter, various organizational factors have been analyzed during the process from new product development to commercialization. Those factors include interdepartmental cooperation (Zirger & Maidique, 1990), the support of top management (Montoya-Weiss & Calantone, 1994), and communication and training (Moenaert & Caeldries, 1996). Curiously, only a small number of studies have addressed the particular setting of internationalization. Several scholars have attempted to analyze NPL activities in MNC (multinational company) operations, but these are limited to new product development activities in R&D departments (e.g., Alphonso & Ralph, 1991; McDonough et al., 2001; Cheng & Bolon, 1993). According to another study, NPL is believed to be a source of competitive advantage (Friar, 1995) in obtaining and maintaining a favorable position in the global market. Thus, it is important to comprehensively analyze the NPL process in the MNC context. The MNC is confronted with the classical problems of integration and coordination of its globally dispersed activities (Stopford & Wells, 1972; Wilkins, 1974). From another point of view, subsidiaries need to be sufficiently differentiated to adapt to specific local factors, i.e., cultures, industries, government regulations, and consumers. Thus, the subsidiary NPL process for the local market is characterized by pressures of integration and localization (Jarillo & Martinez, 1990; Prahalad & Doz, 1981; Bartlett & Ghosal, 1989; Roth & Morisson, 1990; Taggart, 1998).
As subsidiaries face both integration and localization requirements, this research considers that subsidiary managers must simultaneously synchronize and harmonize the need for standardization with adaptation during the NPL process. The literature shows that NPL to new and existing markets is risky and expensive (Calantone & Montoya-Weiss, 1993; Schmidt & Calantone, 2002). According to Cooper (1986), only 1 out of 4 development projects is successfully launched in the market. Meanwhile, Stevens and Burley (1997) stated that only 1 out of 3,000 new product ideas is a commercial success. The NPL risk results when high investment is confronted with the high complexity of relations within interdependent units of an organization, which increases the uncertainty of positive market responses (Firmanzah, 2005). In the MNC context, the integration mechanism exercised over subsidiaries by the headquarter is considered the fundamental organizational factor that influences new product performance in the local market. It is therefore important to analyze the effects of the integration mechanism during NPL by subsidiaries. This article also attempts to address the classical problem of differentiation and integration (Lawrence & Lorsch, 1967) during NPL, which is believed to be an important organizational factor for new product success in subsidiaries. The subsidiary NPL is complex and expensive. The complexity results from the diversity of phases, from development to commercialization activities (Biggadike, 1979; Hultink et al., 1998; Guiltinan, 1999; di Benedetto, 1999; Hultink et al., 2000), and from the rich information originating both from the headquarter and from the local environment. The classical problem of the horizontal interface (Urban & Hauser, 1980; Zirger & Maidique, 1990) highlights the challenges of the vertical relation between headquarter and subsidiaries, which further contributes to the complexity of the NPL process. The process is also expensive: a wide array of activities, from market information gathering and treatment, laboratory activities, and market testing to commercialization campaigns, requires large financial resources. Consequently, the headquarter endeavors to ensure that the NPL process is implemented according to plan. Furthermore, the headquarter should coordinate this activity in order to maintain the consistency and synchronization of its global strategy. The integration of these activities is designed to minimize the failure risk of the new product in the local market by transferring knowledge and experience from other countries to local subsidiary managers. The subsidiary managers' role during the NPL process will be analyzed through the social psychology literature. According to this literature stream, no unit in an organization exists in isolation (Katz & Kahn, 1978; Kahn et al., 1964). Each unit is linked to other units, both directly and indirectly, through several mechanisms, e.g., the method of work, the nature of the task, and the reporting mechanism. To achieve efficiency, an organization requires a cohesive structure in which sets of functions and roles are integrated into the overall organizational strategy. Applying this perspective to the MNC context emphasizes the importance of the headquarter's task of organizing dispersed activities around the world. The global performance of the MNC depends on the performance of each subsidiary. Consequently, the headquarter is believed to be the integrating body in the MNC network, acting through control and coordination instruments (Cray, 1984).
From another perspective, subsidiary managers respond directly and indirectly, on a daily basis, to the specificities of the local environment. Therefore, subsidiaries require a degree of autonomy to adapt and localize their operations to the host country. Accordingly, subsidiary managers face two pressures, resulting from headquarter instructions and mandates on the one hand and from adaptation to the local environment on the other. This situation places subsidiary managers at the interface between the MNC's headquarter and the local environments of the host countries. This interface is called the boundary spanner (Au & Fukuda, 2002; Thomas, 1994). Organ (1971) argued that the boundary spanner has a linking-pin role between the organization and its environment. Wilensky (1967) considered the boundary spanner a contact person who plays a mediator's role between external demands for flexibility and internal requirements for efficiency. Aldrich and Herker (1977) underlined that the capacity of an organization to adapt to environmental constraints partly depends on the boundary spanner's capacity to find a compromise between the organization's strategy and the constraints of the external environment. The boundary spanner's primary activities are to build a perception of the external environment and to increase the organization's resource commitment to implement decisions (Dollinger, 1984). The boundary spanner is also considered the position that gathers and processes information from the external environment and transfers it internally (Keegan, 1974; Tushman & Scanlan, 1981a, 1981b). In spite of the positive aspects of the boundary spanner's position, many studies have illustrated the vulnerability of this position and its negative consequences for work performance. Miles (1976) showed that the nature of the boundary spanner's role stimulates role conflict. In the same vein, Kahn et al. (1964) underlined that employees located between the enterprise and its environment are particularly subject to role stress, role ambiguity, and role conflict. Role stress is strongly associated with negative consequences for the short-term and long-term performance of employees (Stamper & Johlke, 2003). This situation has led to the analysis of the working environment of the boundary spanner (the subsidiary managers) during the NPL process. The working environment refers to how individuals in an organization interpret working conditions and interact with each other concerning the required roles and tasks (Hellriegel & Slocum, 1974).

Figure 1. The General Model of Hypotheses.

Previous research shows that the working environment is both a structural phenomenon and a phenomenon of interaction. According to the structuralist view, the working environment is a function of the structured patterns in an organization (Ashforth, 1985). The division of work, centralization or decentralization of decisions, and formalization are the determinant factors for the working environment. Based on the interaction perspective, the working environment is the result of interaction patterns between units and actors in an organization (Schneider & Reichers, 1983). The integration mechanism developed by the headquarter covers both perspectives. The integration mechanism embeds the types of tasks and functions of every unit and actor, and also defines how and by what mechanism the headquarter interconnects the different units in the MNC network. This situation is believed to influence the working environment between headquarter and subsidiary managers.
If the headquarter imposes a high degree of integration through standardization, formalization, and mechanistic procedures, the working environment between headquarter and subsidiaries is very formal and procedural. On the other hand, if the headquarter applies a low degree of integration, based on interactions rather than bureaucratic procedures, the working environment between the headquarter and its subsidiary managers is more informal and flexible (George & Bishop, 1971). Several past studies showed positive relations between the working environment and employee satisfaction (Churchill et al., 1976) and motivation (Tyagi, 1982) toward the given tasks and work. Yoon et al. (2001) confirmed that the internal working environment influences the relations between employees and consumers, which consequently determine the overall performance of the enterprise. How employees build relations with consumers determines consumers' reactions to the goods and services offered by the enterprise in the market. The effect of the working environment on performance has become a major topic in psychological research. Several studies confirmed that a good working environment contributes positively to the efficiency of work realization (Rogg et al., 2001) and to work performance and organizational goals (Lyons & Ivancevich, 1974). In the subsidiary NPL process, a good working environment between headquarter and subsidiary managers is considered to contribute positively to the way subsidiary managers carry out the new product development process and commercialization. Such a situation leads to positive performance of the new product. On the other hand, a bad working environment creates an uncomfortable and harmful situation, and most of the subsidiary managers' efforts are dedicated to solving relational problems with the headquarter. Consequently, less effort will be committed to implementing the new product planning and strategy, thus negatively influencing new product performance.

H1a: A good working environment between headquarter and subsidiary managers positively influences both the commercial and the technical new product performance.

This research considers that the main objective of the presence of consumer goods subsidiaries in a host country is to conquer the local market. Authors like Behrman (1972) considered that one objective of a foreign direct investment presence is to serve the local market better in order to win the local competition. Therefore, the specificity of the local environment is the main concern of subsidiary managers. The classical literature on the contingency perspective argues that the fit between organization and environment is an important condition for surviving and performing in a given market (Lawrence & Lorsch, 1967; Burns & Stalker, 1961; Bourgeois, 1985). Following this schema, subsidiaries need to adapt to local features in order to achieve superior performance. Therefore, this research believes that the local character of decisions in each stage of the subsidiary NPL process will contribute to superior new product commercial performance.

H1b: The localization of subsidiary NPL decisions positively influences the new product commercial performance.

As the consumers of consumer goods companies are individuals, commercial performance is determined by the manner in which subsidiary managers influence individual behavior. Subsidiary managers should develop marketing strategies during the NPL process.
Mass marketing, organizational support, superior new products, and distribution channels are factors considered important in developing the marketing strategy. At the local level, components including product, price, promotion, and publicity must be coordinated to reach geographically dispersed individual consumers. Thus, the level of effort and resources dedicated to mass marketing allows subsidiaries to better reach individual consumers. The subsidiary NPL also needs the contribution and coordination of all departments within the subsidiary organization. Functions such as marketing, production, finance, human resources, and R&D should be harmonized during the process. Much past research showed that a superior product is an important element of new product success (Maidique & Zirger, 1984; Cooper & Kleinschmidt, 1987; Montoya-Weiss & Calantone, 1994). The last factor of the marketing strategy, the distribution channel, is important for consumer goods companies, as the success of a new product highly depends on how effectively the new product is brought closer to individual consumers. These four factors positively determine the new product commercial performance.

H1c: A high level of marketing strategy engagement positively influences the new product commercial performance.

The working environment is considered to influence strategic decision-making during the NPL process. The working environment provides the context in which strategies are formulated (Daft, 1978) and the setting for work-related realization (Miller, 1997). A working environment that is favorable and supportive of the strategy formulation process will enhance the quality of the strategic decision and its realization. On the other hand, a working environment that impedes the exchange of ideas and communication during strategic decision formulation will reduce the quality of the strategic decision and its implementation. Therefore, a favorable working environment, in which strategies are elaborated and decided, is an important factor for the level of marketing strategy engagement.

H2a: A good working environment between headquarter and subsidiary managers during the NPL process positively influences the degree of marketing strategy engagement.

This research is based on the perspective that a consumer goods MNC manages a wide array of global products. (Here, a global new product is a new product resulting from market research and R&D conducted by the headquarter and regional offices; the role of the subsidiary is merely to introduce the new product to the local market.) Consequently, subsidiary managers also introduce and commercialize these products in the local market. Global products need a certain amount of standardization and harmonization for the global market. Thus, any adaptation necessary for the local market should follow the guidelines from the headquarter. The role of the headquarter is very important in developing the characteristics of global products. Innovation and brand decisions for a global product are important factors in ensuring the harmonization and consistency of global strategy development and implementation. Generally, the R&D unit of a consumer goods MNC is centralized, in one location, under the headquarter's full control. Thus, new product innovation during the subsidiary NPL process is highly centralized in the headquarter. Subsidiary managers can contribute to this process, although their contribution is limited to the roles of local information gathering and processing.
Brand construction is also considered a global initiative. Publicity themes and channels are centralized. Limited amounts of necessary adaptation exist, but they do not change the global strategy framework. For the above decisions, subsidiary managers need the headquarter to organize and standardize the global strategy to ensure harmonization and consistency. Therefore, the centralization of these decisions will result in role clarity between headquarter and subsidiary managers. In contrast to the innovation and brand decisions, the commercialization decision is highly correlated with the specificities of the host country. The price, launch time, distribution, and promotion are locally sensitive decisions. Subsidiary managers must respect local characteristics more than the headquarter's global guidelines. Thus, the localization of commercialization decision-making will enhance the role clarity between headquarter and subsidiary managers.

H2b: The localization of new product innovation and brand decisions negatively influences the working environment between headquarter and subsidiary managers.

H2c: The localization of new product commercialization decisions positively influences the working environment between headquarter and subsidiary managers.

Previous research confirmed that the configuration of the organizational structure plays an important role in forming and conditioning the organizational working environment (George & Bishop, 1971; Schneider & Reichers, 1983; Rousseau, 1988; Patterson et al., 1996). The integration mechanisms employed by the headquarter in order to harmonize subsidiary activities with the global network influence the working environment between headquarter and subsidiary managers. The integration mechanism in the subsidiary NPL process can consist of negotiation coordination or hierarchical coordination. Negotiation coordination relies on communication and feedback, or adjustment to unforeseen and unexpected situations. This mechanism incites active contributions from each unit. Communication and information exchange between headquarter and subsidiary managers are considered means of mutual adjustment of the different functions and roles involved in the NPL process. Thus, the utilization of negotiation coordination enhances a good working environment between headquarter and subsidiary managers.

H3a: Negotiation coordination positively influences the headquarter and subsidiary managers' working environment during the NPL process.

On the other hand, an integration mechanism that applies hierarchical and authoritarian mechanisms to the relationship between headquarter and subsidiary negatively influences the working environment. The process of hierarchical coordination takes place based on the intervention and programming of the headquarter during the subsidiary NPL process. Under this mechanism, subsidiary managers are confronted with the double pressure, often contradictory, of the headquarter's orientation and intervention as well as local pressure. This double pressure undermines a good working environment between headquarter and subsidiary managers.

H3b: Hierarchical coordination negatively influences the headquarter and subsidiary managers' working environment during the NPL process.

The locus of NPL decision-making is influenced by the headquarter's integration mechanism. If the headquarter applies negotiation coordination, subsidiary managers will take more initiative and participate in the decision-making process during NPL. This type of coordination allows information exchange and discussions between headquarter and subsidiary managers.
Negotiation coordination enables subsidiary managers to play important roles in problem solving during the NPL process, as they understand the actual host country environment. Such knowledge is an important factor for launching decision-making and execution. The use of negotiation facilitates subsidiary managers in conveying local information and specific conditions during the decision-making process with the headquarter. Therefore, negotiation coordination tends to orient NPL decisions towards local characteristics more than global standardization. On the contrary, hierarchical coordination prevents adjustment and information exchange between headquarter and subsidiary managers. Under this mechanism, the headquarter plays the major role in coordinating and integrating the dispersed activities of subsidiaries worldwide. Fixing and programming activities are often conducted by the headquarter. Even though subsidiary managers have the opportunity to make certain program adjustments, they will not change the general program framework decided by the headquarter. Subsidiary managers are passive rather than active actors, as it is the headquarter that plans and develops the program for harmonization in each phase of the NPL process. Therefore, under this type of coordination, the interest in global standardization is more powerful than local adaptation.

H3c: Negotiation coordination tends to orient the subsidiary NPL towards localization rather than global standardization.

H3d: Hierarchical coordination tends to orient the subsidiary NPL towards global standardization rather than localization.

Negotiation coordination stresses a relational rather than an intervention pattern. Subsidiary managers are granted autonomy to decide and communicate the strategy and action plan for bringing new products into the local market. The strategy implementation literature confirms that incorporating those who are involved in or affected by the implementation of a decision increases the degree of acceptability of the strategic decision (Miller, 1997). Such incorporation influences the degree of motivation in strategy realization. Therefore, negotiation coordination increases the level of marketing strategy engagement in subsidiaries. In contrast, hierarchical coordination emphasizes orientation of and intervention in subsidiary activities. Under this mechanism, subsidiary managers are not given space to take initiative and present opinions during strategic decision-making and implementation. In other words, no close linkage exists between the decision that must be executed and those who will execute it, particularly if there is a contradiction between what subsidiary managers think and what they must do. Subsidiary managers will put into operation the NPL program as demanded by the headquarter. This situation will decrease the subsidiary managers' commitment to achieving the objectives of the NPL program determined by the headquarter. Therefore, the utilization of hierarchical coordination will decrease the subsidiary's marketing strategy engagement during the NPL process.

H3e: Negotiation coordination increases the degree of the subsidiary managers' marketing strategy engagement.

H3f: Hierarchical coordination decreases the degree of the subsidiary managers' marketing strategy engagement.

Methods The questionnaire was constructed based on the principle of discriminating between the success and failure of new products (Cooper, 1979). The respondents were asked to differentiate between two products representing success and failure cases.
Therefore, each question had to be answered for these two different dimensions of success and failure. Calantone and Cooper (1979) argued that this method allows the analysis of responses by directly comparing the factors contributing to success or failure. This mechanism also helps respondents cognitively differentiate between past NPL experiences contributing to success and to failure (NPLs realized within the past five years). The development of the subsidiary sample was divided into the following two phases: (1) selecting a list of subsidiaries from existing databases (Kompass and icpcredit), and (2) gathering a list of subsidiaries via the internet site of each MNC. The final sample consists of 1,167 consumer goods subsidiaries in 18 countries located in two regions, Asia and Latin America. The reason for focusing on consumer goods subsidiaries is that the frequency of NPL by consumer goods companies is higher than that of industrial companies, and consumer goods companies have sufficient experience in launching new products in the local market. The postal survey was conducted twice, addressed to the marketing or commercial directors of the subsidiaries. Considering the diversity of subsidiary locations as well as the managers' nationalities, the questionnaires were in English. As a standard international business language, English minimizes comprehension bias arising from different cultures and local social conceptions in different countries. To facilitate questionnaire completion by subsidiary managers and to save time, a special website was developed for participants to complete the questionnaire. Subsidiary managers were able to take part in this study by visiting www.firmanzah.bacabuku.net to fill out the questionnaire. Finally, 69 subsidiaries agreed to participate in this study; 55 respondents (79.7%) responded online and 14 (20.3%) by mail. As each subsidiary provided two cases (products), the database comprises 138 products, of which 50% are successful. The product is the level of analysis, as the whole organizational process is reflected in the success or failure of products in the market. The low participation rate of subsidiaries was due to several factors, e.g., the length of the questionnaire, information confidentiality, and language barriers. Operational measures. To identify the distinct variables in each concept, a principal components analysis (PCA) was used to analyze the items (as the sample size was not sufficient for confirmatory factor analysis). The author used oblimin rotation, since moderate-sized correlations were expected among some factors. The pattern matrices of the five concepts mapped onto the scales as expected, providing evidence of the factorial validity of the measures. To construct the integration form, respondents were asked to think about their relationship with the headquarter and about internal cross-functional coordination within the subsidiary, using a series of statements on a scale ranging from 1 (very low) to 5 (very high). The PCA shown in Table 1 led to two coordination mechanisms, i.e., (1) negotiation and (2) hierarchical coordination. The interpretation of negotiation coordination is based on the concept of coordination by communication and feedback of March and Simon (1958) and corresponds to the relational coordination construct of Gittel (2000). Communication and feedback facilitate the interaction process and enable adjustment activities across the different units.
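The extraction of correlated components with an oblique rotation, as described above, can be sketched in a few lines of Python. This is only an illustration of the technique, not the author's original analysis workflow: it assumes the third-party factor_analyzer package (which fits a common factor model rather than the paper's PCA), and the item names and simulated responses are hypothetical placeholders standing in for the actual questionnaire items.

```python
# Illustrative sketch of factor extraction with an oblique (oblimin) rotation.
# Assumptions: the factor_analyzer package; simulated 1-5 Likert items.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
n = 138  # two products from each of the 69 participating subsidiaries

# Simulate two correlated latent dimensions (e.g., negotiation vs. hierarchical
# coordination) and six observed 1-5 items loading on them.
latent = rng.multivariate_normal([0.0, 0.0], [[1.0, -0.3], [-0.3, 1.0]], size=n)
noise = rng.normal(0.0, 0.7, size=(n, 6))
items = np.clip(np.rint(3 + latent[:, [0, 0, 0, 1, 1, 1]] + noise), 1, 5)
df = pd.DataFrame(items, columns=[f"item_{i + 1}" for i in range(6)])

fa = FactorAnalyzer(n_factors=2, rotation="oblimin")  # oblique: factors may correlate
fa.fit(df)
pattern = pd.DataFrame(fa.loadings_, index=df.columns, columns=["factor_1", "factor_2"])
print(pattern.round(2))  # pattern matrix: which item loads on which factor
```

An oblique rotation is consistent with the expectation of moderate correlations among factors; an orthogonal rotation such as varimax would instead force the extracted factors to be uncorrelated.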
This type of coordination facilitates the circulation of information. It is also characterized by continuous communication and a relational dimension in the organization (problem resolution, mutual respect, shared objectives, and knowledge sharing). In contrast, hierarchical coordination closely relates to the concept of coordination by programming of March and Simon (1958). This form of coordination stresses control of and intervention in the NPL. The headquarter decides the specialization of the activities of each subsidiary and synchronizes them within the global network. The locus of decision dimension was developed by asking subsidiary managers questions about the standardization versus adaptation of various decisions, ranging from 1 (highly following the headquarter) to 5 (highly adapting to the local environment). The result of the PCA is shown in Table 2. The locus of decision-making covers three types of subsidiary NPL decisions, i.e., (1) decisions concerning new product innovation, (2) decisions relating to brand identity, and (3) decisions associated with commercialization. The innovation decisions concern the degree of innovation and the driver of new product innovation. The brand identity decisions relate to brand positioning and characteristics (logo, symbol, picture, and personality). The commercialization decisions concern pricing, the choice of distribution channels, and new product promotion. The working environment variable was developed by asking about the relational climate between the headquarter and subsidiary managers, ranging from 1 (very poor) to 5 (excellent). This construct measures whether the actors have a clear vision of the activities required during NPL and whether they work in a harmonious climate. As shown in Table 3, the PCA provides two fundamental concepts, i.e., (1) role clarity and (2) functional conflict. Role clarity corresponds to the degree to which the individual comprehends and understands the activities required to achieve his or her tasks (Kelly & Hise, 1980). The concept of role clarity is the inverse of role ambiguity, which is defined as the lack of clarity in the definition, finality, and means of recognizing the tasks (King & King, 1990). Role ambiguity also illustrates the situation in which an actor who is unaware of the required tasks must face multiple demands. The second dimension of the working environment, functional conflict, defines the situation where different points of view are exchanged among organizational units during problem solving (Jehn, 1994). Functional conflict measures the differences in ideas and perspectives between headquarter and subsidiary managers during the NPL process. This type of conflict is closely associated with cognitive conflict (Amason, 1996; Amason & Mooney, 1999) and task conflict (Janssen & Veenstra, 2000; Jehn & Mannix, 2001), and it is believed to improve decision quality. Dysfunctional conflict, on the other hand, provokes serious organizational problems because it incorporates personal and emotional conflicts. The marketing strategy variable was measured by asking about the quality of each item in the marketing strategy, ranging from 1 (strongly disagree) to 5 (strongly agree). The results of the PCA, as shown in Table 4, illustrate four factors, i.e., (1) mass marketing efforts, (2) new product superiority, (3) distribution channel engagement, and (4) organizational support.
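The paper validates these item sets through PCA pattern matrices (Tables 1-5). Purely as an illustration of one common way to work with such 1-5 Likert item sets afterwards, the sketch below scores each construct as the mean of its items and adds a simple internal-consistency check; neither the scoring rule, the item-to-construct mapping, nor the consistency statistic is taken from the paper.

```python
# Minimal sketch: turn Likert item sets into construct scores (illustrative only).
import numpy as np
import pandas as pd

def construct_score(df, items):
    """Score a construct as the mean of its items for each respondent."""
    return df[items].mean(axis=1)

def cronbach_alpha(df, items):
    """Simple internal-consistency check for an item set (not reported in the paper)."""
    x = df[items].to_numpy(dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical item-to-construct mapping for the two working-environment dimensions:
mapping = {"role_clarity": ["item_1", "item_2", "item_3"],
           "functional_conflict": ["item_4", "item_5", "item_6"]}
# With `df` holding the Likert items (as in the previous sketch):
# scores = {name: construct_score(df, cols) for name, cols in mapping.items()}
# alphas = {name: cronbach_alpha(df, cols) for name, cols in mapping.items()}
```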
The first and the third factors are related to marketing mix elements, whereas the second and fourth are associated with the new product success factors of Cooper and Kleinschmidt (1987) and Montoya-Weiss and Calantone (1994). These factors are important during NPL because they influence how subsidiaries seek ways to win the competition in the local market. The new product performance variable was built by asking about the degree of new product performance achieved compared to the respondents' initial expectations, ranging from 1 (far less) to 5 (far exceeded). The PCA shown in Table 5 distinguishes two types of new product performance, i.e., (1) commercial performance and (2) technical performance. Commercial performance refers to all market performance, including consumer satisfaction and acceptance, market share, sales volume, product revenue, and profitability. Technical performance refers to the quality of realization in each phase and stage of NPL and commercialization. Several authors, e.g., Hultink et al. (1998), Guiltinan (1999), and di Benedetto (1999), argued that the coherence and constancy of new product development and commercialization is an important dimension of new product success. Therefore, the measurement of program achievement compared to the initial planning is an important dimension of new product performance. Results and Discussion Descriptive statistics and zero-order correlations are presented in Table 6 (more detailed results are available upon request). Although the correlations were generally consistent with our expectations, the direct relationships between commercial performance and the coordination types were not statistically significant (r = .11 for hierarchical coordination; r = −.139 for negotiation coordination). In contrast, the zero-order correlations showed significant associations between the coordination mechanisms and technical performance (r = −.461, p < .01 for hierarchical coordination; r = .433, p < .01 for negotiation coordination). From this result, hierarchical coordination is negatively correlated with technical performance, whereas negotiation coordination is positively correlated with technical performance. Hypothesis tests were conducted using structural equation modeling (AMOS 5). As noted earlier, it was our intention to obtain a comprehensive measure of an organization's subsidiary new product success. This type of analysis has the advantage of correcting for the unreliability of measures and also provides information about the unique paths between the constructs. The global model test provided a good fit to the data (χ² = 875.057, df = 826, p < .05, CFI = .987, IFI = .987, RMSEA = .021). The relatively small sample size, multivariate non-normality, and non-linear interaction term may adversely affect the stability of the estimates. In order to check the robustness of the findings, the author reassessed the hypothesized relations with a bootstrap computation. The bootstrap involves the repeated re-estimation of a parameter using random samples drawn with replacement from the original data. These analyses allow the calculation of confidence intervals for the estimates. The bootstrap analysis using AMOS provides a p value (Bollen-Stine bootstrap) for the model of .935. Considering the conventional significance level of .05, the model is accepted for testing the hypotheses. In other words, the model fits the data and is globally robust for testing each of the hypothesized relations.
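The bootstrap idea invoked above can be illustrated with a generic nonparametric resampling sketch: re-estimate a statistic on many samples drawn with replacement and read a confidence interval from the resampled distribution. This is not the Bollen-Stine procedure implemented in AMOS, and the simulated variables below are placeholders.

```python
# Generic nonparametric bootstrap sketch (illustrative; not the AMOS procedure).
import numpy as np

def bootstrap_ci(x, y, stat=lambda a, b: np.corrcoef(a, b)[0, 1],
                 n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample cases with replacement
        estimates[b] = stat(x[idx], y[idx])   # re-estimate the parameter
    lower, upper = np.quantile(estimates, [alpha / 2, 1 - alpha / 2])
    return stat(x, y), (lower, upper)

# Simulated stand-ins for, e.g., a working-environment score and commercial performance:
rng = np.random.default_rng(1)
x = rng.normal(size=138)
y = 0.4 * x + rng.normal(size=138)
print(bootstrap_ci(x, y))  # point estimate and 95% percentile interval
```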
The tests of hypothesis 1 show the importance of the working environment between headquarter and subsidiary managers as a determinant factor for new product success. Meanwhile, the other factors, including the locus of decisions and the marketing strategy, do not show statistically significant relations in our model.

Figure 2. Structural Model Significant Standardized Parameter Estimates.

Both elements of the working environment positively influence the new product commercial performance (β = .37, p < .05 for role clarity; β = .34, p < .05 for functional conflict). In addition, technical performance also increases market performance (β = .21, p < .05). Curiously, when the author separately tested marketing strategy and new product performance (χ² = 217.628, df = 187, p < .05, CFI = .981, IFI = .982, RMSEA = .038), several elements were statistically significant for new product commercial performance (β = .42, p < .05 for mass marketing; β = .41, p < .05; β = .20, p < .05 for technical performance). Similar results were also obtained when the author partially tested the locus of decisions and commercial performance (χ² = 195.454). The second hypothesis illustrates the importance of the locus of decisions for the functional conflict between headquarter and subsidiary managers during NPL. However, different patterns of influence exist. Both the localization of commercialization decisions and the localization of brand identity decisions positively influence functional conflict (β = .49, p < .05; β = .58, p < .05). On the contrary, the localization of innovation decisions negatively influences functional conflict (β = −.85, p < .05). The third hypothesis reinforces past results concerning the structural and interaction effects on the working environment (Ashforth, 1985; Schneider & Reichers, 1983). Negotiation coordination increases role clarity during NPL (β = .63, p < .05), whilst hierarchical coordination reduces functional conflict between headquarter and subsidiary managers during NPL (β = −.73, p < .05). Coordination is an integration mechanism for managing headquarter and subsidiary activities in the value-chain process. The results of the hypothesis testing show that negotiation coordination increases the subsidiary managers' role clarity. This integration mechanism allows the clarification of subsidiary managers' roles through mutual adjustment with the headquarter. In this context, subsidiary managers are not merely implementing bodies of the global strategy. More than that, they make their own decisions and have ideas and interests concerning the required tasks. Thus, negotiation coordination is important, as it facilitates the adjustment and idea exchange that enable clear roles between headquarter and subsidiary managers. In contrast, hierarchical coordination impedes the discussions, information and idea exchange, and problem-solving in NPL decision-making involving headquarter and subsidiaries. It reduces the idea and information exchange because subsidiary activities are programmed during the process. This mode of coordination also leads to passive behaviour of subsidiary managers, because everything has been decided by the headquarter. The subsidiary managers' role is limited to that of an implementing body of the strategic decisions made by the headquarter. Therefore, this type of coordination negatively influences functional conflict during the subsidiary NPL. Other results of this research also show the importance of a good working environment between headquarter and subsidiary managers during the NPL process. The subsidiary working environment determines the NPL success in the local market.
The hypothesis testing illustrates that the working environment is more significant in influencing new product performance than the locus of decisions and the marketing strategy. Two measures of the working environment have been analyzed, i.e., role clarity and functional conflict. Role clarity is vital for subsidiary managers because they need clarity of roles, tasks, and jobs in their interactions with the headquarter. Many authors have shown that this situation enhances the implementation quality, motivation, and engagement of the actors (Miles & Petty, 1975; Teas et al., 1979; Kelly & Hise, 1980). Our research also supports these past findings by indicating that role clarity has a positive relation with new product commercial performance. The findings of this research also support the decision-making process literature. This article demonstrates that functional conflict positively influences new product commercial performance. Decision quality requires varied reflection, ideas, and information exchange among the different units in an organization (Hambrick & Mason, 1984) in order to analyze and develop the NPL program more comprehensively. This situation can facilitate commercialization and thus increase performance (Rogg et al., 2001; Harborne & Johne, 2003). A good working climate helps the actors of an organization develop mutual respect, information sharing, and inter-departmental cooperation. Conclusion The results of the hypothesis testing reinforce past research findings (e.g., Schneider & Reichers, 1983). According to these, the working environment is influenced by the organizational structure (formalization, specialization, centralization, etc.) and by the actors' construction of perceptions. In this context, the working environment has both objective aspects (the organizational structure) and subjective aspects (the actors' perceptions). Subsidiary managers establish the sense and significance of their roles based on the integration mode developed by the headquarter. If the headquarter applies high levels of control and coordination, this minimizes the roles of subsidiary managers. If the headquarter allows subsidiaries more autonomy, the managers will have more strategic roles during the NPL process. However, in the structural equation modeling it is the working environment dimensions that have a significant effect on new product performance. The two dimensions of the working environment, role clarity and functional conflict, increase the commercial performance of the new product launched by the subsidiary in the local market. It seems that a good working environment facilitates good communication and information exchange among managers at the headquarter and subsidiary levels. Such a mechanism is believed to be a main source of organizational effectiveness (Churchill et al., 1976; Tyagi, 1982; Yoon et al., 2001), and it thus increases the quality of the products and services produced by the firm. This research has certain limitations. Firstly, it did not take into consideration distinctions among subsidiaries. In reality, a subsidiary may be a joint venture with a local partner (Killing, 1983; Yan & Gray, 1994), and this structure can influence the decision configuration with the parent companies. Subsidiary managers then deal not only with the headquarter but also with the interests of the local parent company. Not considering this situation reduces the pertinence of the research conclusions. Secondly, it did not distinguish among several types of new products.
The new product literature distinguishes several types of new products (Booz Allen Hamilton, 1982; Garcia & Calantone, 2002; Song & Montoya-Weiss, 1998; Kleinschmidt & Cooper, 1991). Therefore, different new product types need to be analyzed separately. I gratefully acknowledge the valuable advice from and discussions with Prof. Dr. Jacques Jaussaud, Avanti Fontana Ph.D., Ratna Indraswari, and Janfry Sihite during the refinement of an earlier version of this paper. Any remaining deficiencies are my sole responsibility.
v3-fos-license
2020-06-25T09:08:52.852Z
2020-07-01T00:00:00.000
225709298
{ "extfieldsofstudy": [ "Environmental Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://agupubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2019GB006503", "pdf_hash": "f6169e4e64f8b9af8b5d4e877508f1edd40d3585", "pdf_src": "Wiley", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2889", "s2fieldsofstudy": [ "Environmental Science", "Agricultural And Food Sciences" ], "sha1": "3004dcbe2f815b08731f442977012835b69fc827", "year": 2020 }
pes2o/s2orc
Rewetting Offers Rapid Climate Benefits for Tropical and Agricultural Peatlands But Not for Forestry-Drained Peatlands Peat soils drained for agriculture and forestry are important sources of carbon dioxide and nitrous oxide. Rewetting effectively reduces these emissions. However, rewetting also increases methane emissions from the soil and, on forestry-drained peatlands, decreases the carbon storage of trees. To analyze the effect of peatland rewetting on the climate, we built radiative forcing scenarios for tropical peat soils, temperate and boreal agricultural peat soils, and temperate and boreal forestry-drained peat soils. The effect of tree and wood product carbon storage in boreal forestry-drained peatlands was also estimated as a case study for Finland. Rewetting of tropical peat soils resulted in immediate cooling. In temperate and boreal agricultural peat soils, the warming effect of methane emissions offsets a major part of the cooling for the first decades after rewetting. In temperate and boreal forestry-drained peat soils, the effect of rewetting was mostly warming for the first decades. In addition, the decrease in tree and wood product carbon storage further delayed the onset of the cooling effect for decades. Global rewetting resulted in increasing climate cooling, reaching −70 mW (m2 Earth)−1 in 100 years. Tropical peat soils (9.6 million ha) accounted for approximately two thirds and temperate and boreal agricultural peat soils (13.0 million ha) for one third of the cooling. Forestry-drained peat soils (10.6 million ha) had a negligible effect. We conclude that peatland rewetting is beneficial and important for mitigating climate change, but abandoning tree stands may instead be the best option for forestry-drained peatlands. Introduction Efficient climate change mitigation requires a drastic decrease in greenhouse gas emissions during the next few decades (Intergovernmental Panel on Climate Change [IPCC], 2018). Strengthening greenhouse gas sinks in ecosystems is needed in addition to emission reductions from industry, energy production, and transport (Rockström et al., 2017; Rogelj et al., 2018). Strengthening ecosystem sinks could mean, for example, increasing the carbon sink in forests or decreasing land use-induced carbon loss from soils. Peatlands are important regulators of atmospheric greenhouse gas concentrations and the climate. Undrained peatlands are, on the one hand, carbon dioxide (CO2) sinks due to peat accumulation (e.g., Loisel et al., 2014; Yu, 2011). On the other hand, they are methane (CH4) sources due to favorable conditions for methanogenesis (e.g., Couwenberg et al., 2010; Korhola et al., 2010; Pangala et al., 2013). These two greenhouse gases have very different properties (Etminan et al., 2016; Myhre et al., 2013a, 2013b): CH4 has a 137-fold higher radiative efficiency (including indirect effects) per kilogram of gas than CO2 when in the atmosphere. However, CH4 is also very short lived (atmospheric lifetime 12 years) compared to CO2. Due to CH4 emissions, an undrained peatland may have a climate-warming effect (a positive radiative forcing) for up to several thousand years after its initiation (Figure 1; Frolking et al., 2006; Frolking & Roulet, 2007; Mathijssen et al., 2014, 2017). An undrained peatland will eventually have a climate-cooling effect: the warming effect of the short-lived CH4 stabilizes over time, while increasing amounts of CO2 are removed from the atmosphere due to peat accumulation.
Peatlands drained for agriculture or forestry have a completely different effect on the climate compared to undrained peatlands. As drainage decreases methanogenesis and favors methanotrophy due to a lowered water table, drained peatlands are negligible CH4 sources or even act as CH4 sinks (e.g., Couwenberg et al., 2010; Hiraishi et al., 2014b; Ojanen et al., 2010). On the other hand, drainage causes peat loss due to enhanced aerobic decomposition. Peat loss leads to CO2 and nitrous oxide (N2O) emissions, as carbon (C) and nitrogen are released from the peat (e.g., Hiraishi et al., 2014b; Tiemeyer et al., 2016). Peatland drainage may initially have a climate-cooling effect due to the decrease in CH4 emissions but will eventually have a climate-warming effect due to the persistent CO2 and N2O emissions caused by progressive peat loss (Figure 1; Dommain et al., 2018; Laine et al., 1996). If a drained peatland is rewetted, greenhouse gas exchange levels close to those of an undrained peatland may be reinstated, as the CO2 and N2O emissions decrease and the CH4 emission increases. Based on emission factors for drained and rewetted peatlands, converted to CO2 equivalents by applying global warming potentials, rewetting has been found to have a climate-cooling effect across various climate and land use categories (Hiraishi et al., 2014b; Wilson et al., 2016). Thus, peatland rewetting has been promoted as a way to effectively mitigate climate change (Joosten et al., 2012). However, the CO2 equivalent approach has two loopholes that prevent us from drawing well-founded conclusions on the potential that peatland rewetting offers in mitigating current climate change:

1. Global warming potentials used to calculate the CO2 equivalents are accurate only when comparing pulse emissions (= emissions that occur at this moment; Myhre et al., 2013a, 2013b). When comparing sustained emissions due to permanent land use changes, global warming potentials underestimate the relative importance of short-lived greenhouse gases, that is, CH4 in the case of peatlands.

2. Even if specific sustained global warming potentials (e.g., Neubauer & Megonigal, 2015) are used instead, the resulting CO2 equivalents describe the average effect on the climate over a certain time frame, typically exceeding 100 years. However, both warming and cooling effects may occur during the chosen time frame (Figure 1).

To reveal the temporal dynamics of rewetting on the climate, radiative forcing needs to be considered instead of CO2 equivalents. As the effect of rewetting on the climate mirrors that of drainage, it can have both warming and cooling phases (Figure 1). Compared to peatland initiation, the effect of rewetting is different: in addition to creating an undrained-like CH4 source and CO2 sink, successful rewetting halts the CO2 and N2O emissions from the drained peat soil. Thus, a much faster climate-cooling effect can be expected for rewetting than for peatland initiation (Figure 1). So far, only ground vegetation and soil have been considered when compiling the emission factors for drained and rewetted peatlands (Hiraishi et al., 2014b; Wilson et al., 2016). This omission of tree stands may be acceptable for agricultural peatlands rewetted to open fens but is insufficient in many other cases. Changes in tree stand C dynamics due to rewetting may have both (1) climate-warming and (2) climate-cooling effects.
(1) For example, increasing tree biomass is a large CO2 sink in boreal forestry-drained peatlands (e.g., Hommeltenberg et al., 2014; Lohila et al., 2010; Minkkinen et al., 2018; Uri et al., 2017). As drainage has largely increased average tree growth and biomass (e.g., Hökkä et al., 2008; Seppälä, 1969), rewetting is likely to largely decrease tree growth and the CO2 sink of the tree biomass. This decrease has a climate-warming effect. (2) Many undrained peatlands are forested, such as the tropical peat swamp forests of Indonesia (Page et al., 1999). Rewetting a peatland drained and cleared for agriculture back into an undrained forest (Lampela et al., 2017) creates a growing tree stand and a CO2 sink in the tree biomass. This CO2 sink has a climate-cooling effect. These tree stand effects should be considered when evaluating the climate change mitigation potential of peatland rewetting.

Figure 1. (a) An example of radiative forcing caused by the soil CO2 sink and CH4 and N2O source of mire development over 620 years, since mire initiation at year 1500, and alternative scenarios due to drainage in 1970 and rewetting in 2020. The applied gas sinks (−) and sources (+) for CO2, CH4, and N2O are realistic values for one m2 of peatland soil but are chosen for illustrative purposes and do not represent any specific peatland type or climate: undrained and rewetted: −130, +7, and +0.1 g year−1 of gas; drained: +130, ±0, and +0.2 g year−1 of gas. (b) Magnification of panel (a) for 1970-2120. Cooling and warming effects of drainage and rewetting are shown with blue and red arrows. (c) The effects of drainage (= drained − undrained) and rewetting (= rewetted − drained); the effect of rewetting is the object of this study. Blue and red colors are used to emphasize the cooling and warming phases. See section 2.4 for the calculation of radiative forcing.

This study aims to answer two questions: Can the rewetting of peatlands drained for agriculture and forestry be used to mitigate climate change? How important would the rewetting be on a global scale? For this purpose, we constructed radiative forcing scenarios for rewetting peat soils belonging to different climate and land use categories by applying soil emission factors. These emission factors vary greatly between the categories, and thus the simulations offer a tool for inspecting the effect of rewetting over a wide range of soil and climatic conditions. Combining the radiative forcing scenarios with a global area estimate of drained peatlands, we further calculated a radiative forcing scenario for rewetting all these drained peat soils. In addition, we analyzed the importance of trees in boreal forestry-drained peatlands by building scenarios for tree biomass and wood product C storage for drained and rewetted cases in Finland. Effect of Rewetting on Soil Net Emissions To estimate the effect of peatland rewetting on the climate, we created 100-year scenarios for CO2, CH4, and N2O net emissions from the soil due to rewetting. When a peatland is rewetted, the greenhouse gas emissions of a drained peatland are replaced by those of a rewetted peatland. Thus, the effect of rewetting on the emissions of each gas is:

Effect of rewetting = net emission at rewetted peatland − net emission at drained peatland

In this study, the effect of rewetting was assumed to be instantaneous and thereafter constant.
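The radiative forcing computation itself is described in the paper's section 2.4, which is not part of this excerpt. The sketch below therefore only illustrates the generic bookkeeping behind curves like those in Figure 1: a sustained change in each gas flux is convolved with an approximate atmospheric impulse-response function and multiplied by a radiative efficiency. The impulse responses and efficiencies are rounded literature values (Joos et al., 2013, for CO2; AR5-style lifetimes and efficiencies otherwise) and are assumptions of this sketch, not values from the paper; the per-m2 fluxes are the illustrative ones from the Figure 1 caption.

```python
# Hedged sketch of sustained-emission radiative forcing (not the paper's section 2.4 code).
import numpy as np

YEARS = np.arange(100)
M_AIR, M_ATMOS = 28.97, 5.135e18                      # g/mol and kg of the atmosphere (assumed)
MOLAR_MASS = {"CO2": 44.01, "CH4": 16.04, "N2O": 44.01}
PPB_PER_KG = {g: 1e9 * (M_AIR / m) / M_ATMOS for g, m in MOLAR_MASS.items()}
RE_PER_PPB = {"CO2": 1.37e-5, "CH4": 3.63e-4 * 1.65, "N2O": 3.0e-3}  # W m-2 ppb-1; CH4 scaled for indirect effects (assumption)

def irf(gas, t):
    """Approximate fraction of a pulse of `gas` still airborne t years after emission."""
    if gas == "CO2":  # multi-exponential fit of Joos et al. (2013)
        return (0.2173 + 0.2240 * np.exp(-t / 394.4)
                + 0.2824 * np.exp(-t / 36.54) + 0.2763 * np.exp(-t / 4.304))
    return np.exp(-t / {"CH4": 12.4, "N2O": 121.0}[gas])

def forcing(sustained_kg_per_yr, gas):
    """Radiative forcing time series (W m-2) from a constant annual emission change."""
    burden_kg = np.array([sum(sustained_kg_per_yr * irf(gas, t - s) for s in range(t + 1))
                          for t in YEARS])
    return burden_kg * PPB_PER_KG[gas] * RE_PER_PPB[gas]

# Effect of rewetting (rewetted minus drained) per m2 of soil, g of gas per year (Figure 1 values):
effect_g = {"CO2": -130 - 130, "CH4": 7 - 0, "N2O": 0.1 - 0.2}
total = sum(forcing(g * 1e-3, gas) for gas, g in effect_g.items())
print(total[[0, 19, 49, 99]])  # net forcing per m2 of peatland after 1, 20, 50 and 100 years
```

The structure of such a calculation makes the short CH4 lifetime explicit: the CH4 term levels off after a few decades, while the CO2 term keeps growing in magnitude under a sustained flux change.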
Emissions of CO2, CH4, and N2O for the different climate and land use categories, based on the IPCC Wetlands Supplement (Hiraishi et al., 2014b) as revised by Wilson et al. (2016), were applied (Table 1). On-site net gas emissions (the net exchange of gas between the soil and ground vegetation and the atmosphere) were included. Methane emissions from the ditches of drained peatlands were also included (Hiraishi et al., 2014b; Wilson et al., 2016). The CO2 emission also included 90% of the dissolved carbon export, as this share has been estimated to end up in the atmosphere as CO2 (Evans et al., 2015; Hiraishi et al., 2014b). Three land use categories for drained peatlands were applied (Table 1) in the boreal and temperate zones, following the IPCC guidelines (Hiraishi et al., 2014b) and Wilson et al. (2016): cropland, grassland, and forestland. These categories were further divided into nutrient-poor and nutrient-rich subcategories and, in the temperate zone, further into deep- and shallow-drained subcategories, as the emission factors differ distinctly. The IPCC guidelines (Hiraishi et al., 2014b) and Wilson et al. (2016) divide drained and rewetted peat soils in the tropics into cropland and plantation (Table 1). There, cropland means the cultivation of short-rotation plants, whereas plantation typically comprises the cultivation of longer-rotation palm species and acacia trees. The emission factors of these two land use categories are, however, very similar (Table 1). High CH4 emissions following rewetting have occasionally been observed (Koskinen et al., 2016; Vanselow-Algan et al., 2015), as, on the other hand, have very low emissions even years after rewetting (Juottonen et al., 2012; Komulainen et al., 1998). The emissions for rewetted peatlands applied in this study do not describe either of these situations. Rather, the applied emission factors describe the average situation after rewetting and are close to those of undrained peatlands. When calculating the effect on the climate of rewetting 1 ha of peatland, we simply assumed that the effect of rewetting on the emissions of CO2, CH4, and N2O (Table 1) is constant for 100 years. However, when calculating the effect of rewetting all the 33 million ha of drained peatlands (Table 2), it would be unrealistic to assume that they could be rewetted at once. To be more realistic, we assumed that they would be rewetted at a constant pace during the first 20 years (= 5% of the area is rewetted every year). Area Estimates Area estimates of peatlands drained for forestry and agriculture were searched for primarily in the National Inventory Submissions 2017 of the United Nations Framework Convention on Climate Change (Table 2). For up-to-date information, land use inventories, other publications, and local colleagues were also consulted when necessary. In many cases for forestry, and virtually always for grassland, no division of the area into the various emission factor subcategories was available. In such cases, an even distribution between subcategories was assumed when calculating the effect of global rewetting.

Note to Table 2. The division between nutrient-poor and nutrient-rich sites is given for forested peatlands when available from the sources. The varying precision of the areas is due to the varying precision of the sources. The sources are as follows: National Inventory Submissions for 2017 (https://unfccc.int/process/transparency-and-reporting/reporting-and-review-under-the-convention/greenhouse-gas-inventories/submissions-of-annual-greenhouse-gas-inventories-for-2017), consisting of country-specific national inventory reports (NIR) and common reporting format tables (CRF), other publications, and researchers.
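The 20-year phase-in assumption described above translates into a simple area-weighting of the per-hectare effect. A minimal sketch, with placeholder numbers rather than the Table 1 and Table 2 values:

```python
# Sketch of the 20-year phase-in: 5% of the drained area is rewetted each year, so in
# year t the soil effect of rewetting applies only to the already-rewetted fraction.
import numpy as np

YEARS = np.arange(100)
rewetted_fraction = np.minimum((YEARS + 1) * 0.05, 1.0)  # 5%, 10%, ..., 100% after 20 years

def global_effect(total_area_ha, effect_per_ha_per_yr):
    """Annual global change in emission for a gradually rewetted area."""
    return total_area_ha * rewetted_fraction * effect_per_ha_per_yr

# e.g. a hypothetical effect of -10 t CO2 ha-1 yr-1 applied to 9.6 million ha:
print(global_effect(9.6e6, -10.0)[:25])  # ramps up for 20 years, then stays constant
```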
The estimation was carried out as a case study of forestry-drained peatlands in Finland because nearly half of the global forestry-drained area is situated in the country (Table 2) and because we had all the necessary data from Finland to calculate changes caused by various management scenarios. Emissions were calculated separately for nutrient-poor and nutrient-rich peatlands corresponding to the boreal nutrient-rich and nutrient-poor soil emission categories (Table 1). The effect of rewetting on soil net emissions was equivalent to the effects estimated in section 2.1.

(Note to Table 2: The division between nutrient-poor and nutrient-rich sites is given for forested peatlands when available from the sources. The varying precision of the areas is due to the varying precision of the sources. The sources are as follows: National Inventory Submissions for 2017 (https://unfccc.int/process/transparency-and-reporting/reporting-and-review-under-the-convention/greenhouse-gas-inventories/submissions-of-annual-greenhouse-gas-inventories-for-2017), consisting of country-specific national inventory reports (NIR) and common reporting format tables (CRF), other publications, and researchers.)

The C sink/source potential of tree biomass and wood products varies greatly between possible management scenarios. Thus, four tree stand management scenarios were considered for tree biomass and wood product C storage at a regional scale, two for drained and two for rewetted peatlands. The purpose of these four scenarios was to describe the range of C storage by estimating minimum and maximum scenarios for drained and rewetted peatlands. Trees grow well on drained peatlands, enabling the accumulation of tree biomass. On the other hand, cuttings restrict tree biomass accumulation. Intensive forestry continues in the minimum scenario (1), with cuttings restricting tree biomass C storage. No cuttings occur in the maximum scenario (2), with all growth increasing tree biomass C storage. Further C storage increase in rewetted peatlands is prevented by decreased tree growth. In the maximum scenario (3), trees are not cut at rewetting, which maintains the current tree biomass C storage. In the minimum scenario (4), trees are cut at rewetting, leading to a drastic decrease in tree biomass C storage.

Scenario 1. Forest management continues (forestry): This scenario describes the development of tree biomass and wood product C storage under continuing forestry when applying a typical forest management scheme (rotation forestry, including thinnings, clear-cutting, and forest regeneration). At a regional scale, this scenario means that the stem volume increases until it reaches the rotation-mean stem volume. Cuttings increase the C storage in wood products.

Scenario 2. Forest management is discontinued and trees are abandoned (abandonment): This scenario describes the highest possible tree biomass in forestry-drained peatlands, meaning that the forest continues growing without cuttings until it reaches the maximum stem volume of an unmanaged stand (Minkkinen et al., 2001). On the other hand, wood product C storage decreases, as no new products are manufactured but the current products continue decaying.

Scenario 3. Peatland is rewetted by blocking ditches and trees are abandoned (abandonment and rewetting): This scenario describes the highest possible tree biomass in rewetted peatlands. Trees are not cut at rewetting, representing the restoration of a wooded mire.
At a regional scale, current tree biomasses (Table 3) are much higher compared to undrained peatlands (Gustavsen & Päivänen, 1986; Heikurainen, 1971). Thus, the current biomass is the highest possible for rewetted peatlands. Wood product C storage decreases, as no new products are manufactured but the current products continue decaying.

Scenario 4. Peatland is rewetted by blocking ditches and trees are clear-cut (clear-cut and rewetting): Stems and canopies are harvested, merchantable stem parts are utilized for wood products, and the rest is burned for energy (= C instantly released). Belowground biomass (stumps and roots) is left on the site and does not decompose due to rewetting (= current belowground biomass is sustained). Wood product C storage increases at first due to the clear-cut but subsequently decreases, as the current and new products decay.

Based on the management scenarios, four possible effects of rewetting on tree biomass and wood product C storage were calculated:

Effect of rewetting = C storage in clear-cut and rewetting scenario − C storage in forestry scenario (2)

Effect of rewetting = C storage in clear-cut and rewetting scenario − C storage in abandonment scenario (3)

Effect of rewetting = C storage in abandonment and rewetting scenario − C storage in forestry scenario (4)

Effect of rewetting = C storage in abandonment and rewetting scenario − C storage in abandonment scenario (5)

In addition, the effect of abandonment without rewetting was calculated for comparison:

Effect of abandonment = C storage in abandonment scenario − C storage in forestry scenario (6)

Finally, the effect of rewetting (or abandonment) on C storage was converted to a 100-year scenario of CO 2 net emissions. The emission for year n (Emission(n)) was calculated based on the effect on C storage at the end of the current (C storage(n)) and previous (C storage(n − 1)) years as follows:

Emission(n) = [C storage(n − 1) − C storage(n)] × 44/12

The initial tree stem volumes and stem volume growths of all the management scenarios were based on the Finnish National Forest Inventory for 2009-2013 (Table 3). All the scenarios were calculated separately for each site type in southern and northern Finland (Table 3). Finally, area-weighted means were determined for nutrient-poor and nutrient-rich categories. Tree biomass total and aboveground and stem C storage were estimated by multiplying stem volume by the dominant species-specific (pine vs. other species) biomass expansion factor (Table 4). Biomass and wood product C contents of 50% were assumed in all calculations.

In scenario 1, tree stem volume increased, asymptotically approaching the rotation-mean stem volume. The initial increment, corresponding to growth − cuttings, was estimated as follows: initial growth (Table 3) × the ratio of current mean increment and mean growth in Finland (Table 4), as information on the actual cuttings is not available separately for drained peatland forests. Wood product C storage was estimated as a constant ratio of wood product C storage/tree biomass C storage (Table 4). (Table 4 also gives the share of merchantable stem biomass that ends up as sawn wood/wood panels/paper and paperboard, 0.15/0.02/0.38 (Vaahtera et al., 2018), used to calculate how much of the stem biomass used to produce wood products ends up in the product C storage after clear-cutting the rewetted peatland.)

In scenario 2, tree stem volume increased beginning with initial growth (Table 3), as there were no cuttings, asymptotically approaching the maximum stem volume of unmanaged stands (Table 3). The initial wood product C storage decayed exponentially, as defined by product-specific time constants (Table 4). In scenario 3, initial tree biomass C storage remained unaffected throughout the study and wood product C storage decayed similarly to scenario 2.
In scenario 4, we utilized the merchantable stem parts for wood products and the rest of the initial aboveground tree biomass C storage was instantly released to the atmosphere. Belowground initial C storage remained unaffected. After an initial increase, wood product C storage decayed similarly to scenario 2.

Radiative Forcing Calculations

First, scenarios for the atmospheric perturbation of CO 2 , CH 4 , and N 2 O (= change in atmospheric gas levels) due to the emissions and removals (= negative emission) were calculated for the emission scenarios of rewetting. After entering the atmosphere, the gas levels reduced according to the exponential decay model with gas-specific lifetimes (Table 5). Carbon dioxide was divided into four fractions with different lifetimes describing the various processes removing CO 2 from the atmosphere at varying paces. For removals, the calculation of atmospheric perturbation was otherwise identical to that of emissions but the sign for perturbation was the opposite (− instead of +).

The radiative forcing (RF) scenario due to the perturbation scenario was calculated as follows: perturbation × radiative efficiency × indirect effects multiplier (Table 5). Radiative efficiency describes the direct effect of a greenhouse gas on RF due to absorbing radiation, and the indirect effects multiplier describes the indirect effects due to changes in atmospheric chemistry caused by the greenhouse gas in question. Radiative efficiencies and indirect effects multipliers were assumed constant throughout the study, thus not considering the possible effects of climate change. The effect of the studied emissions and removals on the atmospheric concentration was also assumed negligible, thus not affecting the radiative efficiencies (Myhre et al., 2013a, 2013b). The radiative forcing of CO 2 resulting from the atmospheric decay of CH 4 was taken into account by including it into the RF of CH 4 in the calculation. This effect of CH 4 -derived CO 2 is demonstrated as a slow rise in the RF of constant CH 4 emissions after the rapid rise at the beginning (Figure 1a). See, for example, Frolking et al. (2006) and Frolking and Roulet (2007) for detailed examples of calculating RF scenarios.

(Note to Table 5: An updated value by Etminan et al. (2016) was applied for the radiative efficiency of CH 4 . Carbon dioxide emission/removal was divided into four fractions with different lifetimes.)

To compare the cooling (CO 2 and N 2 O removals) and warming (CH 4 emissions, decrease in tree stand and wood product C storages) effects of rewetting, a warming/cooling ratio was calculated, describing the RF share (%) of the cooling effects offset by the RF of the warming effects:

Warming/cooling ratio (%) = 100 × |RF of the warming effects| / |RF of the cooling effects|

Comparison of Land Use and Climate Categories

Different land use and climate categories showed distinctly different RF scenarios for rewetting peat soils (Figure 2). In the tropics, rewetting caused an immediate, almost linearly increasing climate cooling (negative RF) for both cropland and plantation soils. The net effect was cooling already at the beginning, as the increasing CH 4 emissions offset only a few percent of the cooling caused by the decreased CO 2 and N 2 O emissions. The warming offset was much higher in temperate and boreal agricultural soils (Figure 2). Consequently, only boreal soils and temperate nutrient-poor grassland soils, with their relatively low increases in CH 4 emissions, experienced a cooling net effect at the beginning.
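The radiative forcing bookkeeping described in the preceding section (exponential decay of the atmospheric perturbation with gas-specific lifetimes, CO 2 split into fractions, and RF = perturbation × radiative efficiency × indirect effects multiplier) can be sketched in a few lines of Python. The lifetimes, fractions, and radiative properties below are placeholders standing in for Table 5, not the values used in this study, and the CH 4 -derived CO 2 term is omitted for brevity.

```python
import numpy as np

# Minimal sketch of the radiative forcing bookkeeping described above.
# A pulse emitted in year t decays exponentially with a gas-specific lifetime;
# CO2 is split into several fractions with different lifetimes. RF is then
# perturbation x radiative efficiency x indirect-effects multiplier.
# All numbers below are placeholders standing in for Table 5, and the
# CH4-derived CO2 contribution is omitted for brevity.

YEARS = 100

def perturbation(emissions, fractions, lifetimes):
    """Atmospheric burden (same mass unit as the emissions) from yearly pulses."""
    burden = np.zeros(YEARS)
    for t, pulse in enumerate(emissions):
        age = np.arange(YEARS - t)
        decay = sum(f * np.exp(-age / tau) for f, tau in zip(fractions, lifetimes))
        burden[t:] += pulse * decay
    return burden

# Yearly emission scenarios (negative = removal), placeholder magnitudes:
co2 = np.full(YEARS, -1.0)       # constant CO2 removal after rewetting
ch4 = np.full(YEARS, +0.05)      # constant CH4 emission after rewetting

# Placeholder decay parameters and radiative properties (NOT Table 5 values):
co2_burden = perturbation(co2, fractions=[0.2, 0.3, 0.3, 0.2],
                          lifetimes=[1e8, 400.0, 40.0, 5.0])
ch4_burden = perturbation(ch4, fractions=[1.0], lifetimes=[12.0])

rf = co2_burden * 1.0e-5 * 1.0 + ch4_burden * 4.0e-4 * 1.65   # RE x indirect
print(f"Net RF after 100 years (arbitrary units): {rf[-1]:+.3e}")
```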
Temperate nutrient-rich shallow drained grassland soil, with its low decrease in CO 2 emissions and high increase in CH 4 emissions (Table 1), even showed a climate-warming effect during the first decades. In forestry-drained soils, the temperate nutrient-poor case alone showed a climate-cooling effect within a few decades (Figure 2). For all the other cases, the increased CH 4 emissions offset over 100% of the cooling impact of decreased CO 2 and N 2 O emissions for at least the first 40 years. Even 100 years after rewetting, the offset was at least 50%.

In addition to the temporal dynamics, the magnitude of the climate cooling also varied (Figure 2). In the tropics, an RF of −50 × 10 −10 W (m 2 Earth) −1 for a hectare of peat soil was reached within 100 years. In temperate and boreal agricultural soils, typically half of that was reached. In temperate and boreal forestry-drained peatlands, the cooling was close to zero in most cases and approximately −8 × 10 −10 W (m 2 Earth) −1 in the best case.

Global Rewetting of Peat Soils in 20 Years

Global rewetting of peat soils (without the effect of tree stands) resulted in increasing climate cooling, reaching −70 mW (m 2 Earth) −1 in a century (Figure 3). Even though the area was nearly evenly distributed between tropical soils, temperate and boreal agricultural soils, and forestry-drained soils (Table 2), their shares in the climate cooling were uneven. The tropics accounted for approximately two thirds and temperate and boreal agricultural soils for one third of the cooling. Forestry-drained soils had a negligible effect. Half of the cooling effect was offset by the warming effect at the beginning.

The Effect of Trees

The dynamics of the C storage in tree biomass and wood products in the Finnish forestry-drained peatlands were very different between the management scenarios (Figure 4). In the abandonment scenario, the C storage tripled in 100 years. Changes were much smaller in the forestry scenario, as the initial stem volume was already close to the rotation mean at most site types (Table 3). In the abandonment and rewetting scenario, only a slight decrease in C storage occurred due to the decrease in wood product storage. In the clear-cut and rewetting scenario, two thirds of the aboveground C storage was lost during the first year, as the majority of the C in the tree stems and crowns was released as CO 2 . Both the initial C storage and the changes occurring were approximately twofold in the nutrient-rich category compared to the nutrient-poor category.

Tree biomass and wood product C storage dynamics strongly affected the RF scenario caused by rewetting (Figure 5). In the case of comparing rewetting to abandonment, the effect was on the climate-warming side for over a century. Compared to forestry, clear-cut and rewetting needed nearly 100 years before reaching zero. During the first decades after rewetting, the warming effects were multifold compared to the cooling effects in all these cases. Comparing abandonment and rewetting to forestry showed a different result (Figure 5). While the tree stand and wood product effect shifted the RF upward even there, it delayed the change from warming to cooling by only 10-20 years. The effect of abandonment without rewetting was expectedly cooling and was slowly saturating toward the end of the scenario.
Discussion The ability of peatland rewetting to mitigate climate change during the next decades depends strongly on the climate zone and current land use. Soil CO 2 emissions from tropical peatlands drained for croplands and plantations are so high ( Table 1) that their successful rewetting results in virtually instant climate cooling ( Figure 2). The increased CH 4 emissions offset only a few percent of the cooling. These values for rewetted peatland do not include CH 4 emissions from the trees, which may be substantial in tropical wetland forests (Covey & Megonical, 2019;Pangala et al., 2013). However, even if these quadrupled the CH 4 emissions of rewetted tropical peatland, the cooling effect would still be strong. We additionally need to remember that only the peat loss through decomposition is included in the emission factor used in our analysis. In addition to decomposition, peat fires release large amounts of CO 2 from drained tropical peat soils (Gaveau et al., 2014;Page et al., 2002), which further underlines the importance of rewetting in decreasing CO 2 emissions and consequent RF. Those temperate and boreal drained peatlands that are under agriculture have the potential to mitigate climate change by rewetting ( Figure 2). However, due to approximately 50% lower peat loss than under a tropical climate (Table 1), increased CH 4 emissions can offset a major part of the cooling effect during the first years and decades. Thus, peatlands that are likely to have low CH 4 emissions after rewetting should be prioritized as targets for rewetting. In addition, the soil CO 2 emissions decrease more or less linearly with a rising groundwater table, but CH 4 emissions largely increase only when the water table is raised close to the soil surface or above it (Couwenberg et al., 2011;Tiemeyer et al., 2016). Thus, moderate rewetting that raises the water table to 10-20 cm below the soil surface may be considered a means to prevent a major portion of peat loss without causing high CH 4 emissions. Removal of the nutrient-rich topsoil has also been suggested as an effective means to decrease CH 4 emissions following rewetting, especially when the site has been heavily fertilized during agricultural use (Harpenslager et al., 2015;Zak et al., 2018). Contrary to agricultural peatlands, the possibility of mitigating climate change during the next decades by rewetting temperate and boreal forestry-drained peatlands is very limited (Figures 2, 3, and 5). The current soil CO 2 and N 2 O emissions are so low that even a modest increase in CH 4 emissions can offset the cooling effect for decades. If the tree biomass and wood product C storage decreases considerably, reaching a climate-cooling effect is further delayed. Even though rewetting of forestry-drained peatlands contradicts with the mitigation of current climate change, it is clear that rewetting would be the best option for safeguarding peat C storage in the long run. If drainage is maintained, a peatland with a thick layer of peat may gradually lose much more C than any tree stand can store. Also, the warming climate is likely to enhance peat decomposition, leading to increasing CO 2 emissions from peat (Table 1). Additionally, if climate change leads to increasing occurrence of severe droughts (Dai, 2013;Jolly et al., 2015), the risk of releasing great amounts of C to the atmosphere in forest and peat fires increases. 
Peatland fires are already common in continental areas, for example, in many parts of Canada (Turetsky et al., 2004) and Russia (Sirin et al., 2018). Even if not rewetted, forestry-drained peatlands should be kept as wet as possible, without endangering the growing tree stand. There are at least two ways to maximize wetness: (1) If forestry is continued, ditch depth should be as limited as possible while still keeping the water table low enough not to hamper tree growth. (2) If a forestry-drained peatland is abandoned without active rewetting, drainage ditches will gradually deteriorate over decades due to peat subsidence and natural blocking of ditches (Sikström & Hökkä, 2016). In this study, we assumed constant soil greenhouse gas emissions and tree growth conditions after abandonment, but in reality, abandonment would lead to a gradual decrease in both factors due to the rising water table. Thus, abandonment may combine the tree biomass CO 2 sink during the first decades (Figure 5) with preserving most of the peat. However, keeping the tree stand may warm the climate locally, as forest albedo is lower than that of open mire (Gao et al., 2014; Lohila et al., 2010). Yet, part of this warming may be offset by the higher formation of aerosols and clouds, as trees are important sources of volatile organic compounds (Teuling et al., 2017; Tunved et al., 2006). As shown by our results (Figure 5), the effect of tree biomass and wood product C storage on the climate strongly depends on how trees are managed in rewetting versus no-rewetting scenarios. Further, the initial volumes, volume growths, and maximum volumes of unmanaged stands dictate how large and how rapid changes in C storage are possible. All these naturally depend on climate, peatland type, and management history, which are highly variable between countries. Thus, our results on trees cannot be directly extended outside Finland. However, we can state that the management of trees may be crucial, at least when emissions from drained peat soil are relatively low (Table 1). Further studies are needed to judge whether tree management can be of importance under more intensive land use and a warmer climate. There, soil emissions are much higher (Table 1), but on the other hand, the growth potential of trees is also higher. We estimated that global rewetting of drained peat soils during the next 20 years would decrease RF by 70 mW (m 2 Earth) −1 by the end of the following 100 years (Figure 3), due to the major effect in tropical peatlands and temperate and boreal agricultural peatlands. Temperate and boreal forestry-drained peat soils played a negligible role in this result. Also, assuming a similar effect of trees as in Finland (Figure 5), 10.6 million ha of boreal and temperate forestry-drained peatlands (Table 2) would together offset the benefit by only a few percent. Thus, by rewetting all peatlands we could, for example, mitigate 15% of the current warming caused by anthropogenic methane emissions, that is, 0.48 W (m 2 Earth) −1 (Myhre et al., 2013a). The importance of peatland rewetting for climate change mitigation is also well demonstrated by their current emissions. Despite coarse and somewhat uncertain global area estimates for drained peatlands (Barthelmes, 2018; Joosten, 2010), drained peatlands are a globally important source of CO 2 and N 2 O. Multiplying our area estimates (Table 2) by the IPCC emission factors (Table 1) gives a rough estimate of 1 Gt of CO 2 equivalents per year (GWP 100 ) for soil greenhouse gas emissions.
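The rough 1 Gt estimate quoted above follows from multiplying category areas by per-hectare emission factors and weighting CH 4 and N 2 O by GWP 100 . A back-of-envelope sketch is given below; the areas and emission factors are round placeholder numbers, not the actual Table 1 and Table 2 values.

```python
# Back-of-envelope sketch of the "area x emission factor" estimate mentioned
# above. Areas and per-hectare emission factors below are round placeholder
# numbers, NOT the Table 1/Table 2 values; CH4 and N2O are weighted by GWP100.

GWP100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

# category: (area in million ha, emissions in t of gas per ha per year)
categories = {
    "tropical cropland/plantation": (11.0, {"CO2": 45.0, "CH4": 0.01,  "N2O": 0.01}),
    "temperate/boreal agriculture": (11.0, {"CO2": 20.0, "CH4": 0.01,  "N2O": 0.01}),
    "temperate/boreal forestry":    (11.0, {"CO2": 2.0,  "CH4": 0.005, "N2O": 0.002}),
}

total_t_co2eq = 0.0
for name, (area_mha, ef) in categories.items():
    per_ha = sum(ef[gas] * GWP100[gas] for gas in ef)      # t CO2-eq ha-1 yr-1
    total_t_co2eq += per_ha * area_mha * 1e6

print(f"Rough global soil emission: {total_t_co2eq / 1e9:.2f} Gt CO2-eq per year")
```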
This emission corresponds to approximately ¼ of total emissions from land use, land use change, and forestry (Olivier et al., 2017), even though the area of drained peatlands corresponds to only 2‰ of the Earth's land area. Joosten (2010) and Leifeld and Menichetti (2018) estimated twice as high global emissions for drained peatlands, 2 Gt of CO 2 equivalents per year, due to a higher area estimate (50 vs. 33 million ha) and the inclusion of CO 2 emissions from tropical peat fires. As the rapid rewetting of up to 50 million ha of drained peatlands is a huge effort, identifying the most prominent peatlands for climate change mitigation would be crucial for efficient resource allocation. Our results clearly indicate that tropical and agricultural peatlands have the highest potential for climate change mitigation by rewetting. Yet, it should be kept in mind that the emission factors applied in this study (Table 1) are mean values for wide land use and climate categories. Huge variation in emissions occurs within each drained category (Couwenberg et al., 2010(Couwenberg et al., , 2011Hooijer et al., 2010Hooijer et al., , 2012Ojanen & Minkkinen, 2019;Tiemeyer et al., 2016). Also, the potential of tree effects is case specific. Feasible and unfeasible targets for rewetting may be found within any category. Other means for reducing greenhouse gas emissions should be sought for peatlands where rewetting is unfeasible. Conclusions Peatland rewetting is generally beneficial and important for mitigating climate change during upcoming decades. Tropical and agricultural peatlands in particular have a high potential to mitigate climate change: the climate-cooling effect of preventing peat loss is larger than the climate-warming effect of increased methane emissions. Abandoning tree stands without active rewetting is the best option for boreal forestry-drained peatlands: Peat loss prevented by rewetting is so low that increased methane emissions may offset the cooling effect for decades. The decrease in tree and wood product carbon storage further delays the onset of the cooling effect. Data Availability Statement All data necessary to reproduce the calculations (Tables 1-5) and the results (data for Figures 1-5) are available through Figshare (Ojanen & Minkkinen, 2020).
Hydrodynamics of Bubble Columns : Turbulence and Population Balance Model This paper presents an in-depth numerical analysis on the hydrodynamics of a bubble column. As in previous works on the subject, the focus here is on three important parameters characterizing the flow: interfacial forces, turbulence and inlet superficial Gas Velocity (UG). The bubble size distribution is taken into account by the use of the Quadrature Method of Moments (QMOM) model in a two-phase Euler-Euler approach using the open-source Computational Fluid Dynamics (CFD) code OpenFOAM (Open Field Operation and Manipulation). The interfacial forces accounted for in all the simulations presented here are drag, lift and virtual mass. For the turbulence analysis in the water phase, three versions of the Reynolds Averaged Navier-Stokes (RANS) k-ε turbulence model are examined: namely, the standard, modified and mixture variants. The lift force proves to be of major importance for a trustworthy prediction of the gas volume fraction profiles for all the (superficial) gas velocities tested. Concerning the turbulence, the mixture k-ε model is seen to provide higher values of the turbulent kinetic energy dissipation rate in comparison to the other models, and this clearly affects the prediction of the gas volume fraction in the bulk region, and the bubble-size distribution. In general, the modified k-ε model proves to be a good compromise between modeling simplicity and accuracy in the study of bubble columns of the kind undertaken here. Introduction Two-phase flow is a major topic of study in diverse research projects, due simply to the fact that it is a phenomenon occurring in many industrial flow situations, and because a rich body of information has been accrued to support advanced fluid dynamic studies of the subject for model validation. Bubble columns in particular are widely used as test cases for experimental analysis of dispersed two-phase flows [1][2][3], since they entail simple geometric configurations, and are able to provide useful data for the validation of the CFD models [4].Table 1 provides a comprehensive list of some of the most referenced experimental studies in this area. Both the global and local properties of the flow in bubble columns are related to the existing flow regime, which can be defined as homogeneous or heterogeneous, depending on the prevailing bubble size [5].If the bubbles are in the range of 0.001-0.007m and the superficial gas velocity is less than 0.05 m/s, the flow can be defined as homogeneous [5][6][7]. According to Besagni et al. [8], homogeneous flow (as that studied in the present work) is characterized by the gas phase being uniformly distributed in the liquid phase as discrete bubbles [9].This characterization can be further designated mono-dispersed flow or poly-dispersed flow (also known as "pseudo-homogeneous flow").Mono-dispersed flow, as the name implies, is characterized by a single bubble-size distribution and a flat local void fraction, while poly-dispersed flow involves bubbles of different sizes and a local void fraction profile with a central peak [5,8]. 
Most of the cited experimental papers provide local flow data (i.e., velocities and gas volume fractions) for different superficial inlet gas velocities (in homogeneous and heterogeneous ranges), but only a few provide data on the turbulence characteristics [10,11].This is unfortunate, since turbulence in the liquid phase is known to play an important role in the local gas distribution in the numerical simulation of dispersed bubbles [10]. Many numerical works have studied the effects of different turbulence models in bubble columns for constant-diameter bubbles (as seen in Table 2).As reported by Liu and Hinrichsen [12], this assumption is justified for bubble columns with low inlet gas velocity (up to 0.04 m/s), and for low void fraction.However, more advanced models are required to predict other phenomena occurring in the flow, such as coalescence and breakup of the bubbles, induced by the turbulent eddies, especially in flows where mass transfer (i.e., boiling) plays an important role independently of the inlet gas velocity [13].Consequently, bubble size distribution has normally been coupled with CFD by means of models capable of solving for the population balance. Population balance is a continuity condition written on terms of the Number Density Function (NDF), which itself is a variable containing information on how the population of the dispersed phase, particles or bubbles, inside an infinitesimal control volume is distributed over the properties of interest (volume, velocity or length, for instance) of the continuous phase.NDF is defined in terms of an internal coordinate of the elements of the dispersed phase, i.e., particle/bubble size, volume or velocity, and in terms of the external coordinates, i.e., physical position and time [14]. The population balance equation, as defined by Ramkrishna [15], is a partial integro-differential equation that defines the evolution of the NDF (see Section 3.1).In the present work the length-based NDF, n(L; t), is considered, since our primary interest is in the evolution of the diameter of a bubble. Different methods exist for numerically solving the population balance equation [13,14], such as the class method [16], Monte Carlo [17], the Method of Moments (MOM) [18], as well as the Quadrature Method of Moments (QMOM) and Direct QMOM (DQMOM) [19].The QMOM approach was first proposed by McGrow [18] for aerosol distributions, but it has also been shown to be suitable for modelling different dispersed flow regimes involving aggregation and coagulation issues [14]. QMOM, in contrast to the class method, solves the population balance equation by considering the concept of moments, which are obtained after integration of the NDF transport equation along the particle/bubble length, using a quadrature approximation to estimate this integral (see Section 3.1).Normally, just three quadrature nodes giving six moments, are sufficient to provide a reliable bubble distribution.This method is then generally less expensive in terms of CPU time than the discrete method [13]. In several industrial applications of bubble reactors, especially for those in which mass transfer and chemical reactions occur, bubble coalescence and break-up lead to the formation of thin films, fine drops and threads of much smaller scale than that of the average bubble [20,21].Interactions taking place at these smaller scales could eventually result in a larger range of scales needing to be considered, and treated differently.Aboulhasanzadeh et al. 
[20] have developed a multi-scale model that allows mass diffusion to be incorporated into Direct Numerical Simulation (DNS) of bubbly flows and predicts in more detail the mass transfer between moving bubbles. This multi-scale approach, together with population balance models, can provide very deep understanding of the various phenomena occurring in multi-phase flows [20,21]. In the present work, Euler-Euler simulations are carried out for turbulent "pseudo-homogeneous" two-phase flows in a scaled-up bubble column similar to the experimental test case of Pfleger et al. [22], but with the bubble size distribution determined using QMOM. Three different superficial gas velocities, 0.0013, 0.0073 and 0.02 m/s, are studied, for which bubble coalescence rather than bubble breakup is the predominant bubble interaction mechanism for the dispersed phase. It is important to mention here that the use of a population balance model for the considered inlet velocities is merely to provide a deeper understanding of the possible phenomena occurring in the dispersed flow, and not to provide any comparison with constant bubble diameter results, which has already been done by Bannari et al. [23], and Gupta and Roy [13], for instance. Three different variants of the k-ε turbulence model are used to predict the liquid flow patterns inside the bubble column. The choice of Reynolds Averaged Navier-Stokes (RANS) models is based on the fact that they represent a simple implementation of the turbulence effects, with much lower computational effort, compared to Large Eddy Simulation (LES) and the anisotropic Reynolds Stress Model (RSM), for example, and that they are already widely used for two-phase flow simulations [24]. The main contribution of this work lies in the attempt to better understand the flow by specific investigation of the bubble diameter distribution, the gas hold-up, the axial liquid velocity, as well as the turbulent dissipation rate, which represents the coupling between the turbulence and the bubble-size distribution. This work is motivated by the need for better understanding of the hydrodynamics of gas-liquid flow in the design of bubble reactors, particularly for those who are unfamiliar with the topic of two-phase flow modeling. Very careful selection of the experimental data suitable for the validation of the models is presented, and the underlying physics explained in detail. The ultimate goal is to provide an overview of the main parameters affecting the flow patterns of the bubbles in water columns, and, through comparison with experimental data, to provide recommended options for solving similar problems.

(Excerpt from Table 2, summarizing the previous numerical studies referred to above:
Mudde and Simonin [28]: rectangular column [29]; V i = 0.20 m/s; forces D-Vm; constant bubble size 0.003 m; k-ε [30] and low-Re k-ε; no significant discrepancy in the results was found between the turbulence models investigated.
Li et al. [31]: cylindrical column [32]; VSG = 0.10 m/s; forces D-L-Vm-WL-TD; class method, Break: L-S, Coal: P-B; modified k-ε [33]; the configuration and location of the gas distributor affect the overall gas hold-up and bubble sizes.
Ekambara and Dhotre [24]: cylindrical column [34]; RSM and LES provided a better prediction near the sparger, where the flow is more anisotropic.
Zhang et al. [37]: rectangular column [38]; the lift force strongly influences the bubble plume dynamics, and the Tomiyama lift coefficient provided better results.
Bannari et al. [23]: rectangular column [22]; VSG = 0.0014 m/s and 0.0073 m/s; forces D-L-Vm; class method, Break: L-S, Coal: S-T and H; k-ε [28]; good agreement with experimental data was found when using PBM for the bubble size.)
Physical Model

All the numerical simulations presented here are performed for a scaled-up rectangular bubble column geometry. This configuration is chosen because comprehensive experimental data exists concerning gas hold-up and liquid velocity; those provided by Pfleger et al. [22], Buwa et al. [35] and Upadhyay et al. [11] in particular are used here for model validation purposes. As depicted in Figure 1, the column is of width (W) 0.20 m, height (H) 1.2 m and depth 0.05 m. The level of water corresponds to H w /W = 2.25, a value inspired by all the tests performed by the preceding work of Becker et al. [29]. The bubbles are introduced via a sparger at the bottom. In the experimental facility of Pfleger et al. [22], the sparger consists of a set of 8 holes placed in a rectangular configuration for the gas injection, each hole having a diameter of 0.0008 m in a square pitch of 0.006 m [13]. For simplicity, the sparger in our numerical simulations is represented by a simple rectangular area placed at the bottom of the column. The same simplification has been used in other numerical works based on this test [13,39].

Governing Equations

The governing equations of mass continuity and momentum for two-phase flow with no heat transfer, following the two-fluid methodology (e.g., Versteeg and Malalasekera [44]), can be expressed as follows: where i denotes the fluid phase (liquid or gas), U i the phase velocity vector, α i the phase fraction, p the pressure, R i the turbulent Reynolds stress and M i the interfacial momentum transfer between the phases, which in this work comprises the drag (F D ), lift (F L ) and virtual mass (F vm ) forces. This choice follows previous studies on this topic, which showed that these three forces are enough to capture the bubble behavior in the velocity range studied here [13,37]. The turbulent dispersion force has been neglected, since previous studies have already found that, for the range of superficial gas velocities considered in this work, its influence is insignificant [45,46].

The drag force, also known as the resistance experienced by a body moving in the liquid [24], ultimately determines the gas phase residence time and thereby the bubble velocities, and greatly influences the macroscopic flow patterns [47]. It is defined as follows:

F D = (3/4) (C D /d S ) α G ρ L |U G − U L | (U G − U L )

The subscripts G and L in the above equation denote gas and liquid phases respectively, and d S the Sauter mean diameter (explained in Section 3.1). The drag coefficient C D is modelled according to Schiller and Naumann [48],

C D = (24/Re) (1 + 0.15 Re 0.687 ) for Re ≤ 1000, and C D = 0.44 for Re > 1000,

where Re is the relative Reynolds number defined as

Re = ρ L |U G − U L | d S / μ L

The lift force, which arises from the interaction between the dispersed phase and shear stress in the continuous phase, is directed perpendicular to the incoming flow, and is described as:

F L = C L α G ρ L (U G − U L ) × (∇ × U L )

One of the lift coefficients C L used here is from Tomiyama et al. [49] (see Equation (7)), which has also been used by Gupta and Roy [13], Liu and Hinrichsen [12], and Ekambara and Dhotre [24]. For comparison purposes, constant values (C L = 0.14 and C L = 0.5) have also been examined.

C L = min[0.288 tanh(0.121 Re), f(Eo H )] for Eo H < 4; C L = f(Eo H ) for 4 ≤ Eo H ≤ 10; C L = −0.27 for Eo H > 10 (7)

in which f(Eo H ) = 0.00105 Eo H 3 − 0.0159 Eo H 2 − 0.0204 Eo H + 0.474, the modified Eötvös number is Eo H = g (ρ L − ρ G ) d H 2 /σ, and d H is given by the Wellek et al.
[50] correlation: where σ is the surface tension between air and water (0.07 N/m).Finally, the virtual mass force F vm , which refers to the inertia added to the dispersed phase due to its acceleration with respect to the heavier continuous phase, is defined as Here the coefficient C vm is taken to be 0.5, based on the experimental study of Odar and Hamilton [51]. Illustrations of the three interfacial forces here considered are shown in schematic form in Figure 2, and a summary of the coefficients adopted for these forces is given in Table 3. Table 3. Coefficients of the interfacial forces used in the present work and the respective models employed. Interfacial Force Model Drag (C D ) Schiller and Naumann [48] Lift (C L ) Tomiyama et al. [49] and Constant (0.14 and 0.5) Virtual mass (C vm ) Constant (0.5) The Reynolds stress for phase i, needed in Equation ( 2), is given as: where I is the identity tensor, k i the turbulent kinetic energy for phase i, and ν eff ,i is the effective viscosity, composed of the following contributions: where the subscriptions l, t and L refer, respectively, to laminar, turbulent and liquid phase.The three k-ε turbulence models considered in this work calculate the effective viscosity in different ways.The standard k-ε model calculates ν eff ,L exactly as written in Equation (10).The modified k-ε includes an additional term on the right-hand-side of the equation, accounting for the turbulence induced by the movement of the bubbles through the liquid.Here, this term follows the model proposed by Sato et al. [33]: where d S is the bubble diameter (see Equation ( 35) below), U G the gas velocity, and U L the liquid velocity. For both the standard and modified k-ε models, ν t,L is defined as: in which C µ = 0.09, k is the turbulent kinetic energy, and ε the rate of dissipation of the turbulent kinetic energy, calculated just for the liquid phase, as described by Equations ( 13) and ( 14) respectively: where 3, and P k stands for the production of turbulent kinetic energy defined, for example, as in Rusche [52]: For the mixture k-ε equations, the turbulence variables are defined as mixture quantities of the two phases, and are calculated here for both the gas and liquid phases in the manner proposed by Behzadi et al. [53]: The coefficients present in the mixture k-ε model are the same as for the standard k-ε model, while the mixture properties are defined as: where Population Balance Model (PBM) In this work, we specifically consider the presence of different bubble diameters.The sizes may vary due to the coalescence and break-up processes occurring, caused by the liquid turbulence.We have adopted here the QMOM model of McGrow [18] to determine the bubble-size distribution. Assuming that both coalescence and breakup processes may occur in the bubble column, the governing equation for the population balance (length-based) can be written, according to Marchisio et al. [14], as: where n(L; t) is the length-based (L) number density function. 
Applying the moment transformation and a quadrature approximation for the integral [18] results in the following equation: where the subscript φ refers to the order of each moment, varying from 0 up to (2N − 1), where N is the number of nodes considered in obtaining the weights and abscissae. The coefficients β ij and a i are, respectively, the coalescence and breakage kernels, and b (φ) i denotes the daughter size distribution. Note that specific details of the step-by-step derivation leading to Equation (26) can be found in the reference paper of Marchisio et al. [14].

The weights (w i ) and abscissae (L i ) in Equation (26) can be obtained through the product-difference approach [54] or the Wheeler algorithm [55]; the first approach is adopted in the present work, since it results in a simpler implementation, and is said to be stable for N < 10 and m 1 > 0 [56], which is our case here. The total number of nodes used is N = 3, which then provides six moments [57]. Each moment m φ (t) has a different physical meaning depending on the order φ. For φ = 0, m 0 is the total number of particles/bubbles; for φ = 1, m 1 is the length of the particles/bubbles; for φ = 2, m 2 is the surface area of the particles/bubbles; while for φ = 3, m 3 gives the total particle/bubble volume [57].

The coalescence between two colliding bubbles, represented by the term containing the coefficient (also referred to as the kernel) β ij in Equation (26), can be expressed as the product of the collision frequency w c (d i , d j ) and the collision efficiency P c (d i , d j ), in which d i and d j are the diameters of the two colliding bubbles, as follows:

β ij = w c (d i , d j ) P c (d i , d j ) (27)

The coalescence kernel used here is that proposed by Luo [41], which is based on the assumption of isotropic turbulence in the liquid phase [58]. This leads to a collision frequency of the following form: where u ij is the mean turbulent velocity of the bubbles (which should be related to the size of the turbulent eddies): where β = 2.0, according to [41].

The collision efficiency P c (d i , d j ) given by Luo [41] is defined as the ratio between the coalescence time t C , needed to annihilate the liquid film between the colliding bubbles to a critical rupture thickness, and the interaction time t I between the two bubbles, as expressed below:

P c (d i , d j ) = exp(−t C /t I )

Luo [41] considered t C to be a function of the Weber number (We ij ), and t I to be a function of the virtual mass coefficient C V M , the physical properties, and the bubble size ratio. The final collision efficiency is then expressed as: where ξ = d i /d j , We ij = ρ L d i (u ij ) 2 /σ and σ is the surface tension.

For the breakage kernel a i , we use that suggested by Luo and Svendsen [40], which has been widely applied before [13,23,59-61], and which calculates simultaneously the daughter distribution (b i ) according to the following prescription: where, assuming isotropic turbulence, ξ min = λ/d i is the size ratio between an eddy and a bubble in the inertial sub-range, and c f is defined as the increase coefficient of the surface area, given by

c f = (f BV ) 2/3 + (1 − f BV ) 2/3 − 1

in which f BV is the breakage volume fraction, assumed here to be given by f BV = 0.5, to represent a symmetrical breakage. The integration in Equation (32) can be performed by applying an incomplete gamma function approach, as shown in the work of Bannari et al. [23].
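Since the product-difference algorithm [54] is only referenced above, a minimal sketch of how it recovers the quadrature weights and abscissae from the first 2N moments may be helpful. The implementation below follows the standard formulation (a Jacobi matrix is built from the moments and its eigenvalue problem is solved); the function and variable names are ours, not the paper's.

```python
import numpy as np

# Sketch of the product-difference algorithm used in QMOM to recover the
# quadrature weights w_i and abscissae L_i from the first 2N moments
# m_0 ... m_{2N-1}. Function and variable names are ours.

def product_difference(moments):
    m = np.asarray(moments, dtype=float)
    n2 = len(m)                      # 2N
    P = np.zeros((n2 + 1, n2 + 1))
    P[0, 0] = 1.0
    P[:n2, 1] = ((-1.0) ** np.arange(n2)) * m
    for j in range(2, n2 + 1):
        for i in range(n2 + 1 - j):
            P[i, j] = P[0, j - 1] * P[i + 1, j - 2] - P[0, j - 2] * P[i + 1, j - 1]
    zeta = np.zeros(n2)
    zeta[1:] = P[0, 2:] / (P[0, 1:-1] * P[0, :-2])
    a = zeta[1::2] + zeta[0::2]                       # Jacobi diagonal (length N)
    b = np.sqrt(np.abs(zeta[2::2] * zeta[1:-1:2]))    # off-diagonal (length N-1)
    J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    eigval, eigvec = np.linalg.eigh(J)
    abscissae = eigval
    weights = m[0] * eigvec[0, :] ** 2                # first eigenvector components squared
    return weights, abscissae

# Quick check with a known two-node distribution (w = [1, 1], L = [1, 2]):
w, L = product_difference([2.0, 3.0, 5.0, 9.0])
print(w, L)   # expect weights ~[1, 1] and abscissae ~[1, 2]
```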
After calculating the kernels, the Sauter mean diameter (d S ), which is needed in the momentum and turbulence equations (Equations ( 2) and ( 11)), can be obtained by dividing the third moment (bubble volume) and the second moment (bubble surface area), as suggested by Marchisio et al. [14]: The initial values of the moments may be given by a Gaussian distribution of the density function, as found by Marchisio et al. [56], which results in the following values for the moments: m 0 = 1, m 1 = 5, m 2 = 26, m 3 = 140, m 4 = 778 and m 5 = 4450. Validation and Verification Verification and Validation are distinct assessment processes; both have been utilized in the present work to give a measure of confidence in the numerical results obtained.The AIAA [62] has produced a guideline document on verification and validation, in which one can easily appreciate the difference between the two. In brief, verification is defined as the process to determine whether the mathematical equations have been solved correctly.Assurance of this is obtained by comparing numerical results against analytical data or against numerical data of higher accuracy.Validation is the process of determining the degree to which a model is an accurate representation of the physics, and this can only be achieved by comparison with actual experimental data; i.e., by a simulation featuring the same set-up, boundary conditions, materials and flow conditions of the experiment.The distinction between the two was succinctly stated by Roache [63] as: "verification is solving the equations right" (meaning correctly), "while validation is solving the right equations". Therefore, before using the models for validation purposes, the QMOM equations implemented in the OpenFOAM code were first checked by comparing the moments obtained (Equation ( 26)) with the analytical solutions of Marchisio et al. [14].As shown in Figures 3 and 4, very good agreement was obtained, so the model was accepted as being correctly implemented for the further validation studies. Part of the verification process in this paper includes the calculation error where a mesh sensitivity analysis is performed on the test-cases with same setup as the experiment (see Section 2).The time-step here selected is 10 −3 s and remains constant over the simulation time; it respects the CFL condition of Courant number C o < 1, provides low residuals, and small computational time. Boundary Conditions The geometry presented in Figure 1 features one inlet at the bottom (sparger), one outlet (top of the water column), and non-slip walls.The superficial gas velocities 0.0013, 0.0073 and 0.020 m/s represent the respective inlet velocities of the air, after multiplying each value by the cross-sectional area of the column and dividing by the sparger area.In the experiment, no liquid was injected with the gas (i.e., α L = 0 and α G = 1 at the inlet), so the inlet liquid velocity is set to zero in the calculation. The inlet conditions for k and ε are calculated, respectively, as follows: where U = α G U G + α L U L , and I the turbulence intensity, assumed here to be standard 5% (it was not measured), and in which l e is the assumed initial eddy size, calculated as l e = 0.07L c , where L c is a characteristic length, assumed to be the sparger width, and the constant 0.07 is based on the maximum value of the mixing length in a fully-developed turbulent pipe flow [64,65]. 
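A minimal sketch of the inlet estimates described above is given below. The k and ε expressions are the standard textbook forms assumed here to correspond to Equations (36) and (37), and the sparger area is a placeholder rather than the value used in the simulations.

```python
# Sketch of the inlet boundary values described above. The k and epsilon
# expressions are the standard estimates (k = 1.5*(U*I)^2,
# eps = Cmu^0.75 * k^1.5 / l_e) assumed to stand in for Equations (36)-(37);
# the sparger area below is an assumed placeholder, not the paper's value.

C_MU = 0.09
UG_SUPERFICIAL = 0.0013                    # m/s, superficial gas velocity
COLUMN_CROSS_SECTION = 0.20 * 0.05         # m2 (width x depth from Figure 1)
SPARGER_AREA = 0.03 * 0.05                 # m2, assumed rectangular sparger patch
SPARGER_WIDTH = 0.03                       # m, assumed characteristic length L_c

# Inlet gas velocity: superficial velocity scaled by the area ratio
u_inlet = UG_SUPERFICIAL * COLUMN_CROSS_SECTION / SPARGER_AREA

intensity = 0.05                           # 5% turbulence intensity (assumed standard)
l_e = 0.07 * SPARGER_WIDTH                 # initial eddy size, l_e = 0.07 * L_c

k_inlet = 1.5 * (u_inlet * intensity) ** 2
eps_inlet = C_MU ** 0.75 * k_inlet ** 1.5 / l_e

print(f"U_inlet = {u_inlet:.4f} m/s, k = {k_inlet:.3e} m2/s2, eps = {eps_inlet:.3e} m2/s3")
```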
For the outlet region, a switch between a Neumann condition (zero-gradient) for outflow and a Dirichlet condition for inflow is used for k, ε, α G , U G and U L .In OpenFOAM, that is imposed through the inletOutlet condition [66]. For pressure, a boundary condition that adjusts the gradient according to the flow is used for both the inlet and the walls (given by the name f ixedFluxPressure in OpenFOAM).A Dirichlet boundary condition is assigned for pressure at the outlet boundary, using a total pressure (atmospheric) of 1.01325×10 5 Pa. The kernels (Equations ( 27) and ( 32)) are evaluated in each cell of the domain: so, for the inlet, the bubbles are assumed not to have any initial coalescence and breakage rates associated with them.For the outlet, and the walls, a zero-gradient condition is defined for both the coalescence and breakage kernels.The same is applied to the Sauter mean diameter, d S , on the walls and outlet, but at the inlet a fixed value (0.005 m) is assumed, which is the average bubble diameter found in the experiment of Pfleger et al. [22] being simulated here; the bubbles at inlet are assumed to be of spherical shape. Wall functions are used to estimate k and ε in the viscous layer region near the wall, allowing y+ > 30 [64,67,68]. Table 4 presents all the boundary conditions used in the simulations here discussed.It is important to point out that the simulations with inlet air velocity greater than 0.0013 m/s are initialized using the values obtained from the last time-step of the simulation with the next lower inlet air velocity, to reduce the computational time to reach steady-state conditions. Numerical Methods All the simulations are performed using the the open-source CFD code OpenFOAM version 2.3.0.The PIMPLE algorithm, merged PISO [69] and SIMPLE [70], is used for the coupling between the velocity and pressure fields, as recommended by Rusche [52] for bubble-driven flows.Inside the PIMPLE loop, the momentum equation is solved using an initial guess for the pressure (Step 1).Then, at each time iteration, the pressure equation is solved twice with velocity obtained from Step 1, and finally a corrector is applied to provide an updated value of the velocity based on the updated value of the pressure: this is continued to convergence.The procedure is repeated for each time step.The interested reader can obtain a more detailed explanation of the algorithm in Holzmann [71] and the OpenFOAM documentation [72]. A second-order-accurate, central-difference scheme (CDS) is used for the gradient and Laplacian terms in the governing equations, while a second-order-accurate, upwind scheme has been selected for the advection terms: this to ensure stability of the iteration.Time integration is performed using a second-order implicit backward method.The convergence criterion for all the cases is that the average residuals (for mass, pressure, velocity and turbulence variables) are less than 10 −8 . The Euler-Euler two-phase solver is structured in such a way that first the momentum equations are solved inside the PIMPLE loop, then the turbulence quantities are calculated, and finally the PBM equation is solved to provide the Sauter mean diameter for the bubbles, which is then used in the next time-step to estimate the interfacial forces in the momentum (Equations ( 4) and ( 7)) and turbulence (Equations ( 11) and ( 23)) transport equations. 
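The per-time-step coupling described above can be summarized in schematic form. The sketch below is not OpenFOAM code; all functions are empty placeholders, and only the ordering of the steps and the d S = m 3 /m 2 update reflect the solver structure described in the text.

```python
# Schematic, runnable sketch (not OpenFOAM code) of the per-time-step coupling
# described above: momentum and pressure are solved inside the PIMPLE loop,
# turbulence follows, and the QMOM population balance finally updates the
# Sauter mean diameter used by the closures at the next time step.
# All solver functions below are empty placeholders.

def solve_momentum(state, dt):            # momentum predictor (placeholder)
    return state

def solve_pressure_and_correct(state):    # pressure equation + velocity corrector
    return state

def solve_turbulence(state, dt):          # k and epsilon of the chosen k-epsilon variant
    return state

def solve_qmom_moments(state, dt):        # transport of moments m0..m5 + kernels
    return state

def advance_one_time_step(state, dt=1e-3):
    for _ in range(state["n_pimple"]):               # PIMPLE outer iterations
        state = solve_momentum(state, dt)            # uses d_S from the previous step
        state = solve_pressure_and_correct(state)
    state = solve_turbulence(state, dt)
    state = solve_qmom_moments(state, dt)
    state["d_S"] = state["m3"] / state["m2"]         # Sauter mean diameter update
    return state

state = {"n_pimple": 2, "m3": 140.0, "m2": 26.0, "d_S": 0.005}
state = advance_one_time_step(state)
print(f"Updated Sauter mean diameter: {state['d_S']:.3f} (in moment units)")
```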
All the simulations are carried out with a constant time-step of 0.001 s, which ensures a Courant number (C o ) less than 1.For this C o criterion to be achieved, a mesh independence analysis has been performed, as explained in the next section.Under-relaxation factors ( f r ) are used for pressure ( f r = 0.3), velocity ( f r = 0.7), k L ( f r = 0.7) and ε L ( f r = 0.7), in order to promote smooth convergence, especially for the test cases with higher superficial gas velocities (0.0073 m/s and 0.020 m/s). Time-averaging in each simulation was taken over a period of 200 s, which was sufficient for the steady-oscillation regime to be achieved.Table 5 summarizes all the test cases featured in this work. Mesh Sensitivity Study A mesh sensitivity analysis has been performed for three mesh sizes, the details of which are specified in Table 6.The analysis was performed assuming a constant bubble size of 0.005 m, and with the standard k-ε turbulence model.As shown in Table 6, the mesh refinement was undertaken in the transversal direction to the flow, plane xz (see Figure 1), in order to better capture the major velocity gradients, which are higher in this direction than axially, in accordance with the conclusions of Ziegenhein et al. [73] and Guédon et al. [5] in their mesh sensitivity studies. Another important piece of information concerning the meshes it that, as discussed comprehensively by Milelli [74], an optimum ratio of the grid to the bubble diameter should be maintained of around 1.5, since larger values than this could lead to transfer of a large portion of the resolved, energy-containing scales into sub-grid-scale motion; hence in the present mesh sensitivity study a ratio less than 1.5 was specified in all cases. The order of grid convergence (p) is here estimated based on the vertical water velocity component integrated over the column width (s i ), as provided by each i-th mesh index (see Table 6), namely by: where r = A 1 /A 2 = A 2 /A 3 = 2 is the grid refinement ratio. In order to calculate the theoretical asymptotic solution (s 0 ) for zero grid size, also known as the Richardson extrapolation method [75], the value of p calculated from Equation ( 38) is used in the following equation: Another parameter useful in indicating the degree of mesh independence is the Grid Convergence Index (GCI), which is an estimate of the discretization error, and how far the solution is from its asymptotic value.For RANS computations, a GCI of less than 5% is considered acceptable [63,76].According to Roache [63]: Using GCI ij , one can calculate the asymptotic range (ca) of convergence: which indicates if the solution is in the error band, ca ≈1 [77]. Figure 5 shows that Mesh 2 is already in the asymptotic range, representing a good compromise between computational effort and acceptable accuracy.All the details from this mesh sensitivity study are summarized in Table 7.For all the further simulations, Mesh 2 has been the one used. 
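The mesh-convergence bookkeeping described above can be sketched with the standard Roache formulas. The three integrated solution values below are invented for illustration, and the safety factor F s = 1.25 is an assumption, not necessarily the value used here.

```python
import math

# Sketch of the grid-convergence analysis described above, following the
# standard Roache formulation. The three integrated solution values s3, s2, s1
# (coarse -> fine) are invented for illustration; Fs = 1.25 is the usual
# safety factor for three-grid comparisons and is an assumption here.

r = 2.0                             # grid refinement ratio
s3, s2, s1 = 0.190, 0.205, 0.210    # coarse, medium, fine solutions (made up)

# Observed order of convergence and Richardson-extrapolated value
p = math.log(abs((s3 - s2) / (s2 - s1))) / math.log(r)
s0 = s1 + (s1 - s2) / (r ** p - 1.0)

# Grid Convergence Index between successive grids (in %)
Fs = 1.25
gci_12 = Fs * abs((s2 - s1) / s1) / (r ** p - 1.0) * 100.0
gci_23 = Fs * abs((s3 - s2) / s2) / (r ** p - 1.0) * 100.0

# Asymptotic-range indicator: should be close to 1
ca = gci_23 / (r ** p * gci_12)

print(f"p = {p:.2f}, s0 = {s0:.4f}, GCI12 = {gci_12:.2f}%, GCI23 = {gci_23:.2f}%, ca = {ca:.2f}")
```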
Influence of the Interfacial Forces on the Fluid Dynamics

For a better and more robust prediction of the flow patterns in a bubble column, and in two-phase flow simulations in general, an accurate description of the relevant interfacial forces is an absolute necessity. Therefore, before investigating the turbulence models in detail, the influence of the drag and lift forces on the fluid flow behaviour has first to be scrutinized. As concluded by Gupta and Roy [13], the virtual mass force does not greatly influence the fluid flow patterns for this kind of bubble column once the flow is established, so this effect has not been studied in detail in this paper. Pourtousi et al. [47] have provided a comprehensive literature review on the topic of interfacial forces in bubble columns, from which they concluded that, for the drag and lift coefficients, the most-used models are those of Schiller and Naumann [48] and Tomiyama et al. [49], respectively. Consequently, this paper concentrates on these models for the prediction of the velocity and gas hold-up profiles. It is important to point out that for the bubble-size range studied in this paper (≈0.005-0.0054 m), the Tomiyama et al. [49] coefficient in the lift force is positive, and acts to drive the bubbles towards the wall. For bubbles larger than a critical diameter (0.0058 m at atmospheric pressure), the opposite occurs, and the bubbles would move towards the center of the water column [78-80]. As shown in Figure 6a, a near-zero velocity is found at a distance less than 0.020 m from the wall. This corresponds to a downward flow direction close to the walls, which is not the case in the interior of the column. Later, Figure 11a will confirm this behavior. An interesting observation is that, for the velocity profiles, the combination of both models (Schiller and Naumann [48] for the drag coefficient and Tomiyama et al. [49] for the lift coefficient) works well, providing good comparisons against data from different experiments, and from associated LES data: see Figure 6a. However, the same is not true for the gas hold-up: see Figure 6b. It is also important to note that the magnitude of the lift coefficient provided by the Tomiyama et al. [49] model is C L = 0.2 (a similar observation was made by Kulkarni [81]). To test the sensitivity to this value, two further simulations were carried out using smaller (C L = 0.14) and higher (C L = 0.5) lift coefficients. In addition, one extra calculation is presented for which the lift force has been ignored completely, i.e., for which C L = 0. As can be seen in Figure 6, without the lift force there is no lateral force to encourage the bubbles to migrate towards the walls (in the absence of the turbulent diffusion force), hence both velocity and gas hold-up profiles are too peaked. Including a non-zero lift coefficient, the bubbles are seen to indeed migrate to the walls, and flatten the profiles, particularly so for the gas hold-up profile. A fixed lift coefficient of C L = 0.14 appears to be the optimum choice for this particular experiment. Previous studies on the topic [8] reported some discrepancies between the Tomiyama et al. lift force [49] and the experimental data, which were explained as originating from the Wellek et al. correlation [50]. The latter was originally obtained for droplets in liquids [82] and it might not be suitable for the present poly-disperse bubbly flow [8,82,83]. In Figure 6b, the reason why their experimental data is shifted off-center, according to Buwa et al.
[35], is due to the fact that, for the considered superficial gas velocity (0.0013 m/s), the bubble plume is narrow, and its oscillation period large, making it difficult to collect data over a sufficiently long time to obtain a symmetric gas hold-up profile. In the present work, the errors are within the range of the experimental uncertainties [35]. Therefore, we conclude that the lift force has an essential role to play in the prediction of the gas hold-up, which, in comparison to the vertical water velocity, is much more sensitive to the choice of lift coefficient.

Influence of the Turbulence Model on the Hydrodynamics

In addition to the interfacial forces, turbulence modelling is an important component in the accurate prediction of the flow patterns in bubble columns. The most widely used turbulence model in two-phase flows is still the k-ε model, due to its simplicity and the low computational effort [46,47]. For this reason, in this paper, we have studied the turbulence in the context of three variants of the k-ε model: standard, modified and mixture. The purpose here is to reach some conclusion on which choice provides better agreement with experimental data, at least in the context of the present application. It transpires that both the standard and modified k-ε models produce similar profiles for axial velocity, gas hold-up and Sauter mean diameter. However, the same is not true for the mixture k-ε model, as shown in Figure 7a. One can also observe that for all the turbulence models, the profile of the bubble size distribution can be explained by the effect of the lift force, which enhances the movement of larger bubbles towards the center of the bubble column, and the smaller bubbles towards the walls, resulting in a parabolic shape of the size distribution for the considered superficial gas velocity (0.0013 m/s). A similar conclusion was reached in the experimental work performed by Besagni and Inzoli [84]. As will be shown in Section 4.4, as the superficial gas velocity increases, the recirculation of the flow becomes more intense, causing a spread of the bubbles all over the domain (even far from the sparger), and consequently a more homogeneous bubble size distribution (see Figures 10d and 12b). Turbulent boundary conditions affect the results mostly on the bottom wall close to the sparger, where, as discussed by Ekambara and Dhotre [24], the flow is more anisotropic and requires different modeling approaches, such as the Reynolds Stress Model (RSM) or Large Eddy Simulation (LES). Away from the sparger, the numerical solutions given by isotropic turbulence models, such as the k-ε model, prove to be in good agreement with the experimental data [24,29,35]. Figure 8 shows the profiles of the time-averaged turbulent dissipation rate (ε) in two different regions: one close to the sparger (y = 0.13 m), and the other close to the liquid/gas interface (y = 0.37 m). As can be observed, near the sparger (Figure 8a,c) the higher values of ε occur in the central region, for all the turbulence models here studied, as a direct consequence of the turbulence created by the insertion of bubbles into the water column.
However, in quantitative terms, the mixture k-ε model predicts ε to be one order of magnitude higher than those of the other two models in the region near the sparger. This can be explained by the strong influence the model has with respect to the gas volume fraction through the ratio of the dispersed phase velocity fluctuations to the continuous phase fluctuations, as represented by the coefficient C t : see the variable definitions below Equation (23). This ratio is used in the model to obtain ε for each phase.

According to Behzadi et al. [53], for bubbly flows for which ρ L >> ρ G (such as for water-air flows), the mixture turbulence equations tend to those of the continuous phase alone, except in regions where α G ≈ 1, as happens very near the sparger, explaining the behavior of ε L in Figure 8a, but not elsewhere.

In the region close to the upper water level, where flow recirculation is a predominant mechanism, higher values of ε occur near the wall, and the difference between the mixture and the other k-ε models is even more evident (Figure 8b,d), suggesting this model is not the most adequate for a deeper analysis of this kind of bubble column.

The high values of ε provided by the mixture k-ε model near the sparger, which as a consequence dissipate the bubbles more effectively over the height of the water column, are also reflected in the under-prediction of the gas hold-up, as depicted in Figure 9. The results shown in Figure 9a,b are from different test-cases, and taken at different positions, for validation purposes, since the experiments (depicted by green stars in the graphs) were obtained at different superficial gas velocities and at different locations in the bubble column.

It is worth noting that if one is only interested in obtaining velocity profiles of the continuous phase, the mixture k-ε model could be used with some confidence (as can be seen in Figure 7a). The same does not hold for the gas hold-up, which is the variable highly affected by the over-prediction of the turbulent kinetic energy dissipation rate, as shown in Figure 9.

Influence of the Air Superficial Gas Velocity at Inlet on the Fluid Behavior

As well as the specification of the interfacial forces and the turbulence modeling, the superficial gas velocity at the inlet plays an important role in the predicted hydrodynamics of the bubbles in the column. To study this influence, three different air injection velocities have been investigated: UG = 0.0013, 0.0073 and 0.02 m/s. It is to be noted that none of these generate turbulent eddies with enough energy to induce bubble break-up, as observed for UG = 0.03 m/s [85]. Consequently, we focus here on the fluid dynamics for conditions under which coalescence is the dominant bubble interaction phenomenon; discussion of results for the UG = 0.03 m/s case is reserved for future work.
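The two quantities examined next, the local gas hold-up and the Sauter mean diameter, are simple moments of the discretized bubble-size field. The sketch below applies the standard definitions (d32 = Σ n_i d_i³ / Σ n_i d_i², hold-up = gas volume / cell volume) to an illustrative, made-up set of size classes; the class sizes, counts and cell volume are placeholders, not data from this work.

```python
import math

def sauter_mean_diameter(diameters_m, counts):
    """d32 = sum(n_i * d_i^3) / sum(n_i * d_i^2): diameter of a sphere having the
    same volume-to-surface ratio as the whole bubble population."""
    num = sum(n * d**3 for d, n in zip(diameters_m, counts))
    den = sum(n * d**2 for d, n in zip(diameters_m, counts))
    return num / den

def gas_holdup(diameters_m, counts, cell_volume_m3):
    """Local gas hold-up: total bubble volume divided by the volume of the sampling cell."""
    v_gas = sum(n * math.pi / 6.0 * d**3 for d, n in zip(diameters_m, counts))
    return v_gas / cell_volume_m3

if __name__ == "__main__":
    # Illustrative size classes (m) and bubble counts in one sampling cell -- placeholders only.
    d_classes = [0.003, 0.004, 0.005, 0.0054]
    n_classes = [120, 260, 310, 180]
    cell_vol = 1.0e-3  # m^3, assumed sampling-cell volume

    print(f"d32     = {sauter_mean_diameter(d_classes, n_classes)*1e3:.2f} mm")
    print(f"hold-up = {gas_holdup(d_classes, n_classes, cell_vol)*100:.2f} %")
```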
Figure 10 shows the gas hold-up and Sauter mean diameter profiles for two different regions of the water column, for three superficial gas velocities. As can be observed, near the sparger (y = 0.02 m) there is a higher concentration of bubbles, albeit in a narrow region, resulting in similar behavior of the bubble size distribution for all three superficial air inlet velocities. Figure 10d shows that, for UG = 0.02 m/s, the bubbles are predicted to be of uniform size across the width of the column, and reach the top with a higher probability of interaction with each other in the central region due to the higher population density there (Figure 10c). Figure 11a shows downward flow along both walls and upward flow in the bulk region. A rotational pattern can also be observed in the middle of the column, which is stronger in the lower half of the column. This flow pattern helps to explain why the 0.0054-m bubbles stay in the centre region of the column, as depicted by Figure 12a. A downward flow along the left wall and on the lower portion of the right column wall can be discerned in Figure 11b, as well as a strong upward flow in the middle of the lower portion of the column, which tends to drift towards the upper-left wall. This cross-mixing behaviour may explain why the 0.0054-m bubbles spread throughout the upper portion of the column, as shown by Figure 12b.

Figure 13a,b show, in shaded contour form, the behavior of the bubble volume fraction inside the water column, for UG = 0.0013 and 0.02 m/s. The greater axial velocity and stronger recirculation provided by the UG = 0.02 m/s case (Figure 11b) result in a wider spread of the bubbles across the column (Figure 13b).

For the superficial gas velocity highlighted here (UG = 0.02 m/s), the coalescence kernel (that of Luo [41]) is driven by the collision frequency (w c in Equation (28)); that is, in regions where ε is non-negligible the frequency of bubble coalescence is more predominant than the collision efficiency (P c ), which is proportional to the mean turbulent velocity, as seen in Equation (31). A similar observation was made by Deju et al. [59], who studied different combinations of coalescence and breakup kernels in a circular rather than square bubble column. In their study, they found that the collision frequency diminishes when the eddy dissipation rate becomes insignificant at the centre of the bubble column, while the opposite occurs with respect to the collision efficiency: both calculations incorporated the model proposed by Prince and Blanch [85] for the coalescence kernel.

For the superficial gas velocities adopted in this work, a bi-dispersed model, such as that proposed by Guédon et al. [5], proved to be a very good alternative to the use of a population balance model. However, if one is interested in capturing the size distribution of the bubbles, and understanding the phenomena that take place in the evolution of the bubble sizes, especially for higher inlet velocities (>0.04 m/s), use of a population balance approach is recommended [85].

Conclusions

A hydrodynamic investigation of two-phase flow, with different spherical bubble size distributions, in a scaled-up rectangular bubble column geometry using CFD has been presented. Three parameters are identified as playing important roles in the fluid dynamics of this kind of flow: the gas/liquid interfacial forces, the liquid turbulence, and the superficial gas velocity at inlet.
From our examination of the interfacial forces, it is concluded that, though the use of the bubble lift force suggested by Tomiyama et al. [49] has resulted in reasonable comparisons of the vertical water velocity with the experimental data, the same is not true for the measured gas hold-up, at least for the cases investigated here. In fact, a constant lift coefficient (C L = 0.14) proved to be the most suitable choice for better axial velocity and gas hold-up comparisons with the experimental data. The lift force formulation is seen to have had a strong effect on the prediction of the gas hold-up, and consequently on the bubble size profiles, at different locations in the bubble column.

Concerning turbulence modeling, the mixture k-ε model predicted much higher turbulent kinetic energy dissipation rates, combined with lower values of gas volume fraction; other variants of the k-ε model perform better. Of the k-ε turbulence models investigated here, the modified k-ε model proved to be a good compromise between modeling simplicity and accuracy of predictions.

It is seen that the superficial gas velocity at inlet (UG) has an important influence on the characteristics of the fluid dynamics in the bubble column, and ultimately on the bubble size distribution within it. By increasing UG, there is an increase in the gas hold-up, even near the upper interface of the liquid, resulting in a more uniform bubble-size distribution throughout the column.

More work is needed for cases with higher injection velocities than those considered here, for which few experimental data are currently available, from the point of view of deciding on the turbulence modeling approaches.

Figure 1. Sketch of the bubble column configuration: the sparger is located at the centre of the bottom face, as in the experimental facility of Pfleger et al. [22].

Figure 3. Verification of QMOM equations by comparison with the analytical solution provided by Marchisio et al. [14]: (a) time history of moment 0 and (b) its respective time-history error.

Figure 5. Asymptotic averaged-axial-water-velocity behavior for a normalized grid spacing tending to zero.

Figure 6. (a) Axial mean water velocity profile at y = 0.25 m; and (b) gas hold-up profiles at y = 0.37 m for the case UG = 0.0013 m/s from various sources. All the results have been obtained using the modified k-ε model, and are represented by the line with circles (No lift), the joined triangles (C L = 0.14), diamonds (C L = Tomiyama et al. [49] ≈ 0.2), and the straight line (C L = 0.5).

Figure 8. Time-averaged profiles of ε across the column width at (a,c) y = 0.13 m and (b,d) y = 0.37 m, for a superficial gas velocity at inlet of 0.0013 m/s.

Figure 10. Time-averaged gas hold-up (left) and Sauter mean diameter (right) provided by the modified k-ε model at (a,b) y = 0.02 m and (c,d) y = 0.37 m.

Figure 11. Time-averaged water velocity field with instantaneous water velocity vectors provided by the modified k-ε model after 100 s of simulation at (a) 0.0013 m/s; and (b) 0.02 m/s.

Table 1. Summary of experimental works of turbulent flows in bubble columns.

Table 2. Summary of numerical works of turbulent flows in bubble columns with and without bubble-size-distribution modelling.

Table 4. Boundary conditions for all the variables involved in the simulations of the bubble column here studied.

Table 5. Summary of all the cases investigated by this work.
Table 6. Details of the meshes used in the sensitivity analysis (A i denotes the area of the cells along the side walls in the xz plane; A i /A finest is the normalized grid spacing).

Table 7. Convergence criteria used for choosing the appropriate mesh for the further simulations.
v3-fos-license
2018-05-07T14:09:45.289Z
2003-09-09T00:00:00.000
98469602
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.arkat-usa.org/get-file/19603/", "pdf_hash": "b5a5181e98cfed95580cdb27c4272ed3a2993e27", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2892", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "527aefd129a854b44d19555b43876cb71d9b9865", "year": 2003 }
pes2o/s2orc
Oxidation of sulfides to chiral sulfoxides using Schiff base-vanadium (IV) complexes

A library of Schiff base ligands was synthesized from salicylaldehyde by reaction with various β-amino alcohols. These ligands were used with vanadium (IV) to screen for the enantioselective oxidation of sulfides to chiral sulfoxides.

Introduction

Chiral sulfoxides are widely used as chiral auxiliaries in asymmetric synthesis [2][3][4][5][6]. Satoh and coworkers have reported the synthesis of chiral allenes by first coupling alkenyl aryl sulfoxides with aldehydes, followed by alkyl anion induced elimination of the sulfur [7]. Toru has reported the enantioselective addition of Grignard reagents to 1-(arylsulfinyl)-2-naphthaldehyde, where a chiral sulfoxide conformer controls the stereoselectivity of the addition [8]. Optically active β-(trimethylsilyl)ethyl sulfoxides supported on Merrifield resin undergo enantioselective Michael addition to α,β-unsaturated esters, followed by removal of the sulfoxide group via thermal elimination [9][10][11][12][13]. Yuste and Ellman have independently described the use of sulfoxides as chiral auxiliaries in the asymmetric synthesis of β-amino alcohols which, in turn, are synthetically useful chiral building blocks [14a,b,c, 15a,b]. Toru has reported the elegant use of a chiral sulfoxide to synthesize an insecticidal chiral chrysanthemate [16a]. More recently, Colobert [16b] and Bravo [16c] have demonstrated the use of chiral sulfoxides in the synthesis of myoinositol, pyrrolidine and tetrahydroisoquinoline alkaloids, respectively. These examples clearly demonstrate the versatility of chiral sulfoxides as chiral auxiliaries in asymmetric synthesis.

A number of sulfoxides are also finding application in the pharmaceutical industry. The chiral sulfoxide quinolone 1 is known to inhibit platelet adhesion by interfering with the release of 12(S)-hydroxyeicosatetraenoic acid from platelets [17a,b, 18]. Pyrazolotriazine 2 is a new drug developed to treat hyperuricemia and ischemic reperfusion injury. The drug inhibits the biosynthesis of uric acid by blocking xanthine oxidase [19]. Unge and co-workers have reported the asymmetric synthesis of esomeprazole, a drug containing a chiral sulfoxide group known to inhibit gastric acid secretion [20a]. Padmanahan and co-workers from Cambridge Neuro Science have reported the asymmetric synthesis of a sulfoxide containing a guanidine portion that is an active N-methyl-D-aspartate ion-channel blocker [20b]. These few examples clearly illustrate the growing importance of chiral sulfoxides in the pharmaceutical industry.

Since enantiomerically pure sulfoxides can play an important role as chiral auxiliaries in organic synthesis, it is surprising that very few examples exist in which this ligand participates in homogeneous catalysis. Khiar used a Fe(III) complex of a C 2 -symmetric bis-sulfoxide as a catalyst in the asymmetric Diels-Alder reaction [21]. Shibasaki and Williams have independently used Pd-sulfoxide complexes in asymmetric allylic substitution [22,23]. Bolm and Carreño have also attempted the use of chiral sulfoxides to catalyse the enantioselective addition of diethyl zinc to aromatic aldehydes; the products were obtained in moderate ee's [24,25].
These results have prompted researchers over the past two decades to develop new methods leading to the asymmetric oxidation of a sulfide to a chiral sulfoxide (Equation 1). Numerous methodologies have been reported for the transformation of a prochiral sulfide to a chiral sulfoxide. Most of them involve the use of a chiral ligand with a transition metal, such as titanium, vanadium or manganese, in the presence of hydrogen peroxide or a hydrogen peroxide adduct as the oxygen source. The chiral ligands that have been successfully used include: bidentate diethyl tartrate 3 [26], diol 4 [27], BINOL 5 [28,29], tridentate Schiff base ligands 6 [30][31][32] and tetradentate Salen type ligands 7 [33][34][35][36].

Results and Discussion

As part of a wider study of asymmetric transformations, we proposed the preparation of a large library of chiral Schiff base ligands of the -O---N---O-type 6. Along with a transition metal ion (Ti(IV), V(IV), Cu(II) or Zn(II)), it would permit screening of the Schiff base ligands in various asymmetric chemical transformations. Recent application of this strategy in our laboratories to the addition of trimethylsilyl cyanide to benzaldehyde in the presence of Ti(IV) ion resulted in trimethylsilyl cyanohydrins in 40-85% enantioselectivity (Equation 2) [37,38]. Vanadium (IV)-Schiff base complexes have been successfully used by Bolm [31], Ellman [15] and Skarzewski [32] to oxidize different sulfide substrates to chiral sulfoxides. Based on these reports we have created a library of Schiff base ligands with subtle variations in the size of the substituents on the ligand. The library of ligands was derived from salicylaldehydes 8 and chiral β-amino alcohols 9, as shown in Equation 3.

The results of our screening are shown in Table 1. From our previous work with these ligands in the trimethylsilylcyanation of benzaldehyde catalyzed by Ti(IV)-Schiff base complexes, we discovered it was necessary to have a bulky substituent ortho to the phenol (R 1 ) [37]. A similar trend was also observed in the sulfide oxidation; when R 1 = H, OCH 3 or R 2 , R 3 = naphthyl, the observed ee's were low. Hence, we designed a number of Schiff bases with a bulky substituent at R 1 , and then varied the size of substituents on R 2 , R 4 and R 5 . Initially, we incorporated a conformationally-rigid five-membered ring at R 4 and R 5 , derived from cis-1-amino-2-indanol.
Our assumption here was that the bulky indanol ring would increase the energy difference between the two diastereomeric transition structure orientations, thereby enhancing the resulting enantioselectivity. When R 1 = tert-butyl or adamantyl, and R 4 , R 5 = cis-1-amino-2-indanyl, reasonably good enantioselectivities were observed (ligand 10). However, when R 1 was replaced with 3,3-dimethyl propyl or 1,1-dimethylbenzyl, enantioselectivity was considerably lower (ligands 15 and 16). This lowering of enantioselectivity probably came from steric overcrowding around the metal, thereby inhibiting the sulfide-metal coordination. Interestingly, when the rigid five-membered ring was replaced with a conformationally more flexible β-amino alcohol fragment (R 4 ), enantioselectivity was considerably improved (ligands 42, 43 and 44). However, when both R 4 and R 5 were substituted, the enantioselectivity once again decreased (ligands 28-32). Our results are in accordance with the recent report from Bergman and Ellman [14a], who have isolated the active intermediate in the Schiff base-vanadium catalyzed oxidation of sulfide to sulfoxide. The intermediate was found to be a 2:1 complex of Schiff base ligand to vanadium, which then reacts with hydrogen peroxide, eliminating one of the ligands to give a vanadium hydroperoxide complex, which then oxidizes the sulfide to sulfoxide. It is reasonable to assume that a certain amount of steric crowding around the metal in the transition state is essential in order to enhance the enantioselectivity of the sulfide oxidation. From our work and previous studies on the trimethylsilylcyanation of aldehydes and the oxidation of sulfides to sulfoxides using Schiff base ligands, it appears that a tert-butyl substituent provides the ideal steric size and gives good enantioselectivity in both types of reactions.

Having investigated the size of substituents on the Schiff base ligands and their effects on the enantioselectivity, we next turned our attention to studying the electronic effects of these substituents in the sulfide to sulfoxide oxidation. Skarzewski and coworkers have reported that an electron-withdrawing nitro group para to the phenolic OH in the Schiff base-V(IV) complex gives high enantioselectivity in the sulfide to sulfoxide oxidation [32]. However, in our hands R 2 = NO 2 and R 1 = tert-butyl led to low ee, which is also in agreement with Ellman's observation [14]. When the strongly electron-withdrawing nitro group was replaced with a less electron-attracting bromine atom, along with sterically bulky substituents R 1 , R 4 and R 5 on the Schiff base ligand, enantioselectivity was improved (ligand 18). A similar trend was also seen in the trimethylsilylcyanation of benzaldehyde catalyzed by a Schiff base-Ti(IV) complex [39].

In conclusion, the steric requirements of the Schiff base-vanadium (IV) complex-catalyzed oxidation of a sulfide to a chiral sulfoxide parallel those of the Ti(IV)-Schiff base catalyzed trimethylsilyl cyanation reactions [38]. Added to this, the presence of electron-withdrawing bromine at R 1 or R 2 along with appropriately bulky substituents on the ligand enhances the enantioselectivity in the sulfide to sulfoxide oxidation. Thus, in designing new chiral Schiff base ligand-vanadium complexes for the sulfide to sulfoxide oxidation, consideration has to be given to both steric and electronic factors.
Experimental Section

General procedure for the oxidation of methyl phenyl thioether to sulfoxide

The enantiomeric excesses were determined using a Hewlett-Packard liquid chromatograph (UV diode detection at 254 nm), with a (R,R)-WHELK-01 chiral column.
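Since the enantiomeric excesses were read off chiral-HPLC traces, the arithmetic behind the reported ee values is simply the normalized difference of the two enantiomer peak areas. A minimal sketch follows; the peak areas are invented placeholders, not data from this study.

```python
def enantiomeric_excess(area_major, area_minor):
    """ee (%) = 100 * (A_major - A_minor) / (A_major + A_minor),
    with the areas taken from the two enantiomer peaks of the chiral HPLC trace."""
    total = area_major + area_minor
    if total == 0:
        raise ValueError("no peak area detected")
    return 100.0 * (area_major - area_minor) / total

if __name__ == "__main__":
    # Hypothetical integrated peak areas (arbitrary units) for the two sulfoxide enantiomers.
    print(f"ee = {enantiomeric_excess(82.5, 17.5):.1f}%")   # -> 65.0%
```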
v3-fos-license
2020-11-12T09:01:17.341Z
2020-11-10T00:00:00.000
228865176
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2076-3417/10/22/7983/pdf", "pdf_hash": "4cb5e27e2055837acb47ac0175bc6b5c352b31d8", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2893", "s2fieldsofstudy": [ "Engineering" ], "sha1": "c94ac307c9931ac4e1cead7186d9e31da756d804", "year": 2020 }
pes2o/s2orc
Master and Auxiliary Compound Control for Multi-Channel Confluent Water Supply Switching Control Based on Variable Universe Fuzzy PID

During the multi-channel confluent water supply process, the pressure control of the main pipe is often held back by such problems as non-linearity, hysteresis and parameter uncertainty, its own unique load dynamic changes, channel switching disturbance and other system characteristics caused by the actual working conditions. Moreover, pressure fluctuations in the main pipe will lead to a reduction in the service life of fire-fighting equipment, an increase in the failure rate, and even an interruption of the fire-fighting water supply. Therefore, a master and auxiliary control strategy is proposed to stabilize the pressure change in the process of multi-channel concentrated water supply switching, by using variable universe fuzzy proportional integral derivative (PID) control as the main controller on the main pipe and traditional PID control as the subsidiary controller on the channel. The control strategy is verified by the co-simulation platforms of LabVIEW and AMESim. Simulation results show that the variable universe fuzzy PID control and the master and auxiliary compound control based on the variable universe fuzzy PID control have advantages in step response, tracking response and anti-interference, respectively. The parameters obtained in the co-simulation are used in the experimental system. The experimental results show that the maximum deviation rate of main pipe pressure can be reduced by about 10% compared with other control methods under different loads. In conclusion, the proposed control strategy has strong anti-interference ability, fast dynamic response speed, high stability and a good peak shaving effect.

Introduction

Fire water supply is an important component in fire-fighting and rescue operations, and plays a key role in determining its success or failure. Fire water supply is developing in the direction of high efficiency and high stability with the increasing complexity of fire accidents and the increasing number of fire truck dispatches. As the pivotal aspect of the fire water supply, the multi-channel confluent water supply (MCCS) system is very important in ensuring the efficiency of water supply for fire-fighting and rescue work. In the event of large-scale fires, especially a fire located in a city with limited space, not only is the flow of the fire-fighting water supply required, but there is also a higher demand for the stability of the water supply. The process of channel switching will inevitably occur in the multi-channel confluent water supply, which will lead to pressure changes in the water supply system. The common manifestation of pipeline pressure changes is water hammer, which can cause major damage to the water supply system. For example, references [1][2][3] describe in detail the failure of water supply pipes due to the water hammer effect. Reference [4] analyzes the factors that affect pressure variation in the water supply system, such as power outages, pump shut-downs, valve operation, flushing, fire-fighting and main breaks. For a fire water supply system, changes in the pressure of the system will cause cavitation of the subsequent on-board pump, affect the fire extinguishing performance owing to large fluctuations of the terminal fire monitor, and limit the precision of the foam proportional mixing system.
Therefore, it is necessary to use appropriate switching methods and control strategies to minimize the pressure change caused by switching under the premise of ensuring the minimum energy loss, so as to ensure the continuity and stability of water supply. The MCCS device consists primarily of a cluster structure, control system and additional equipment. In reference [5], the authors carried out a detailed analysis of the structure under the clustering conditions. The actuator of the control system is a pipeline valve. At present, proportional integral derivative (PID) controllers are mostly used to control pipeline valves in process control. This is because of their simple structure and low maintenance cost [6]. However, the traditional PID controller cannot achieve the ideal control effect due to the uncertainty of the object parameters and the nonlinearity and hysteresis of the MCCS system. In order to solve the deficiencies of the PID controller, various intelligent advanced control technologies have been developed in recent years. For example, Hamed et al. [7] use a sliding model controller to achieve the smoother and more rapid time responses of drum water level in a steam power plant, and the oscillation after control is smaller. Wu et al. [8] develop a stable model predictive tracking controller for coordinated control of a power plant, which realizes the off-set free tracking of the system under a wide range load variation. Liu et al. [9] propose an adaptive fuzzy PID controller with compensation correction, which successfully realizes the pressure control of the tractor, and has a better dynamic performance. Liang et al. [10] propose an improved genetic algorithm optimization for a fuzzy controller, which realizes the accurate closed loop of the wellhead back pressure system. The control technology combined with these different advanced control methods has reached the control goals of most control systems in terms of control accuracy, response speed and robustness. Taking the combination of fuzzy control and traditional PID control as an example, domestic and foreign researchers have also conducted a lot of research in different application scenarios. Wang et al. [11] adopt fuzzy adaptive PID control to realize the stable control of grouting pressure with uncertain, time-varying and nonlinear characteristics, and its performance indicators are better than traditional control methods. In reference [12], a fuzzy PID controller is designed to control the steam temperature of the supercritical lignite boiler, and its excellent response speed and stability are verified through simulation. References [13,14] introduce the idea of a variable theory domain on the basis of a fuzzy PID control, which overcomes the shortcoming of the limited control rules of a fuzzy PID controller, further increases the controller's adaptive ability, and improves the dynamic characteristics of the control system. In addition, references [15][16][17] use variable universe fuzzy PID control in different applications, and the control effect is significant. In practical applications, the MCCS systems have common control system characteristics, such as non-linearity, hysteresis and parameter uncertainty, but also have their own unique load dynamic changes, channel switching disturbance and other system characteristics caused by the actual working conditions. 
Therefore, the multi-controller compound control is proposed by researchers, which is suitable for more complicated control systems or devices with higher control accuracy requirements. For example, reference [18] proposes a control method combining fuzzy PID and implicit generalized predictive control. Simulation experiments show that it can reduce the variation amplitude of the main steam pressure in the marine steam power system during the dynamic change process, and improve the response speed. Teresa et al. [19] propose a control strategy that is a combination of a fuzzy sliding film controller and a linear controller for the speed control of dual-mass drives, which has been verified by experiments and simulations. Song et al. [17] propose a new type of double closed-loop control, chaos optimization and adaptive fuzzy PID compound control strategy for variable spray systems with large inertia, large hysteresis, nonlinearity, etc., and a satisfactory control effect is obtained through experimental verification. In reference [20], a compound controller based on fuzzy logic is proposed for the steam supply system of a nuclear power plant. Two local controllers are coordinated according to the working conditions based on the neural network PID controller and the fuzzy controller. The simulation results show that the control effect is a smoother and more stable operating performance. In this paper, based on the change law of pressure in the process of MCCS switching, a master and auxiliary control strategy is proposed. On this basis, a co-simulation platform is built to verify the effectiveness of the proposed control strategy, and the effectiveness of the proposed control strategy is compared with that of the single fuzzy PID and the single variable theory domain fuzzy PID control. The advantages of the proposed composite control strategy are verified. The control strategy is tested and verified on the test bench, which further proves that the main and auxiliary compound control based on the variable universe fuzzy PID control can effectively optimize the pressure change of the main pipe. The paper is organized as follows. In Section 2, we build a research platform and co-simulation platform of the MCCS system. In Section 3, we design the master and auxiliary compound controllers. Then, in Section 4, we give simulation results and experimental results to demonstrate the superiority of the compound control strategy compared with single control. Finally, in Section 5, we summarize this paper. MCCS Research Platform The composition of the MCCS system of the research platform in this paper is shown in Figure 1. The system has four channels and one main pipeline. The medium water is pumped out of the water tank by the centrifugal pump through the filter. The pressure water of the four channels is collected into the main pipeline and led into the tank. The frequency converter in the control cabinet controls the speed and opening and closing of the centrifugal pump, and each channel has check valves, hand valves, pressure transducers, electric control valves and other components. A pressure transducer and a flow meter are installed on the main pipe to measure the pressure and flow of the main pipe. The main electric control valve is the executive structure of the controller, and the pipe load simulation ball valve is used to simulate pipeline resistance. 
The acquisition control system of the test platform consists of the chassis cDAQ-9185 of the American instrument NI, the acquisition card NI9203 and the output card NI9266. The main hardware parameters of the research platform are shown in Table 1. Mathematical Model Multi-channel concentrated water supply means that multiple channels are connected in parallel in the same aggregate main pipe, and the switching process is its typical working condition. In fact, the switching process of the two channels is as follows: under the parallel operation of the water supply pump of each channel, the opening degree of the control valve in the operating branch gradually decreases from 100% to zero, while the control valve in the standby branch gradually increases from closed to 100%. If the action between the operating channel control valve and the standby channel control valve is not coordinated during the switching process, the pressure and flow rate in the collecting main pipe will change, resulting in a big sudden change in the pressure in the main pipe. Due to the complex internal mechanism of the MCCS process, the mathematical model cannot be precisely established, and only an approximate model based on experience and test data for approximate processing can be established. The established model makes the following simplifications and assumptions: the system medium is water at room temperature and pressure; the fluid is in a single-phase flow state; the transient flow in the pipeline is one-dimensional and homogeneous. The numerical models of the key components of the whole system are established respectively. The pressure and flow rate of the MCCS system will change during the switching process, which belongs to the transient flow state of the pipeline. The basic equation is composed of a mass conservation equation (continuity equation), a momentum conservation equation (motion equation) and an energy conservation equation (Bernoulli equation) in the transient process [21]. Continuity equation: where ρ is density, and v x , v y and v z are the mean velocity in the x, y and z directions, respectively. According to the model simplification, the pipeline flow is a one-dimensional single-phase incompressible transient flow, and the amount of flow in and out of a certain section of the control body along the pipeline is equal, namely: where A is the cross-sectional area of a pipe. Motion equation: where p is pipe pressure, and f x , f y and f z are the unit mass force in the x, y and z directions, respectively. According to the model simplification, it can be obtained as follows: where D is pipe diameter. Bernoulli equation: According to the assumption, the model pipeline is a horizontal pipeline, and the equation is: where p A and p B are the pressure at point A and B of a certain pipe, respectively, v A and v B are the mean velocity at point A and B of a certain pipe, respectively, and h w is the energy loss from pipe A to pipe B. Component Models For the pipeline structure of the MCCS shown in Figure 2, according to the conservation of mass, the flow rate through the concentrated water supply device at any instant meets the following conditions: According to the conservation of energy, the pipeline structure of the MCCS device meets the following requirements: where Q 1 , and section 0-0 of the Figure 2, respectively, and h wi-0 is the energy loss from section i-i to section 0-0 of the Figure 2, i = 1, 2, 3, 4. 
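Since the continuity and Bernoulli relations above are stated mostly in words (the typeset equations did not survive extraction), a small numerical sketch may help. It checks mass conservation at the confluence, Q0 = Q1 + Q2 + Q3 + Q4, and evaluates the horizontal-pipe Bernoulli balance p_A/(ρg) + v_A²/(2g) = p_B/(ρg) + v_B²/(2g) + h_w to recover the downstream pressure. The flow values, pipe cross-section and head loss are illustrative placeholders rather than parameters of the research platform.

```python
RHO = 998.0   # kg/m^3, water at room temperature
G = 9.81      # m/s^2

def main_pipe_flow(channel_flows_m3s):
    """Mass conservation at the confluence: Q0 = Q1 + Q2 + Q3 + Q4."""
    return sum(channel_flows_m3s)

def downstream_pressure(p_a, v_a, v_b, head_loss_m):
    """Bernoulli balance for a horizontal pipe between sections A and B:
    p_A/(rho*g) + v_A^2/(2g) = p_B/(rho*g) + v_B^2/(2g) + h_w  ->  solved for p_B."""
    head_b = p_a / (RHO * G) + v_a**2 / (2 * G) - v_b**2 / (2 * G) - head_loss_m
    return head_b * RHO * G

if __name__ == "__main__":
    # Illustrative channel flows (m^3/s), roughly 4-5 m^3/h each; channel 3 on standby.
    q_channels = [1.2e-3, 1.3e-3, 0.0, 1.25e-3]
    q0 = main_pipe_flow(q_channels)
    area = 1.963e-3                                   # m^2, assumed DN50 main-pipe cross-section
    v = q0 / area
    p_b = downstream_pressure(p_a=0.04e6, v_a=v, v_b=v, head_loss_m=0.5)
    print(f"Q0 = {q0*3600:.1f} m3/h, v = {v:.2f} m/s, p_B = {p_b/1e6:.4f} MPa")
```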
The classic method for analyzing the total flow and pressure of centrifugal pumps in parallel is the graphical method. This method is simple and convenient, but it does not reflect its internal mechanism, and the error is relatively large. At present, the characteristic curve of a single centrifugal pump and the characteristic curve of pipe resistance are mostly obtained by the quadratic polynomial fitting. where H i is the centrifugal pump head, H res is the head of the pipeline/device, H s is the static head of the pipeline/device, K 1 is the constant, K 2 and K 3 are the characteristic curve fitting coefficients, K 0 is the coefficient of the pipe resistance characteristic curve, and Q i is the centrifugal pump flow. The characteristic curve of the same type of centrifugal pumps in parallel is obtained according to the principle of the "addition of flow under the same head" of the characteristic curve of a single pump. Therefore, the characteristic curve of the parallel pump group can also be described by a quadratic polynomial [22]. where N is the number of centrifugal pumps in parallel. The flow through the valve in a transient state is Q, and the pressure loss caused by it is ∆p. The relationship between the two is expressed as follows [23]: where λ v is the resistance coefficient of the ball valve, and S is the flow cross-section of ball valve. The corresponding relation of the proportion of the cross-sectional area S to the full-pass area (area opening), spool rotation angle and ball valve opening (by percentage) is shown in Figure 3 below: The clustered structure in the MCCS device is composed of tees, elbows, etc., and the local resistance caused by them is ignored in the calculation model. Therefore, the local resistance model is required to compensate for the system in the calculation. The total head loss of the pipe is the sum of the head loss along the way and the local resistance loss: where λ j is the coefficient of local resistance, λ is the coefficient of resistance along the way, l is pipe length and d is pipe diameter. For pipe flow, Q = vA, which is then combined with the above formula: We then define the equivalent resistance coefficient R, so the above formula can be simplified to: According to the principle of conservation of mass, the dynamic mathematical model of each channel can be obtained as follows: Dynamic Mathematical Model where P i is the pressure at the outlet of the centrifugal pump of channel i, i = 1, 2, 3, 4, R i is the equivalent resistance coefficient of the corresponding pipe in the figure, i = 1, 2, 3, 4, 5, 6, P ii is the pressure at the junction of channel i and the gathering structure pipe, i = 1, 2, 3, 4, and P 0i is the pressure in front of the control valve on channel i, i = 1, 2, 3, 4. According to the principle of the conservation of mass and energy, the dynamic mathematical model of the clustered water supply structure can be obtained as follows: where P 5 is the pressure at the connection between the main pipe and the gathering structure pipe. The dynamic mathematical model of the main pipe can be obtained as shown in the following formula based on the principle of conservation of mass. The control valve on the main pipe is the main regulating valve, and the valve resistance is much greater than the pipeline resistance, so the pipeline resistance is ignored. ρλ The water-consuming system can be equivalent to a resistance element, which is replaced by a pipe load simulation valve. 
The pressure at the outlet of the water-consuming system is atmospheric, and its dynamic mathematical model is as follows: where P 0 is the pressure before the load simulation valve, Q 0 is the flow before the load simulation valve, and R 7 is the equivalent resistance coefficient of the load simulation valve and its front and rear accessories. Equations (18)- (21) represent the dynamic mathematical model of the MCCS process. Through this dynamic mathematical model, it can be seen that the main factors affecting the pressure of the main pipe are the pre-aggregate flow rate, the main pipe control valve and the equivalent resistance of the pipe load simulation valve. Co-Simulation Platform Based on LabVIEW and AMESim The co-simulation technology can reduce the dependence on the physical prototype, reduce the number of tests under the premise of obtaining reliable experimental data, and verify the feasibility of the control scheme in advance to avoid unnecessary losses. The joint simulation platform based on LabVIEW and AMESim is a dynamic link library (.dll file) generated by LabVIEW, called the AMESIM simulation model. Three subVIs are used to realize the data interaction between the LabVIEW and AMESim models, and realize the process operation and control of co-simulation. Simulation Model Based on AMESim The AMESim model in the co-simulation platform consists of two parts, as shown in Figure 5. The first part is the AMESim simulation model of MCCS, which can simulate different working conditions of multi-channel concentrated water supply. The second part is the interface module, LabVIEWCosim, of the co-simulation, which provides AMEDoAstep2.VI, AMEEInitModel.VI and AMETerminate.VI, and realizes the LabVIEW and AEMSim signal transmission. In the simulation model, the pressure signal of the channel and the pressure signal of the main pipe are selected as the output of the simulation system and fed back to the controller. At the same time, the algorithm in the controller calculates the control value of the regulating ball valve and acts on the simulation system through the simulation interface. A sketch of the MCCS simulation model is shown in Part A of Figure 5. It includes the MCCS structure, pump, valve, sensors, etc., and is slightly simplified compared to the real one. The Hydraulic and Hydraulic Resistance libraries are used to model the MCCS system. Taking into account the flow characteristic curve of the ball valve, the SIGUDA01 of the signal library is used to realize the relationship between the valve opening and the flow in modeling. Besides this, the parameters of the main components have been obtained from the experimental results and technical data sheets of the manufacturer. When building the model, it is necessary to ensure the mathematical transfer relationship between the component sub-models, and realize the output and input relationship of the front and back sub-models on the same pipeline [24]. The sub-models and parameters of the main components of the MCCS are shown in Table 2. The verification of the MCCS simulation model includes two aspects: constant flow performance verification and unsteady flow performance verification. Constant flow performance refers to the pressure and flow of the main pipe when the system is stable. That is, the pressure and flow of the main pipe are recorded by turning on different numbers of centrifugal pumps and using different main pipe loads (the opening of the valve at the end of the main pipe simulates the pipe load). 
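The "addition of flow under the same head" rule quoted above for identical pumps in parallel, together with the quadratic resistance curve H_res = H_s + K0·Q², fixes the steady operating point that the constant-flow verification probes. The sketch below finds that intersection by bisection for an invented single-pump curve H(Q) = K1 + K2·Q + K3·Q²; all coefficients are placeholders, not fitted values from this work.

```python
def pump_head_single(q, k1, k2, k3):
    """Quadratic fit of a single centrifugal pump curve: H = K1 + K2*Q + K3*Q^2."""
    return k1 + k2 * q + k3 * q * q

def pump_head_parallel(q_total, n_pumps, k1, k2, k3):
    """N identical pumps in parallel: same head, total flow shared equally."""
    return pump_head_single(q_total / n_pumps, k1, k2, k3)

def system_head(q, h_static, k0):
    """Pipe/device resistance curve: H_res = H_s + K0*Q^2."""
    return h_static + k0 * q * q

def operating_point(n_pumps, k1, k2, k3, h_static, k0, q_max=50.0, tol=1e-6):
    """Bisection on f(Q) = H_pump(Q) - H_res(Q), which decreases monotonically in Q."""
    lo, hi = 0.0, q_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if pump_head_parallel(mid, n_pumps, k1, k2, k3) > system_head(mid, h_static, k0):
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    q = 0.5 * (lo + hi)
    return q, system_head(q, h_static, k0)

if __name__ == "__main__":
    # Placeholder coefficients (Q in m^3/h, H in m of water), chosen only for illustration.
    K1, K2, K3 = 12.0, -0.1, -0.05      # single-pump curve
    H_S, K0 = 2.0, 0.03                  # system resistance curve
    for n in (1, 2, 3, 4):
        q, h = operating_point(n, K1, K2, K3, H_S, K0)
        print(f"{n} pump(s): Q = {q:5.2f} m3/h, H = {h:5.2f} m")
```

Running the sketch shows the familiar diminishing return of adding pumps in parallel against a fixed resistance curve, which is consistent with the measured increments of main-pipe flow when going from three to four channels.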
Unsteady flow performance verification refers to the characteristics of the main pipe pressure changing with time when the system status changes. For example, in the parallel operation of three channels and the standby of one channel, different switching signals are given to the channel control valve so that the standby channel can be switched with one of the parallel channels to obtain the time-varying characteristics of the pressure of the main pipe during the switching process, including synchronous switching, delayed switching and advanced switching. In the parallel operation of three channels, a control signal of 0-100% opening is given to the main regulating ball valve to obtain the time-varying characteristics of the pressure in the main pipe. The simulation model is set with the same parameters, and the results obtained are compared with the experimental results. The performance verification of constant flow and unsteady flow is shown in Figures 6 and 7, respectively. It can be seen from Figure 6a that when different numbers of branches are connected in parallel, the pressure of the main pipe is inversely proportional to the opening of the pipe load simulation valve, and it conforms to the regulation law of the ball valve. That is, when the opening of the ball valve is between 80% and 100%, the pressure change is relatively gentle. There is little difference between the main pipe pressure value in the simulation model and the experiment, and the simulation and experiment have good consistency. Figure 6b shows that the flow of the main pipe is proportional to the opening of the pipe load valve, which also conforms to the regulation law of the regulating ball valve. Moreover, the increment of the main flow from three channels to four channels in parallel shows an increasing trend with the increase of the opening of the pipe load valve. It should be noted that when the pipe load valve opening is greater than 60%, the flow of the four-channel main pipe in parallel is greater than the range of the flowmeter (20 m 3 /h), and there is no test value temporarily. However, from the comparison between the simulation value and the experimental value of the main flow of the three-channel pipe in parallel, it can be seen that the simulation model is also consistent with the test bench data sample. Therefore, it can be considered that the simulation model based on AMESim can replace the testbed for subsequent research when the multi-channel confluent water supply is constant. The control process of the multi-channel concentrated water supply system is an unsteady flow state for the main pipe pressure, that is, when the main control valve on the main pipe changes or the control ball valve on the channel changes, the flow state in the main pipe is not stable, and the main pipe pressure changes with time. The model based on AMESim not only needs to have a high degree of consistency with the research platform in the steady state of the system, but more importantly, the simulation model also needs to have a good dynamic consistency with the research platform when the system state changes. From Figure 7a, it is evident that when the switching time and switching sequence of the two channels are different, the fluctuation of the main pipe pressure will also change. Compared with the simulation model, the main pipe pressure before and after the switching is different. 
This is because the switching of two centrifugal pumps and channels cannot be exactly the same, resulting in different pressure values of the main pipe before and after the switch. From Figure 7b, we see that when the three channels are connected in parallel, the opening time of the main control valve is 5 s, and the pressure on the main control valve increases gradually with the opening of the main control valve. There is a difference between the simulated value and the test value when it is turned on. This is because there is a certain initial pressure at the beginning of the research platform, but the overall trend is the same, and the experimental value and the simulated value have a high degree of consistency after the turn-on. In a word, although the results of the experiment and the simulation model under constant flow and unsteady flow have certain errors, the overall trend is the same. It can be considered that the simulation model is a reproduction of the research platform, and the simulation model can be used for system control research. Controller Model Based on LabVIEW The model established in LabVIEW is a co-simulation master and auxiliary compound control system, which is divided into two parts: the front panel and the block diagram. The front panel is used for the operation interface of co-simulation, in which the control system parameters can be set, including the determination of PID control parameters, set value pressure, etc. The corresponding pressure curve can also be read in real-time. The model in LabVIEW completes the interactive function of the two-model data by recalling the '.dll' file generated in AMESim (see Figure 8). There are three input parameters and three output parameters in the simulation interface of AMESim for data interaction with LabVIEW. After the MCCS co-simulation, in addition to observing the main pipe pressure change curve on the co-simulation operation control interface, more comprehensive and rich data can also be extracted from the AMESim software. For the same simulation process, the running results obtained from the operation control section are consistent with the simulation data directly extracted from AMESim, which proves that the co-simulation of LabVIEW and AMESim is correct. Design of the Master and Auxiliary Compound Control The function of the MCCS system is to meet the requirement of multiple water sources in supplying a piece of water-requiring equipment stably and continuously. According to the requirements of the MCCS system, the pressure of the MCCS device should not undergo a big sudden change on the premise that the flow rate after gathering meets the requirements, so as to avoid the adverse impact on the subsequent equipment. Especially when a channel must be switched due to the exhaustion of the water source, the pressure of the main pipe will inevitably fluctuate, causing instability in the entire water supply system and causing trouble in the use of terminal equipment. For example, pressure changes can cause cavitation in the centrifugal pump, and deviations in the drop points of water jets from fire monitors. Based on this, the control objective is obtained as follows: under the premise of meeting the water supply requirements, the terminal equipment can obtain continuous medium water and stable pipeline pressure. For switching conditions, the main pipe pressure can remain stable at the maximum value, reducing the pressure loss of the multi-channel concentrated water supply system and improving its efficiency. 
The pressure adjustment of the MCCS system seems simple, but it is difficult to achieve better results with conventional industrial technology. The reason is that this system has unfavorable factors such as nonlinearity, large lag, and large interference. Therefore, this article provides a compound pressure control strategy, including two parts: the main controller and the sub-controller. The main controller controls the main control valve and quickly adjusts the pressure on the main pipe to near the set value. The sub-controller is selectively opened for non-switched channels to control the pressure fluctuations caused by changes in the resistance of the switched channels and interference of the front water supply system, playing an auxiliary role in the pressure stability of the main pipe. The sub-controller adopts incremental PID control, because the incremental PID itself will not cause valve jitter and is more suitable for the small-range adjustment of branch circuits. The main controller adopts variable universe fuzzy PID control. This is due to the uncertainty of the equipment connected to the MCCS system. The conventional fixed fuzzy PID control cannot effectively adapt to the changes in system characteristics, and it is difficult to achieve the requirement of the main pipe pressure control under variable working conditions. The variable universe fuzzy PID controller can adjust the fuzzy universe via the expansion factor according to the difference in main pipe pressure, and overcome the limitation of the conventional fuzzy PID controller's limited adaptive ability. In summary, the MCCS compound control structure is shown in Figure 9. Variable Universe Fuzzy PID Controller Variable universe fuzzy PID control introduces the idea of a variable universe on the basis of a adaptive fuzzy PID control, improves the robustness of the control system, expands its adaptive ability, and further improves the steady-state accuracy and dynamic response of the control system. The variable universe fuzzy PID controller is mainly composed of an adaptive fuzzy PID control module and the contraction-expansion factor adjustment module. In the operation of the system, the expansion factor adjustment module continuously adjusts the expansion factor of the output and input universe according to the main pipe pressure deviation and the deviation rate, and then changes the fuzzy universe in the adaptive fuzzy PID control module, so that the actual control rules always remain high, realizing the stability, anti-interference and adaptability of the controller. The structure principle of the variable universe fuzzy PID controller is shown in Figure 10. Adaptive Fuzzy PID Control Module Adaptive fuzzy PID control is a combination of fuzzy control and PID control. The main idea is to first establish the fuzzy relationship between the three parameters of PID and the deviation e and deviation change rate ec, and then the real-time monitoring of e and ec and, according to the fuzzy logic PID controller of proportion, the integral and differential online adjustments. The design of a fuzzy PID controller mainly consists of four parts: fuzzification, determination of membership function, the establishment of fuzzy control rules and defuzzification. (1) Fuzzification of input and output The structure of the fuzzy controller is a two-dimensional structure with two inputs and three outputs. The difference between the measured main pipe pressure and the coordinated set value is e. 
The deviation e and the deviation change rate ec are the inputs, and the PID parameter adjustment values ∆K p , ∆K i and ∆K d are the outputs. First, the deviation, deviation change rate, proportional coefficient increment, integral coefficient increment and differential coefficient increment are fuzzified, the quantization universes corresponding to them are determined, and fuzzy language is used to express them. Through experiment, the deviation range of the main pipe pressure under different working conditions is −0.06 to 0.06, and the deviation change rate range is −0.9 to 0.9. In the test process, the PID parameters are roughly tuned according to the trial and error method, and the basic universes of the parameter adjustments can be obtained. The input and output variables are summarized in Table 3.

(2) Determination of membership function

To obtain the analysis results quickly, reduce the complexity of the control system and improve the operation efficiency, the membership function adopts the triangular membership function. Given the characteristics of different working conditions of the MCCS, the membership functions are set to be sparse at both ends of the input universe and dense in the middle, to meet the requirements of rapid response to large deviations and accurate regulation of small deviations. The membership functions of the input domain and the output domain are shown in Figure 11.

(3) Establishment of fuzzy control rules

According to the influence of the parameters K p , K i and K d on the PID control output characteristics of the multi-channel concentrated water supply system, combined with the existing reference literature and the operating experience of the research platform, the PID parameter adjustment principle is designed as follows. General principle: when the deviation e is large, the deviation should be eliminated as soon as possible under the premise of ensuring the stability of the system; when the deviation e is small, the stability of the system is the main concern. The fuzzy rules are shown in Table 4.

(4) Defuzzification

The centroid method is used for defuzzification, and the precise adjustment values of the PID parameters are obtained. Then, the PID control parameters K p , K i and K d are obtained by adding the initial values of the PID control parameters: K p = K p0 + ∆K p , K i = K i0 + ∆K i , K d = K d0 + ∆K d , where K p is the proportional coefficient, K i is the integral coefficient, and K d is the differential coefficient. ∆K p is the increment of the proportional coefficient, ∆K i is the increment of the integral coefficient, ∆K d is the increment of the differential coefficient, and K p0 , K i0 and K d0 are the initial proportional coefficient, initial integral coefficient and initial differential coefficient, respectively.

Contraction-Expansion Factor Adjustment Module

The idea of a variable universe can be understood as adjusting the universes of the input and output variables according to certain rules, as the actual requirements dictate. Let us set [−E, E] and [−EC, EC] as the initial universes of the input variables e and ec, respectively. The initial output universes of the proportional coefficient increment ∆K p , the integral coefficient increment ∆K i and the differential coefficient increment ∆K d are defined in the same way, and α(x) and β(x) are the contraction-expansion factors of the input and output universes, respectively.
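Before turning to the contraction-expansion factor design discussed next, the adaptive fuzzy PID module described above (triangular membership functions, a rule table, centroid defuzzification, and K p = K p0 + ∆K p , etc.) can be sketched compactly. The seven fuzzy sets, the rule table and the scaling constants below are illustrative stand-ins rather than the exact entries of Tables 3 and 4, and only the ∆K p channel is shown; ∆K i and ∆K d follow the same pattern.

```python
import numpy as np

LABELS = ["NB", "NM", "NS", "ZO", "PS", "PM", "PB"]
CENTERS = {lab: c for lab, c in zip(LABELS, range(-3, 4))}   # normalized universe [-3, 3]

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x):
    """Membership of a normalized crisp value in the seven triangular sets."""
    return {lab: trimf(x, CENTERS[lab] - 1, CENTERS[lab], CENTERS[lab] + 1) for lab in LABELS}

# Illustrative rule table for dKp (rows: e from NB to PB, columns: ec from NB to PB);
# this is a commonly used table from the fuzzy-PID literature, not necessarily Table 4.
RULES_DKP = [
    "PB PB PM PM PS ZO ZO",
    "PB PB PM PS PS ZO NS",
    "PM PM PM PS ZO NS NS",
    "PM PM PS ZO NS NM NM",
    "PS PS ZO NS NS NM NM",
    "PS ZO NS NM NM NM NB",
    "ZO ZO NM NM NM NB NB",
]

def delta_kp(e_norm, ec_norm):
    """Mamdani inference: min firing strength, max aggregation, and a discretized
    centroid defuzzification over the normalized output universe."""
    mu_e, mu_ec = fuzzify(e_norm), fuzzify(ec_norm)
    z = np.linspace(-3.0, 3.0, 121)
    agg = np.zeros_like(z)
    for i, le in enumerate(LABELS):
        row = RULES_DKP[i].split()
        for j, lec in enumerate(LABELS):
            w = min(mu_e[le], mu_ec[lec])
            if w == 0.0:
                continue
            out = row[j]
            clipped = np.minimum(w, [trimf(v, CENTERS[out] - 1, CENTERS[out], CENTERS[out] + 1) for v in z])
            agg = np.maximum(agg, clipped)
    return float(np.trapz(agg * z, z) / np.trapz(agg, z)) if agg.any() else 0.0

if __name__ == "__main__":
    # e and ec are assumed to be already mapped onto [-3, 3] by quantization factors.
    d_kp_norm = delta_kp(e_norm=1.8, ec_norm=-0.6)
    KP0, SCALE_KP = 4.5, 0.1            # initial Kp from the paper, assumed output scale factor
    print(f"normalized dKp = {d_kp_norm:+.3f}  ->  Kp = {KP0 + SCALE_KP * d_kp_norm:.3f}")
```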
There are currently three design methods: the functional contraction-expansion factor, the fuzzy reasoning contraction-expansion factor, and the error classification contraction-expansion factor. This paper chooses the calculation method based on the fuzzy inference type contraction-expansion factor, because the contraction-expansion factor calculation model based on fuzzy rules satisfies the monotonicity, duality, coordination, normality and zero avoidance of the contraction-expansion factor, and avoids the functional contraction-expansion factor's calculation model parameter selection [15,25]. At the same time, fuzzy rules are used to express the change law of the contraction-expansion factor to realize the online automatic adjustment of the contraction-expansion factor. The contraction-expansion factor model based on fuzzy rules is based on the actual control deviation and deviation change rate, and uses easy-to-understand language to describe the change law of the universe. That is, the universe shrinks when the deviation becomes smaller, and the universe expands when the deviation becomes larger. The size of the quantization factor and the scale factor actually reflects the expansion change of the corresponding domain. The fuzzy reasoning expansion factor is used to establish another fuzzy controller on the basis of the basic fuzzy PID controller, in order to modify the parameters of the quantization factor and the scale factor. The adjustment rules are as follows: when e and ec are large, the input domain should remain large; when e and ec are small, reduce the input domain to improve the pressure control accuracy, as shown in Table 5. (2) Output universe contraction-expansion factor fuzzy control The contraction-expansion factors β(kp), β(ki) and β(kd) of the output universe are divided into seven fuzzy language variables, which are extremely-small (Z), very small (VS), pretty small (S), small (SB), medium (M), large (B), extra-large (VB). A triangular membership function is used, as shown in Figure 12b. The adjustment rule of the output universe contraction-expansion factor: when e and ec are large, and the signs of the two are the same, this indicates that the target value and the process variable are very different. At this time, there should be a large output control amount, which will make the process variable quickly track the target value. The contraction-expansion factor is larger in order to increase the output control amount. When e and ec are large and their signs are opposite, this indicates that there is a big difference between the target value and the process variable, but the difference is decreasing. In this case, the system should have a small output control quantity, which will ensure that the process variable can track the target value quickly without causing a big shock. In other words, a small value of the contraction-expansion factor is taken. When e is close to zero and ec is very large, this indicates that the difference between the process variable and the target value is very small, but the process variable is deviating from the target value at a very fast speed. At this time, the system should have a large output control amount to restrain the actual value from deviating from the target. The contraction-expansion factors β(kp), β(ki) and β(kd) and their fuzzy control rules are shown in Table 6. Table 6. Contraction-expansion factors β(kp), β(ki) and β(kd) and their fuzzy control rules. 
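The effect of the contraction-expansion factors can be seen in a few lines: the input universes [−E, E] and [−EC, EC] are rescaled before the fuzzy inference of the previous sketch is applied, and the outputs would be rescaled by the β factors in the same spirit. For brevity the sketch uses a simple functional form for the factor, α = (|x|/X)^τ + ε, whereas the paper derives the factors from the fuzzy rules of Tables 5 and 6, so the numbers below are only indicative; the physical ranges ±0.06 MPa and ±0.9 are those quoted earlier, while τ, ε and n = 3 are assumptions.

```python
def contraction_expansion(x, x_max, tau=0.5, eps=1e-3):
    """Simple functional contraction-expansion factor: alpha = (|x|/X)^tau + eps.
    Small deviation -> small factor -> contracted universe -> finer control;
    large deviation -> factor near 1 -> (almost) the initial universe."""
    return min(1.0, (abs(x) / x_max) ** tau + eps)

def variable_universe_step(e, ec, e_max=0.06, ec_max=0.9, n=3.0):
    """Map the physical deviation onto the normalized fuzzy universe [-n, n] using
    quantization factors that follow the rescaled universes, Ke = n / (alpha * E)."""
    alpha_e = contraction_expansion(e, e_max)
    alpha_ec = contraction_expansion(ec, ec_max)
    e_norm = max(-n, min(n, e * n / (alpha_e * e_max)))
    ec_norm = max(-n, min(n, ec * n / (alpha_ec * ec_max)))
    return e_norm, ec_norm, alpha_e, alpha_ec

if __name__ == "__main__":
    for e, ec in [(0.03, 0.2), (0.005, 0.05), (0.0005, 0.01)]:
        e_n, ec_n, a_e, a_ec = variable_universe_step(e, ec)
        # The normalized pair would then feed the fuzzy inference sketched earlier, and the
        # resulting dKp/dKi/dKd would be scaled back through beta times the initial output universe.
        print(f"e={e:7.4f} MPa  alpha_e={a_e:.3f}  e_norm={e_n:+.2f}   "
              f"ec={ec:5.2f}  alpha_ec={a_ec:.3f}  ec_norm={ec_n:+.2f}")
```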
Valve Controller In the process of the MCCS, each channel and the main pipeline has a regulating ball valve. Due to the uncertain influence caused by the flow characteristics of the ball valve, the accuracy of the control volume is greatly affected. In addition, combined with the use of the multi-channel concentrated water supply system, that is, in order to obtain a large flow under the premise of pressure stability, which requires the adjustment of the ball valve, the opening cannot be too small. Therefore, it is of great significance to increase the control module of the adjusting ball valve to control the multi-channel concentrated water supply system. According to the flow characteristic curve of the ball valve, the local resistance coefficient is very small when the opening degree of the ball valve is 80-100%, and there is almost no change and no adjustment effect; when the opening degree of the ball valve is 0-50%, the local resistance coefficient increases sharply. Although the adjustment ability is strong, the flow rate is reduced greatly, which is contrary to the control objective of the multi-channel concentrated water supply system. When the opening degree of the ball valve is 50-80%, the local resistance coefficient gradually increases, which has a regulating effect, and the flow rate change amplitude is small. Therefore, combining the flow characteristic curve of the regulating ball valve and the actual operating conditions, the ball valve expert controller is designed. When the opening of the ball valve is 0-50%, and the increment of the ball valve is ∆u, then u = 50% + ∆u. When the opening of the ball valve is 50-80%, and the increment of the ball valve is ∆u, then u = u + ∆u. When the opening of the ball valve is 80-100%, and the increment of the ball valve is ∆u > 0, then u = 100%. Pressure Set Point Coordinated Controller In the process of MCCS, if the main pipe's pressure value is set too low, most of the energy of the parallel channels will be lost, reducing the efficiency of the multi-channel concentrating water supply. The set value of the main pipe's pressure is too high, and the regulating effect of the main pipe's pressure is not obvious. Therefore, an important link to ensure the pressure stabilization effect and the water supply efficiency of the multi-channel concentrated water supply control is to coordinate and optimize the pressure-setting value. There is an adjusting ball valve on each of the four channels, and the pressure fluctuation caused by switching can be adjusted by controlling the ball valve on the non-switching channels to play an auxiliary role. The regulating ball valve on the main pipe plays a leading role as the main control executive element. The pressure-setting value of the non-switching channel and the main pipe is related to the voltage stabilization effect of the MCCS system. The specific method is as follows: adjust the switching time of the switching branch valve to make the switching waveform a convex pressure wave, so that the pressure-setting value can be set as large as possible (that is, the pressure when the three channels are normally collected and supplied); secondly, record a set of pressure data in the pipeline when the water supply is stable before switching, and calculate the average value; finally, the average value is used as the pressure-setting value of the switching process in this pipeline. 
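The expert rules for the regulating ball valve and the set-point procedure described above translate almost literally into code. The sketch below is illustrative, and the handling of a negative increment in the 80-100% band, which the text does not specify, is an assumption.

```python
def valve_expert_rule(u, du):
    """Keep the main regulating ball valve inside its useful 50-80% band.

    u  : current opening, as a fraction in [0, 1]
    du : opening increment requested by the pressure controller
    """
    if u < 0.50:
        # Below 50% the flow loss is too large: jump back into the band.
        u = 0.50 + du
    elif u <= 0.80:
        # 50-80% is the effective regulating range: apply the increment.
        u = u + du
    elif du > 0:
        # Above 80% the valve barely regulates; a positive increment saturates.
        u = 1.0
    else:
        # Negative increment above 80%: not specified in the text; assumed
        # here to be applied directly so the valve can re-enter the band.
        u = u + du
    return min(max(u, 0.0), 1.0)

def switching_setpoint(stable_pressure_samples):
    """Pressure set value for the switching phase: the average of main-pipe
    pressure samples recorded while the supply is still stable."""
    return sum(stable_pressure_samples) / len(stable_pressure_samples)
```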
Simulation Result Analysis By using the co-simulation platform, the control process of the multi-channel concentrated liquid supply system is simulated, and the feasibility of the co-simulation platform and control scheme is verified through step response, sinusoidal tracking and anti-interference, respectively. In addition, in this simulation analysis, a traditional PID and an adaptive fuzzy PID controller are used as comparison methods to study the change in the main pipe's pressure after clustering, and explain the advantages of a variable universe fuzzy PID controller in the control of MCCS system. In order to facilitate the comparison and analysis, we use a set of PID parameters with better control effects (K p0 = 4.5, K i0 = 3.4, K d0 = 10, T = 0.05 s) while the rest of the basic parameters are exactly the same, and the control effect of three controllers is observed under different target signals. Step Response Set channel 1, channel 2 and channel 4 of the MCCS system to work, load the simulation ball valve opening to 100% with no interference, and only open the main controller. The set value of the pressure in the main pipe is set at a step from 0.025 MPa to 0.035 MPa, and the step response curve of the pressure in the main pipe is obtained, as shown in Figure 13. It can be seen from the figure that, when the set value of the main pipe changes in a step, the control effect of the MCCS system using the variable universe fuzzy PID is compared with the traditional PID control and adaptive fuzzy PID control, with a shorter adjustment time and higher control accuracy. Figure 14. The main pipe pressure under the three control methods has a certain lag and error. It can be seen from the figure that the changing trend of the tracking error also shows a sinusoidal trend. Moreover, the tracking error of the variable universe fuzzy PID control is compared with the tracking error of the adaptive fuzzy PID and the traditional PID, and is slightly smoother. It is proven that the variable universe fuzzy PID control has good dynamic characteristics and high tracking accuracy. Anti-Interference Performance There are a lot of interferences in the MCCS system that cause pressure changes in the main pipe, such as pressure changes caused by the centrifugal pump itself, valve jitter, water belt contraction and so on. In view of the accumulation characteristics of the MCCS system, the main pipe pressure fluctuation caused by the water source switching is also a factor that cannot be ignored. Set the load simulation ball valve opening to 100%; channel 1, channel 2 and channel 4 run in parallel, channel 3 is on standby. Switch between channel 2 and channel 3. The switching time for channel 2 s valve delays the start by 4 s and 10 s, respectively. The pressure value of the main pipe is set as 0.0392 MPa. According to the pressure of channel 1 before switching, the set pressure of channel 1 is 0.096 MPa. Then, at the same time, the system using different control methods is switched, and the real-time value of the main pipe's pressure is recorded. The main pipe pressure change curves under the control of traditional PID, adaptive fuzzy PID, variable universe fuzzy PID and master and auxiliary composite control are obtained, as shown in Figure 15. It can be seen from Figure 15 that, under the same switching conditions, compared with other controls, the main pipe's pressure fluctuation value is smaller, and the recovery time to the set value is shorter. 
Therefore, the master and auxiliary composite control method based on the variable universe fuzzy PID has the stronger anti-interference ability, and is a more suitable smooth switching control strategy. Experimental Verification In addition to computer simulation experiments, in order to verify the effective application of the master and auxiliary composite control strategy proposed in this paper in the MCCS system, a physical test platform is built in the laboratory. In the experiment, the AMESim simulation model is replaced by a physical test platform, and the LabVIEW programming controller in the simulation model is modified. Combined with a National Instruments (NI) acquisition card and output card, a 4-20 mA analog quantity is used to simulate the signal of the field sensor and the control signal of the adjusting ball valve. Through the co-simulation technology, the control parameters of the smooth switching process of the MCCS system are obtained, and are fine-tuned during the experiment process. Finally, the following control parameters are obtained: initial PID parameters K p0 = 4.86, K i0 = 3.375 and K d0 = 11; scale factor K Kp = 0.5, K Ki = 0.7 and K Kd = 1.67; quantization factor K e = 165 and K ec = 40. According to this parameter, the two-channel smooth switching is carried out in the test platform. The specific process is as follows: channel 1, channel 2 and channel 4 are connected in parallel for water supply, and channel 3 is in standby; after the start is stable, the pipeline pressures are set for the main pipe, channel 1 and channel 4; the switch is started between channel 2 and channel 3, while channel 2 delays the switch 4 s. The opening degrees of the load simulation ball valve are 100%, 75% and 50%, respectively. Finally, the analysis experimental data are recorded, as shown in Figure 16 and Table 7. The following points can be identified in Figure 16. In different load simulations of the main pipe, adaptive fuzzy PID, variable universe fuzzy PID and master and auxiliary composite control can reduce the maximum deviation of the main pipe pressure, and the performance of the master and auxiliary composite control has a slight advantage. With the increase in the simulated load of the main pipe, the three control methods have improved the control of the deviation rate. This can be inferred from Table 7. However, the basic function of the combined multi-channel concentrated water supply system is to superimpose the water supply and increase the water supply capacity. The simulated load of the main pipe cannot be too large, and the flow rate of the main pipe is lower than the flow rate provided before the aggregation. Comparing the control effect of the same main pipe load (Figures 15a and 16a), it can be seen that the adaptive fuzzy PID, the variable universe fuzzy PID and the master and auxiliary composite control are not as good as the simulation result, that is, the controlled main pipe pressure fluctuation amplitude is higher than the simulation result. The main reason for this experimental result is the dead zone characteristics of the main control ball valve. When the input signal of the main control valve changes a little, the actuator may not act. In this way, when the output of the controller is small, the opening of the main electric valve cannot be corrected in time, resulting in a large amplitude of pressure fluctuation in the main pipe. 
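The quantities compared in Figure 16 and Table 7, namely the maximum deviation of the main-pipe pressure from its set value and the time needed to return to it, can be computed from a recorded pressure trace as in the sketch below. This is an illustrative post-processing helper rather than anything taken from the paper; in particular, the ±2% tolerance band used to declare the pressure recovered is an assumption.

```python
def switching_metrics(t, p, p_set, tol=0.02):
    """Maximum pressure deviation and recovery time after a switching event.

    t, p  : equally long lists of time stamps and main-pipe pressure samples
    p_set : pressure set value for the switching phase
    tol   : relative tolerance band around p_set counted as 'recovered'
    """
    dev = [abs(x - p_set) for x in p]
    max_dev = max(dev)
    band = tol * p_set
    outside = [i for i, d in enumerate(dev) if d > band]
    if not outside:
        recovery_time = 0.0
    elif outside[-1] + 1 < len(t):
        recovery_time = t[outside[-1] + 1] - t[0]
    else:
        recovery_time = float("inf")   # never settled within the recorded window
    return max_dev, recovery_time
```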
On the other hand, when the controller parameter settings are more sensitive, the control system is prone to frequent fluctuations. Therefore, in order to avoid frequent fluctuations in the control system, the controller parameters should be set reasonably. From the analysis of the control response time, the response time of the master and auxiliary composite control is obviously lower than that of adaptive fuzzy PID and variable universe fuzzy PID control. Under the same switching fluctuation, the passing time is shorter, and the response speed is faster. The pipeline pressure control is common, but the research on pressure control in the MCCS system for fire-fighting is comparatively sparse. This paper demonstrates an exhaustive study of the variable universe fuzzy PID controller. Furthermore, the auxiliary controller of the branch pipeline is considered based on the dynamic characteristics of the switching process of the MCCS system. Therefore, a thorough study of the control strategy based on the master and auxiliary composite is illustrated for the main pipe pressure change. Additionally, the valve controller and pressure set point coordinated controller are added to the control strategy so as to apply the switching conditions. Traditional PID, single fuzzy PID and variable universe fuzzy PID can also realize the control of main line pressure. However, the outcomes reveal that the overall performance of the master and auxiliary composite control strategy is superior. The main pipe pressure control studies are examined under different delay times and different loads of the MCCS system. It is observed from the simulation and experiment results that the main pipe pressure with the proposed control strategy achieves excellent outcomes. Conclusions This paper analyzes the mathematical model of the MCCS system, and establishes a simulation model based on AMESim. According to the application scenario of the MCCS system, a master and auxiliary composite control strategy with variable universe fuzzy PID as the primary control is proposed, a co-simulation platform based on LabVIEW and AMESim is built, and the master and auxiliary composite controller is designed. Combining simulation and experimental testing, the following conclusions are obtained. By comparing the data of the experiment and the simulation model, the accuracy of the AMESim model is verified from the two aspects of the constant flow and the unsteady flow of the MCCS system. The main controller's precision and adaptive ability are improved by using the idea of the variable theory domain. A master and auxiliary composite control strategy is proposed, which further increases the anti-interference ability of the control system, especially the peak clipping ability of the pressure peak caused by the common switching condition of the MCCS system. According to the co-simulation technology, a co-simulation platform based on LabVIEW and AMESim is built, which can realize the switching of working conditions, the pressure adjustment of each channel, the setting of control parameters, and the data acquisition, processing and storage. It has good operability and an excellent human-computer interaction interface. Through the analysis and comparison of different control schemes, the effectiveness of the main and auxiliary compound control strategies is verified, and the controller parameter values are initially obtained, which reduces the workload of subsequent tests. 
The experimental results showed that compared with adaptive fuzzy PID and variable universe fuzzy PID control methods, the main and auxiliary compound control strategy has a strong anti-interference ability, a fast dynamic response speed, high stability and a good peak shaving effect. The results presented in this work can be used to guide the design and operation of the control system of the MCCS device, meet the stable and continuous requirements of the water supply system, and improve the efficiency of fire water supply. Of course, the research in this paper has certain limitations. The proposed master and auxiliary composite control strategy is only for pressure fluctuations caused by switching operating conditions. The next research direction of this paper will continue to improve the control scheme and expand its applicable working conditions.
A Quantitative Analysis of Country Relations in Foreign Direct Investment According to Dunning’s eclectic theory, the location advantages play a key role in international investment mode choice, in which the country relations are important determinants. In some previous studies, the country relations and another bilateral factor, the country distance, are often confused, which can result in the inconsistency of conclusions. And excepting political factors, the economic dependence and other relations are insufficiently considered in the literature. This article makes a distinction between relation and distance, and puts forward a simplified analytical framework, the indicator system, and some quantitative methods for country relations. The indicators, including political, economic, and social factors, can better satisfy the horizontal analysis of the outbound investment. The economic and social indicators are determined by the magnitude of interaction as well as the share in the home country, and hence, the evaluation results can reflect the differences between the two countries. Finally, by evaluating the relations of other BRICS countries with China, the rationality is illustrated. Introduction In the literature of location choice of outbound investment, many authors analyzed factors such as the foreign capital policy, economic development level, market size, infrastructure, resource endowment, trade barriers, labor force, and cost in foreign markets based on the eclectic theory of international market proposed by Dunning [1][2][3][4]. In reality, it is a fact often observed that outbound investment and other forms of economic cooperation do not uniquely depend on the unilateral environment of the host country. Indeed, the investment behavior may be found paradoxical sometimes if only the investment environment in the host country is taken into account. Buckley et al. [5] found in their empirical analysis that China's investment would flow into the high political risk areas. Kolstad and Wiig [6] found that China's foreign direct investment outflows tend to seek rich natural resources in weak constitutional system countries. And Ramasamy et al. [7] also found that China's state-owned enterprises do not avoid high political risk in a perspective of enterprise. is seems an irrational risk attitude. e authors ascribe this to the strategic motivation of the Chinese government and believe that existing theories need to be developed to explain such phenomena. e search for resources of Chinese outward foreign direct investment (OFDI) is understandable because China has been the largest manufacturing country for a long time. However, it is impossible for China to ignore high political risks in its strategic considerations. In fact, strategic decision-making should incorporate risk diversification. What are the key factors behind such seemingly political risk-seeking behavior? As a typical country risk, political risk, sometimes refers to geopolitical risk, is the risk that investment returns may suffer from political changes or turbulence in a country. e instability that affects investment returns may result from changes in governments, legislatures, other foreign policymakers, or military control institutions. Evidently, political risk is becoming increasingly important in today's transnational economic and trade activities because of the trade wars and COVID-19 pandemic. e classical country risk theory measures political risk from the perspective of a hypothetical "average person." 
In other words, all investors from different countries are assumed to face exactly the same political risk in the host country, which is the main reason why classical theory cannot explain some real phenomena. For example, China's OFDI and project contracting decisions mentioned above, as well as particular Chinese investment projects, are banned in some countries. Actually, the country relations are extremely important in the decision-making of international economic and trade activities. Although it is not always the decisive factor, it can strengthen or weaken the effect of other influencing factors. Stable and friendly political relations, past cooperation experience, and cultural proximity among countries can enhance investor confidence, reduce investment uncertainty, and have a positive impact on investment choice. In practice, positive country relations can break through geographical constraints, economic gap, and cultural distance, and greatly improve the breadth and depth of economic and trade cooperation between the two countries, while poor country relations make investors have to give up their projects in spite of superior market conditions of the host country. From the longitudinal perspective, the country relations are of historicality and variability. e historicality refers to the accumulation of bilateral relationships over time, both positive and negative ones. e past and current friendly relationships can drive more exchanges and interactions in the future, so that the relationships will be continuously consolidated and there will be a certain type of inertia, forming a virtuous cycle. e variability refers to the sudden occurrence of conflicts, and the friendship may immediately cool down or even freeze. From the transverse perspective, country relations are the comprehensive result of economic, political, social, and cultural interactions between countries. Political and economic relations influence each other and generally dominate the country relations, while social relations evolve in a subordinate way. is is the complexity of country relations. " e Belt and Road Initiative" proposed by China, for example, put forward the overall layout of policy coordination, connectivity of infrastructure, unimpeded trade, financial accessibility, and people-to-people bonds, which from the perspective of the country relations are to enhance economic relations as the guidance, to strengthen political relations as assistance, while improving the people's exchange and friendship. us, the country relations will be improved further and will have a very positive impact on economic cooperation. It may be because of the historicality, variability, and complexity that the quantitative analysis of country relations is quite difficult and hardly seen in the literature. In contrast, the analysis of unilateral environmental factors of host countries is much easier. In order to study the rationale behind complex decision-making in foreign economic and trade cooperations, it is necessary to investigate the measurement of country relations. is article explores an evaluation framework of country relations from the aspects of politics, economy, and society, based on the historical interactions, cooperation, and exchanges between two countries. e evaluation results can be applied to the decision-making analysis of international trade, transnational investment, and project contracting. 
Literature Review When studying the development and change of foreign economy and trades, location choice, and other issues, many scholars have taken into account the influence factors of both the home and host countries. For instance, the trade gravity model takes the economic sizes of the two countries as positive factors and the geographical distance of the two countries as a negative factor. is basic model and its extensions, which constantly incorporate market level, resource endowment, cost, risk, and other factors, can better explain the changes in the development of foreign economic trades [8][9][10][11][12][13][14][15]. ose factors can be classified into political relations, economic relations, social aspects, and others. Politics-related bilateral factors mainly include the political connections and interactions as well as the differences in institutional management between the two countries. Li [16] utilized the "event data analysis" method and introduced a "conflict-cooperation model" for the study of international relations. Yan and Zhou [17] argued that the relational scores should be equal to the oneperiod lagged scores plus the impacts of the current period events, and the marginal effect of the new events could vary with the current relation status. Ma et al. [18] adopted the key factors accumulation method, including eight key factors, which improves the comparability between different types of events and consistency of scores. Knill et al. [9] measured the bilateral political relations by UN voting inconsistency. Zhang et al. [19] clarified the theoretical mechanism of the interaction effect between them, including four aspects of bilateral political relations. ere are also many studies on OFDI decision-making from the perspective of institutional distance and corruption distance [20][21][22][23]. Bilateral factors related to the economy are mainly the economic and trade cooperation in the past and the difference between the economic development levels. Chen and Li [24] studied the influence of geographical distance, institutional distance, economic distance, and cultural distance on location choice in " e Belt and Road" countries' international production capacity cooperation. Shi et al. [25] discussed the influence of distance to the multinational enterprise host selection decisions and established a national distance model. Blanc-Brude et al. [26] verified empirically that economic distance could explain FDI location better than geographical distance and administrative distance. One can also see more discussions on the relationship between economic distance and FDI in Cui and He [27]. Bilateral social factors include population movements between the two countries and the differences in language, religion, culture, etc. Based on the Hofstede cultural dimensions, Kogut and Singh [28] defined cultural distance by a normalized Euclidean distance squared. Yin and Lu [29] analyzed the complex nonlinear relationship between cultural distance and international direct investment. Yan and Li [10] found empirically the relation between location choice of Chinese enterprises and country risk, cultural difference, market size, etc. Slangen and van Tulder [30] pointed out that cultural distance and political risk are suboptimal, while the governance quality of foreign countries is a better proxy for external uncertainty. Based on the World Values Survey (WVS), Gustavode et al. [31] determined the cultural distance between countries using cluster analysis with the Euclidean metric. 
Beugelsdijk et al. [32] tested the impact of home-host national cultural distance on foreign affiliate sales and found the moderating role of cultural variation within host countries. Tang [33] argued that the cultural distance in individualism encourages bilateral FDI activities, while the power distance does not. ere are also other studies on the influences of cultural distance on international economic and trade activities [34][35][36][37][38]. Other frequently considered bilateral factors are geographical distance, knowledge distance, and colonial relations. In Bailey and Li [39]; the geographical distance is defined as the greater circle distance of the geographic center of the two countries. Gao [40] and Xu et al. [41] measured the geographical distance by the straight-line distance between national capitals. Aggarwal et al. [42] studied the impact of geographical distance and cultural distance on foreign portfolio investment using the gravity model. Chen et al. [43] studied the national distance and selected the dimensions of geographical, cultural, institutional, and economy for research. Ghemawat [44] put forward the four dimensions of the cultural, administrative, geographical, and economic distance between countries. Liu et al. [45] comprehended several national distance factors from the aspects of geographical, cultural, economic, political, knowledgeable, diplomatic, and global connective. Drogendijk and Martin [46,47] determined the country distance index from three basic dimensions: physical distance, socio-economic factors, and cultural and historical linkages. In addition, other scholars have studied the influence of corruption distance, institutional distance, psychological distance, migration, and historical relationship on FDI [48][49][50][51][52][53][54]. e measurements of different bilateral factors can be quite different. Bilateral political relations are usually calculated by event-history analysis on political behaviors, leaders' visits, mutual comments, joint engagement in international organizations and military actions. Economic distance is usually calculated by the differences in economic indicators between the two countries. Although the geographical distance is defined as the distance between national capitals, the distance between important ports is a good alternative. In most previous studies, cultural distance is calculated based on the Hofstede cultural dimension data or WVS value survey data. e institutional distance is calculated with the Corruption Perceptions Index (CPI) or the Worldwide Governance Indicators (WGIs). Now, let us comb through, sort out the indicators, and specify some of the terminologies in the literature. e bilateral indicators defined by the difference between the two countries' individual indicators are often called distances. Both country distance and country relations are bilateral indicators, but the difference is that country distance only considers the comparison of the situation of the two countries, while country relations are the exchange between the two countries. If no distinction is made, as in the existing literature, only the country distance is usually considered, and it may be easy to overlook the importance of country relations. Here, we would like to highlight that this may or may not differ from the conventional concept of distance in mathematics. 
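The distinction drawn here, namely that a distance is computed from attributes each country holds separately while a relation is computed from an interaction between the two, can be made concrete with a small sketch before it is formalized in the next paragraph. All numbers below are hypothetical, and the variance-corrected cultural distance is only meant to echo the Kogut and Singh-type index cited above.

```python
# Hypothetical attribute data for home country A and host country B.
gdp_per_capita = {"A": 12000.0, "B": 9000.0}               # individual attribute
hofstede = {"A": [80, 20, 66, 30], "B": [49, 38, 76, 40]}  # four cultural dimensions
dim_variance = [450.0, 520.0, 390.0, 610.0]                # cross-country variances

# Hypothetical interaction data between A and B.
exports_a_to_b = 55.0      # bilateral flow
total_exports_a = 2500.0   # home country's total exports

def economic_distance(x_a, x_b):
    """A distance: built only from each country's own attribute."""
    return abs(x_a - x_b)

def cultural_distance(idx_a, idx_b, var):
    """A Kogut-Singh-style index: variance-corrected mean squared difference."""
    return sum((a - b) ** 2 / v for a, b, v in zip(idx_a, idx_b, var)) / len(idx_a)

def trade_relation(flow_ab, total_home):
    """A relation: built from the bilateral flow itself (share of exports)."""
    return flow_ab / total_home

print(economic_distance(gdp_per_capita["A"], gdp_per_capita["B"]))   # 3000.0
print(round(cultural_distance(hofstede["A"], hofstede["B"], dim_variance), 3))
print(trade_relation(exports_a_to_b, total_exports_a))               # 0.022
```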
For example, the economic distance refers to the difference between the levels of economic development in the two countries, while the cultural distance is usually calculated by the European distance using the Hofstede cultural data. Indicators are called relations if they are derived from the interactions between home and host countries. For example, the number of high-level exchanges and interaction between the two countries is scored and used to measure bilateral political relations. e historical import and export volume of the two countries is used to measure the bilateral economic relationship. As Figure 1 shows, let A and B be the home country and the host country, respectively, and x A and x B represent certain individual indicators of the two countries, respectively. e indicator f(x A , x B ) derived by a certain kind of difference between x A and x B is called distance. In addition, let x AB be a bilateral interaction between the two countries. A bilateral relationship can be measured by a certain function g (x AB ) of x AB . Having combed the indicators and clarified the concepts of relations and distance, we find that the relation indicators in the literature mainly include bilateral political relations and diplomatic relations, which lay more emphasis on political aspects, and less on economic and social relations. We argue that historical economic exchanges and cooperation are a foundation for future cooperation, and civilian interaction and exchanges play a non-negligible role in country relations, which are usually treated as exogenous environmental variables in overseas investment decisionmaking. In fact, the dependence of the host economy on the home country and the civilian relationships can greatly increase foreign investors' confidence. In turn, the home country's investment and engineering contracting can further promote the country-to-country relations. Hence, the economic and civilian relationships must be incorporated in the research of country relations. Evaluation Framework and a Simplified Model. We propose a theoretical framework for country relations evaluation with three primary indicators of political relations, economic relations, and social relations. e principles of the secondary and tertiary indicator selection are continuous observability, universality, all-sidedness, and relativity as well. Here, continuous observability means that the data are time series. Universality requires that the selected indicators apply to most country-to-country relations. For this reason, the indicators of colonial relations and common currencies are incorporated with dummy variables in our article. All-sidedness means the inclusion of all or nearly all elements or aspects. For example, the indicator of high-level exchanges and interaction should be extended to include common international networks and organizations. Relativity refers to dimensionless relative measurements for economic relations and social relations, such as differences and/or ratios. Suppose that country A is the largest trading partner of country B, while B is not the largest trading partner of A. e economic dependence of B on A is clearly higher than that of A on B. We use trade and FDI shares, instead of volumes and amounts, to measure the economic relations. e political relations are stipulated by five subindicators. ey are high-level exchanges and interaction, relationship statements, international cooperation, political conflicts, and colonial relations. 
Among them, the relationship statements are the foundation, the frequent high-level exchanges and interaction reflect the activity, international cooperation reflects the extent of the relation, and political conflicts assess the impact of negative events. e indicator of high-level exchanges and interaction includes four levels, that is, mutual visits of heads of state, meeting of heads of state in a third country, mutual visits of prime ministers and ministers of foreign affairs, and meeting of prime ministers and ministers of foreign affairs in a third country. As to political conflicts, being negative events, we also define four levels in the light of Ma et al. [18]: lowering of diplomatic level, serious military conflict, minor military conflict, and declared protest attitude. Generally, the effect of political events diminishes gradually because memory fades due to the mere passage of time, as per the decay theory in psychology. Such a decay effect has not been incorporated into the event studies on country relations up to now. We take it into consideration in our model with an attenuation coefficient. See Section 3.2. e indicators of international cooperation allow to consider the bilateral relations in the context of multilateral interaction. e joint participation in important international organizations is conducive to enhancing cooperation between the two countries under consideration, such as in the Shanghai Cooperation Organization, the ASEAN Regional Forum, the G20, and BRICS, which have greatly boosted political and economic exchanges among participating countries. e indicator of relationship statements on bilateral relationships includes the full diplomatic relations, which manifests stable medium to long-term country relations, and other joint statements on relation augmentation. Four levels of relationship are considered in this article to reflect the different strengths of the relationship between the two countries. e time length of the existence of the relationship is also incorporated into our model to reflect the degree of consolidation of relations. e indicator of economic relations, there are seven secondary indicators, including the share of imports, the share of exports, the share of FDI, the share of OFDI, the closing of trade ports, the raising of tariffs, and common currencies. e negative subindicators of economic relations are sorted out from the positive ones. e positive economic indicators represent the degree of economic exchanges between the two countries, measured with the shares of bilateral trades and investment in the total of that of each country. e negative subindicators embody events such as closing trade ports and raising tariffs. e indicators of social relations include the share of entries, share of exits, and ethnic conflicts. For the social relations, the positive indicator is defined by the total flows between citizens of the two countries, while the negative indicator represents the negative social events occurred in the host country against the home country, such as protests and product boycotts. e structure of our simplified model of country relations is shown in Figure 2. Indicator Measurements. roughout the article, the symbols for the first-level indicators are P for political relations, E for economic relations, and S for social relations. e political relations indicators of high-level exchanges and interaction and political conflicts are measured using the event analysis method. 
For the details, one can see the conflict-cooperation model proposed for the quantitative analysis of Sino-US relations during the eight years of Clinton's administration [16] and the improved scoring method of country relations in the work of Yan and Zhou [17]. Table 1 shows the indicator system and the data that will be used. The valuation process can be described as follows. As pointed out above, the attenuation of the effects of political events must be incorporated. The simplest way is to make use of the famous Ebbinghaus method, also known as the Ebbinghaus forgetting curve [55]. Suppose an event occurs at time 0. The attenuation coefficient, denoted by A_t, is defined as the proportion of the event effect preserved in people's memory, provided there is no interference. The attenuation coefficient in Wozniak et al. [55] can be rewritten accordingly, with the year as the unit of time, as equation (1). Let A_0 be the score of a bilateral interactive event occurring at time 0. The residual effect in year t is then equal to A_t · A_0. The real attenuation of the effects of positive events may differ from that of negative events. For simplicity, we assume symmetry between the two kinds of attenuation. (Figure 1: Data sources and classification of different indicators.) High-Level Exchanges and Interaction. Four levels of high-level visits and interactions are collected here. The first level refers to visits by heads of state to each other's countries; the second level refers to meetings between heads of state in a third country when participating in international affairs; the third level refers to visits by prime ministers or foreign ministers to each other's countries; the fourth level refers to meetings between prime ministers or foreign ministers in a third country when participating in international affairs. According to the level and importance represented by the four cases, the lowest level is assigned 0.2, and the score is doubled each time the level rises by one, so the four levels are assigned 1.6, 0.8, 0.4, and 0.2, respectively. The scores correspond to the importance level of the event, as shown in Table 2. Let n_lt be the number of high-level exchanges and interactions at level l in year t, and hle_l be the event score, hle_l ∈ {1.6, 0.8, 0.4, 0.2}. The annual score is P1_t = Σ_l n_lt · hle_l, ∀t, t = 0, 1, . . . , T. The status of the political relations at T is equal to the cumulative residual scores of the high-level exchanges and interactions during the observation periods, P1(T) = Σ_{t=0}^{T} A_{T−t} · P1_t, where A_t is computed from (1). Relationship Statements. Different official statements of country-to-country relations represent different degrees and characteristics of cooperation. The indicator value can be determined according to the statement of the relationship. Pan and Jin [56] set the value of a partnership country equal to 3, a cooperative relationship to 2, a diplomatic relationship to 1, and no diplomatic relationship to 0. Zhang et al. [19] set the comprehensive strategic partnership of cooperation and comprehensive strategic partnership equal to 3, the strategic partnership of cooperation and strategic partnership to 2, the comprehensive cooperation partnership or partnership to 1, and other relations to 0. Men and Liu [57] grade the relationship statements by three levels: all-weather, strategic, and general. Those authors classified the country-to-country relationship expressions into several different levels, and each level reflects a certain degree of relationship.
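Before continuing with the relationship statements, the attenuated scoring of the high-level exchanges described above can be sketched as follows. The attenuation function is deliberately a caller-supplied placeholder, because the closed-form coefficient of equation (1) is not reproduced in this excerpt; the event counts are invented for illustration, while the level scores are the ones given in the text.

```python
HLE_SCORES = {1: 1.6, 2: 0.8, 3: 0.4, 4: 0.2}   # level -> score (Table 2)

def annual_hle_score(counts_by_level):
    """P1_t: number of events at each level times the level score, summed."""
    return sum(n * HLE_SCORES[level] for level, n in counts_by_level.items())

def cumulative_score(annual_scores, attenuation):
    """Cumulative residual score at the end of year T: each year's score is
    discounted by the attenuation coefficient for its age T - t."""
    T = len(annual_scores) - 1
    return sum(attenuation(T - t) * score for t, score in enumerate(annual_scores))

# --- hypothetical usage ------------------------------------------------------
events = [{1: 1, 3: 2}, {2: 1}, {1: 2, 4: 3}]          # three observation years
annual = [annual_hle_score(year) for year in events]   # [2.4, 0.8, 3.8]
A = lambda age: 1.0 if age == 0 else 0.3 / age         # placeholder, NOT equation (1)
print(cumulative_score(annual, A))
```

The same accumulation applies to the other event-based political subindicators (relationship statements and multilateral interactions); only the annual scoring step changes.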
For the purpose of method demonstration, we analyze all usual relationship expressions of China with the countries that have established diplomatic ties. As of 14 July 2020, 177 out of 192 countries had established diplomatic relations with China. There are 19 kinds of relationship expressions containing 12 keywords of different frequencies. Table 3 displays all those relationship expressions and keywords. These keywords stand for the scope, depth, and mode of cooperation between countries and can be classified into several levels. A relationship expression is usually a combination of several keywords. A change of the keyword combination implies an upgrade or an update of the relationship expression. By decomposing the expression into keywords, scoring each keyword, and reconstructing the expression, we can score each of the relationship expressions. This makes our scoring method more flexible than the direct assignment of scores to each relationship expression; the latter approach is used in almost all previous articles on the issue. The scores of the keywords are shown in Table 4. For any relationship expression, the score of each keyword involved can be found in Table 4 and then summed up to obtain the score of the relationship expression. Generally, a bilateral relation statement can be regarded as a political event. Its influence will gradually diminish if there is no recall. Let re_l be the scores of the keywords of a relationship expression at level l, re_l ∈ {0, 1, 2}. The score of an expression declared in year t is P2_t = Σ_l re_l, and the cumulative score of all relationship expressions declared until the end of year T is P2(T) = Σ_{t=0}^{T} A_{T−t} · P2_t, where A_t comes from (1). Multilateral Interaction. By virtue of the importance the countries attach to a meeting of an international or regional organization, state leaders of different levels attend the meeting. The multilateral interactions can then be scored according to the level of the state leaders interacting in the context of meetings of international and regional organizations. We take the scoring criteria in Yan and Zhou [17] as a benchmark and derive the multilateral interaction scores from it. Multilateral interactions in the context of international or regional meetings differ from bilateral ones, since they are less influential with a large number of participants, and in particular the important bilateral state leaders' meetings are held in one of the two countries, not in the context of multinational leaders' meetings. In general, the larger the number of members of an international organization is, the weaker the influence of such a meeting on the bilateral relations between members is. Suppose there are N_t international organizations taken into account in year t. Let m_nt be the number of members of organization n_t, 1 ≤ n_t ≤ N_t, and io_nt be the score of the state leaders' meetings of organization n_t. The total score P3_t of the multilateral interactions in a given year t is obtained by summing the scores io_nt after discounting them for the membership sizes m_nt, and the cumulative score of multilateral interactions in year T is therefore P3(T) = Σ_{t=0}^{T} A_{T−t} · P3_t, where A_t is defined in (1). Economic Relations. As pointed out in Section 3.1, bilateral economic interactions can be measured with the shares of imports, exports, FDI, and OFDI between the two economies.
E1_t denotes the share of imports of the home country from the host country in year t; E2_t the share of exports of the home country to the host country in year t; E3_t the share of the inward FDI of the host country from the home country in year t; and E4_t the share of the inward FDI of the home country from the host country in year t. Social Exchange Indicators. The subindicators of the social relations include the share of entries S1 and the share of exits S2, representing how frequently the host country's citizens travel to the home country and how frequently the home country's citizens travel to the host country, respectively. For a given year t, let entry_t be the number of trips of the host country's citizens to the home country in year t; exit_t be the number of trips of the home country's citizens to the host country; Outbound_host_t be the total number of outbound trips of the host country's citizens; and Outbound_home_t be the total number of outbound trips of the home country's citizens. Thus, S1_t = entry_t / Outbound_host_t and S2_t = exit_t / Outbound_home_t, t = 0, 1, . . . , T. Aggregation of Country Relations Index. It is necessary to point out that the dimensional consistency of all the indicators has been considered in our scoring system; that is, the higher the score of an indicator is, the closer the corresponding unidimensional relation between the two countries is. The sum of all the normalized indicators is a trivial aggregation method, which is based on ordinal information. Its underlying hypothesis of the equi-importance of the indicators is undesirable in general. Another simple and widely used method is the weighted additive aggregation. However, the justifiability of the weights is generally burdensome to verify. In addition, the additive aggregation methods are compensatory; that is, poor performance in the political indicators can be compensated by sufficiently high values of the economic and social indicators. The geometric aggregation is also a simple and less compensatory approach. Of course, one can always adopt noncompensatory aggregations, such as multicriteria analysis. We do not use the last approach in this article because only a small dataset is available for methodological illustration. Based on the scored second-level indicators, we compute the scores of the first-level indicators with the arithmetic mean and then aggregate the overall country relations using both the weighted additive and the geometric approaches. In the literature of country risk assessment, political and economic risks are often considered equally important. For example, in the country risk reports issued by the International Country Risk Guide (ICRG) and Euromoney Country Risk (ECR), political risk and economic-financial risk are of equal weights, while other risks are subordinated to them. The indicator system of the Country-Risk Rating of Overseas Investment from China (CROIC) includes five subindicators: economic foundation, solvency, social resilience, political risk, and relations with China, in which the economic indicators (the first two) and the political indicators (the third and fourth) are both weighted at 40%, while the social resilience indicator is weighted at 20%. In the light of those weighting methods, we set the weights of the political relations and the economic relations both at 40% and that of the social relations at 20%. The concise calculation method of each index is shown in Table 5, where R+ and R* represent the additive and geometric aggregates, respectively.
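The aggregation just described (arithmetic means within each first-level indicator, a 40/40/20 weighted additive aggregate R+, and a less compensatory geometric aggregate R*) can be sketched as below. A weighted geometric mean is assumed as the form of R*, since Table 5 itself is not reproduced here, and the subindicator values are hypothetical.

```python
def first_level(subindicators):
    """First-level score: arithmetic mean of the scored subindicators."""
    return sum(subindicators) / len(subindicators)

def aggregate(P, E, S, weights=(0.4, 0.4, 0.2)):
    """Overall country-relations index.

    R_plus : weighted additive aggregate (compensatory)
    R_star : weighted geometric aggregate (less compensatory), assumed form
    """
    wP, wE, wS = weights
    R_plus = wP * P + wE * E + wS * S
    R_star = (P ** wP) * (E ** wE) * (S ** wS)
    return R_plus, R_star

# Hypothetical first-level inputs for one country pair in one year.
P = first_level([0.55, 0.40, 0.35, 0.0, 0.0])   # five political subindicators
E = first_level([0.12, 0.09, 0.20, 0.05])       # import/export/FDI/OFDI shares
S = first_level([0.06, 0.04])                   # entry/exit shares
print(aggregate(P, E, S))
```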
The Initial Value of the Political Relations. Unlike the indicators of economic and social relations, E and S, which are defined by current-year data only, the value of the political relations P includes, by definition, the effect of lagged values accumulated from time −∞. Hence, the true initial value P_0 of P is intrinsically unobservable. One can set P_0 equal to zero if the number of observation periods is large enough. In fact, P_0 = ±1 in the extreme cases, and the remaining influence of P_0 on P_t is equal to ±A_t, which tends to 0. The influence of extreme initial values on R+_t is equal to 0.4A_t. This implies that any initial value P_0 will create an error in R+_5 of less than 0.4A_5 ≈ 0.061 and in R+_10 of less than 0.4A_10 ≈ 0.058. In our case study in the next section, we assume that P_0 = 0. In contrast, R* is more sensitive to the initial value assumption. For this reason, R* is not recommended when the number of observation periods is small. Case Study. There is no negative event recorded in our datasets, wherefore the five negative event indicators are set to 0. The calculated results are shown in Table 6. The chronological evolution of the political relations, economic relations, social relations, and overall country relations R+ and R* is shown in Figures 3-7, respectively. The overall relations of China with Russia are closer than those of China with the other BRICS countries. This is consistent with most people's perception. Evidently, the stable China-Russia political relations dominate the other relations. Both the additive and geometric aggregates show that the overall relations between China and the other BRICS countries can be ranked as China-Russia, China-India, China-Brazil, and China-South Africa. Between China and Brazil, the poorer performance in political and social relations is compensated to a certain extent by the higher values of the economic relations. It is also notable that the China-India overall relations are stable, although their social relations tend to be alienated.
Finally, the practicability and rationality of the evaluation model of country relations were illustrated through a case of China and other BRICS countries. e calculated results were easy to analyze and explain, and could reflect the changes of bilateral relations in time, as well as the characteristics and differences of relations in different aspects and different countries. ere are two points in this article that need further discussion and improvement. e first is the further optimization of the index system, because the current index system is based on theoretical and literature analysis and can be supplemented and verified in terms of completeness by organizing expert discussions. Second, more work is needed to compare and verify whether there is a more reasonable weight design of indicators, so as to get the optimal conclusion. Two related problems are deserved further study. e first is whether there is an interaction between the three aspects of relations and how they work together in international trade and investment. In addition, applying the method proposed in this paper to the study of unreasonable phenomena such as the risk tendency in international trade and investment, such as the investment decision analysis of China in regions of Africa and One Belt and Road countries, we can try to explain some research paradoxes, which will be a meaningful research direction. Data Availability e data that support the findings of this study are available from the corresponding author upon reasonable request. Conflicts of Interest e authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Astaxanthin Protects Dendritic Cells from Lipopolysaccharide-Induced Immune Dysfunction Astaxanthin, originating from seafood, is a naturally occurring red carotenoid pigment. Previous studies have focused on its antioxidant properties; however, whether astaxanthin possesses a desired anti-inflammatory characteristic to regulate the dendritic cells (DCs) for sepsis therapy remains unknown. Here, we explored the effects of astaxanthin on the immune functions of murine DCs. Our results showed that astaxanthin reduced the expressions of LPS-induced inflammatory cytokines (TNF-α, IL-6, and IL-10) and phenotypic markers (MHCII, CD40, CD80, and CD86) by DCs. Moreover, astaxanthin promoted the endocytosis levels in LPS-treated DCs, and hindered the LPS-induced migration of DCs via downregulating CCR7 expression, and then abrogated allogeneic T cell proliferation. Furthermore, we found that astaxanthin inhibited the immune dysfunction of DCs induced by LPS via the activation of the HO-1/Nrf2 axis. Finally, astaxanthin with oral administration remarkably enhanced the survival rate of LPS-challenged mice. These data showed a new approach of astaxanthin for potential sepsis treatment through avoiding the immune dysfunction of DCs. Introduction The immune system, as a tight and dynamic regulatory network, maintains an immune homeostasis, which keeps a balance between the response to heterogenic antigens and tolerance to self-antigens [1]. However, in some diseases, such as sepsis, rheumatoid arthritis (RA), multiple sclerosis (MS), systemic lupus erythematosus (SLE), and inflammatory bowel disease (IBD), this immune homeostasis is broken [2]. Sepsis is a highly heterogeneous clinical syndrome that mainly results from the dysregulated inflammatory response to infection, which continues to cause considerable morbidity and accounts for 5.3 million deaths per year in high income countries [3]. Recently, the incidence of sepsis is progressively increased and sepsis-related mortality cases remain at a high level in China [4]. The host immune response induced by sepsis is a complex and dynamic process. After infection, the conserved motifs of pathogens, termed the pathogen-associated molecular patterns (PAMPs), such as lipopolysaccharide (LPS, cell wall component of gram-negative bacteria) or lipoteichoic acid (cell wall component of gram-positive bacteria), are recognized by the pattern recognition receptors (PRRs) expressed by immune cells, and an overwhelming innate immune response is triggered in septic patients [5,6]. Under physiological conditions, the immune activation contributes to eliminating pathogens and clearing infected cells. However, when driven by sepsis, the immune homeostasis Firstly, the biosafety of astaxanthin was evaluated in the murine DCs. The cells were treated with astaxanthin and the cell viability was analyzed by the CCK-8 assay. The results revealed that the cellular viability was not changed until 24 h after treatment with astaxanthin up to 50 μM ( Figure 1A). Next, we examined the expression of CD69, which is a critical activation marker of DCs. After exposure to LPS (100 ng/mL) for 24 h, the expression of CD69 was upregulated, whereas they were significantly inhibited with treatment of astaxanthin (Figure 2A,B). In addition, we tested whether astaxanthin affected the production of cytokines in LPS-induced DCs. Significantly, pro-inflammatory cytokines (TNF-α and IL-6) were downregulated by astaxanthin in a dose-dependent manner ( Figure 2C,D). 
Surprisingly, the secretion of IL-10 was not increased (Figure 2E), implying that the suppressive effect of astaxanthin probably was not mediated through anti-inflammatory cytokine. These results indicated that astaxanthin attenuated the cytokines secreted by LPS-induced DCs. (Figure 2, caption fragment: (C-E) Supernatants were collected and TNF-α, IL-6, and IL-10 were detected by ELISA. The data shown are the means ± s.d. of three replicates and are representative of three independent experiments. Statistical significance is assessed by one-way ANOVA analysis to compare the results between different groups. ** p < 0.01.) Astaxanthin Reversed the Morphological Changes in LPS-Activated DCs Mature DCs were easily aggregated to form larger clusters and longer extensions [33]. Upon LPS stimulation alone, the size of clusters and the extension morphologies of DCs were increased, compared with the untreated and the astaxanthin-alone group. However, these processes were impaired by astaxanthin (Figure 3A,C). Meanwhile, the size of clusters and the cell shape index (major axis/minor axis) of each group were measured. As shown in Figure 3B,D, these two indexes were markedly increased after LPS stimulation. Treatment of astaxanthin significantly suppressed the increase of two indexes in LPS-induced DCs. These results indicated that astaxanthin attenuated the morphological changes of LPS-activated DCs.
These results indicated that astaxanthin attenuated the morphological changes of LPS-activated DCs. Astaxanthin Impaired the Phenotypic Maturation of LPS-Induced DCs Maturation is the key step in the DC-mediated regulation of immune responses. To investigate whether astaxanthin modulated the DC maturation, the expression levels of MHCII and costimulatory molecules in DCs were analyzed by FCM. With LPS treatment alone, the expressions of MHCII, CD40, CD80, and CD86 were markedly upregulated, whereas they were down-regulated remarkably with the treatment of astaxanthin ( Figure 4). These data suggested that astaxanthin diminished LPS-activated DC phenotypic maturation and compromised the immunostimulation of the activated DCs. Astaxanthin Impaired the Phenotypic Maturation of LPS-Induced DCs Maturation is the key step in the DC-mediated regulation of immune responses. To investigate whether astaxanthin modulated the DC maturation, the expression levels of MHCII and costimulatory molecules in DCs were analyzed by FCM. With LPS treatment alone, the expressions of MHCII, CD40, CD80, and CD86 were markedly upregulated, whereas they were down-regulated remarkably with the treatment of astaxanthin ( Figure 4). These data suggested that astaxanthin diminished LPS-activated DC phenotypic maturation and compromised the immunostimulation of the activated DCs. Astaxanthin Increased the Endocytosis Capability of LPS-Induced DCs In response to inflammatory stimuli, DCs trigger the process of maturation; downregulation of endocytosis is a hallmark of maturation [34]. To investigate whether astaxanthin modulated the endocytosis of DCs, the fluorescent marker dextran was used. As shown in Figure 5A,B, LPS alone significantly decreased the endocytosis capability of DCs compared to the untreated control, while astaxanthin enhanced the uptake of dextran in LPS-induced DCs. Moreover, confocal laser scanning microscopy (CLSM) images displayed the amount of Alexa Fluor 647-dextran existing in the body of LPS-induced DCs and was enhanced after the treatment of astaxanthin ( Figure 5C). These results suggested that astaxanthin significantly increased the endocytosis capability of LPS-induced DCs. Astaxanthin Increased the Endocytosis Capability of LPS-Induced DCs In response to inflammatory stimuli, DCs trigger the process of maturation; downregulation of endocytosis is a hallmark of maturation [34]. To investigate whether astaxanthin modulated the endocytosis of DCs, the fluorescent marker dextran was used. As shown in Figure 5A,B, LPS alone significantly decreased the endocytosis capability of DCs compared to the untreated control, while astaxanthin enhanced the uptake of dextran in LPS-induced DCs. Moreover, confocal laser scanning microscopy (CLSM) images displayed the amount of Alexa Fluor 647-dextran existing in the body of LPS-induced DCs and was enhanced after the treatment of astaxanthin ( Figure 5C). These results suggested that astaxanthin significantly increased the endocytosis capability of LPS-induced DCs. Mar. Drugs 2021, 19, x 6 of 16 . The results are from one representative experiment of three performed. Bars: 10 μm. Statistical significance is assessed by one-way ANOVA analysis to compare the results between different groups. ** p < 0.01. Astaxanthin Inhibited the Migration Capability of LPS-Induced DCs DCs that are stimulated with inflammatory mediators can mature and migrate from nonlymphoid regions to lymphoid organs for initiating T cell-mediated immune responses. 
This migratory step is closely related to the CCR7 expression of DCs [35]. To investigate whether astaxanthin modulated DC migration, the expression levels of CCR7 in DCs were analyzed by FCM. With LPS treatment alone, CCR7 expression was significantly increased, whereas it remarkably declined after treatment with astaxanthin (Figure 6A,B). Moreover, a chemotaxis assay in transwell chambers was used to examine DC migration on the basis of the attraction of mature DCs toward CCL19 or CCL21. The migration of LPS-induced DCs in response to CCL19 was remarkably inhibited after treatment with astaxanthin (Figure 6C,D). These results suggested that astaxanthin significantly inhibited the migration capability of LPS-induced DCs.

Astaxanthin Impaired the Allostimulatory Capacity of LPS-Induced DCs

Mature DCs are potent stimulators of allogeneic T cell proliferation in the mixed lymphocyte reaction (MLR) [36]. To determine the effects of astaxanthin on the ability of LPS-induced DCs to stimulate the MLR, DCs were collected and incubated with allogeneic CD4+ T cells. As shown in Figure 7, LPS-induced DCs stimulated proliferative responses more effectively than untreated DCs, while astaxanthin-treated DCs impaired the proliferative responses derived from LPS stimulation at all DC:T cell ratios tested. These results suggested that astaxanthin strongly impaired the allostimulatory capacity of LPS-induced DCs.

Figure 7. Astaxanthin decreased the ability of LPS-induced DCs to stimulate the proliferation of allogeneic T cells. After incubation with astaxanthin alone or plus 100 ng/mL LPS for 24 h, the collected DCs were used at two graded cell numbers (DC/T-cell ratios: 1:1 (A,C) and 1:5 (B,D)) to stimulate CFSE-labeled naive CD4+ allogeneic T cells (5 × 10⁵ responder cells per well). After 5 days, proliferation was detected by FCM. Data shown are the means ± s.d. of three replicates and are representative of three independent experiments. Statistical significance was assessed by one-way ANOVA to compare the results between different groups. ** p < 0.01.

Astaxanthin Protected Against the LPS-Induced Immune Dysfunction of DCs via Activation of the HO-1/Nrf2 Axis

To investigate whether astaxanthin modulated DC maturation through the HO-1/Nrf2 pathway, the expression levels of HO-1 and Nrf2 in DCs were analyzed by FCM. As shown in Figure 8A-D, after treatment of LPS-induced DCs with astaxanthin, HO-1 and Nrf2 were significantly upregulated compared with the LPS-only group. Next, to study whether HO-1 played an important role in the suppression of DC maturation, cytokine release (TNF-α and IL-10) (Figure 8I,J) and phenotypic markers (CD80 and CD86) (Figure 8E-H) were examined. The results showed that the effects of astaxanthin in LPS-induced DCs were diminished when DCs were pretreated with SnPP (an HO-1 inhibitor) (Figure 8E-J). However, CoPP (an HO-1 inducer) further enhanced the inhibitory effect of astaxanthin in LPS-induced DCs (Figure 8E-J). Therefore, the Nrf2/HO-1 pathway played an important role in the inhibition of LPS-induced DC maturation by astaxanthin.

Astaxanthin Protected Against LPS-Induced Sepsis in Mice

The overwhelming production of pro-inflammatory cytokines and mediators results in tissue damage or lethality. To determine the effects of astaxanthin on the LPS-induced septic lethal rate and the production of cytokines in LPS-challenged mice, the biosafety of astaxanthin was first evaluated in mice. As shown in Figure 1B, the body weight of mice in the astaxanthin groups was not changed compared with the control group, even at doses of up to 300 mg/kg.
Next, changes in body weight and survival rates were monitored after LPS injection for 3 days or 40 h, respectively. As shown in Figure 9A, LPS administration markedly increased the loss of body weight in mice. However, astaxanthin treatment mitigated this loss of body weight in the LPS-challenged mice. Moreover, astaxanthin decreased the mortality of the LPS-treated mice (Figure 9B). Next, the levels of cytokines in mouse serum were detected by ELISA. The results showed that administration of astaxanthin significantly decreased the production of TNF-α, IL-6, and IL-10 (Figure 9C-E). Taken together, these data demonstrated that astaxanthin effectively protected against LPS-induced sepsis in mice.

Discussion

Here, we explored the immunosuppressive properties of astaxanthin on the activation and maturation of DCs for the first time. Our data indicated that astaxanthin reduced the expression of the activation marker CD69 and of LPS-induced pro-inflammatory (TNF-α and IL-6) and anti-inflammatory (IL-10) cytokines by DCs; reversed the morphological changes of LPS-activated DCs; decreased the LPS-induced expression of phenotypic markers by DCs, including MHCII, CD40, CD80, and CD86; promoted endocytosis in LPS-treated DCs; and hindered the LPS-induced migration of DCs via downregulation of CCR7 expression. Furthermore, astaxanthin abrogated allogeneic T cell proliferation driven by LPS-induced DCs. Finally, astaxanthin enhanced the survival rate of LPS-challenged mice and inhibited the production of inflammatory cytokines in serum, suggesting that astaxanthin can strongly protect against LPS-induced sepsis (Figure 10). These results imply that astaxanthin may have a potential application in the treatment of sepsis.

Toll-like receptor (TLR) 4 signaling, which leads to the secretion of inflammatory products, has been considered a critical pathway in sepsis pathophysiology. LPS from gram-negative bacteria interacts with TLR4 to cause phagocytic cells to robustly generate a variety of proinflammatory cytokines [37]. CD69, a type II C-type lectin, is known as a very early activation marker, which is first upregulated upon primary activation [38,39]. In our study, we found that astaxanthin reduced the activation level of LPS-treated DCs by downregulating CD69 expression, suggesting that the immunosuppressive ability of astaxanthin is involved in the early inflammatory response. After DC activation, a mass of inflammatory cytokines is released. TNF-α, as a rapid proinflammatory cytokine, can strongly accelerate DC maturation [40]. Furthermore, TNF-α can also regulate other inflammatory cytokines, especially IL-6 [41], implying that astaxanthin might suppress the secretion of TNF-α and thereby result in the reduced expression of IL-6 in DCs. At the late stage of sepsis, an anti-inflammatory state may appear, characterized by high expression of IL-10, which may result in a further impaired immune response with an increased risk of nosocomial infections [42]. Therefore, we evaluated the effect of astaxanthin treatment on LPS-induced IL-10 expression and found that IL-10 was also decreased; thus, astaxanthin plays a remarkable inhibitory role in both the pro- and anti-inflammatory stages.

DCs possess two major states, immature DCs (iDCs) and mature DCs (mDCs). iDCs have a strong antigen capture ability with lower expression of phenotypic markers. After antigen uptake, iDCs are transformed into mDCs, which have a strong ability to stimulate the proliferation and differentiation of T cells by upregulating the surface levels of MHCII and costimulatory molecules. Moreover, DCs can easily mature into inflammatory DCs, thereby sustaining a continuous activation of the adaptive immune response at inflammation sites [43]. However, iDCs are able to induce immune tolerance, and have therefore been introduced as a therapy for systemic lupus erythematosus (SLE) [44,45]. In our data, astaxanthin effectively inhibited the LPS-induced phenotypic markers of DCs, including MHCII, CD40, CD80, and CD86, suggesting that astaxanthin was able to prevent the transformation of iDCs into mDCs. In addition, LPS-induced DCs treated with astaxanthin possessed a strong antigen capture ability, indicating that the DCs remained in an immature state. Furthermore, once DCs mature, the chemokine receptor CCR7 is highly upregulated, which guides the DCs to migrate toward a draining lymph node, a T cell-rich area with high expression of CCL19 and CCL21 (CCR7 ligands), for an expanded immune response [46]. Our data suggest that astaxanthin could probably block the connection between DCs and draining lymph nodes via downregulation of CCR7 expression, and thereby limit extensive immune responses. Even if such contact happened, LPS-induced DCs treated with astaxanthin hardly promoted the proliferation of allogeneic T cells in our allogeneic mixed lymphocyte reaction assay, which might be associated with the downregulation of MHCII, costimulatory molecules, and cytokines.

Inflammation is the most common feature of many chronic diseases and complications. Previous studies have revealed that the transcription factor nuclear factor erythroid 2-related factor 2 (Nrf2) contributes to the anti-inflammatory process by orchestrating the recruitment of inflammatory cells and regulating gene expression through the antioxidant response element (ARE) [47]. Heme oxygenase-1 (HO-1) is the inducible isoform and rate-limiting enzyme that catalyzes the degradation of heme into carbon monoxide (CO), free iron, and biliverdin, which is converted to bilirubin [48]. Several studies have demonstrated that HO-1 and its metabolites have significant anti-inflammatory effects mediated by Nrf2 [49].
It has been reported that activation of Nrf2 prevents LPS-induced transcriptional upregulation of pro-inflammatory cytokines, including IL-6 and IL-1β [50]. Here, we have demonstrated that astaxanthin inhibited the maturation of LPS-induced DCs via activation of the HO-1/Nrf2 axis. Interestingly, astaxanthin is a potent antioxidant, and the HO-1/Nrf2 axis is also a key antioxidative pathway; whether astaxanthin utilizes its antioxidant property to activate the HO-1/Nrf2 pathway and then initiate an anti-inflammatory response needs to be further investigated.

LPS and other PAMPs are implicated in the pathogenesis of sepsis and the activation of immune responses, resulting in pathological tissue injury and multiple organ failure [51]. Management of an excessive inflammatory response is a key strategy for sepsis treatment [52]. In the present study, we performed a series of experiments to determine the anti-inflammatory activities of astaxanthin using LPS-challenged mice. Our results showed that administration of astaxanthin improved the survival rate of LPS-challenged mice. Additionally, administration of astaxanthin reduced the levels of inflammatory cytokines in serum, including TNF-α, IL-6, and IL-10, which was in line with the results in DCs in vitro. These results imply that DC-targeted anti-inflammatory strategies have great potential in the treatment of sepsis.

Materials and Methods

Ethics Statement
The Jiangsu Administrative Committee for Laboratory Animals approved all of the animal studies according to the guidelines of Jiangsu Laboratory Animal Welfare and Ethics of the Jiangsu Administrative Committee of Laboratory Animals (Permission number: SYXKSU-2007-0005).

Generation of DCs
Male C57BL/6 mice, 4-6 weeks old, were obtained from the Animal Research Center of Yangzhou University (Jiangsu, China). The mice were housed under specific pathogen-free conditions for at least 1 week before use. DCs were isolated and cultured according to our improved method [53]. Briefly, bone marrow cells were extracted from the tibias and femurs of mice and then cultured in complete medium (RPMI 1640 supplemented with 10% FBS, 1% streptomycin and penicillin, 10 ng/mL GM-CSF and 10 ng/mL IL-4). After 60 h of culture, the medium was gently discarded and fresh medium was added. On day 6, non-adherent and loosely adherent DC aggregates were harvested and sub-cultured overnight. On day 7, only cultures with >90% of cells expressing CD11c by flow cytometry (FCM) were used.

Cell Viability Assay
The cytotoxicity of astaxanthin at different doses was assessed in DCs using the CCK-8 kit in accordance with the manufacturer's instructions. Briefly, 5 × 10³ cells were cultured in a 96-well plate. After treatment, 10 µL of CCK-8 was added to each well, and the cells were incubated for an additional 1 h. The absorbance was measured at 450 nm, and the results were expressed as a percentage of the control group.

Cytokine Assay
In vitro, DCs were incubated with astaxanthin and/or LPS for 24 h. Next, the levels of TNF-α, IL-6, and IL-10 in the culture supernatants were measured using ELISA kits (eBioscience) according to the manufacturer's instructions.

Phenotype Assay
DCs were harvested, washed twice with PBS, and incubated with FITC-MHCII, PE-CD40, PE-CD80, FITC-CD86, or their respective isotypes, at 4 °C for 30 min as per the manufacturer's guidelines. After being washed three times with PBS, DCs were analyzed by FCM.
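The normalization behind the Cell Viability Assay above (background-subtracted A450 expressed as a percentage of the untreated control) can be written out in a few lines. The Python sketch below is only an illustration: the plate readings, group names, and the explicit blank-subtraction step are assumptions of ours, not values or details taken from the paper.

```python
# Illustrative sketch: normalizing CCK-8 absorbance (A450) readings to the
# untreated control, as a percentage. Group names and values are hypothetical.
from statistics import mean, stdev

# Hypothetical A450 readings for replicate wells (not data from the paper).
raw_a450 = {
    "blank":     [0.08, 0.09, 0.08],   # medium + CCK-8, no cells
    "control":   [1.02, 0.98, 1.00],   # untreated DCs
    "asta_10uM": [1.01, 0.99, 1.03],
    "asta_50uM": [0.97, 1.00, 0.98],
}

def viability_percent(readings, blank_mean, control_mean):
    """Background-subtracted absorbance expressed as % of the untreated control."""
    return [100.0 * (a - blank_mean) / (control_mean - blank_mean) for a in readings]

blank_mean = mean(raw_a450["blank"])
control_mean = mean(raw_a450["control"])

for group, readings in raw_a450.items():
    if group == "blank":
        continue
    v = viability_percent(readings, blank_mean, control_mean)
    print(f"{group}: {mean(v):.1f} ± {stdev(v):.1f} % of control")
```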
Endocytosis Assay
The harvested DCs were incubated with 1 mg/mL FITC-dextran at 37 °C for 30 min as previously described [54]. After incubation, DCs were washed twice with PBS and analyzed by FCM. In addition, a 4 °C control was also performed to exclude surface adhesion.

Migration Assay
The chemotaxis of DCs was assessed in a 24-well transwell chamber (pore size, 5 µm; Corning) as described previously [55]. DCs (1 × 10⁵ cells) were seeded onto the upper chambers and CCL19 (200 ng/mL) was added to the lower chamber. After incubation for 4 h, the migrated cells were collected from the lower chamber, and the number of cells was counted by FCM.

Allogeneic Mixed Lymphocyte Reaction Assay
Male BALB/c mice, 6 weeks old, were obtained from the Animal Research Center of Yangzhou University (Jiangsu, China). Responder T cells were purified from mouse splenic lymphocytes using a CD4+ T cell isolation kit and labeled with CFSE according to the manufacturer's instructions. Next, these cells were cocultured in duplicate with DCs (DC/T cell ratios of 1:1 or 1:5) in a 5% CO₂ incubator at 37 °C for 5 days and analyzed by FCM.

HO-1 and Nrf2 Protein Expression Assay
The treated DCs were incubated with Alexa Fluor 647-HO-1, PE-Nrf2, or the respective isotype controls for 30 min at 4 °C. The cells were analyzed using FCM.

Body Weight Change Assay
Six-week-old C57BL/6 mice were divided into five groups (n = 10/group). In the treatment groups, the mice were given astaxanthin orally every 24 h for 4 days, at doses of 50, 100, and 200 mg/kg, respectively. At 48 h after the first oral administration, the mice received LPS (10 mg/kg body weight) by intraperitoneal injection, and body weight changes were monitored for 3 days.

Survival Rate and Cytokine Assay
At 48 h after the first oral administration, the mice received LPS (20 mg/kg body weight) by intraperitoneal injection, and survival rates were monitored for 40 h as described previously [56]. The mice were euthanized and blood was collected at 4 h after LPS injection, and the levels of cytokines (TNF-α, IL-6, and IL-10) in plasma were measured by ELISA according to the manufacturer's protocol.

Statistical Analysis
Results are expressed as means ± SD. Statistical significance between two groups was determined by the unpaired, two-sided Student's t-test. To compare multiple groups, one-way ANOVA with Tukey's post hoc test was performed using SPSS 17.0. * p < 0.05, ** p < 0.01.

Conclusions
In summary, our findings showed that astaxanthin inhibited the LPS-induced immune dysfunction of DCs via activation of the HO-1/Nrf2 axis in vitro and enhanced the survival rate of LPS-challenged mice in vivo, suggesting that it might serve as a potential candidate strategy for clinical sepsis.
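The Statistical Analysis section above specifies an unpaired two-sided Student's t-test for two-group comparisons and one-way ANOVA with Tukey's post hoc test for multiple groups (run in SPSS 17.0). The Python sketch below reproduces that style of analysis with SciPy and statsmodels as a rough equivalent; the cytokine values and group labels are placeholders rather than study data.

```python
# Illustrative re-implementation of the described statistics (t-test, one-way
# ANOVA + Tukey HSD). All numbers below are placeholders, not study data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical TNF-α measurements (pg/mL) for three replicates per group.
groups = {
    "control":  np.array([110.0, 120.0, 115.0]),
    "LPS":      np.array([900.0, 950.0, 880.0]),
    "LPS+asta": np.array([500.0, 540.0, 510.0]),
}

# Two-group comparison: unpaired, two-sided Student's t-test.
t, p = stats.ttest_ind(groups["LPS"], groups["LPS+asta"], equal_var=True)
print(f"LPS vs LPS+asta: t = {t:.2f}, p = {p:.4f}")

# Multi-group comparison: one-way ANOVA followed by Tukey's post hoc test.
f, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p_anova:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```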
v3-fos-license
2018-07-18T20:51:37.779Z
2017-05-05T00:00:00.000
20701468
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/anie.201700966", "pdf_hash": "d1c5f72dfcadfaccea377a1c27351d9376904f1d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2897", "s2fieldsofstudy": [ "Biology" ], "sha1": "d1c5f72dfcadfaccea377a1c27351d9376904f1d", "year": 2017 }
pes2o/s2orc
Ultrasensitive Measurement of Ca2+ Influx into Lipid Vesicles Induced by Protein Aggregates Abstract To quantify and characterize the potentially toxic protein aggregates associated with neurodegenerative diseases, a high‐throughput assay based on measuring the extent of aggregate‐induced Ca2+ entry into individual lipid vesicles has been developed. This approach was implemented by tethering vesicles containing a Ca2+ sensitive fluorescent dye to a passivated surface and measuring changes in the fluorescence as a result of membrane disruption using total internal reflection microscopy. Picomolar concentrations of Aβ42 oligomers could be observed to induce Ca2+ influx, which could be inhibited by the addition of a naturally occurring chaperone and a nanobody designed to bind to the Aβ peptide. We show that the assay can be used to study aggregates from other proteins, such as α‐synuclein, and to probe the effects of complex biofluids, such as cerebrospinal fluid, and thus has wide applicability. Conditions for α-synuclein aggregation Monomeric α-synuclein was incubated at a concentration of 70 μM in 25 mM Tris-HCl with 100 mM NaCl (pH 7.4) with constant shaking at 200 rpm for 5 h at 37 °C, conditions shown previously to result in the formation of oligomeric species [6] . CSF Sample The CSF sample was collected from a healthy individual (aged 65 years) by lumbar puncture. Standardized protocols for the collection and storage of CSF (www.neurochem.gu.se/TheAlzAssQCProgram) were followed. In short, the lumbar puncture was performed between 9 a.m. and 12 noon to collect 15 mL of CSF in sterile polypropylene tubes. The sample was divided into 1 mL aliquots that were frozen on dry ice and stored at −80 °C in Sarstedt 2mL tube. The time between sample collection, centrifugation, and freezing was maximum 1 h. Preparation of the nanobody Nb3 and clusterin Nb3 was prepared as previously described [7][8][9] . Briefly, it was recombinantly expressed in Escherichia coli [9] and purified using immobilized metal affinity chromatography and size-exclusion chromatography [7] . The concentration was measured by UV absorbance spectroscopy using a molecular extinction coefficient, which was calculated based on the sequence of the protein at 280 nm of 21,555 M −1 cm −1 . Clusterin was obtained as previously described [10,11] , and purified from human serum by IgG affinity chromatography or by affinity chromatography using MAb G7 [12] . Optimization of the dye filled vesicle preparation Initially we screened a series of different dye molecules for this assay. To ensure that we could attach vesicles to the surface and for probing surface coating protocols we used the dye rhodamine (Rh6G) for encapsulation. Thereafter, we tested the Ca 2+ -sensitive dyes Fluo-4, Fluo-8 and Cal-520 (Stratech Scientific Ltd, Newmarket, UK) and found that we detected the strongest increase in localized fluorescence intensity using the dye Cal-520. We tested vesicles of varying sizes (50, 100 and 200, 400 nm) and found that all these vesicles can be used in this assay. We probed vesicles containing varying concentrations of incorporated dye,1-100 μM, and found that improved signals can be detected at higher dye concentrations. Higher concentrations of the dye were found to be preferable for focusing of the instrument on samples incubated in L15 medium or samples that did not induce Ca 2+ influx. 
However, the incorporation of higher concentrations of dye molecules into the vesicles resulted in the surrounding solution containing a high concentration of free dye, we therefore performed size exclusion chromatography in order to remove free dye molecules from the surrounding solution. We tested both nonpurified and purified vesicle samples and found that we observed considerably less background signal using purified vesicles. Finally, based on these optimizations and our calculations (see Supporting Information Note 1 and 2, Supporting Information Fig. 1) we performed our experiments using purified vesicles composed of POPC with an average size of 200 nm containing Cal-520 at a concentration of 100 μM. To separate non-incorporated dye molecules from the vesicles, size-exclusion chromatography was performed in buffer using a Superdex TM 200 Increase 10/300 GL column attached to an AKTA pure system (GE Life Sciences) with a flow rate of 0.5 mL/min (Supporting Information Fig. 3). Preparation of PEGylated slides and immobilization of single vesicles Initially we screened a variety of surface treatment protocols [13][14][15][16][17][18] and for our experiments we optimized and followed a previously described protocol [18] with slight modifications to perform the actual experiments. Imaging using Total Internal Reflection Fluorescence Microscope Imaging was performed using a homebuilt Total Internal Reflection Fluorescence Microscope (TIRFM) based on an inverted Olympus IX-71 microscope. This imaging mode restricts the detected fluorescence signal to within 100-150 nm from the glass-water interface. A 488 nm laser (Toptica, iBeam smart, 200 mW, Munich, Germany) was used to excite the sample. The expanded and collimated laser beam was focused using two Plano-convex lens onto the back-focal plane of the 60X, 1.49NA oil immersion objective lens (APON60XO TIRF, Olympus, product number N2709400) to a spot of adjustable diameter. The fluorescence signal was collected by the same objective and was separated from the excitation beam by a dichroic (Di01-R405/488/561/635, Semrock). The emitted light was passed through an appropriate set of filters (BLP01-488R, Semrock and FF01-520/44-25, Semrock) ( Figure S14). The fluorescence signal was then passed through a 2.5x beam expander and imaged onto a 512 × 512 pixel EMCCD camera (Photometrics Evolve, E VO-512-M-FW-16-AC-110). Images were acquired with a 488nm laser (~10 W/cm 2 ) for 50 frames with a scan speed of 20 Hz and bit depth of 16 bits. Each pixel corresponds to 100 nm. All the measurements were carried out under ambient conditions (T=295K). The open source microscopy manager software Micro Manager 1.4 was used to control the microscope hardware and image acquisition [19,20] . Performing the Ca 2+ influx assay using TIRFM Single vesicles tethered to PLL-PEG coated borosilicate glass coverslides (VWR International, 22x22 mm, product number 63 1-0122) were placed on an oil immersion objective mounted on an inverted Olympus IX-71 microscope. Each coverslide was affixed at Frame-Seal incubation chambers and was incubated with 50 µL of HEPES buffer of pH 6.5. Just before the imaging, the HEPES buffer was replaced with 50 µL Ca 2+ containing buffer solution L-15. 16 (4×4) images of the coverslide were recorded under three different conditions (background, in the presence of Aβ42 and after addition of ionomycin (Cambridge Bioscience Ltd, Cambridge, UK), respectively). 
The distance between each field of view was set to 100 μm, and stage movement was automated (Beanshell script, Micro-Manager) to avoid any user bias (Figure S3). After each measurement the script allowed the stage (Prior H117, Rockland, MA, USA) to move the field of view back to the start position such that identical fields of view could be acquired for the three different conditions. We screened surface treatment protocols, PEG:biotin-PEG ratios, vesicle sizes, and different encapsulated Ca2+-binding dyes and their concentrations to maximize the sensitivity of this assay. Images of the background were acquired in the presence of L15 buffer. For each field of view, 50 images were taken with an exposure time of 50 ms. Thereafter, 50 µL of the aggregation reaction, diluted to a concentration of twice the targeted value, was added and incubated for 10 min. Importantly, we made sure that the glass coverslides were not moved during the addition of samples, and then images were recorded. Next, 10 µL of a solution containing 1 mg/mL ionomycin (Cambridge Bioscience Ltd, Cambridge, UK) was added and incubated for 5 min, and subsequently images of Ca2+-saturated single vesicles in the same fields of view were acquired.

Experiments with recombinant Aβ42 in CSF To study the influence of a complex environment on the Ca2+ influx, we took samples of recombinant Aβ42 aggregation reactions corresponding to t2 and serially diluted them in CSF to measure the concentration dependence of the Ca2+ influx. Firstly, we imaged the coverslides in the presence of 15 µL of L15 buffer. Then aliquots of recombinant Aβ42 were diluted in 15 µL of CSF, which was added to the coverslides and incubated for 10 min before images were acquired as described previously. Thereafter, we added ionomycin to the sample and imaged the identical fields of view using automatic stage movement to determine the Ca2+ influx.

Data analysis and quantification of the extent of Ca2+ influx The recorded images were analyzed using ImageJ [21,22] to determine the fluorescence intensity of each spot under the three different conditions, namely background (Fbackground), in the presence of an aggregation mixture (Faggregate), and after the addition of ionomycin (FIonomycin). The relative influx of Ca2+ into an individual vesicle due to aggregates of the Aβ42 peptide was then determined using the following equation: relative Ca2+ influx = (Faggregate − Fbackground) / (FIonomycin − Fbackground). The average degree of Ca2+ influx was calculated by averaging the Ca2+ influx into individual vesicles.

Supporting Information Note 1: Calculation of the concentration of an individual dye molecule entering into a vesicle The volume of a single vesicle with a diameter d can be calculated using equation (1), V = (4/3)π(d/2)³. We used vesicles with diameters of approximately 200 nm, which have a volume of 4.186 × 10⁻¹⁸ L. For a single molecule, the number of moles can be calculated using Avogadro's number (NAvogadro = 6.023 × 10²³). Using this value we can determine the concentration of a single molecule that enters a vesicle using equation (2), c = 1 / (NAvogadro × V). Thus, the concentration of a single molecule entering a vesicle with a diameter of 200 nm is 396 nM (Supporting Information Fig. 1).

Supporting Information Note 2: Rationalization for using vesicles with a diameter of 200 nm The effectiveness of our single vesicle assay is primarily determined by two parameters with different dependencies on the size of a vesicle - (i) high dynamic range and (ii) high sensitivity (Supporting Information Fig. 1).
High dynamic range is the capacity to detect differences in fluorescence intensity over a range of varying amounts of Ca 2+ influx within individual vesicles, without reaching saturation. The dynamic range of the assay described here is related to the maximum amount of measurable Ca 2+ influx (e.g. when all dye molecules within a single vesicle are saturated with Ca 2+ ), which is directly proportional to the volume of a vesicle. For example, using vesicles with a larger volume enables the encapsulation of more Cal-520 dye molecules, therefore reaching saturation of the maximum fluorescence intensity at larger amounts of Ca 2+ influx compared to a vesicle of a smaller volume with less Cal-520 dye molecules incorporated.
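The quantification and the two Supporting Notes above reduce to a per-vesicle intensity normalization plus simple volume arithmetic. The Python sketch below works through them: the spot intensities are invented for illustration, the 100 μM Cal-520 loading and the vesicle diameters are taken from the optimization described earlier, and the function names and example numbers are ours rather than the authors'.

```python
# Illustrative arithmetic for the single-vesicle assay: (i) per-vesicle Ca2+
# influx normalized to background and ionomycin-saturated intensities, and
# (ii)-(iii) how loading concentration converts to molecule counts per vesicle.
import math

AVOGADRO = 6.023e23  # value quoted in Supporting Note 1

def vesicle_volume_litres(diameter_nm):
    """Volume of a sphere of diameter d, in litres (1 L = 1 dm^3)."""
    radius_dm = (diameter_nm * 1e-9 / 2.0) * 10.0   # nm -> m -> dm
    return (4.0 / 3.0) * math.pi * radius_dm ** 3

def ca_influx(f_aggregate, f_background, f_ionomycin):
    """Relative Ca2+ influx of one vesicle: 0 = background, 1 = saturated."""
    return (f_aggregate - f_background) / (f_ionomycin - f_background)

# (i) Per-vesicle influx for two hypothetical spots (arbitrary camera units).
for f_agg, f_bg, f_iono in [(150.0, 100.0, 900.0), (420.0, 100.0, 900.0)]:
    print(f"relative Ca2+ influx: {ca_influx(f_agg, f_bg, f_iono):.2f}")

# (ii) Concentration of a single molecule inside a 200 nm vesicle (~396 nM).
single_molecule_nM = (1.0 / AVOGADRO) / vesicle_volume_litres(200) * 1e9
print(f"1 molecule in a 200 nm vesicle = {single_molecule_nM:.0f} nM")

# (iii) Expected Cal-520 copies per vesicle when loading at 100 uM, for the
# diameters screened during optimization; larger vesicles enclose more dye
# and therefore saturate only at larger amounts of Ca2+ influx.
for d in (50, 100, 200, 400):
    copies = 100e-6 * vesicle_volume_litres(d) * AVOGADRO
    print(f"{d:>3} nm vesicle at 100 uM Cal-520: ~{copies:.0f} molecules")
```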
v3-fos-license
2023-08-14T15:03:24.810Z
2023-08-10T00:00:00.000
260877954
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2023.1238269/pdf", "pdf_hash": "5a2feb41cf76525cbc2263cecc5442d33d1ed2b5", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2898", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "9ac923eefe8d0f795e04a66755311160ac8af8c6", "year": 2023 }
pes2o/s2orc
Association between alleles, haplotypes, and amino acid variations in HLA class II genes and type 1 diabetes in Kuwaiti children

Type 1 diabetes (T1D) is a complex autoimmune disorder that is highly prevalent globally. The interactions between genetic and environmental factors may trigger T1D in susceptible individuals. HLA genes play a significant role in T1D pathogenesis, and specific haplotypes are associated with an increased risk of developing the disease. Identifying risk haplotypes can greatly improve the genetic scoring for early diagnosis of T1D in difficult to rank subgroups. This study employed next-generation sequencing to evaluate the association between HLA class II alleles, haplotypes, and amino acids and T1D, by recruiting 95 children with T1D and 150 controls in the Kuwaiti population. Significant associations were identified for alleles at the HLA-DRB1, HLA-DQA1, and HLA-DQB1 loci, including DRB1*03:01:01, DQA1*05:01:01, and DQB1*02:01:01, which conferred high risk, and DRB1*11:04:01, DQA1*05:05:01, and DQB1*03:01:01, which were protective. The DRB1*03:01:01~DQA1*05:01:01~DQB1*02:01:01 haplotype was most strongly associated with the risk of developing T1D, while DRB1*11:04-DQA1*05:05-DQB1*03:01 was the only haplotype that rendered protection against T1D. We also identified 66 amino acid positions across the HLA-DRB1, HLA-DQA1, and HLA-DQB1 genes that were significantly associated with T1D, including novel associations. These results validate and extend our knowledge on the associations between HLA genes and T1D in Kuwaiti children. The identified risk alleles, haplotypes, and amino acid variations may influence disease development through effects on HLA structure and function and may allow early intervention via population-based screening efforts.

Introduction Type 1 diabetes (T1D) is a multifactorial autoimmune disorder, affecting over 8.7 million people worldwide and posing a major challenge to global healthcare systems (1). The aetiology of T1D is complex, involving a series of immunological and environmental factors that can trigger the disease in genetically susceptible individuals. The precise mechanism underlying β-cell destruction, leading to absolute deficiency of insulin and hyperglycaemia, is largely unknown. Hyperglycaemia develops after 80-90% of pancreatic β-cells are destroyed, providing a narrow window for therapeutic intervention (2). Insulitis is a major aspect of T1D pathogenesis, which is characterized by the infiltration of mononuclear cells, such as T cells, B cells, and macrophages, into pancreatic islet cells (3). T1D may lead to serious secondary complications involving neuropathy, nephropathy, and retinopathy (4); hence, early diagnosis is crucial in the treatment and management of the disease. Clinically, T1D is diagnosed by the presence of autoantibodies against pancreatic islet cells, including insulin autoantibodies (IAA), glutamic acid decarboxylase autoantibodies (GADA), islet antigen 2 autoantibodies (IA-2A), and zinc transporter 8 autoantibodies (ZnT8A) (5). Genetic predisposition to T1D has been evidenced by a positive family history and a heritability rate of over 50% in monozygotic twins (6).
The human leucocyte antigen (HLA) gene region, spanning a 7.6 Mb region on chromosome 6p21.3, is considered to be the strongest predictor of the disease, accounting for 40-50% of disease heritability (7). HLA class I and II genes are widely associated with several chronic debilitating autoimmune diseases, such as multiple sclerosis, lupus, thyroiditis, and T1D (8). Allelic and haplotypic combinations of three HLA genes, namely DRB1, DQA1, and DQB1, are widely associated with the development of T1D (7,9). Allele-specific sequence motifs within the HLA-DQ and HLA-DR regions possibly determine the shape of the peptide-binding grooves and modulate T cell repertoire activity (8,10). For instance, substitution of aspartic acid at amino acid position 57 of the HLA-DQ β chain tends to impart resistance, while replacement with non-Asp-57 has been associated with susceptibility to T1D in Caucasians (8,11). Similarly, in individuals carrying the different HLA-DR4 subtypes, sequence variations at positions β71 (engaged by glutamic acid/lysine/arginine), β74 (engaged by alanine/glutamic acid), and β86 (engaged by glycine/valine) lead to seven motifs (EAV, KAG, RAG, RAV, REG, REV, and KAV) that have a preferential impact on conferring resistance or susceptibility to T1D (10). According to the literature, multiple amino acid residues possibly impact the size and polarity of specific HLA anchor pockets and are likely to play a superior role in binding autoantigen epitopes and presenting them to the T helper cells needed for specific islet autoantibody production, indicating their potential role in T1D pathogenesis (10,(12)(13)(14)(15). As per the International Diabetes Federation, Kuwait ranks third among countries with an increased incidence of T1D (30). The incidence of T1D in children under the age of 14 years increased from 17.7 per 100,000 per year in 1992-1994 to 40.9 per 100,000 per year in 2011-2013 (31). Despite this, few studies have explored the impact of HLA variants on T1D pathogenesis (32, 33) in the Kuwaiti population. In the present study, we aimed to evaluate the association and contribution of HLA class II alleles with the risk of T1D in the paediatric Kuwaiti population using next generation sequencing. We intend to catalogue the entire spectrum of HLA class II alleles that impart susceptibility to, or render protection against, T1D. Ethics statement and study cohort The study protocol was approved by the Ethical Review Committee of Dasman Diabetes Institute and was in accordance with the guidelines of the Declaration of Helsinki and the United States Federal Policy for the Protection of Human Subjects. The study cohort consisted of unrelated individuals with T1D (95) and controls (150). Participants with T1D were recruited from the registry initiated and maintained at Dasman Diabetes Institute, called the Childhood-Onset Diabetes eRegistry, which is based on the DiaMond protocol. The criteria for recruiting patients with T1D and information on participant consent have been described in detail previously (31). The controls recruited in this study were nondiabetic individuals above 38 years of age.
Targeted HLA data For individuals with T1D, an Omixon Holotype HLA V3 kit (Omixon, Hungary) was used on genomic DNA (0.8-1.2 µg) extracted by the QiAmp DNA blood mini kit, following the manufacturer's protocols.The HLA typing kit generated DNA libraries and sequences for 11 loci, and among them were the DQA1, DQB1, and DRB1 genes.The protocol involved long-range PCR amplification of HLA genes using locus-specific master mixes, followed by quantitation and normalization of the resulting PCR amplicon, using QuantiFlour dsDNA system (Promega, USA).Amplicons were then subjected to enzymatic fragmentation, were end repaired and adenylated, followed by index ligation.The resulting single pool of indexed libraries were selected using AMPure XP magnetic beads (Beckman Coulter, USA) and were quantified using the qubit fluorometer (Thermofisher Scientific, USA).Next-generation sequencing (NGS) was carried out on an Illumina Miseq (Illumina, USA) sequencer, following the manufacturer's protocols. NGS exome data For healthy controls, a Nextera Rapid Capture Exome kit (Illumina Inc.USA) was used on high quality genomics DNA for exome sequencing enrichment using an Illumina HiSeq 2500 platform (Illumina Inc.USA). HLA typing Targeted and whole exome FastQ files were used as input for HLA-HD tool version 1.4.0 (34) to identify alleles in HLA class II genes (HLA-DRB1, HLA-DQA1, and HLA-DQB1) by comparing the reads to a reference panel from the IPD-IMGT/HLA database (35) version 3.46 (2021 October) build 2d19adf.The database can be accessed at https://www.ebi.ac.uk/ipd/imgt/hla/licence/. Testing for presence of celiac disease and Hashimoto's thyroiditis All T1D patients were tested for the presence of other comorbid conditions.Presence of celiac disease (CD) was tested using Anti-Tissue Transglutaminase: IgG, Anti-Tissue Transglutaminase: IgA (IU/ml) and Anti-Endomysial Ab (AEA) tests, while Hashimoto's thyroiditis (HT) was tested using thyroid peroxidase antibody test. Statistical tests Phenotype associations between haplotypes, alleles, and amino acids in HLA class II genes, including calculation of the Hardy-Weinberg equilibrium (HWE), confidence intervals (CI), odds ratios (OR), and P-values were analysed using Bridging Immunogenomic Data-Analysis Workflow Gaps (BIGDAWG) tool (36) on R console version 3.6.2(https://www.R-project.org/).The associations between alleles and haplotypes were analysed based on high-resolution sequence-based HLA typing (3-field).In addition, alleles and haplotypes with low frequencies were combined into one group (binned) and discarded from the analysis.A P value of <0.05 was considered statistically significant.To adjust for multiple comparisons, Bonferroni correction was used where adjusted P < 0.05 (denoted as Pc*) was considered statistically significant. Clinical characteristics The average age of individuals with T1D (52 males and 43 females) was 13 years, with an average body mass index (BMI) of 21 kg/m 2 .Whereas the average age of healthy participants (50 males and 100 females) was 57 years, with an average BMI of 32 kg/m 2 .The age of onset in our T1D cohort was divided into 3 groups; <5 years old: 34%; 5-10 years old: 44%; and >10 years old: 22%. 
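At its core, the BIGDAWG analysis described above tests each allele with a case-control contingency table, reports an odds ratio with a confidence interval, and applies a Bonferroni correction for multiple comparisons. The Python sketch below shows that type of calculation for a single hypothetical allele; the counts and the number of tests are invented for illustration, and the actual analysis in this study was performed with BIGDAWG in R.

```python
# Minimal sketch of a case-control allele association test with an odds ratio,
# 95% CI and Bonferroni-adjusted p-value. Counts are hypothetical.
import math
from scipy import stats

# 2x2 table of allele counts (2N chromosomes per group): rows = T1D / controls,
# columns = copies of the allele of interest / copies of all other alleles.
table = [[60, 130],   # T1D children
         [40, 260]]   # controls

odds_ratio, p_value = stats.fisher_exact(table)

# Woolf-type 95% confidence interval for the odds ratio (on the log scale).
a, b = table[0]
c, d = table[1]
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log((a * d) / (b * c))
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

n_tests = 52  # e.g. the number of DRB1 alleles examined in this study
p_bonferroni = min(1.0, p_value * n_tests)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
print(f"p = {p_value:.4g}, Bonferroni-adjusted p = {p_bonferroni:.4g}")
```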
Comparison of HLA-DRB1, HLA-DQA1, and HLA-DQB1 allele frequencies between individuals with T1D and controls The number of alleles identified in HLA-DRB1, HLA-DQA1, and HLA-DQB1 were 52, 21, and 40, respectively.All the identified alleles in HLA-DRB1, HLA-DQA1, and HLA-DQB1 passed the HWE test in participants with T1D and controls.Results of the associations between the three HLA class II genes among individuals with T1D and controls are shown in Supplementary Table 1. Comparison of HLA-DRB1, HLA-DQA1, and HLA-DQB1 haplotype frequencies between children with T1D and controls In total, we identified 100 unique DRB1~DQA1~DQB1 haplotypes.Table 2 portrays results of the association between the DRB1~DQA1~DQB1 haplotypes.Haplotypes with few counts were binned as one haplotype.Two haplotypes conferred susceptibility to T1D; the most highly frequent and significant haplotype was HLA-DRB1*03:01:01~HLA-DQA1*05:01:01~HLA-DQB1*02:01:01 and the haplotype was more frequently expressed in controls, which may suggest its protective role against T1D; however, it is to be noted that it does not pass Bonferroni-corrected P-value though the OR is 0; thus, this allele cannot be considered as protective.Similarly, other well-known haplotypes were identified in our analysis, however it only shows significance in un-adjusted P values.We examined the distribution of zygosity at two significant T1D risk HLA haplotypes across different age groups at onset.The first HLA haplotype, DRB1*03:01:01~DQA1*05:01:01~DQB1*02:01:01, had the highest percentage of homozygous individuals in the < 5 years age group at 11.7%, followed by 3.9% in the > 10 years group, and 0% in the 5-10 years group.The heterozygous individuals had the highest percentage in the 5-10 years age group at 22%, followed by 13% in the < 5 years group, and 6.5% in the > 10 years group.For the second haplotype, DRB1*04:05:01~DQA1*03:03:01~DQB1* 03:02:01, the homozygous individuals had the highest percentage in the > 10 years age group at 1.3% and 0% in both the < 5 years and 5-10 years age groups.The Heterozygous individuals had the highest percentage in the < 5 years age group at 5.2%, followed by 2.6% in both the 5-10 years and > 10 years age groups. Comorbidity with celiac disease and Hashimoto's thyroiditis Upon testing the T1D patients for the presence of celiac disease (CD) and Hashimoto's thyroiditis (HT), we observed 2 individuals with CD and 3 individuals with HT. CD was identified in a 10-year-old female child with T1D and HT.She belongs to a family with two siblings presented with a young onset age of 3 years for T1D.The patient is positive for Anti TPO antibodies (363.4IU/ml), Anti-Endomysial Ab (AEA), Anti-Tissue Transglutaminase IgG (15.2 IU/ml) and Anti-Tissue Transglutaminase (IgA >200 IU/ml) tests.The second patient was a 6-year-old female child positive for Anti-Endomysial Ab (AEA) test indicating celiac disease, alongside T1D. HT was confirmed in 3 female children aged less than 15 years, presenting T1D at a young age of less than 3 years.They confirmed HT diagnosis with an anti-TPO antibody level of >120.7 IU/ml.Two out of the 3 HT patients and 1 out 2 CD patients carried the risk DRB1 03:01:01~DQA1 05:01:01~DQB1 02:01:01 haplotype, while in the other two patients no known risk haplotypes were detected. Discussion The current study identified frequencies of significant alleles, haplotypes, and amino acid variants of major HLA class II genes between Kuwaiti children with T1D and controls. 
The extent of zygosity at the significantly identified T1D risk haplotypes differed across the groups of age at onset.The DRB1*03:01:01~DQA1*05:01:01~DQB1*02:01:01 haplotype exhibited a higher frequency of homozygosity in the group of early age at onset, indicating that this haplotype in homozygous form confers a higher risk of developing T1D at an early age.Although DRB1*04:05:01~DQA1*03:03:01~DQB1*03:02:01 homozygous haplotype is seen less frequent in our cohort to draw conclusion, its rarity is uniformly seen across the three groups of age at onset.It is possible that with increased cohort sizes in future studies, associations in haplotypes with low frequencies would be revealed.In addition, this study considers all the three fields of alleles in performing haplotype analysis.It may be pointed out that it is also possible to perform the analysis using only the first two fields since the significance of the third field remains unclear as the polymorphisms are not associated with amino acid changes and the field is very much the same in alleles defined by the first two fields. Siblings of T1D children can exhibit increased risk for developing T1D risk.However, it is not possible to us to assess this as our study is not a long-term follow-up protocol.Nevertheless, we present results of longitudinal studies from literature on the subject.Generally, the overall risk of an individual developing T1D in a population is 0.4% (48).Nevertheless, the risk is higher for siblings of affected children (49).The estimated risk can significantly increase depending on the T1D proband's age at onset, the presence of specific high-risk HLA alleles, and whether the siblings are monozygotic twins.For instance, siblings of T1D individuals with an early onset of less than 5 years have a higher cumulative risk of developing diabetes by age 20 years (11.7%), compared to 3.6% and 2.3% for those with onset between ages 5 and 9 years and between ages 10 and 14 years, respectively (50).In addition, sharing both HLA DR3/4-DQ8 haplotypes with a T1D proband elevates the risk of islet autoimmunity in siblings to 63% by age 7 and 85% by age 15, compared to those who do not share both haplotypes (20% by age 15).Of those sharing both haplotypes, 55% develop diabetes by age 12, compared to 5% without both haplotypes.Siblings without the HLA DR3/4-DQ8 genotype, despite carrying the same haplotypes with their T1D proband, had only a 25% risk of T1D by age 12 (51).Moreover, monozygotic twins are at higher risk (over 40%) of developing T1D and positive autoantibodies compared to non-twin siblings and dizygotic twins.Additionally, monozygotic twins with the HLA DQ8/DQ2 genotype have a greater risk of progressing to T1D and positive autoantibodies than those without (52). 
Amino acid variations within the HLA genes and their association with T1D are understudied in Arab populations, as compared to studies on alleles and haplotypes. Although a modest attempt was carried out previously (32), with advancements in precise HLA genotyping techniques, such as NGS, the current study identified 66 amino acid positions that were significantly associated with T1D. In the present study, most of the significant amino acid positions comprised either protective or susceptibility attributes associated with T1D. Some of the significant amino acid positions identified on the HLA-DRB1 gene were previously reported in the Omani population, such as DRβ1-11 and DRβ1-71 (29), and in European populations, including DRβ1-13, DRβ1-70, DRβ1-71, and DRβ1-74 (10,12,37,38). To the best of our knowledge, significant associations between T1D and amino acid changes at positions DRβ1-26, DRβ1-33, DRβ1-37, DRβ1-58, DRβ1-67, DRβ1-73, DRβ1-96, DRβ1-133, DRβ1-140, DRβ1-142, and DRβ1-180 have not been reported previously, highlighting the novelty of our findings. Additionally, several amino acid changes on the HLA-DQA1 gene (DQα1) that were significantly associated with T1D have not been reported before (10,(13)(14)(15)(37)(38)(39). The identified amino acid positions that are significantly associated with T1D on the HLA-DRB1, HLA-DQA1, and HLA-DQB1 genes, whether previously reported or novel, might have a functional impact on the three-dimensional structure of the HLA molecules, including the antigen-binding sites, and may either cause T1D or influence the age of T1D onset. Many of the significant amino acid positions that we identified are supported by previous studies (at least 8 independent studies), as listed in the Results section. However, it is to be noted that the observed amino acid variations have not been characterised for their impact on the structural and functional features of the protein(s). We additionally tested the prevalence of haplotypes predisposing to celiac disease and HT in our cohort. CD comprises only 0.02% of our T1D cohort; it is interesting to note that 47.4% carry DQA1*05:01/DQB1*02:01 encoding a DQ2.5 protein, which represents the strongest risk haplotype associated with celiac disease and is additionally shared by T1D (53, 54). Similarly, HLA DR3-DQ2.5 and DR4-DQ8 are the major risk haplotypes associated with T1D in our study. More than 90% of patients with celiac disease are reported to carry the HLA DR3-DQ2.5 haplotype (55). Certain common predisposing alleles, specifically DQB1*02:01:01 and HLA DQA1*05:01:01, are observed at significantly increased frequency in our T1D cohort compared to controls (Table 1). Though autoimmune thyroid conditions such as HT and Graves' disease are recurrently associated with T1D, only one of these forms, namely Hashimoto's, was detected in our cohort, at a frequency of 0.03%. Limited studies have investigated the link between HLA class II alleles and HT; DR3 and DR4 are the haplotypes most commonly associated with the disease (56,57). Each autoimmune condition can co-exist with T1D, especially when they share the same high-risk HLA profile; nevertheless, the diagnosis of one does not necessarily imply the presence of the others, especially at the same time. Our T1D cohort included only a limited number of individuals with CD and HT, which highlights the complex multifactorial nature of these autoimmune disorders.
In summary, our findings contribute to the growing body of knowledge about the genetic factors influencing the risk of developing T1D in children. This information has clinical implications for diagnosis, risk assessment, and personalized management of T1D, which can ultimately help improve the lives of affected individuals and their families.

In our current study, we utilized a higher typing resolution to investigate the association between the classical HLA class II genes and T1D in the Kuwaiti population. This approach allowed us to examine amino acid variations that were not explored in previous studies conducted in Kuwait (32,33,46). Furthermore, most T1D studies in Arab populations, with the exception of one conducted in Saudi Arabia (28), have allele resolutions ranging from 1 to 2 fields. This variation in resolution may potentially impact the overall association results (27,29,42-45).

Furthermore, this study provides several novel results that may offer great clinical and research benefits. Despite these strengths, the results of our study come with a few limitations. First, the sample size of people with T1D is relatively small, even though it is larger than in prior studies performed in the Kuwaiti population (32). Nevertheless, a larger sample size may provide a more comprehensive portfolio of variations in allele, haplotype, and amino acid frequencies and allow association tests within specific T1D-related alleles (13,14). Second, we carefully screened our control group to exclude any individuals with a family history of T1D or symptoms suggestive of adult-onset T1D. While our control group had a higher proportion of females than males, genomic autosomal HLA risk haplotypes do not generally differ based on sex (58). However, there is suggestive evidence for the existence of sex-dependent differences in islet autoimmunity for T1D high-risk haplotypes (59), which we acknowledge as a potential confounder that could not be addressed in our study due to the non-availability of full autoimmunity profiles of the study participants. Lastly, we cannot rule out mistyped alleles resulting from algorithmic error by the HLA typing software, as this has been reported for other HLA typing tools such as HLAforest, HLAminer, and PHLAT (60).

Conclusion

The significant findings on the association between alleles, haplotypes, and amino acid variations and T1D in the Kuwaiti population are not far from what has been previously reported in Arab and European populations. Moreover, we further uncovered novel haplotypes and amino acid positions within HLA class II genes that are associated with T1D, which may shed some light on the understanding of immunogenetic influences on T1D.

TABLE 1 Distribution of significant DRB1, DQA1, DQB1 alleles in children with T1D and controls.

TABLE 2 Distribution of the DRB1~DQA1~DQB1 haplotypes among children with T1D and controls.
v3-fos-license
2021-01-13T14:26:10.673Z
2021-01-12T00:00:00.000
231589696
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-020-80112-8.pdf", "pdf_hash": "f441af23330329d54faf0fb60587241df7e267c3", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2899", "s2fieldsofstudy": [ "Medicine" ], "sha1": "2f20c47f44d5e46c7c23c9fd0ff3ac1d634a636f", "year": 2021 }
pes2o/s2orc
Assessment of differential intraocular pressure response to dexamethasone treatment in perfusion cultured Indian cadaveric eyes

The purpose of the present study was to assess the differential intraocular pressure (IOP) response to dexamethasone (DEX) treatment at two dose levels (100 or 500 nM) in perfusion cultured Indian cadaveric eyes to investigate glucocorticoid (GC) responsiveness. In a human organ-cultured anterior segment (HOCAS) set-up, the eye pressure was monitored every 24 h after DEX infusion (100 or 500 nM) or 0.1% ethanol treatment for 7 days following baseline stabilization. The expression of DEX-inducible proteins such as myocilin and fibronectin in HOCAS-TM tissues was assessed by immunostaining. Elevated IOP was observed in 6/16 eyes [Mean ± SEM (mΔIOP): 15.50 ± 1.96 mmHg; 37.5% responders] and 3/15 eyes (Mean ± SEM mΔIOP: 10 ± 0.84 mmHg; 20% responders) in the 100 nM and 500 nM dose groups respectively. Elevated IOP in GC responder eyes was substantiated with a significant increase in myocilin (11.8-fold; p = 0.0002) and fibronectin (eightfold; p = 0.04) expression as compared to vehicle-treated eyes by immunofluorescence analysis. This is the first study reporting GC responsiveness in Indian cadaveric eyes. The observed GC response rate was comparable with previous studies; hence, this model will enable us to investigate the relationship between differential gene expression and individual GC responsiveness in our population.

Glucocorticoids (GC) have been the mainstay for the management of inflammatory eye diseases due to their potent anti-inflammatory, anti-angiogenic and immune-modulatory properties 1 . Chronic use of GC induces ocular hypertension (GC-OHT) and GC-induced glaucoma (GIG) in susceptible individuals (GC responders) 2,3 . It is reported that 40% of the patients in the general population showed an increased intraocular pressure (IOP) with topical dexamethasone use, of which 6% are likely to develop glaucoma 2,4,5 . More than 90% of glaucoma patients are GC responders, which further complicates clinical management, as a GC response can affect IOP control, increasing the risk of vision loss in these patients 3,6 . Both GIG and primary open angle glaucoma (POAG) share similarities in clinical presentation such as an open angle, increased IOP, characteristic optic neuropathy and loss of peripheral vision 7,8 . However, the molecular mechanisms for the pathogenesis of GIG are not completely understood 3 . The trabecular meshwork (TM) is an important component in the conventional aqueous humor outflow pathway which plays a crucial role in maintaining IOP homeostasis. GCs are known to induce alterations in TM structure and function including inhibition of cell proliferation and migration 9 , cytoskeletal rearrangement (formation of cross-linked actins (CLANs)) [10][11][12] , increased TM cell and nuclear size 7 , accumulation of excessive extracellular matrix 11,13 , decreased phagocytosis 14 and alterations in cellular junctional complexes 15 . These cellular, biochemical and morphological changes result in increased outflow resistance and decreased outflow facility. Several in vitro, in vivo and ex vivo models have been developed to understand the pathogenesis of GC-OHT/GIG at a cellular and molecular level 16 . Perfused organ-cultured anterior segment (OCAS) has been used as a standard ex vivo model to examine the aqueous outflow pathway in glaucoma research for nearly 30 years [17][18][19][20] .
This model serves as an intermediate between in vitro and in vivo systems and offers a unique opportunity to study the physiology, biochemistry and morphology of the outflow pathway for a number of days (up to 1 month) in viable tissues 19 . In addition, the GC responder rate of perfusion cultured non-glaucomatous human eyes was 30%, which is very close to the response rate observed in human subjects 2,10 . However, such a high GC responsiveness rate was not reported by other groups 18 . To our knowledge, no studies have been reported to date in the Indian population. Therefore, the purpose of the present study was to utilize the human organ-cultured anterior segment (HOCAS) ex vivo model to investigate the GC responsiveness of Indian cadaveric eyes to dexamethasone (DEX) treatment at two dose levels (100 or 500 nM). This study identified that 37.5% of eyes showed a GC response with the 100 nM dose of DEX and, interestingly, a 500 nM DEX dose resulted in a 20% response rate. No dose-dependent increase in the GC response rate was observed in the studied eyes despite increasing the DEX dose fivefold. The elevated IOP of the GC responder eyes was substantiated with a significant increase in mean fluorescence intensity of myocilin (11.8-fold; p = 0.0002) and fibronectin expression (eightfold; p = 0.04) as compared to vehicle-treated and non-responder eyes by immunofluorescence analysis. Thus, the HOCAS model provides a platform to investigate the molecular mechanisms contributing to differential responses in the TM to GCs and the heterogeneity of glucocorticoid receptor (GR) signaling in both health and diseased conditions.

Results

A total of 43 human donor eyes (7 paired; 29 single eyes) with a mean age of 73.0 ± 9.50 years were used for the present study, and their demographic details are summarized in Supplementary Table S1. All anterior segments were cultured within 48 h of death (mean ± SD: 29.71 ± 14.89 h).

Differential IOP response to DEX treatment. Human eyes were perfusion cultured with either DEX or 0.1% ethanol (ETH) as vehicle control for 7 days as described previously 10,21 . The DEX-induced elevated IOP was studied at two dose levels (100 nM and 500 nM) to check the dose-dependent response rate (RR). Out of 43 eyes, 16 eyes received 100 nM DEX, 15 eyes received 500 nM DEX and 12 eyes received 0.1% ethanol (ETH) for 7 days. Elevated IOP was observed in 6/16 eyes (RR = 37.5%) in the 100 nM dose group, whereas in the 500 nM dose group 3/15 eyes showed a significantly elevated IOP (RR = 20%). A significant and progressive increase in IOP was observed in DEX-responder eyes after treatment, with a mean Δ (± SEM) IOP of 15.5 ± 1.96 mmHg in the 100 nM dose group and 10.0 ± 0.84 mmHg in the 500 nM dose group. The mean ΔIOP in DEX non-responder eyes was found to be 1.19 ± 0.54 mmHg and 1.32 ± 0.47 mmHg for 100 nM and 500 nM DEX respectively. The vehicle-treated eyes remained stable throughout the study and the mean pressure was well below 5 mmHg (1.19 ± 0.46 mmHg). The DEX-treated responder eyes were statistically different from non-responder eyes and vehicle-treated eyes (p < 0.001) (Fig. 1a,b,c). In addition, the difference in elevated IOP between 100 and 500 nM DEX-treated responder eyes was statistically significant (p = 0.04). The mean (± SEM) basal outflow facility of all eyes (n = 43) was found to be 0.17 ± 0.01 µl/minute/mmHg at the perfusion rate of 2.5 µl/minute.
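The two quantities reported in these results, mΔIOP and outflow facility, can be sketched computationally as follows. This is an illustrative simplification under stated assumptions (hourly IOP values, complete 24 h blocks, and facility taken as perfusion rate divided by steady-state pressure in a constant-flow set-up), not the authors' exact analysis pipeline, and the synthetic numbers are placeholders.

```python
import numpy as np

def outflow_facility(flow_ul_per_min, steady_pressure_mmHg):
    """Outflow facility C = F / P (µl/min/mmHg), assuming constant-flow perfusion."""
    return flow_ul_per_min / steady_pressure_mmHg

def delta_iop(pre_hourly_iop, post_hourly_iop):
    """ΔIOP per treatment day: daily (24 h) mean IOP minus the pre-infusion baseline,
    here taken as the mean of the 4 h preceding infusion."""
    baseline = float(np.mean(np.asarray(pre_hourly_iop, dtype=float)[-4:]))
    post = np.asarray(post_hourly_iop, dtype=float)
    n_days = len(post) // 24
    return post[: n_days * 24].reshape(n_days, 24).mean(axis=1) - baseline

def is_gc_responder(daily_delta_iop, threshold_mmHg=5.0):
    """Classify an eye as a GC responder if its mean ΔIOP exceeds the threshold."""
    return float(np.mean(daily_delta_iop)) > threshold_mmHg

# Synthetic example: 4 pre-infusion hours near 10 mmHg, then 7 days of elevated pressure.
rng = np.random.default_rng(1)
pre = 10 + rng.normal(0, 0.5, size=4)
post = 22 + rng.normal(0, 1.0, size=7 * 24)
deltas = delta_iop(pre, post)
# A stabilized pressure of ~14.7 mmHg at 2.5 µl/min corresponds to a facility of ~0.17.
print(is_gc_responder(deltas), round(outflow_facility(2.5, 14.7), 2))
```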
In the DEX 100 nM dose group, the basal outflow facility of GC responder and GC non-responder eyes was 0.19 ± 0.04 and 0.16 ± 0.02 µl/minute/mmHg respectively, whereas in the 500 nM dose group the basal outflow facility of GC responder and GC non-responder eyes was 0.16 ± 0.01 and 0.16 ± 0.01 µl/min/mmHg respectively. The basal outflow facility of ETH-treated eyes was calculated to be 0.19 ± 0.01 µl/min/mmHg. There was no significant difference among the three groups, which further confirms that the observed elevated IOP in GC responder eyes was due purely to DEX treatment and not to endogenous differences in the outflow facility and IOP of the studied eyes. The raw data of IOP and outflow facility of all the studied eyes are summarized in Supplementary Tables S2A and S2B respectively.

Effect of DEX on morphology and tissue viability. A high deposition of extracellular debris was found in DEX-treated anterior segments as compared to vehicle-treated eyes (Fig. 2a). TUNEL assay revealed that tissue viability was not affected by either DEX or ETH treatment (Fig. 2b).

DEX-induced myocilin and fibronectin expression in HOCAS-TM tissues. The effect of DEX on the expression of myocilin and fibronectin was investigated in TM tissues after the HOCAS experiment by immunofluorescence analysis, and representative images are shown in Fig. 3a,b. Interestingly, upon quantification, a significant increase in mean fluorescence intensity of myocilin expression was found in GC responder eyes (11.8-fold) as compared to vehicle-treated eyes (p = 0.0002) and GC non-responder eyes (p = 0.0004), whereas fibronectin showed an eightfold increase in expression (p = 0.04) (Fig. 3c). This clearly indicates that the elevated IOP correlates with a significant increase in myocilin and fibronectin expression.

Discussion

The present study demonstrated the differential IOP response of perfused human cadaveric eyes to DEX treatment at two dose levels (100 and 500 nM). The dose of 100 nM DEX was chosen based on the concentration of DEX found in the aqueous humor of human eyes following topical administration of a single drop of 0.1% DEX formulations 22 . The fivefold increase in DEX was chosen to explore any dose-dependent variations in IOP response; in addition, in some in vitro studies a 500 nM dose was used to explore gene/proteomic alterations in response to DEX treatment in cultured human TM cells 23 .

Figure 1. Eye pressure of the anterior segments in culture was acquired using a PowerLab data acquisition system (AD Instruments, NSW, Australia) and analyzed using LabChart Pro software (ver. 8.1) as described in the methods section; the basal IOP on day 0 (before DEX treatment) was set at 0 mmHg. (a) In the 100 nM dose group, DEX treatment produced a significantly elevated IOP in 6/16 eyes (mean ± SEM mΔIOP: 15.50 ± 1.96 mmHg; response rate 37.5%), whereas in the 500 nM dose group 3/15 eyes showed a significantly elevated IOP (mean ± SEM mΔIOP: 10 ± 0.84 mmHg; response rate 20%); ETH-treated eyes showed a mean ± SEM mΔIOP of 0.92 ± 0.54 mmHg. Data were analyzed by unpaired 2-tailed Student's t test on each treatment day; *p < 0.05, **p < 0.001, ***p < 0.0001, ****p < 0.00001. (b,c) Frequency plots of the IOP and outflow facility data: mΔIOP and outflow facility of ETH-treated, DEX-responder and DEX-non-responder groups are plotted for both the 100 and 500 nM dose groups.
The mΔIOP was increased after DEX treatment in responder eyes as compared to non-responder and ETH-treated eyes, whereas outflow facility was decreased significantly after DEX treatment (100 and 500 nM) in the responder group as compared to the non-responder and ETH-treated groups.

The change in IOP in response to DEX treatment ("ΔIOP", defined as the maximum IOP after treatment minus the baseline IOP) was determined as a positive response for all studied eyes. "GC responder eyes" were defined according to the criteria described earlier as those with a positive ΔIOP ≥ 5 mmHg above baseline IOP following DEX treatment, and "GC non-responder eyes" as those with a ΔIOP < 5 mmHg above baseline IOP 10 . Based on these criteria, 37.5% of eyes in the present study were GC responsive at the 100 nM dose. Most of the eyes were found to be moderate responders (ΔIOP range: 8-15 mmHg). The observed GC response rate for the 100 nM dose is comparable with the previous observation in perfusion cultured human eyes 10 . GC responsiveness mainly depends upon the potency, route of administration, dosage and duration of GC exposure 3 . Interestingly, in our study, increasing the dose of DEX fivefold (from 100 to 500 nM) did not produce a dose-dependent increase in the GC response rate: the response rate of the 500 nM group (20%) was 1.9-fold lower than that of the 100 nM dose group (37.5%). The inability of the higher dose to elicit a dose-dependent increase in the 500 nM dose group could be due to a saturation effect of GC receptors, which may also be responsible for the observed progressive decline in IOP response in the present study (Fig. 1). In this study, a 7-day treatment regimen was chosen to investigate the IOP response of the studied eyes to DEX because the observed lag time to high IOP was between 3 and 5 days. Therefore, a 7-day treatment regimen was sufficient to obtain a positive GC response in our studied eyes. In contrast, a previous study observed a lag time to high IOP of 5-6 days, and hence the eyes needed a longer DEX exposure time of 10-15 days 10 . This could be due to variations in the flow rate used, as the present study utilized a flow rate of 2.5 µl/min whereas the Clark et al. 10 study used 2 µl/minute. A flow rate between 2 and 5 µl/min is mainly used to mimic the physiological human aqueous humor turnover rate, and it is also well documented that this flow rate range preserves the health of the TM in perfusion culture 17 . The tissue viability data of the present study also support this finding.

Figure 3. Immunofluorescence analysis of myocilin and fibronectin expression (green) in the TM of vehicle-treated, DEX-responder (n = 7) and non-responder eyes (n = 9). Fluorescence images of five consecutive TM sections from DEX-treated and vehicle control eyes were analyzed and quantified for myocilin and fibronectin expression in the TM using ImageJ software [https://imagej.nih.gov/ij/]. A significant increase in mean fluorescence intensity of myocilin (p = 0.0002) and fibronectin (p = 0.04) was found in GC responder eyes as compared to vehicle-treated eyes. Data are shown as mean ± SEM; *p < 0.05; ****p < 0.00001; unpaired t test. TM, trabecular meshwork; SC, Schlemm's canal; CB, ciliary body.

DEX treatment is known to induce the expression of several genes and proteins in the TM, including myocilin 3 . Myocilin is a glycoprotein and its physiological function is not clearly understood in the TM or in other ocular tissues 27 .
It was first identified as a major GC-responsive gene and protein in the TM and is also found in the aqueous humor of patients with POAG 27,28 . Therefore, in the present study, the expression of myocilin in the HOCAS-TM tissues and its association with elevated IOP was investigated. Interestingly, our data revealed a highly significant increase in myocilin expression (11.8-fold) in the TM region of the responder eyes compared to vehicle-treated eyes (p = 0.0002), and there was a fourfold increase in myocilin expression in GC responder eyes (p = 0.0004) compared with GC non-responder eyes. Such induction of myocilin upon DEX treatment was reported previously in trabecular meshwork monolayer cells and cultured anterior segments, where the increase in myocilin expression was time- and dose-dependent and correlated with the timing and magnitude of the IOP increase 29,30 . It is a well-accepted fact that the induction of myocilin in response to DEX treatment may not contribute to the IOP rise in GC-OHT 31 . Recently, it was demonstrated that the induction of myocilin is mediated through a secondary activation of an inflammatory signaling pathway involving calcineurin and the transcription factor NFATC1 25 . Glucocorticoids are known to induce ECM changes in the TM which are responsible for the aqueous outflow resistance in POAG. One such ECM protein is fibronectin, which accumulates in the TM upon DEX treatment 29 . In the present study, we also found an eightfold increase in the mean fluorescence intensity of fibronectin expression in the TM of the GC-responder eyes as compared to vehicle-treated eyes (p = 0.04; n = 7) by immunohistochemical analysis. This corresponds to a previous study in perfusion cultured human eyes wherein a denser distribution of fibronectin was seen after DEX treatment in the JCT/inner endothelial cells of TM tissues of GC responder eyes as compared to control eyes 10 . Very recently, increased fibronectin and other ECM proteins were also found in the TM region of ex vivo cultured human corneo-scleral segments after DEX treatment 32 . This observation clearly supports the fact that GC responsiveness may be associated with fibronectin induction and ECM alterations in the TM. It is interesting to note that, to date, 20 isoforms of fibronectin are generated in humans due to alternative splicing, and it is not well understood how these fibronectin isoforms contribute to elevated IOP in glaucoma, including GIG 33 . The induction of fibronectin in response to GCs could be mediated through TGF-β2, and elevated levels of TGF-β2 have been found in cultured TM cells exposed to GC treatment and in a mouse model of GC-induced glaucoma 30 . A recent study suggested that the expression of constitutively active fibronectin extra domain A itself is capable of inducing elevated IOP through TGF-β signaling in mice 34 . In the present study, the levels of TGF-β2 were not measured in the perfusate of HOCAS (by either ELISA or western blotting); however, it would be worth investigating the relationship between TGF-β2-mediated fibronectin induction upon DEX treatment and the IOP response in perfusion cultured human anterior segments. The limitations of the present study include that only a limited number of paired eyes was available to assess the differential IOP response to DEX treatment, due to a high experimental rejection rate. The high experimental rejection rate was due to unstable baseline pressure during the stabilization period.
Therefore, both single eyes and paired eyes with stable baseline pressure were used for the present study. All the eyes used in the present study were from elderly donors (mean age of 73.0 ± 9.50 years) with no known ocular history. Hence, any prior history of GC treatment for inflammatory conditions in the studied donors was not known. In addition, the observed GC sensitivity might have been greatly influenced by the aged donors used in the present study.

In conclusion, this is the first study demonstrating the GC response rate in perfused human cadaveric eyes of Indian origin. The observed GC response rate at a 100 nM dose of DEX was similar to that of previously reported studies in perfusion cultured human eyes and clinical subjects. Increasing the DEX dose fivefold showed no dose-dependent increase in the GC response rate. The known DEX-inducible proteins, such as myocilin and fibronectin, were found to have a positive association with elevated IOP and GC responsiveness. Thus, this study raises the possibility of identifying genes and proteins that are uniquely expressed by GC responder eyes in the Indian population, and of further understanding how these genes contribute to the differential responsiveness to GC therapy in all populations.

Materials and methods

Ethical statement. Donor eyes not suitable for corneal transplantation due to insufficient corneal endothelial cell count were included in this study. The written informed consent of the deceased donor or next of kin was also obtained. The study protocol was approved by the Institutional Ethics Committee of Aravind Medical Research Foundation (ID NO. RES2017006BAS) and was conducted in accordance with the tenets of the Declaration of Helsinki.

Human donor eyes. Post-mortem human eyes were obtained from the Rotary Aravind International Eye Bank, Aravind Eye Hospital, Madurai, India. The donor eyes were enucleated within 4 h of death (mean elapsed time between death and enucleation: 2.86 ± 1.18 h) and kept at 4 °C in a moist chamber until culture. All eyes were examined under a dissecting microscope for any gross ocular pathological changes, and only eyes without such changes were used for the experiments. The presence or absence of glaucomatous changes in the study eyes was confirmed by histo-pathological analysis of the posterior segments, as described earlier by our group (data not shown) 35 . The characteristics of the donor eyes for this study are summarized in Supplementary Table S1.

DEX-induced ocular hypertension (DEX-OHT) in perfused human cadaveric eyes. Paired/single post-mortem eyes were used to establish HOCAS by the method described earlier 36 . After baseline stabilization, one eye of each pair or designated single eyes received 5 ml of either 100 or 500 nM DEX at a flow rate of 200 µl/minute, and the contralateral eye of the paired eyes or designated single eyes received 0.1% ethanol (ETH) as vehicle control; the flow rate was then returned to 2.5 µl/minute with the respective treatments for up to 7 days (3 doses). The eye pressure was monitored continuously using pressure transducers (APT 300 Pressure Transducers, Harvard Apparatus, MA, USA) with a data recorder (PowerLab system, AD Instruments, NSW, Australia) and LabChart Pro software (ver. 8.1).

Measurement of pressure change after DEX treatment. The intraocular pressure (IOP) was calculated every hour as the average of 6 values recorded every 10 min, beginning 4 h before the drug infusion and continuing for the duration of the culture.
The average IOP over the 4 h before drug infusion was taken as the baseline IOP for calculation. The mean IOP was calculated for each day (24 h) after the respective treatments. ΔIOP was then calculated as: ΔIOP = (actual IOP averaged over 24 h) − (basal IOP of the individual eye before drug treatment). The increase in IOP in response to DEX treatment was examined for all treated eyes. After 7 days of DEX treatment, eyes were categorized as GC responder eyes (mean ΔIOP > 5 mmHg above baseline) or non-responder eyes (mean ΔIOP ≤ 5 mmHg above baseline), as described earlier 37 . The raw data of the IOP and the outflow facility are given in Supplementary Tables S2A and S2B respectively.

Morphological analysis of the outflow tissue. At the end of the drug treatment, anterior segments were fixed by perfusion with 4% paraformaldehyde. Perfusion-fixed anterior segments were processed for histological examination using a standard protocol. The TM was considered normal if trabecular cells remained in their usual position on the lamellae (subjective assessment) and no or only minor disruption of the juxtacanalicular tissue and trabecular lamellae was seen 32 .

TUNEL staining. The effect of DEX on TM apoptosis was assessed using the terminal uridyl nick end labeling (TUNEL) in situ cell death detection kit (Roche Diagnostics GmbH, Mannheim, Germany) as per the manufacturer's instructions. Briefly, the de-paraffinized sections were permeabilized with 0.2% Triton X-100 in 0.1% sodium citrate at 4 °C for 2 min and incubated with the provided fluorescein-conjugated TUNEL reaction mixture in a humidified chamber at 37 °C for 1 h in the dark. The TUNEL labeling solution without terminal transferase applied to a tissue section was used as a negative control. Tissue sections treated with DNase I served as a positive control. All sections were counterstained with DAPI and examined on a fluorescence microscope (AXIO Scope A1, Zeiss, Germany) for the presence of apoptotic and non-apoptotic cells in the TM region of the anterior segments.

Immunofluorescence analysis. Immunohistochemistry was carried out as described previously, with some modifications 20 . Briefly, 5 μm tissue sections were de-paraffinized in xylene and rehydrated twice each with 100%, 95% and 75% for 5 min. To unmask the antigen epitopes, heat-induced antigen retrieval was performed with 0.1 M citrate buffer, pH 6.4, for 10 min at 95 °C, and sections were permeabilized using 0.2% Triton X-100 in PBS for 10 min. Endogenous tissue biotin was blocked using an avidin-biotin blocking system for 10 min. Images were captured using a fluorescence microscope (AXIO Scope A1, Zeiss, Germany) or a confocal microscope (Leica SP8 Confocal Microscope, Leica, Wetzlar, Germany). Tissue sections without primary antibody served as a negative control. Fluorescence images of five consecutive TM sections from DEX-treated and vehicle control eyes were analyzed for myocilin and fibronectin expression (green) in the TM. Fluorescence intensity quantification of the TM images was carried out using ImageJ software (National Institutes of Health (NIH), Bethesda, USA) with minor modifications 38 . The fluorescence intensity was background-corrected by subtracting average background intensity values using the formula: Corrected Fluorescence Intensity = Fluorescence Intensity in TM − (Area of TM × Mean background fluorescence)/Area of TM. Five sections per tissue (responder group, n = 7; non-responder group, n = 9; vehicle-treated group, n = 7) were analyzed.
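As printed, the background-correction formula is ambiguous about where the division by the TM area applies. The sketch below adopts one common reading (a background-corrected intensity per unit TM area) purely for illustration; the variable names and measurement values are hypothetical, not taken from the study.

```python
def corrected_fluorescence(integrated_intensity_tm, area_tm, mean_background):
    """One reading of the quoted formula: background-corrected fluorescence per unit TM area.
    Subtracts the expected background contribution over the TM region, then normalizes by area."""
    return (integrated_intensity_tm - area_tm * mean_background) / area_tm

# Hypothetical ImageJ measurements for a single TM section (arbitrary units)
print(round(corrected_fluorescence(integrated_intensity_tm=5.2e6,
                                   area_tm=4.0e4,
                                   mean_background=12.5), 1))
```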
The mean fluorescence values of the DEX-treated group were compared with those of the vehicle-treated group.

Statistical analysis. Statistical analysis was carried out using GraphPad Prism (ver. 8.0.2) (GraphPad Software, CA, USA). All data are presented as mean ± SEM unless otherwise specified. Statistical significance between two groups was analyzed using the unpaired 2-tailed Student's t test. A p-value of less than 0.05 was considered statistically significant.
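For the two-group comparisons described in the statistical analysis, an unpaired two-tailed t test of per-eye mean fluorescence values could be run as in the sketch below; the numbers are placeholders for illustration, not study data.

```python
import numpy as np
from scipy.stats import ttest_ind

responder = np.array([118.0, 125.3, 110.4, 131.8, 122.1, 115.9, 127.5])  # hypothetical, n = 7
vehicle = np.array([10.2, 12.8, 9.5, 11.1, 10.9, 13.4, 9.8])             # hypothetical, n = 7

t_stat, p_value = ttest_ind(responder, vehicle)  # unpaired, two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.2e}, fold change = {responder.mean() / vehicle.mean():.1f}")
```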
v3-fos-license
2019-02-16T03:34:04.145Z
2019-02-12T00:00:00.000
67857086
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcmedinformdecismak.biomedcentral.com/track/pdf/10.1186/s12911-019-0740-0", "pdf_hash": "4dc731e82e8049163587cb3427b1444aee258b93", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2904", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "sha1": "4dc731e82e8049163587cb3427b1444aee258b93", "year": 2019 }
pes2o/s2orc
A basic model for assessing primary health care electronic medical record data quality Background The increased use of electronic medical records (EMRs) in Canadian primary health care practice has resulted in an expansion of the availability of EMR data. Potential users of these data need to understand their quality in relation to the uses to which they are applied. Herein, we propose a basic model for assessing primary health care EMR data quality, comprising a set of data quality measures within four domains. We describe the process of developing and testing this set of measures, share the results of applying these measures in three EMR-derived datasets, and discuss what this reveals about the measures and EMR data quality. The model is offered as a starting point from which data users can refine their own approach, based on their own needs. Methods Using an iterative process, measures of EMR data quality were created within four domains: comparability; completeness; correctness; and currency. We used a series of process steps to develop the measures. The measures were then operationalized, and tested within three datasets created from different EMR software products. Results A set of eleven final measures were created. We were not able to calculate results for several measures in one dataset because of the way the data were collected in that specific EMR. Overall, we found variability in the results of testing the measures (e.g. sensitivity values were highest for diabetes, and lowest for obesity), among datasets (e.g. recording of height), and by patient age and sex (e.g. recording of blood pressure, height and weight). Conclusions This paper proposes a basic model for assessing primary health care EMR data quality. We developed and tested multiple measures of data quality, within four domains, in three different EMR-derived primary health care datasets. The results of testing these measures indicated that not all measures could be utilized in all datasets, and illustrated variability in data quality. This is one step forward in creating a standard set of measures of data quality. Nonetheless, each project has unique challenges, and therefore requires its own data quality assessment before proceeding. Electronic supplementary material The online version of this article (10.1186/s12911-019-0740-0) contains supplementary material, which is available to authorized users. Background The increased use of electronic medical records (EMRs) in Canadian primary health care practice [1][2][3] has resulted in an expansion of the availability of EMR data. These data are being put to uses such as quality improvement activities related to patient care, and secondary purposes such as research and disease surveillance [4,5]. This has shifted the traditional use of medical records as an aide-memoire to that of a data collection system [6]. Yet the nature of the data that a primary health care practitioner requires for the care of patients can differ from what is needed for other purposes, for example, research [7]. Therefore, the overall assessment of the quality of these data can vary depending on their intended use. This characteristic of data quality is aligned with the concept of "fitness for purpose", i.e. are the data of appropriate quality for the use to which they are going to be applied [8,9]. Electronic medical records contain data that do not exist elsewhere, and can inform questions about primary health care; these data offer a unique window into patient care. 
As the foundation of the health care system, primary health care is where the majority of patient care is provided, and thus is a significant part of the system for which to consider data quality [10,11]. Stakeholders interested in primary health care EMR adoption and use in Canada have recognized the importance of understanding data quality [12]. Current information regarding Canadian primary health care EMR data suggests there is variability in levels of quality. In particular, issues have been identified in the completeness of risk factor information [13,14] chronic disease documentation [15], recording of weight and family history [14], and socio-demographic data quality [16] . This echoes the evidence from other countries [17][18][19], from studies conducted in the past [20][21][22] and in other health care settings [23]. Overall, these results reinforce that EMR data quality is an ongoing issue, particularly for researchers. It is incumbent upon us therefore, as potential users of primary health care EMR data, to understand their quality in relation to the uses to which they are applied. For example, primary health care practitioners require tools that use EMR data to support the increasingly complex care of their patients [24]. Additionally, high quality data are needed for reporting on quality of care provision [25]. Decision support functions of the EMR work best when the system contains accurate information [26]. Researchers need data of high quality to reduce bias and the risk of erroneous conclusions in their studies. Decision-makers also seek standardized, aggregated PHC data (across EMRs) for policy-making and planning. Tests of data quality, when defined in terms of fitness for purpose, thus vary across these three perspectives: clinical, research, and decision-making. Having measures in place with which to assess EMR data quality is a precursor to any assessment activity, and needed to underpin all three perspectives. While some guidance exists regarding data quality evaluation (please see Additional file 1: Appendix A), much of the recent primary health care EMR data quality literature focuses on either process steps [27], or the results of data quality assessments in one domain, such as completeness [13][14][15]17]. In addition, there currently is no consensus on how data quality assessments should be approached, nor the measures of data quality that should be used [8]. In the following, we describe a process of conceptualizing, developing, and testing a set of measures of primary health care EMR data quality, within four domains: comparability; completeness; correctness; and currency. We share the results of applying these measures in three EMR-derived datasets, and discuss what this reveals about the measures and EMR data quality. This builds on previous EMR data quality work (see above and Additional file 1: Appendix A), but differs because we developed and tested multiple measures of data quality, within four domains, in three different EMR-derived primary health care datasets. Herein we propose a basic model for assessing primary health care EMR data quality, comprising a set of data quality measures within four domains. This model is offered as a starting point from which data users can refine their own approach, based on their own needs. 
Methods Basic model of primary health care EMR data quality Four overall tasks were completed in developing the basic model of primary health care EMR data quality: 1) conceptualizing data quality domains; 2) developing data quality measures; 3) operationalizing the data quality measures; and 4) testing the data quality measures. Conceptualizing data quality domains Focusing on the assessment of EMR data quality from the research perspective, we conceptualized the measurement of EMR data quality within four domains. The first is comparability which is aligned with the concept of reliability [28]. In the context of EMR data quality we can extend this concept to mean the degree to which EMR data are consistent with, or comparable to, an external data source [29,30]; results of this comparison affect the generalizability of our analyses. Second, is completeness which is referred to by Hogan and Wagner as "..the proportion of observations made about the world that were recorded in the CPR [computer-based patient records].." [31]. Third, correctness has been defined as "..the proportion of CPR observations that are a correct representation of the true state of the world.." [31]. This dimension is reflective of the concept of validity, i.e. "..the degree to which a measurement measures what it purports to measure" [28]. Finally, the fourth domain is currency or timeliness [32,33] -the latter asks, "Is an element in the EHR [electronic health record] a relevant representation of the patient state at a given point in time?" [33]. We used a series of process steps to develop and test a set of EMR data quality measures, (defined as metrics or indicators of data quality) within these domains. Developing the data quality measures In the development phase, the research team conducted a literature review to identify measures of EMR data quality that had been used previously, as well as developing de novo measures. We were interested in creating measures that could be tested using structured EMR data, that were applicable across multiple EMRs, that were readily applied using the data within the EMR itself, and that addressed the four domains of comparability, completeness, correctness, and currency. Thus, through an iterative process of assessing the benefits and drawbacks of each potential measure according to these criteria, we created an initial set of measures. Operationalizing the data quality measures We conducted three steps to operationalize the measures. First, we identified test conditions to be used with the measures. The research team generated a list of thirteen conditions based on their prevalence in primary health care practice, previous use in EMR data quality research, and clinician team member input. After a process of assessment regarding the clinical importance of the conditions, the availability of relevant data in the EMR (i.e. would the condition be recorded in the cumulative patient profile or the problem list), and the feasibility of finding the data (i.e. presence of data in the structured portion of the EMR data vs. notes portion of the record), six conditions were selected for use: diabetes, hypertension, hypothyroidism, asthma, obesity, and urinary tract infection. Second, we needed to create case definitions so that patients with the test conditions could be identified (see Additional file 2: Appendix B). 
We could not use existing validated EMR case definitions that contain a billing code [34] because for two of the measures we needed to compare the proportion of patients who actually had diabetes and hypertension (according to our definition) against the proportion with a billing code for these conditions. Three family physician members of the team (SC, JNM, JS) assessed the case definitions that were created according to expected patient treatment practices and recording patterns in the EMR. Information including the problem list, medications, laboratory results, blood pressure readings, and BMI data contained in the databases was used. Multiple steps were undertaken to process each EMR data element used in the definitions. For example, free text recording of medication names and problem list entries were screened and verified by the clinical research team members. Third, we determined the specific details of each measure, for example the age ranges of the patients as applicable. Finally the statistical tests for the appropriate measures were determined. Please see Table 1 for details. Testing of the data quality measures Next we tested the measures sequentially in three datasets built from data extracted from three different EMR software products (herein referred to as dataset A, B, and C). The details of the datasets are as follows: dataset A -43 family physicians from 13 sites contributed data for 31, 000 patients from Jan 1, 2006 to Dec 31, 2015; dataset B -15 family physicians contributed data for 2472 patients from July 1st, 2010 to June 30, 2014; dataset C -10 family physicians from 1 site contributed data for 14,396 patients from March 1st, 2006 to June 30, 2010 (please see Table 2). These datasets were created for the Deliver Primary Healthcare Information (DEL-PHI) project; this study is part of the DELPHI project. De-identified data are extracted from primary health care practices in Southwestern Ontario, Canada and combined to create the datasets which form the DELPHI database. The datasets included in the DELPHI database are extracted from the EMR as a set of relational tables. For example, there is one table to store patient sex and age, and another table to store their scheduled appointments -these are linked by a unique patient identifier. The structure of the tables depends on the EMR software provider. For example, some EMRs provide discrete fields to enter height or weight information and specify the metric to be used, and drop down menus to select diagnosis codes. Other EMRs provide open fields for the provider to enter free text. Each dataset was analyzed separately to identify the location of the fields used in the data quality assessment. Datasets A and B had a higher proportion of structured fields for data entry, while Dataset C had several areas of free text that were searched and coded for analysis. Written consent was obtained from all physician participants in the DELPHI project. The physicians are the data custodians of the patient's EMR. DELPHI data extraction procedures, consent processes, and methods are described more fully elsewhere [35]. The DELPHI project was approved by The University of Western Ontario's Health Sciences Research Ethics Board (number 11151E). Within the process of testing the measures, several from the initial set were modified, or dropped, while others were added through the course of the study (e.g. % of patients with one or more entries on the problem list). 
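Among the measures operationalized above, sensitivity (a completeness measure) and positive predictive value (a correctness measure) compare billing-code identification of a condition against the study's EMR case definition, used as the reference standard. The following minimal sketch shows how such values could be computed from sets of patient identifiers; the identifiers and function name are hypothetical.

```python
def sensitivity_and_ppv(case_definition_ids, billing_code_ids):
    """Sensitivity and PPV of a billing code, with the EMR case definition as the reference."""
    reference, flagged = set(case_definition_ids), set(billing_code_ids)
    true_positives = len(reference & flagged)
    sensitivity = true_positives / len(reference) if reference else float("nan")
    ppv = true_positives / len(flagged) if flagged else float("nan")
    return sensitivity, ppv

# Hypothetical patient identifiers (not study data):
print(sensitivity_and_ppv({101, 102, 103, 104, 105}, {102, 103, 104, 250}))
# -> (0.6, 0.75): 3 of 5 case-definition patients carry the billing code,
#    and 3 of 4 billed patients meet the case definition.
```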
We could not calculate several measures in dataset C (due to the absence of laboratory values in a specific format for diabetes, and the different format of the problem list). However, we were able to calculate the remainder of the measures in the three datasets. This resulted in a final set of eleven measures (see Table 1).

Data quality assessment

Comparability

We found that comparability was high between the practice population and the Canadian census population (on age bands and sex) in dataset C, while in datasets A and B significant differences in the population distributions were noted (see Figs. 1, 2, 3 and Table 3). The comparability of disease prevalence differed by condition; for example, the prevalence of diabetes and hypertension was higher than published population prevalence figures, while asthma was lower. Two conditions, hypothyroidism and obesity, were comparable.

Completeness

Variability in sensitivity values for the test conditions was found, ranging from 12% for obesity in dataset A to 90% for diabetes in dataset B (see Table 4). For the "consistency of capture" measure, completeness varied from a low of 11% for allergy recording in dataset C to a high of 83% for medication recording in dataset C. Completeness of blood pressure recording was over 80% in all three datasets, while height ranged from 29% in dataset B to 71% in dataset A, and weight ranged from 60% in dataset B to 78% in dataset A. Significant differences in recording by sex were found for blood pressure, height and weight in datasets A and C, with females having a higher level of recording, while dataset B showed no difference in level of recording by sex. In contrast, significant differences were observed by age group for blood pressure, height and weight recording in all three datasets, with the highest level of recording for patients aged 45-59 years. The proportion of patients with diabetes who had a blood pressure recording was high (ranging from 81% in dataset A to 97% in dataset B). For patients taking hypertension medications, completeness of recording of blood pressure was also high, ranging from 76% in dataset A to 100% in dataset B.

Correctness

Positive predictive values were found to be variable for the test conditions and across datasets, ranging from 4% for obesity in dataset B to 80% for diabetes in dataset A (see Table 5). The presence of a tetanus toxoid conjugate vaccination among those 10 years of age and older was 0% in all three datasets.

Currency

Recording of weight for patients with obesity within one year of their last visit ranged from 62% in dataset A to 86% in dataset C (see Table 6). Office visits within two months for patients with a positive pregnancy test result ranged from 15% in dataset A to 63% in dataset C. Blood pressure recording no more than one year prior to a patient's last visit ranged from 64% in dataset A to 94% in dataset B. Significant differences were observed between males and females in datasets A and C, and by age in all three datasets, for blood pressure. For height recording no more than one year prior to a patient's last visit, values ranged from 30% in dataset A to 42% in dataset C. Significant differences for height by sex were found only for dataset A; however, significant differences were found in height recording by age across all three datasets. For weight recording no more than a year prior to a patient's last visit, values ranged from 45% in dataset A to 62% in dataset B.
Significant differences by age were observed for weight recording across all three datasets, while differences by sex were found in dataset A alone.

Discussion

In this study we developed eleven measures of primary health care EMR data quality and tested them within three EMR-derived datasets. We were not able to calculate results for several measures in one dataset because of the way the data were collected in that specific EMR. The results of this study pertaining to the recording of measures such as height and weight differ from those of Tu et al. (2015) [15]; however, overall patterns, such as less frequent recording of weight versus blood pressure, were similar. Some of this variability is to be expected. For example, one could anticipate blood pressure would be recorded less frequently among younger age groups. Similarly, the high level of completeness of blood pressure recording among patients with diabetes and those taking hypertension medications is perhaps not surprising. However, other results, such as no difference in the completeness of blood pressure, height, and weight recording for male and female patients in dataset B versus datasets A and C, do not have an obvious explanation. Some practice sites may have decided that blood pressure, height, and weight should be universally recorded among males and females. In general, practices may record height less frequently than weight, because height varies less over time than weight. This speaks to the importance of understanding the nature of the data in the context of their potential use. The measures developed for this study help illuminate some of the nuances associated with primary health care EMR data. For example, researchers seeking to answer a question regarding patients with hypertension may want to be aware that these patients could have higher levels of blood pressure recording than other patients, and thus may want to consider a study of medication adherence among these patients as opposed to a study of the prevalence of high blood pressure. Despite advancements in the field, the most recent primary health care EMR data quality literature focuses mainly on describing process steps regarding the assessment of data quality, or on determining one aspect of data quality such as completeness. Reporting guidelines exist for studies using routinely collected health data [36,37], which highlight the importance of data quality. However, a small proportion of studies using EMR data report on quality assessments [38], with the exception of studies associated with well-established primary health care EMR databases [39,40]. This may be partly because there is a lack of consensus on the process steps for assessing data quality, the measures to be used, and, finally, what acceptable levels for primary health care EMR data quality are [8]. Creating these standards is a challenging task, given that different data are required for different questions, and the level of quality needed varies with the type of data use. Developing and testing measures of primary health care EMR data quality is a necessary foundational step in this task. Assessing primary health care EMR data quality is a complex process. There are many factors that play into how these data come to be, including: how users interact with the EMR and enter data; the EMR system itself; practice characteristics, such as how external data are incorporated in the EMR [8]; and the nature of patient populations [41].
The user of primary health care data needs to be aware of the possible impact of these factors. For example, some software programs provide a cumulative patient profile or "problem list" area of the EMR where current diagnoses can be recorded for a patient in a free text field, while others provide a structured "health condition" section with drop-down lists and coded diagnoses, or both. Thus, even within the datasets in our own database we found we could not calculate all the measures we had developed because of differences in EMR structures. This is a particular difficulty that applies to the Canadian context, where a plethora of EMRs are utilized by primary health care practitioners, each with its own configuration [27]. Furthermore, different data extraction tools can produce different results [42], adding an additional layer of complexity to this picture. While the measures presented here are meant to assess overall EMR data quality, each question that one hopes to answer using EMR data is unique. Therefore, when assessing the "fitness" of the data for its intended purpose [9], one needs to apply both broad considerations captured in the aforementioned frameworks, including the provenance of the data [43], and narrow ones: applying specific quality measures to the data elements that are to be used [8,37,44]. If we stay true to a broad conceptualization as fitness for purpose, then each question posed that will be answered through the use of EMR data can be considered unique in the context of data quality. Measures serve as tools that can be deployed in a data quality assessment activity, but they are not sufficient in and of themselves to properly assess data quality in terms of a particular question or project. However, a sustained program of testing measures in a wide variety of jurisdictions, across EMR types, could allow the creation of a standard set of measures of data quality for general use. Over time, these measures could be collected into a library (to be shared widely) which would assist those who seek to conduct and report on their own data quality assessments. We recommend that data users examine the suite of measures available and determine which would be the most applicable in their own particular context as they are conducting data quality assessments. From a broader perspective, guidance also exists in the literature regarding data quality management and the governance of health information [45].

Strengths and limitations

There are several potential limitations of this study. The first is that our assessment of data quality is focused on the structured data elements within the three EMR datasets, not the narrative or notes portion of the record. This limitation reflects a choice made by DELPHI researchers not to extract the narrative portion of the EMR data, for patient privacy reasons. Based on our understanding of our EMR datasets, the majority of the data needed for the analysis would be found in the structured portion of the EMR data. Second, our assessment of data quality will be generalizable only to three types of Canadian EMR software products. Third, in the Canadian context, diagnostic codes are submitted for billing purposes (used in our case definitions for the test conditions), while in other jurisdictions, diagnoses are not linked to billing. Despite these factors, the three datasets are based on EMR data from a large number of practitioners working within many practice types and communities in Southwestern Ontario.
It was not within the scope of this study to systematically assess the individual recording practices among all the DELPHI sites; this would have allowed us to more fully explain some of the results. A strength of this study is that it focuses on assessing data quality primarily using data within the EMR itself. This approach is the most feasible method to implement on a wide scale, in contrast to methods using external reference data.

Conclusion

This paper proposes a basic model for assessing primary health care EMR data quality. We developed and tested multiple measures of data quality, within four domains, in three different EMR-derived primary health care datasets. The results of testing these measures indicated that not all measures could be utilized in all datasets, and illustrated variability in data quality. This is one step forward in creating a standard set of measures of data quality. Nonetheless, each project has unique challenges, and therefore requires its own data quality assessment [46] before proceeding.

Availability of data and materials

De-identified patient data contained in the DELPHI database are collected from physicians who have consented to participation in this study. Participants agreed to share these data for the purposes of the DELPHI study only; therefore these data are not publicly available.
v3-fos-license
2019-04-27T13:09:06.725Z
2018-03-14T00:00:00.000
134883399
{ "extfieldsofstudy": [ "Geology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=83074", "pdf_hash": "26dff52b7f03e90aa91c9b2b0f89da216d783048", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2908", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "bc383037c1fea263479dd5554b22e14f3c1a7c2a", "year": 2018 }
pes2o/s2orc
Knowledge, Attitudes and Practices of the Population of the District of Ahomadégbé (Municipality of Lalo) in Benin on Methods of Water Treatment at Home

Water is an indispensable resource for life. In the district of Ahomadégbé in Benin, although most of the population has access to improved water sources, in their homes residents consume water of poor quality due to microbiological contamination during transport and storage. To identify the actions needed to improve household drinking water quality, the present study aims to analyze the knowledge, attitudes, and practices of the district of Ahomadégbé's population regarding household drinking water treatment methods. A study was conducted in which 377 residents were interviewed using an individual questionnaire and 82 participants were selected for eight focus groups to determine the population's knowledge, attitudes, and practices. More than 65% of the district's population knew some methods of water treatment at home. In practice, however, they lacked the knowledge to apply the different water treatment methods, and only 6.1% of the population used at least one method of water treatment at home, even if it was not always appropriate. The water treatment methods residents used were alum (KAl(SO4)2∙12 H2O, a chemical decantation method), filtration through cloth, and disinfection by boiling. Ineffective home water treatment methods, such as oil and cresol, were also used. The population is aware of water contamination during transport and storage. Unfortunately, most residents surveyed do not treat water before consumption, and those who do treat it use inappropriate methods.

Background

Water is a natural resource whose availability in sufficient quantity and acceptable quality contributes to the maintenance of health. Although 91% coverage of drinking water has been achieved globally, and 6.6 billion people have access to improved water sources [1], much of the world's population, especially those living in rural areas, continues to consume water of poor microbiological quality. In sub-Saharan Africa, 319 million people live without access to an improved water source and 102 million people still use surface water [1]. In Benin, water issues are still a major problem for the population, especially those living in rural areas, where only 72% have access to drinking water [1].
In the municipality of Lalo, Benin, households' drinking water sources are boreholes, standpipes, modern wells, cisterns, and surface water [2]. Specifically, in the district of Ahomadégbé, household water sources are improved water sources (91.4%) and unimproved water sources (8.6%) [3]. Despite the district of Ahomadégbé's good coverage by improved water sources, microbiological analyses of water samples collected at the source and during transport and storage have shown increasing microbiological contamination between source and storage [4]. More than 340,000 children under the age of 5, or almost 1000 per day, die each year from diarrheal diseases due to poor sanitation, poor hygiene, or unsafe water [1]. Diarrheal diseases are the third leading cause of death among children under 5. Despite all the progress, there is no guarantee that the population is consuming water of good microbiological quality. In rural areas, even when people have access to improved water sources, they must travel long distances before getting water. In the absence of a home piping system, access to water means water must be transported and stored at home [5] [6]. Several studies have shown that the lack of hygiene during the transport and storage of drinking water is at the root of the microbiological contamination of household water [4]- [12]. To limit water contamination, a process must be in place that includes the protection of water sources, the selection and implementation of drinking water treatment methods, and the proper management of risks in water distribution networks. Several interventions to improve the quality of drinking water are possible: source or collection point interventions, environmental interventions, and household-level interventions [13]. Household-level interventions help to improve water during storage, as they ensure that water quality is improved at the point of consumption [14]. Moreover, household-level interventions are twice as effective in preventing diarrhea as interventions at the source [13]. These interventions require effort from heads of household to: treat water properly, always have treated water available, avoid recontamination, and refrain from drinking untreated water [13]. Several home water treatment methods have been developed over the years and are widely used around the world. The most common are chlorination and filtration. These methods can improve the quality of drinking water and prevent disease when properly applied. Although proven effective in the laboratory, the effectiveness of these methods does depend on external factors, such as the user, the ease of use of the technology, and the levels of hygiene and sanitation [15]. Unfortunately, in rural areas the population is often insufficiently informed about home water treatment methods and therefore applies them incorrectly. To ensure that the population consumes water free from microbiological contamination in the district of Ahomadégbé, it is first necessary to establish a diagnostic process that identifies the actions to be taken. This study aims to analyze the knowledge, attitudes, and practices of the district of Ahomadégbé's population regarding household drinking water treatment methods. 
Study Site This study was conducted in the district of Ahomadégbé, which is in the municipality of Lalo, Benin (Figure 1). The municipality of Lalo is an administrative subdivision of the Couffo department and includes eleven (11) districts. The district of Ahomadégbé is subdivided into four villages, with a total population estimated at 5403 inhabitants [16]. Description of the Study This is a cross-sectional study that aims to analyze knowledge, attitudes, and practices (KAP) on home water treatment methods in the district of Ahomadégbé. The study ran from April 24, 2016 to May 8, 2016. Three hundred and seventy-seven (377) people, 342 women and 35 men residing in the villages of Ahomadégbé and Adjaïgbonou, were interviewed using an individual questionnaire. Questionnaire Survey The questionnaires were designed to take approximately 30 minutes, including open and closed questions. The questionnaire was organized into three main sections: socio-demographic and economic characteristics; knowledge, attitudes and practices on sources of drinking water contamination; and knowledge, attitudes and practices of home water treatment methods. The questionnaire was created in French, translated into the local language Fon, and pre-tested for all translation errors. The pre-test was done before data collection in the district of Sèdjè-Dénou, municipality of Zè. Focus Group Survey The venue was chosen to ensure accessibility for all, absolute neutrality, and a relaxed and quiet atmosphere. The date and time of the meeting took into account the personal constraints of most participants. Each participant was contacted the day before the meeting date to ensure their presence and to answer any questions. Arrangements were also made to record all discussions. An experienced sociologist moderated all focus groups. In addition to handwritten notes taken during the focus groups, the discussions were recorded and later transcribed and translated into French. All questions were open questions. The topics covered were: water and disease, the quality of water sources used for drinking, the sources of contamination of drinking water during transport and storage, and the measures to be taken to limit the contamination of water and the home water treatment methods known and used in the district of Ahomadégbé. The privacy and confidentiality of the interviewees, and positive interactions between the individuals and the interviewer, were maintained during data collection. Additionally, 82 participants were selected for eight (8) focus groups. Women and children were the main subjects for the following reasons: -Women are generally responsible for household water management (watering and domestic use); -Women were helped by children in transport, and children are in more contact with the storage container, either to serve themselves or to serve adults. However, men's opinions were also gathered on the question of drinking water hygiene. The groups consisted of a mix of water point users and managers to confront the behaviors and practices around the water points witnessed by the two subject groups. The number of participants in each focus group ranged from eight to twelve. Four (4) focus groups were conducted with women, two (2) with men, and two (2) with children (Table 1). Data Processing Data processing from the questionnaire survey included: manual count and coding of the questionnaires; development of an input mask using SPSS version 19.0; entry of coded data; and correction of any errors after data entry. Data Analysis Data was analyzed with SPSS 19.0 and EpiInfo7 software. Descriptive Aspect The variables were described by their size and frequency. 
Analytical Aspect We performed a bivariate analysis to investigate the association between the dichotomous qualitative dependent variable and the independent variables, using appropriate parametric tests. The association was considered significant for independent variables with a p-value less than 0.05. The focus group data (the recorded discussions) were transcribed using Word 2007 software and triangulated with the data obtained through the questionnaire survey. Ethical Considerations The ethical protocol that authorized this study was validated by the National Committee of Ethics for Health Research (No. 123/MS/DC/SGM/DFR/CNERS/SA). Agreement with the municipality's sanitary authorities was obtained before starting data collection. Description of Socio-Demographic and Economic Characteristics of Populations More than 90% of our sample is represented by women, 97.9% of whom are of the Tchi ethnic group and 70% are peasants/fishermen. It should be noted that 57% of those surveyed have no education and 13.76% have a daily income of more than 500 FCFA. Socio-demographic and economic characteristics are summarized in Table 2. Description of Behavioral Factors Influencing the Quality of Drinking Water Approximately 86.5% of the participants surveyed consume water from an improved water source, 37.9% use improved water sources for other uses, and 78.2% use the same container for transporting drinking water and water for other uses (Table 3). The focus groups revealed that the repeated failures of the Adjaïgbonou water point are one of the main reasons for the use of water from unimproved water sources, especially rainwater. In the hamlet of Tozounmè, the population must cross the Couffo River before stocking up at an improved water source. This difficulty is also a reason why the population consumes the Couffo River water. "It is difficult for us to cross the river with the basin of water in the canoe. So, we prefer to take water directly from the river." Among residents surveyed, 70.6% estimate that the distance between the source of water and their house is between 10 and 100 meters (Table 3). In Adjaïgbonou, this is not always the case. "The pump regularly breaks down and we stay several days without water and we have to travel about 3 km to look for water in Ahomadégbé." About 74.3% of the participants understand that water may be contaminated between source and storage and during storage. Most of the district of Ahomadégbé's population (93.9%) cleans the transport container before taking water (Table 3). They clean the transport container at home: "At the pump, we have neither the time nor the space to clean the basins. When our turn comes, we must serve ourselves without waiting for others." About 26.3% of the population covers the container during the transport of drinking water. The population uses uncovered basins or cans with or without a lid for transporting drinking water. The reasons often mentioned are: "The container does not have a lid;" "The water point is near the house or in the house itself." 
Regarding the coverage of drinking water storage containers, 97.9% of respondents cover them (Table 3). The population knows that "The containers (jar, plastic bucket and can) must be washed with soap before filling and, once filled with water, they must remain closed." A minority (16.2%) uses the drinking-water cup for other purposes, and 22.3% of the population washes their hands before taking drinking water from the storage container. The observation made in the field is that the same cup is used by the whole family to collect water from the storage container and then to drink. The participants know that "The water can be contaminated at the precise moment of its consumption if the cup is not clean or if the hands are dirty." More than 32.6% of the population keeps drinking water for more than 3 days. They know that: "The duration of the storage of the water must not exceed seven (7) days;" and, "The water can be contaminated if it stays too long (one week) in the bucket or jar. We must then replace it." In conclusion, the participants surveyed are aware that a lack of hygiene can favor the contamination of water during transport and during storage. However, some behavioral factors still promote microbiological contamination of the water. Description of Home Water Treatment Methods According to Table 4, 65.3% of participants have heard about home water treatment methods at least once: approximately 24% know about disinfection by boiling, 9.3% Aquatabs tablets, 16.3% tissue filtration, 12.6% Alum (KAl(SO4)2∙12 H2O, chemical decantation method), 25.2% oil, 4.1% camphor, and 2% cresol. The population believes that the most effective home water treatment method is Alum. "Alum is the most effective method: as soon as you put it in the water, it becomes clear." The population knows some methods, but does not know the role of each method or at what stage of the water treatment process it can be used. Table 5 shows that only 6.1% of participants use at least one home water treatment method. According to the focus groups, the methods often used are cresol, Alum, or oil. "If there is cresol, we can put a little because cresol kills microbes, or we can use Alum." "We put some oil inside so that it does not have any larvae in the bottom of the jar." Other methods are sometimes used: "We also boil water or use Aquatabs, but afterwards the water does not have a good taste." And those who do not treat water mentioned the following reasons: "The water is already drinkable;" "We do not know how to treat water;" "We do not always have the treatment product available to us." In practice, the participants do not know how to use these different methods of home water treatment, and others use inappropriate methods. Factors that Significantly Influence the Implementation of Home Water Treatment Methods From the analysis in Table 6, it appears that only the association between knowledge of home water treatment methods and the practice of home water treatment methods (having used at least one method) is statistically significant. 
Discussion The objective of our study was to analyze the knowledge, attitudes, and practices of the population of the district of Ahomadégbé regarding methods of treating drinking water at home. Non-probability sampling was used for household selection, which allowed for a representative sample. The data was collected by a combination of techniques and tools, namely questionnaire survey and focus group. Given the language barrier, we translated the questionnaire from the French language into the local language, which could be the source of some information bias. Moreover, the inability to verify some of the participants' information could also constitute information biases. In the district of Ahomadégbé, only 37.9% of the participants use improved water sources for uses other than drinking, mainly because of repeated failures of the only improved water source in the village of Adjaïgbonou and the necessity of villagers to cross the Couffo River in the hamlet of Tozounmè before accessing an improved water source. Distance is therefore a factor that determines the choice of water source used for drinking and for other uses in this borough. The easy access to a water source is assessed in relation to the distance between the residence and the supply point, and the time set to get water [17]. Overall, when Adjaïgbonou's supply point is operating, all participants have access to an improved source within 1000 meters. These results are similar to Kouakou et al.'s study of Abidjan, where they found that water sources were all located less than one kilometer from the households, guaranteeing basic access to the distance criterion for access to water [18]. In the district of Ahomadégbé, more than 62% use unimproved water sources for other uses although 70.6% obtain their water from a source located within 100 meters of their home. Howard and Bartram argued that when the distance between the water source and the house is less than 100 meters from the residence, all aspects of personal hygiene are assured [19]. Yet, when the distance between the water source and the residence is between 100 meters and 1000 meters, hand washing and basic hygiene are possible, but showering and laundry are difficult to ensure unless they are done at the source [19]. Nearly 98% of the population covers storage containers for drinking water. In Ahomadégbé, 67.4% of the population retains drinking water for one to three days. This result is different from that of Lalanne. In the province of Ganzourgou in Burkina Faso, 25% of the population gets their supplies twice a day and 75% collect water once a day [6]. 
In terms of knowledge of home water treatment methods, 65.3% of the population is familiar with them. This result is higher than that of Lalanne, who found that 48% of participants had knowledge of home water treatment methods [6]. The methods known by the population of the district of Ahomadégbé include Alum, tissue filtration, disinfection by boiling, Aquatabs tablets, palm branch, lemon, oil, camphor, and cresol. However, the population believes that the most effective home water treatment method is Alum. There is confusion between sedimentation and water disinfection, because the use of Alum accelerates the sedimentation process. For effective water treatment, the following three physical and microbiological processes must be complementary: sedimentation, filtration, and disinfection [20]. Participants know both good and bad methods, but do not know at what stage of the water treatment process the right methods should be used. In practice, 93.9% of the population does not treat drinking water. In the province of Ganzourgou, 90% of the participants do not treat the water before its consumption because the borehole water is considered to be of good quality and treatment is therefore deemed unnecessary [6]. Joshi et al. found that the supposed potability of water, the high cost of the methods, and ignorance of these methods are reasons for not treating water before its consumption [21]. In the district of Ahomadégbé, 6.1% of the population uses at least one treatment method. In the peri-urban zone of Abidjan, 3% of the population treats water [18]. Yet, Ndiaye et al. found that in 79% of the cases studied, drinking water was treated in Senegalese rural areas [22]. In the district of Ahomadégbé, those who treat drinking water primarily use methods that are not detrimental to their health: Alum, tissue filtration, and disinfection by boiling; however, these methods are used incorrectly. Each household uses one method or another. A survey in Benin showed that few households treat drinking water even if the water source is not improved [23]. And, most of the time, they do not use treatment methods according to the recommended procedures [23]. Methods like cresol, camphor, and oil are also used. These results corroborate those of Akowanou et al., who found that in the Mono and Couffo departments people use oil and crushed palm leaves as home water treatment methods [23]. While these methods may prevent the emergence and multiplication of larvae, they are dangerous to human health because they cause chemical contamination of the water. In general, the most common home water treatment methods used in rural Benin are boiling, adding chlorine, filtration (tissue, ceramic filter or some other filter), and solar disinfection [24]. Yelognissè's work reveals that in rural Benin some women use white tissues for filtration and Alum, while other women use boiling or decantation of water as endogenous methods of drinking water treatment [25]. In the state of Katsina in Nigeria, tissue filtration is the most used method, followed by boiling and adding chlorine [11]. In India, a study has shown that people use filtration and boiling as water treatment methods [21]. 
Generally, in developing countries, boiling, filtration, or chlorination are effective for improving the microbiological quality of drinking water [26]. But in the district of Ahomadégbé, the population prefers Alum, which covers only one of the phases of effective drinking water treatment. The study of factors influencing the application of home water treatment methods revealed an association between knowledge of home water treatment methods and the application of those methods. For better implementation of home water treatment methods, it is necessary to bring this knowledge to the people through various awareness programs, whether in the community, schools, health centers, or educational or learning centers. Conclusion Our study of the knowledge, attitudes, and practices of the population in the district of Ahomadégbé regarding home water treatment methods revealed that the population is aware of water contamination during transportation and storage. Unfortunately, only 6.1% of the participants surveyed use at least one water treatment method, and even these methods are applied improperly. This study provides basic information for any intervention to improve the quality of home drinking water.
Figure 1. Location map of the municipality of Lalo, Benin.
Table 1. Distribution of participants in the focus groups.
Table 2. Demographic and socio-economic characteristics of the respondents.
Table 3. Behavioral factors influencing the quality of drinking water.
Table 4. Knowledge of home water treatment methods. *Only those who claim to know the methods of water treatment at home. CI: confidence interval.
Table 5. Attitudes and practices of home water treatment methods. *Only those who claim to use at least one method. CI: confidence interval.
Table 6. Association between the application of home water treatment methods and socio-economic, behavioral, and environmental factors. *Chi-square test. **Fisher's exact test.
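The bivariate analysis summarized in Table 6 (chi-square test, with Fisher's exact test where needed) can be reproduced outside SPSS or EpiInfo. The sketch below is an illustrative Python version, not the authors' actual workflow; the data-frame layout and the column names knows_treatment_method and uses_treatment_method are assumptions introduced here for demonstration.

```python
# Illustrative sketch (not the authors' SPSS/EpiInfo workflow) of the bivariate
# association test reported in Table 6: knowledge of home water treatment
# methods vs. use of at least one method, both coded yes/no.
import pandas as pd
from scipy.stats import chi2_contingency, fisher_exact

def association_test(df: pd.DataFrame, exposure: str, outcome: str, alpha: float = 0.05):
    """Cross-tabulate two dichotomous variables and test their association.

    Uses the chi-square test when expected cell counts allow it, otherwise
    falls back to Fisher's exact test on a 2x2 table, mirroring the * and **
    footnotes of Table 6.
    """
    table = pd.crosstab(df[exposure], df[outcome])
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    test = "chi-square"
    if (expected < 5).any() and table.shape == (2, 2):
        # Small expected counts: switch to Fisher's exact test.
        _, p = fisher_exact(table.values)
        test = "Fisher's exact"
    return {"test": test, "p_value": p, "significant": p < alpha, "table": table}

# Hypothetical usage with illustrative column names:
# df = pd.read_csv("kap_survey.csv")  # one row per respondent
# result = association_test(df, "knows_treatment_method", "uses_treatment_method")
# print(result["test"], result["p_value"], result["significant"])
```

For a 2x2 table such as knowledge versus practice, the chi-square and Fisher results agree closely when expected counts are large; the fallback simply reflects the two tests cited in the table footnotes.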
v3-fos-license
2020-10-12T13:04:29.635Z
2020-10-12T00:00:00.000
222277849
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2020.0013", "pdf_hash": "33d7b471914cf485c8c36e7b56e9791ad103f9d6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2909", "s2fieldsofstudy": [ "Physics" ], "sha1": "33d7b471914cf485c8c36e7b56e9791ad103f9d6", "year": 2020 }
pes2o/s2orc
Progress and opportunities for inertial fusion energy in Europe In this paper, I consider the motivations, recent results and perspectives for the inertial confinement fusion (ICF) studies in Europe. The European approach is based on the direct drive scheme with a preference for the central ignition boosted by a strong shock. Compared to other schemes, shock ignition offers a higher gain needed for the design of a future commercial reactor and relatively simple and technological targets, but implies a more complicated physics of laser–target interaction, energy transport and ignition. European scientists are studying physics issues of shock ignition schemes related to the target design, laser plasma interaction and implosion by the code developments and conducting experiments in collaboration with US and Japanese physicists, providing access to their installations Omega and Gekko XII. The ICF research in Europe can be further developed only if European scientists acquire their own academic laser research facility specifically dedicated to controlled fusion energy and going beyond ignition to the physical, technical, technological and operational problems related to the future fusion power plant. Recent results show significant progress in our understanding and simulation capabilities of the laser plasma interaction and implosion physics and in our understanding of material behaviour under strong mechanical, thermal and radiation loads. In addition, growing awareness of environmental issues has attracted more public attention to this problem and commissioning at ELI Beamlines the first high-energy laser facility with a high repetition rate opens the opportunity for qualitatively innovative experiments. These achievements are building elements for a new international project for inertial fusion energy in Europe. This article is part of a discussion meeting issue ‘Prospects for high gain inertial fusion energy (part 1)’. 
Introduction Sustainable production of large amounts of energy at affordable prices and with a limited effect on the environment is a challenging and unresolved problem. The tension between population growth, increasing inequality in access to natural resources, education and decent living conditions across the world, and the increasing stress of human activity on the environment and climate can only be resolved by coordinated efforts from all developed countries to improve energy production and distribution. Development of renewable energy sources and more efficient modes of energy consumption are indispensable elements of the overall energy programme, but without sustainable nuclear energy production these measures cannot meet the growing ecological, economic and political demands. Nuclear fission is a viable method of massive energy production, but its attractiveness is significantly undermined by the unresolved problems of radioactive waste treatment, the danger of operating nuclear reactors in near-critical conditions and the high risk of uncontrolled proliferation of nuclear weapons. Nuclear fusion presents evident advantages in all these issues: it does not produce highly radioactive long-lived elements but, on the contrary, may incinerate them with energetic neutrons. It is also intrinsically stable, and the only dangerous element, tritium, can be produced and consumed in place. However, while fission energy technology developed very rapidly in the 1950s and 1960s, fusion energy has remained at the research level for more than 50 years and prospects for the construction of a commercial fusion reactor and reliable energy production are still undefined. It is evident that fusion energy production is a much more complicated process than fission because it requires the maintenance of fuel at extremely high temperatures, but it is also evident that the present scheme of organization of research on inertial fusion energy in the European Union is not sufficiently programme-oriented; it is conducted at a governmental level as basic research without strong links to industry. Scientists are not yet able to propose viable technical solutions attractive to the industrial sector and private companies. In this paper, I consider the progress and difficulties of inertial fusion research in Europe and opportunities that could be realized in fusion science and technology in the near future. Background The major difference between fission and fusion is that the former is initiated by neutral particles (neutrons): there is no electrostatic barrier, and a continuous chain of fission reactions can be produced at near-equilibrium conditions and at reasonably low temperatures of several hundred degrees Celsius for a long time. The most vulnerable elements exposed to intense neutron irradiation, the fuel rods, can be safely replaced without perturbing the energy production process. By contrast, fusion reactions involve positively charged particles, and the necessity to overcome the Coulomb barrier implies that the fuel must be maintained at very high temperatures of several tens of millions of degrees Celsius, in a plasma state, without direct contact with any material. 
This strict condition of high-temperature thermal equilibrium poses strong and as-yet unresolved physics and technical problems. Two methods of fusion plasma confinement are investigated: magnetic and inertial. Magnetic confinement offers the possibility of a continuous quasi-steady reaction, but the available magnetic fields of a few tesla limit the plasma density to such a low value that the minimum plasma volume is about a few hundred cubic metres, implying a large minimum size for the energy production unit and a very high construction cost, assuming that all technical problems could be resolved [1]. Moreover, a reactor with a large plasma volume and strong magnetic fields poses a large number of secondary problems such as a large tritium inventory, co-existence of the hot plasma environment with magnetic coils maintained at cryogenic temperatures and the system of heat and alpha-particle removal from the burning plasma. All these problems will be addressed when ITER becomes operational in the 2030s. Inertial fusion operates in a pulsed regime, where the fuel is compressed and heated so fast that a significant fraction of the fuel is burnt off during the expansion time [2]. The quantity of energy released in the explosive process is limited by the mechanical, thermal and radiation resistance of the chamber walls, so it cannot be more than a few hundred MJ, equivalent to about a hundred kilograms of high explosive. Therefore, the fuel mass in a single ICF pellet is limited to just a few milligrams, and compression and burn take place on very small spatial and temporal scales of a few millimetres and several nanoseconds. Potentially, such small targets may form the basis for a compact reactor, assuming the plasma-facing materials and the driver would be able to withstand the corresponding thermal, mechanical and radiation loads. Compared to the magnetic fusion reactor, which will be operating in a stationary regime with the efficiency defined by a power balance, inertial fusion is a pulsed process and a positive energy balance has to be achieved separately in each explosion. This feature, together with a high reaction temperature, imposes special constraints on the inertial fusion process. The intrinsic energy yield in the fusion of the hydrogen isotopes deuterium and tritium, D + T → He4 + n, is the ratio of the total energy that could be released in the fusion reaction, E_f = (1/2) N ε_DT (where N = N_D + N_T is the total number of hydrogen ions, with N_D = N_T, and ε_DT = 17.6 MeV is the energy of the fusion products), to the thermal energy of the hydrogen plasma (including electrons), E_th = 3 N T_ig, at the temperature T_ig ≈ 9.5 keV (corresponding to 10% of the maximum reaction rate) needed for ignition of the fusion reaction. This intrinsic yield for DT fusion is about 300, which is a large number by itself but, unfortunately, insufficient for compensating the losses related to incomplete fuel burn (typically 30%), heating and compression efficiency (typically less than 10%), energy conversion efficiency (less than 40% for a thermal process) and the laser driver efficiency (today a few per cent). The fusion yield in the inertial process can be significantly increased by using the 'hot spot' ignition scheme. In fact, not all the fuel needs to be heated to the ignition temperature, but only a small fraction, called the hot spot, which initiates the burn in the cold fuel shell. The energy released in the hot spot should be sufficient to compensate losses and to further increase its temperature and to trigger a burn wave in the cold fuel. 
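The factor of about 300 quoted above, and the reason it is not sufficient on its own, can be checked directly from the numbers given in the text. The Python sketch below is only a back-of-the-envelope illustration: the efficiency values are the typical figures quoted in the paragraph above, and the burn-up estimate in the last lines, phi = rhoR/(rhoR + 7 g/cm^2), is a standard textbook approximation introduced here as an assumption, not a value taken from this paper.

```python
# Back-of-the-envelope check of the intrinsic DT yield discussed above.
# E_f = (1/2) N eps_DT (one reaction per D-T pair) and E_th = 3 N T_ig
# (ions plus electrons), so the ratio is eps_DT / (6 T_ig).
EPS_DT_KEV = 17.6e3    # energy per D-T reaction, 17.6 MeV expressed in keV
T_IG_KEV = 9.5         # ignition temperature quoted in the text, keV

intrinsic_yield = EPS_DT_KEV / (6.0 * T_IG_KEV)
print(f"intrinsic DT yield ~ {intrinsic_yield:.0f}")  # ~309, i.e. about 300

# Multiplying by the typical loss factors quoted in the text shows why ~300
# alone does not give a net energy gain for the whole plant.
burn_fraction = 0.30        # incomplete fuel burn (typically 30%)
coupling = 0.10             # heating and compression efficiency (< 10%)
thermal_conversion = 0.40   # thermal-to-electric conversion (< 40%)
driver_efficiency = 0.05    # laser driver efficiency (a few per cent, assumed 5%)
overall = intrinsic_yield * burn_fraction * coupling * thermal_conversion * driver_efficiency
print(f"overall plant gain without hot-spot ignition ~ {overall:.2f}")  # well below 1

# Standard textbook estimate (an assumption of this sketch, not from the paper)
# for the burned fraction at a fuel areal density rhoR:
def burn_up(rho_r_g_cm2: float, h_b_g_cm2: float = 7.0) -> float:
    return rho_r_g_cm2 / (rho_r_g_cm2 + h_b_g_cm2)

print(f"burn-up at rhoR = 3 g/cm^2 ~ {burn_up(3.0):.2f}")  # ~0.3, the 'typically 30%' figure
```

This is precisely the motivation for hot-spot ignition: only a small fraction of the fuel is raised to T_ig, so the thermal energy investment drops sharply, and the hot spot must release enough fusion energy to offset its losses and raise its own temperature further.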
This condition provides a criterion on the hot spot areal density and temperature similar to the Lawson criterion in magnetic fusion [2,3]. The remaining fuel needs only to be compressed, which requires much less energy than heating. The time of burning wave propagating into a cold fuel increases with its areal density, so one could burn up to one-third of the fuel loaded in the target. The hot spot ignition approach is the dominant paradigm of inertial fusion [3,4]. The mainstream conventional scheme consists of achieving both the goals-compression of the fuel and ignition of the hot spot-in a single process by appropriately designing the laser intensity temporal profile and the target structure. Alternative approaches have been also proposed where fuel compression and hot spot ignition are performed with two different laser pulses [5]. They could be more efficient but more demanding in terms of laser power and performance and have not yet been tested experimentally on a real scale because no appropriate laser facility is available. This short description shows the principal difficulties mounting on the way to inertial fusion. Although no 'show stopper' has been identified so far, realization of this approach is extremely challenging in terms of the precision of target fabrication, laser performance and focusing, synchronization of implosion and ignition and energy recuperation. Inertial fusion was the major driver for laser development and enormous progress in laser technology has been made since the invention of the laser 60 years ago. A unique laser facility capable of demonstrating inertial fusion with yield larger than one-National Ignition Facility (NIF)-has already been operating for more than 10 years. However, ignition has not yet been achieved [4] and the laser repetition rate is far below the level needed for sustainable energy production from fusion. Research in inertial fusion also faces difficulties in funding because of the duality of its applications for simulation of nuclear weapons and energy production. The major research projects and major laser facilities are supported by national defence programmes in the USA, France, UK, China and Russia. Academic research on inertial fusion energy has very limited support; it is not sufficiently coordinated and funded and there are only two multi-beam laser installations-Omega in the USA and Gekko-XII in Japan-of intermediate energy of 30 and 3.5 kJ, respectively. This situation certainly limits our capacity to develop original and efficient fusion schemes and test them in experiments. Moreover, the existing large-scale laser facilities, NIF in the USA, Laser MegaJoule (LMJ) in France and Shenguang III (SGIII) in China, are designed for the indirect drive implosion and are not well suited for testing direct drive schemes, which are more efficient and better adapted for energy production. In the indirect drive scheme, the laser radiation is transformed in soft X-ray radiation with a near thermal spectrum corresponding to effective temperature of 300-400 eV, which is then used for target ablation and implosion [3]. While X-ray radiation creates a higher ablation pressure and provides a more homogeneous implosion, it implies an efficiency reduction by a factor 5-10 and the increase of mass involved in the process of X-ray generation by a factor of a thousand. 
Although an inertial fusion reactor for energy production based on the indirect irradiation scheme LIFE [6] has been designed as a part of the NIF project, its viability in terms of efficiency and ecological compatibility is questionable. The project ended in 2014 after NIF failed to demonstrate ignition in the indirect drive scheme. By contrast, the direct drive approach promises a more efficient use of laser energy and a higher fusion energy gain by applying laser radiation directly on a target [7]. It is better suited for energy production as targets are much lighter but it requires a much better control of homogeneity of laser irradiation. This approach is being developed by scientists from the Laboratory for Laser Energetics (LLE) at the Rochester University hosting Omega and Omega-EP lasers [8]. In addition to these two mainstream ICF schemes there are several alternative approaches. The fast ignition approach aims at the creation of an ignition spark by irradiation of a compressed target by an intense short pulse petawatt laser. It is led by Japanese scientists from the Institute of Laser Engineering at the Osaka University hosting Gekko-XII and LFEX lasers [9]. Another approach is based on a cylindrical implosion of laser-preheated magnetized plasma in a Z-pinch geometry [10]. It is developed by American scientists from the Sandia laboratory and LLE [11]. Inertial fusion energy research in Europe Europe has several kJ-class laser facilities in the Czech Republic, Germany, France and UK, which are suited for studying processes of laser-plasma interaction, but do not allow for the performance of implosion and integrated fusion experiments. European scientists have strong experience in international collaboration and pioneered the organization of the first international project dedicated to inertial fusion energy, HiPER [12], but do not have use of a dedicated laser facility. The HiPER consortium brought together 26 laboratories from 10 countries and was supported by the European Research Council. Its mission was to go beyond ignition and provide the scientific, technological and economic basis for construction of a prototype of a commercial inertial fusion reactor. Unfortunately, despite interesting and promising results obtained by the consortium, the project was not extended beyond 2013 because of a lack of national support and competition with a more successful project for the construction of ultra-high intensity lasers: Extreme Laser Infrastructure (ELI). Since that time, only low-level support provided by the EuroFusion consortium and the International Atomic Energy Agency within the Coordinated Research Projects 2 has maintained the inertial fusion collaboration. HiPER consortium has selected shock ignition as a baseline scheme for energy production. This choice is motivated by a thoughtful analysis of multiple constraints and conditions. A shock ignition scheme first proposed by Shcherbakov [13] and further advanced by Betti et al. [14] is a direct-drive implosion approach using an additional strong shock for boosting the hot spot temperature and facilitating ignition. It allows a more efficient use of laser energy for ablation of the external part of a spherical shell target and compression of the fuel inside. 
As opposed to the conventional direct-drive scheme, here the shell is imploded at a lower velocity and at a lower entropy in the fuel, thus permitting to achieve higher fuel densities with a lower risk of excitation of the damaging hydrodynamic Rayleigh-Taylor instability when the shell is imploded. The temperature of the hot spot formed in the target centre when the shell collapses is, however, insufficient for ignition. The missed energy is transported to the hot spot with a strong converging shock, which is excited by a special laser pulse-spike, and its propagation is synchronized with the shell implosion. Calculations show [15] that the required laser spike power of 300-500 TW is within reach of present-day high-energy laser facilities, and thus this scheme could be tested in full scale on NIF or LMJ. There is, however, the caveat that these facilities are optimized for indirect drive, paying a significant penalty in drive efficiency and quality of implosion when operated in direct drive. A target for shock ignition is as simple as a target for the conventional direct-drive implosion. It consists of a double-layer shell filled with a DT gas. The inner shell of a solid deuterium-tritium mixture is covered by an ablator (plastic or carbon). It does not contain other heavy elements such as a gold cylinder (hohlraum) for conversion laser radiation in X-rays in the indirect-drive scheme, or a gold cone for guiding igniting laser pulse in the fast ignition scheme. This is a significant advantage for a power plant's operation as it produces much less activated debris and highspeed macro-particles that may damage the focusing optics and the reactor first wall. As such, the implosion phase in the shock ignition scheme can benefit from the knowledge already acquired in the conventional direct-drive approach. In this context, LLE scientists recently demonstrated an impressive improvement in direct-drive implosion manifested by tripling the fusion yield in Omega experiments [16]. (a) Physics of laser plasma interaction under shock ignition conditions Studies of shock ignition schemes are focused therefore on the characterization of strong shock excitation by an intense laser pulse and its propagation in the target [17]. Laser intensities needed for strong shock creation are one order of magnitude higher than in the conventional direct-drive approach. It is difficult to achieve them in the standard conditions with available laser systems and the physics of laser-plasma interaction under such conditions is largely unexplored. These processes are in the focus of our studies both experimentally and theoretically. In addition to collisional absorption of laser energy in plasma, nonlinear processes are playing an important role. Parametric instabilities, in particular stimulated Brillouin (SBS) and Raman (SRS) scattering and two plasmon decay (TPD), significantly affect the energy balance in the target and produce large amounts of energetic electrons. Experiments conducted in a planar and spherical geometry demonstrate generation of energetic electrons carrying up to 10-15% of laser energy. They are correlated mainly with SRS excitation and affect the shock strength and amplitude. Depending on their energy, hot electrons may depose energy downstream the shock front and increase its amplitude, or penetrate upstream the shock front, preheat the cold fuel and decrease the shock strength. The first option is beneficial for shock ignition, while the second one is deleterious. 
For the moment, experiments in a planar geometry did not succeed in generating shock with amplitude larger than 120 Mbar because of limited laser energy and large lateral losses [18]. By contrast, a strong shock excitation has been demonstrated on the Omega facility in a spherical geometry [19]: by using tightly focused laser beams without temporal smoothing the authors succeeded in exciting a shock with amplitude exceeding 300 Mbar on the surface of a solid spherical target. When converging to the centre, it produced pressures largely exceeding a Gbar level. This experiment needs to be extended to a megajoule laser energy. According to theoretical estimates, the shock pressure enhancement is related to the hot electrons generated by SRS and depositing their energy downstream the shock [20]. If synchronized, such a strong shock should be sufficient for hot spot ignition. The shock ignition approach has required significant improvements in the theoretical model of laser plasma interaction. A description of parametric instabilities and hot electron transport is out of the scope of standard hydrodynamic models of ICF. A full kinetic and electromagnetic description of laser-plasma interaction requires resolution of microscopic spatial and temporal scales, which are incompatible with a macroscopic hydrodynamic model. A simplified treatment of parametric instabilities is possible if the laser intensity in plasma is known. Several methods for evaluation of laser intensity in plasma have been developed accounting for convergence or divergence of neighbouring optical rays [21], representing laser beams as an ensemble of Gaussian (thick) rays [22] or by using an inverse ray-tracing technique [23]. While these techniques are still under development, they are already implemented in three-dimensional hydrodynamic codes and used for interpretation of experiments [24]. The latter approach shows a significant improvement in description of the cross beam energy transfer [25]. Other processes such as temporal and spatial laser beam smoothing in plasma, resonance absorption, excitation of SRS and TPD instabilities and generation of hot electrons can be also accounted for [26]. These developments are important not only for shock ignition but also for all other ICF schemes including direct and indirect drive. Another important development is related to modelling of energy transport in ICF plasmas. Both electron and radiative transport in fusion plasmas are non-local, they cannot be described in a standard diffusion approximation and more accurate models are needed. The multigroup approach for the photon transport is well-developed and implemented in radiation hydrodynamic codes. The multi-group approach for the electron transport is more complicated as it has to be treated self-consistently with electric and magnetic fields in plasma. An efficient multigroup electron transport model proposed by French scientists [27] has been tested extensively by comparison with several different kinetic Fokker-Planck codes and demonstrated quite good accuracy [28]. It is of interest for both inertial and magnetic confinement fusion. This model is implemented in several radiation hydrodynamic codes in Europe and the USA. It is, however, limited to the cases of thermal transport without magnetic field and does not account for electrons produced in parametric instabilities and resonance absorption. A more general model based on solution of a kinetic equation with a simplified collision integral is under development [29]. 
Potentially, it can be incorporated in radiation hydrodynamic codes and provide a more general framework for transport of energetic electrons produced by different sources and accounting for self-consistent electric and magnetic fields. (b) Physics of inertial fusion beyond ignition While the major activities related to inertial fusion in Europe are focused on the laser-plasma interaction physics and achieving ignition, studies of reactor physics are also in the scope of our interests. The HiPER project aimed at the demonstration of fusion energy production assuming that ignition will be achieved on NIF shortly. This project gave a strong impulse for the reactor design and material studies for the inertial fusion. A two-step strategy has been proposed: first, construction of an experimental 'test' reactor, which will be operating in a safe burst mode of several tens or hundreds of consecutive shots with a low yield and permitting us to test the integration of supplying, control and energy recovery systems and to address the material technology such as final optics, first wall performance and lifetime, tritium breeding, debris handling and target manufacturing. The second step will be construction of a 'prototype' power plant for development of a competitive energy production technology. 3 After the end of the preparatory stage of the HiPER project the work on the reactor design has been stopped, unfortunately, but research on the materials for inertial fusion continues, and it is further supported by the IAEA within the Coordinated Research Projects (see footnote 2). Several important results concerning the plasma facing components, neutron irradiation assessment and protection of final optics have been obtained. As a plasma facing material of the reactor first wall, tungsten has been considered. It has the best proprieties with respect to thermo-mechanical stresses and hydrogen retention. However, it was demonstrated that a coarsegrained tungsten is not sufficiently resistant to the radiation loads [30,31]. It cannot withstand more than 1000 laser shots with a fusion energy release of 250 MJ. Cracks appear at the surface of a sample manifesting fatigue and loss of structural stability. Much more promising properties are demonstrated by a nano-structured tungsten. Its performance has been studied with multiscale numerical simulations and experiments showing that neutron-induced vacancies are readily attached to the grain boundaries and effectively annealed with interstitials at temperatures about 600 K [31,32]. Another issue of high importance is survival of the final optics, which is directly exposed to the particle and radiation fluxes. Studies of the silica performance under the fast ion irradiation show that swift ions make deep tracks in the material, provoke bond breaking and massive material disorder [33]. No method for mitigation of the ion damage has been proposed so far. A system of electric and magnetic fields protecting the optics from charge particles might be considered. A danger of the neutron direct irradiation of optics consists in creation of colour centres, which absorb laser light and may dramatically reduce the lens transmission. The proposed mitigation method consists of annealing the colour centres by maintaining the optics at a sufficiently high temperature above 800-900 K. 
However, lenses need to be brought to the working temperature before reactor operation, and temperature homogeneity needs to be maintained with a precision of ±20 K, which presents a serious technical problem [34]. Perspectives This short description of ongoing research shows serious and partially unresolved issues on the way to ignition and from the demonstration of ignition of fusion reactions to a commercially viable inertial fusion power plant. However, during the last 10 years many interesting and promising results have been obtained: the physics related to laser plasma interactions and target implosion is better modelled and verified in experiments. Significant progress has been made in materials science. However, the scale of activities in Europe relevant to inertial fusion is rapidly decreasing. Fewer people are working in this domain, and fewer papers are published in journals and presented at conferences. Apparently, the interests of the European community are shifting to more fundamental neighbouring problems of high-energy density science such as laboratory astrophysics, high-field physics and laser-driven particle accelerators. This decline in research activities in inertial fusion is a result of general European policy with respect to laser research and technology development. The building of multipetawatt laser facilities and X-ray free electron lasers in several European countries is a strong long-term investment in fundamental science, but it is also a strong blow to inertial fusion research. Europe has never had any laser facility dedicated to inertial fusion; we do not have any academic laser system with energy larger than 1 kJ and capable of performing implosion experiments. There are only two multi-beam laser systems, Orion in the UK and LMJ in France, but they are both defence-funded with very limited access for the academic community. There is a serious risk that in a few years all academic research in inertial fusion will move outside Europe and the knowledge will be lost. This situation is, however, in evident contradiction with the growing understanding in society that safe and abundant nuclear energy and, in particular, fusion energy is indispensable for the sustainable evolution of mankind. It would be a big mistake to invest all funds in magnetic fusion research and abandon all other options for fusion energy. This societal awareness is manifested by the surprising appearance of more than 20 private companies investing in fusion research. While each of them investigates a different path to fusion energy, the common denominator is the quest for a compact and commercially attractive fusion reactor that can be operational in the next 15-20 years. These companies perform important work by facilitating links between research organizations and industry and benefit from high-level spin-offs offered by fusion technology development. The first European laser fusion company, 'Marvel Fusion', was created last year. It aims to build an experimental laser facility and develop a prototype fusion power plant based on direct drive implosion and fuel ignition driven by fast ions. The increasing activity of these private companies demonstrates that the current level of academic research in inertial fusion energy is insufficient and does not correspond to the needs of society. 
It is, however, evident that private companies alone are not able to address the enormous complexity of fusion energy technology, which is not only an outstanding technical problem but also an unresolved scientific problem. At the present stage of knowledge, inertial fusion is a valid option, which may provide a technically viable and commercially attractive solution for an efficient and a rather compact reactor, but it needs more public attention and support. A government and private-supported, well-coordinated international programme and a dedicated modern laser facility are needed to boost this research in Europe. The recent advancements described above and the high level of European scientists involved justify such a coordinated European research programme. Since the failure of the National Ignition Campaign in the USA in 2013, a large number of experiments have been conducted on NIF and other facilities around the world, addressing salient issues of the laser-plasma interaction physics and implosion hydrodynamics. Several alternative implosion schemes for indirect and direct drive have been tested and promising results have been obtained [35]. Significant improvements in the theoretical toolbox and numerical models provide a more accurate and predictable guide in experiments. A bright example of a dynamic coordination between the theory and experiments is a series of integrated experiments in the direct-drive geometry at the Omega facility [16]. The use of an iterative approach between numerical simulations and experiments enabled the improvement of the fuel areal density and the neutron yield by a factor of 3 with the same laser energy. When scaled to the NIF energy, this result corresponds to a higher neutron yield than the one achieved in the best indirect drive shots. This evident success in understanding of the physics of inertial fusion is accompanied with a significant progress in the target fabrication and laser technology. There are several academic laboratories and private companies in Europe and the UK that are developing technologies for mass target fabrication and delivery and are able to produce rather complicated targets at acceptable prices. Moreover, a new generation of high power lasers operates with pulse energy of a few joules and a repetition rate up to a few Hz, compatible with what is expected in inertial fusion reactors. The next step consists of increasing the laser pulse energy to a kJ level at a high repetition rate. This step will be attained shortly at the ELI Beamline facility, where laser pulses at a kJ energy, ns pulse duration and with a repetition rate of a few minutes will be available for experiments at the end of this year [36]. Such a kJ ns −1 module could be a building block for a multi-beam laser facility with total energy of a few hundred kJ fully dedicated to the inertial fusion programme. Transition to experiments at high repetition rates poses new challenges for diagnostic performance, data storage and manipulation and debris management that have very much in common with the problems of operation of an inertial fusion reactor. Therefore, the scientific and engineering aspects could be addressed jointly and most efficiently within a common international project aiming at commercial energy production and promising many high-level short-time spin-offs. There are dedicated ICF programmes in the USA and China. Europe is in evident need of such a programme and has a strong scientific and technical background in the domain. 
That is demonstrated by recent research results and the establishment of private joint ventures in inertial fusion research. However, companies alone cannot shoulder the whole load of ICF research. The private efforts need to be coordinated with a publicly funded ICF research programme in the European Union. Theoretical, experimental and engineering research have to be supported by construction of an ICF-dedicated modern direct drive laser facility capable of testing innovative ideas in physics and technology and technical solutions. Such a facility on the energy scale of a hundred kJ based on state-of-the-art laser technology and current advanced knowledge of laserplasma and capsule implosion physics can be constructed within the next 10 years and will be the major step on the way to commercial fusion energy production. It will put the European Union at the forefront of research and technology in fusion energy. Data accessibility. This article does not contain any additional data. Authors' contributions. All authors contributed to the writing and revision of the manuscript. Competing interests. I declare I have no competing interests. Funding. I received no funding for this study.
Optimization of Biodiesel Production Conditions Using Chlorella vulgaris Microalgae Cultivated in Different Culture Medium: Statistical Analysis

The effect of the cultivation medium on the yield of biodiesel produced by in-situ transesterification of Chlorella vulgaris microalgae was assessed. First, the algae were cultivated in Moh202, sterilized wastewater (SW) and unsterilized wastewater (USW) media. Around ten days were found to be sufficient to reach the maximum growth of the microalgae, while the maximum and minimum growth were detected in the Moh202 and SW media, respectively. Before assessing the effect of the cultivation medium on the biodiesel content, the transesterification reaction conditions, namely the catalyst (NaOH) concentration, the reaction time and the amount of methanol, were investigated for algae cultivated in the Moh202 medium using a full factorial design as the statistical methodology. Within the studied range, the catalyst concentration and the reaction time were the parameters with the strongest effect on the biodiesel yield. Moreover, the interactions of the reaction time with the catalyst concentration and with the amount of methanol were also important. In short, the reaction time and its interaction with the catalyst concentration had a positive effect, while the catalyst concentration, the amount of methanol and the interaction of the reaction time with the amount of methanol had a negative impact on the biodiesel yield. The yields obtained from algae cultivated in the Moh202, sterilized and unsterilized wastewater media at the optimum conditions of 1 wt.% of catalyst, 9 mL methanol/g biomass and a reaction time of 4 hours were 95.5%, 83.9% and 75.5%, respectively. Although a difference was observed between the biodiesel yields of Chlorella vulgaris microalgae cultivated in unsterilized and in sterilized wastewater, wastewater can be used as a medium for cultivating algae for biodiesel production and thereby reduce the production costs.

INTRODUCTION The widespread demand for energy due to population growth and industrialization leads to many problems, such as environmental pollution and climate change. Renewable energy sources are among the environmentally friendly solutions proposed to reduce the pressure on energy supplies and the environmental crisis. Biodiesel, as a renewable, biodegradable and clean fuel, is considered one of the best alternative fuels [1,2]. Biodiesel is a mixture of fatty acid methyl esters (FAMEs), which are produced by the esterification of free fatty acids (FFAs) and the transesterification of triglycerides, respectively [3][4][5]. Biodiesel is mainly derived from renewable biological sources such as vegetable oils and animal fats, which have disadvantages such as high cost and competition with human food supplies [6,7]. Microalgae are well known for their rapid growth, short production cycle, high rate of oil production, and little or no competition with food production, and they have therefore been widely studied for biodiesel production [8][9][10]. Chlorella sp., Dunaliella salina, Botryococcus braunii and Nannochloropsis sp. are some of the algae species suitable for biodiesel production processes due to their high lipid content [11,12]. Depending on many parameters, such as the algae species and the growth conditions, the dry weight of algae can contain more than 80% lipids [13,14]. The selection of the algae species is therefore important for the rate of lipid production, and Chlorella vulgaris presents high potential compared to other species [15,16].
Talebi et al. [17] analyzed biomass productivity and lipid productivity as criteria for estimating the biodiesel potential of different microalgae species cultivated in Moh202. They reported that Chlorella vulgaris had a high biomass productivity (0.46 g L-1 day-1) and volumetric lipid productivity (79.08 mg L-1 day-1). Another important aspect of biodiesel production from microalgae is the lipid extraction method, which is an essential step towards economical biodiesel production [18][19][20]. Although physicochemical techniques such as ultrasonic treatment, microwave, autoclaving, bead-beating and sonication are commonly used for the disruption of microalgal cells, this step is considered uneconomical [21][22][23]. Therefore, researchers have focused on the direct or in-situ production of biodiesel from microalgae, in which the complex oil extraction stage is eliminated. Tsigie et al. [24] reported that in-situ alkali-catalyzed transesterification of dry algal biomass could result in a higher conversion (77.6%) in a shorter time than when an acid catalyst was used. Nautiyal et al. [25] studied biodiesel production from the algae species Spirulina, Chlorella and pond water algae. These microalgae were cultivated in BG-11 medium and the extracted oil was subjected to the transesterification reaction. They also examined simultaneous extraction and transesterification using different solvents; the maximum biodiesel yield was obtained using hexane as the solvent. In addition, Chlorella sp. had a higher growth rate and cell dry weight than the other two algae species. These researchers also concluded that the biodiesel production efficiency of the one-step process (simultaneous oil extraction and transesterification) was higher than that of the process with two separate stages. One of the limitations of microalgae cultivation is the availability of nutrient supplies at the industrial scale [26,27]. As is well known, a microalgal culture system can play a valuable role in wastewater treatment, since microalgae are able to take up and remove nutrients, heavy metals, organic matter and pathogens from wastewater [28]. Therefore, wastewater can be utilized as an available and cost-effective medium for the cultivation of microalgae [29,30]. Feng et al. [31] used Chlorella vulgaris microalgae for the treatment of sewage and reported a high lipid content (42%) and lipid productivity (147 mg L-1 d-1), while the nutrients supporting microalgal growth (COD and NH4+) were removed from the culture environment at 86% and 97%, respectively. Lim et al. [32] have shown that C. vulgaris is able to grow in textile wastewater (TW). The high rate algal pond (HRAP) system used in that study for the bioremediation of TW removed up to 50% of the color besides reducing pollutants such as COD and nutrient ions. Yuan et al. [33] assessed the cultivation of Chlorella zofingiensis in piggery wastewater for combined wastewater treatment and biodiesel production. Pollutants in autoclaved wastewater and NaClO-pretreated wastewater were utilized by Chlorella zofingiensis cultivated indoors. The FAME yield of Chlorella zofingiensis grown in the autoclaved medium and the NaClO-pretreated medium reached 10.18% and 10.15% of dry weight, respectively. In this study, biodiesel production from Chlorella vulgaris algae cultivated in three culture media was investigated via the in-situ transesterification reaction (without oil extraction). Chlorella vulgaris microalgae were cultured in Moh202, sterilized wastewater and unsterilized wastewater media and the growth rate was evaluated.
Then, the transesterification was performed to investigate the effect of the cultivation medium on the biodiesel yield. Before that, the in-situ transesterification reaction conditions were optimized using a two-level full factorial design, in which the reaction time, the amount of methanol and the catalyst concentration were selected as independent variables and the yield was the response. Microalgae cultivation. Chlorella vulgaris microalgae were supplied by the Karaj Biotechnology Center. At first, the microalgae were cultivated in the Moh202 medium as a blank sample, against which the ability of the other media (sterilized wastewater (SW) and unsterilized wastewater (USW)) to support the cultivation of Chlorella vulgaris was compared. The Moh202 medium contains the following compounds, provided by Merck (Darmstadt, Germany): 1.25 g/L NaHCO3, 0.2 g/L KH2PO4, 0.12 g/L vitamin B1, 0.1 g/L each of KNO3, K2HPO4, MgSO4 and vitamin B12, 0.03 g/L each of CaCl2 and NaCl, and 1 g/L of Hunter's trace elements. After adjusting the pH of the medium and sterilizing it at 121 °C for 15 min, the microalgae were mixed with the Moh202 medium at a volume ratio of 1:9. Oxygen was supplied to the medium by an aeration pump at a flow rate of 1.5 L/min after passing through a filter. Fluorescent light (3000 lux on a 16:8 hours light to dark cycle) was used to supply a constant light intensity for the algal culture, and the temperature was set at 25±3 °C [15]. Figure S1 in the supplementary material depicts the cultivation medium. Wastewater was collected from the inlet of the Parkandabad wastewater treatment plant in Mashhad and was filtered to eliminate solid particles. It was used as a culture medium for algae growth. For this purpose, it was split into two parts, one of which was utilized in the growth step without further treatment, while the other part was sterilized as described above (121 °C for 15 min). Twenty millilitres of the blank sample cultivated in Moh202 was mixed with 200 mL of wastewater, and sodium bicarbonate was added as a carbon source. The mixture was then aerated and exposed to fluorescent light in the same way as the cultures in the Moh202 medium. The growth of the Chlorella vulgaris microalgae was followed daily by optical density measurements using spectrophotometry (UNICO2100, UV-VIS2100, USA) at 680 nm, in order to assess the relationship between the biomass concentration and the optical density. After the algal cells had reached maximum growth, they were separated by centrifugation at 10000 rpm for 10 min and washed with distilled water to remove the remaining medium. Finally, they were dried in an oven at 45 °C for 24 h for further studies. Figure S2 in the supplementary material illustrates the dried microalgae. In-situ transesterification reaction. The in-situ transesterification reaction was carried out in a 100 mL glass reactor connected to a condenser, into which 10 g of microalgae and the desired amounts of catalyst (NaOH) and alcohol (methanol), determined by the statistical analysis, were loaded. The reactions were performed near the boiling point of methanol (65±3 °C), stabilized by an oil bath, for the desired duration. After the reaction was completed, the mixture was discharged into a separating funnel and a small amount of heptane was added to facilitate the separation process. The upper layer containing the biodiesel was separated from the glycerol and residual microalgal biomass layers. It was then washed twice with warm deionized water to eliminate the alkaline catalyst.
Finally, the biodiesel was purified by heating to evaporate the heptane, methanol and water. The biodiesel production process, along with the biodiesel obtained from the Chlorella vulgaris microalgae, is illustrated in Figure S3 in the supplementary material. The FAME composition of the microalgal biodiesel was determined by a gas chromatograph (GC, Agilent 7890A, USA) equipped with an Agilent 5975C mass spectrometer detector and an HP-5 column (30 m × 0.25 mm × 0.25 μm). Helium at a flow rate of 1 mL/min was used as the carrier gas, and the temperatures of the detector and injector were set at 250 °C [34]. Design of experiments. An experimental design was used to assess the effects and interactions of the independent variables (reaction time (A), methanol amount (B) and catalyst concentration (C)) in the biodiesel production process from Chlorella vulgaris microalgae during the in-situ transesterification reaction. A full factorial design with the variables at two levels, as listed in Table 1, was used to specify the amount of each material in the reaction and the required reaction time. Moreover, two center points were selected in the middle of the range of all variables so that the actual results could be fitted accurately by the predicted equation. Design Expert software (version 6.0.2) was used to analyze the data, derive an appropriate model equation, evaluate the effect of the interactions between the independent variables on the transesterification reaction and obtain the optimum conditions for maximum efficiency. Table 1. Independent variables and their levels: methanol amount (M), factor B, in mL/g biomass, levels 9 and 12; catalyst concentration (Cat.), factor C, in g/g biomass, levels 1 and 3. RESULTS AND DISCUSSION. Algae growth rate. The duration of Chlorella vulgaris microalgae growth in the three cultivation media was determined by examining the growth pattern through daily measurements of the optical density, as shown in Figure 1(a). As is well known, the growth of microalgae is generally characterized by five stages [35]. The first stage, called the lag or induction phase, is attributed to the physiological adaptation of the cell metabolism to growth; it lasted two days for the USW and Moh202 culture media and three days for the SW culture medium. In the second stage, the growth of the microalgae accelerated, as shown by the slope of the curve between the second/third day and the tenth day for the Moh202 culture medium and the ninth day for the SW and USW culture media, which reflects the increasing concentration of microalgae in the culture medium. At the end of the exponential phase, the growth rate decreased because reduced nutrients, light, pH, carbon dioxide or other physical and chemical factors began to limit growth [36]. This stage lasted two days for the Moh202 and USW culture media and three days for the SW culture medium. After the maximum growth was reached, the growth remained constant in the so-called stationary phase (measured only for the Moh202 culture medium, where it lasted three days). In this stage the limiting factors and the growth rate are balanced, which results in a relatively constant cell density. A reduction in the optical density was observed in the fifth stage, which corresponds to the depletion of nutrients as well as the increasing concentration of microalgae and the resulting lack of light penetration [37]. Therefore, eleven days appear to be appropriate for the Moh202 and USW culture media and twelve days for the SW culture medium to obtain the maximum microalgae growth. The growth of an algal population is often characterized by its specific growth rate. The specific growth rate of the Chlorella vulgaris microalgae can be determined according to Equation (1) [38]: μ = ln(N(t)/N(0)) / t, (1) where μ is the specific growth rate, N(t) and N(0) are the algae densities (cells L-1) at time t and at t = 0, respectively, and t is the cultivation time. As is well known, the specific growth rate should be calculated only in the exponential phase of growth.
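As a simple illustration of Equation (1), the short Python sketch below estimates the specific growth rate from two cell densities sampled within the exponential phase; the density values and time span are hypothetical placeholders, not measurements from this study.

```python
import math

# Minimal sketch of Equation (1): specific growth rate from two cell densities
# sampled within the exponential growth phase. All values are illustrative only.

def specific_growth_rate(n_t, n_0, t_days):
    """mu = ln(N(t)/N(0)) / t, with t the elapsed cultivation time in days."""
    return math.log(n_t / n_0) / t_days

# Hypothetical densities (cells per litre) five days apart in the exponential phase:
mu = specific_growth_rate(n_t=8.0e9, n_0=1.0e9, t_days=5.0)
print(f"specific growth rate: {mu:.2f} per day")
```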
The specific growth rate of the Chlorella vulgaris microalgae in the three media is shown in Figure 1(b). It lies in the range of 0.15-0.60, 0.23-0.56 and 0.29-0.77 for the Moh202, USW and SW culture media, respectively. The growth rate in the USW culture medium is the lowest, which can be related to the presence of undesirable components in the culture medium, the high levels of toxic substances and the low light availability arising from self-shading at high algal density and from other particles present in the USW, all of which can affect the growth of the microalgae [39]. On the other hand, the specific growth rate of the microalgae was significantly increased by cultivation in the SW culture medium because of the good availability of nutrients [16,40]. Statistical assessment of the transesterification reaction conditions. The experimental design matrix and the yields of biodiesel produced from Chlorella vulgaris microalgae cultivated in the Moh202 medium are presented in Table 2. The results revealed that the highest conversion (90.7%) was obtained at the conditions of 2 h, a methanol-to-biomass ratio of 12 mL/g and 1 wt.% of catalyst. The factors that effectively influence the biodiesel production were first selected in order to analyze the results and obtain an appropriate model. Figure 2 shows the normal probability plot of the effect of each factor; a factor that lies far from the line is more important and has a greater impact on the response. The AB interaction was the term with the greatest impact on the biodiesel production efficiency. Subsequently, the factors C (catalyst) and A (reaction time) and their interaction (AC) were the most important parameters to be used in the model for predicting the biodiesel yield, whereas the B, BC and ABC terms were insignificant. Nevertheless, because of the influence of the AB interaction on the model, the B term was also included in the model to reduce the error associated with the methanol parameter. Table 3 shows the ANOVA results, which determine the significance of the parameters based on their p-values. In agreement with Figure 2, the A and C factors, as well as the AB and AC interactions, have major effects on the response, as indicated by their low p-values (below 0.05), i.e. they are significant at the 95% confidence level. The curvature F-value, which measures the difference between the average of the center points and the average of the factorial points in the design space, is not significant; this confirms that a linear model is sufficient to describe the response. Moreover, the lack-of-fit F-value is not significant relative to the pure error, which supports the proposed model. The accuracy of the model is illustrated in Figure 3, which compares the predicted and actual values; the data predicted by the model are in good agreement with the experimental data.
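To make the factorial analysis described above more concrete, the following Python sketch builds the 2^3 full factorial design in coded units for A (reaction time), B (methanol amount) and C (catalyst concentration) and fits a linear model containing the AB and AC interactions by least squares. The yield values are invented placeholders, not the data of Table 2; the actual study performed the equivalent analysis, including the ANOVA and the curvature test based on the two center points, in Design Expert.

```python
import itertools
import numpy as np

# Sketch: two-level full factorial design (2^3) with coded factors
# A = reaction time, B = methanol amount, C = catalyst concentration,
# plus a linear model with the AB and AC interactions fitted by least squares.
# The yield column below is a hypothetical placeholder, not the data of Table 2.

runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)  # coded A, B, C
yields = np.array([70.0, 85.0, 72.0, 78.0, 62.0, 75.0, 60.0, 68.0])       # fake responses

A, B, C = runs[:, 0], runs[:, 1], runs[:, 2]
X = np.column_stack([np.ones_like(A), A, B, C, A * B, A * C])  # model terms: 1, A, B, C, AB, AC
coef, *_ = np.linalg.lstsq(X, yields, rcond=None)

for name, value in zip(["intercept", "A", "B", "C", "AB", "AC"], coef):
    print(f"{name:9s}: {value:+.2f}")

# With this coding, the intercept is the yield predicted at the center point (A = B = C = 0):
print(f"predicted center-point yield: {coef[0]:.2f}")
```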
Effect of the independent factors on the conversion of microalgae to biodiesel. The effect of the independent parameters on biodiesel production from Chlorella vulgaris microalgae cultivated in the Moh202 medium is shown in Figure 4. To assess each variable, the other parameters were set at the center point. The effect of the reaction time on the yield (Figure 4(a)) shows that a higher conversion can be obtained at a longer reaction time, because the reactants have more time to react [2,41]. Figure 4(b) shows the effect of the methanol amount on the conversion. As indicated by the model, methanol has no significant effect on the biodiesel yield, although the amount of methanol generally has a positive effect on the transesterification reaction [42]. Probably because raw algae were used and the amount of methanol was chosen based on the weight of the algae, the amount of methanol relative to the weight of oil actually present in the algae was higher than required. For this reason, the methanol amount had an insignificant impact on the biodiesel yield. Figure 4(c) illustrates the effect of the catalyst on the biodiesel yield and confirms its negative effect: the conversion decreases with increasing catalyst concentration in the reaction medium. A high amount of catalyst increases the rate of the saponification reaction and prevents the conversion of FFAs and triglycerides to biodiesel [43], thereby decreasing the reaction yield. Effect of the interaction of parameters on the conversion of microalgae. The interaction of the reaction time with the amount of methanol and with the catalyst concentration (while the remaining parameter is kept constant at the center point) is shown in Figure 5 as three-dimensional plots. The interaction of the amount of methanol and the reaction time, shown in Figure 5(a), indicates that although the yield increased when each parameter was increased at the lowest level of the other parameter, the opposite behavior was observed at the highest level of each parameter. In other words, beyond the center point of both variables, increasing either parameter at a constant value of the other reduced the biodiesel yield. This can be explained by the fact that, with increasing reaction time, the equilibrium point is reached and the reaction proceeds in the backward direction, while an excessive amount of methanol causes problems in the separation of the glycerol phase from the biodiesel; both effects can reduce the efficiency [2]. (Figure 5. Three-dimensional plots of the influence of (a) reaction time and methanol amount and (b) reaction time and catalyst concentration, as interacting parameters, on the biodiesel production.) Optimization of the in-situ transesterification reaction conditions. Optimization of the reaction conditions for the in-situ transesterification of Chlorella vulgaris microalgae was performed with the Design Expert 6.0.2 software. After evaluating the conditions suggested to obtain the maximum FAME content, with the other parameters selected within the examined range, the operating conditions of 1 wt.% of catalyst, a methanol-to-biomass ratio of 9 mL/g and a reaction time of 3.99 h were chosen. The predicted biodiesel yield of 95.49% is in good agreement with the experimental value (95.5%). The GC-MS chromatogram and the composition of the produced biodiesel are depicted in Figure 6. The produced biodiesel contained oleic, palmitic, linoleic and stearic acids as the major components. Assessment of the effect of the microalgae cultivation medium on the FAME content. Chlorella vulgaris microalgae cultivated in the unsterilized and sterilized wastewater media were dried in an oven and used in the in-situ transesterification reaction at the optimum conditions. The FAME content of the biodiesel produced from Chlorella vulgaris microalgae cultivated in the three cultivation media is illustrated in Figure 7.
The FAME content is higher for the microalgae cultivated in the SW medium, which could be due to the fact that the SW medium supports the best growth of the Chlorella vulgaris species, since this alga grows better under sterile conditions. Under USW conditions, in contrast, the lipid content may have decreased because other species can grow in addition to the Chlorella algae, leading to poorer growth of Chlorella sp. Moreover, the biodiesel production efficiency decreased in the USW medium. It must be mentioned that other bacteria present in the wastewater grow simultaneously with the algae, so the measured biomass concentration is the sum of the algae and any surviving bacteria in the wastewater. Therefore, a smaller amount of microalgae is present in the mixture and these bacteria can suppress the growth of the microalgae [44]. The results of this study are in accordance with those reported by Li et al. (2011). In that work, the FAME yield in autoclaved wastewater was similar to that in tris-acetate-phosphate (TAP) medium, and the most abundant fatty acids obtained from the algae were octadecadienoic acid (C18:2) and hexadecanoic acid (C16:0). In contrast to the algae cultivated in TAP medium and autoclaved wastewater, octadecatrienoic acid (C18:3) (18.79% of total FAME) and hexadecanoic acid (C16:0) (16.10% of total FAME) accounted for the majority of the fatty acids of the algae cultivated in raw wastewater. Finally, based on the biodiesel production efficiency obtained with the SW medium, it can be concluded that wastewater can be used as a medium for algae growth and biodiesel production, with a significant decrease in the biodiesel production costs. CONCLUSION. In this study, the amount of biodiesel produced from the oils of Chlorella vulgaris microalgae cultivated in three cultivation media was studied using the in-situ transesterification reaction. The Moh202, USW and SW media were used for cultivating the microalgae. The transesterification reaction conditions were first optimized using the microalgae cultivated in the Moh202 medium, with three variables, namely the reaction time, the methanol amount and the catalyst concentration, considered as independent parameters. A full factorial design was utilized to accurately evaluate the variables and the effect of their interactions on the FAME content of the produced biodiesel. The results showed that the catalyst concentration, the reaction time, and the interactions of the reaction time with the methanol and catalyst amounts have a significant effect on the conversion of the microalgae. The proposed model showed excellent accuracy in predicting the conversion of the microalgae in the in-situ transesterification reaction, and optimum conditions of 1 wt.% of catalyst, 9 mL of methanol per gram of biomass and a duration of 4 h were obtained. Cultivation of the microalgae in the SW and USW media showed that the specific growth rate in the SW medium is higher than in the other media and that the biodiesel produced from the microalgae cultivated in this medium has a higher FAME content than that obtained from the USW medium. Although the yield of biodiesel produced from microalgae cultivated in the SW medium was slightly lower than that from the Moh202 medium, this medium can be utilized as a high-potential and cost-effective medium to reduce the cost of the biodiesel production process.
Moreover, it must be mentioned that microalgae have a high ability to remove pollutants from wastewater, which addresses an important environmental issue.
The magnetic asymmetry effect in geometrically asymmetric capacitively coupled radio frequency discharges operated in Ar/O2

Previous studies in low pressure magnetized capacitively coupled radio frequency (RF) plasmas operated in argon with optimized geometric reactor symmetry have shown that the magnetic asymmetry effect (MAE) allows the particle flux energy distributions at the electrodes, the plasma symmetry, and the DC self-bias voltage to be controlled by tuning the magnetron-like magnetic field adjacent to one electrode (Oberberg et al 2019 Plasma Sources Sci. Technol. 28 115021; Oberberg et al 2018 Plasma Sources Sci. Technol. 27 105018). In this way, non-linear electron resonance heating (NERH) induced via the self-excitation of the plasma series resonance (PSR) was also found to be controllable. Such plasma sources are frequently used for reactive RF magnetron sputtering, but the discharge conditions used for such applications are significantly different from those studied previously. A high DC self-bias voltage (generated via a geometric reactor asymmetry) is required to realize a sufficiently high ion bombardment energy at the target electrode, and a reactive gas must be added to deposit ceramic compound layers. Thus, in this work the MAE is investigated experimentally in a geometrically asymmetric capacitively coupled RF discharge driven at 13.56 MHz and operated in mixtures of argon and oxygen. The DC self-bias, the symmetry parameter, the time resolved RF current, the plasma density, and the mean ion energy at the grounded electrode are measured as a function of the driving voltage amplitude and the magnetic field at the powered electrode. Results obtained in pure argon discharges are compared to measurements performed in argon with reactive gas admixture. The results reveal a dominance of the geometrical over the magnetic asymmetry. The DC self-bias voltage as well as the symmetry parameter are found to be only weakly influenced by a change of the magnetic field compared to previous results obtained in a geometrically more symmetric reactor. Nevertheless, the magnetic field is found to provide the opportunity to control NERH magnetically also in geometrically asymmetric reactors. Adding oxygen does not alter these discharge properties significantly compared to a pure argon discharge.
Introduction. The deposition of thin films is a key procedure for many applications in modern industry [15,40]. A wide range of different technological fields relies on it: for instance, high-quality thin films are needed for optical components, microelectronics and medical applications [29,41,56,66]. An important and commonly used thin film deposition process is physical vapor deposition (PVD) [53]. A solid target is put in contact with a plasma at low pressure. Highly energetic ions, e.g. argon ions, bombard this target and are able to break the surface bonds to sputter atoms from the solid. These particles can then condense on other surfaces in contact with the plasma, for example on a substrate. Commonly used processes are (pulsed) DC or mid-frequency magnetron plasmas or high power pulsed magnetron sputtering (HPPMS) [7,8,31,35]. HPPMS is a fairly new technique that provides lower sputter rates in comparison to classical magnetron sputtering, a high degree of ionization of sputtered atoms, self-sputtering, and gas rarefaction. It is characterized by a strongly non-linear dependence of the sputter yield on the target voltage [7,10,19,42,55]. Magnetron plasmas suffer from a poor degree of target material utilization, since the sputtering mainly takes place underneath the magnetized torus region adjacent to the target, i.e. within the racetrack. The use of additional reactive gas admixtures provides the opportunity to deposit ceramic compound layers using a metallic target surface. For instance, oxygen in the gas phase reacts with sputtered aluminum and forms aluminum oxide films at the substrate surface. The production of high-quality thin coatings requires precise control of reactive sputter processes [79][80][81]. The reactive gas also interacts with the target, and the formation of ceramic surface layers on the target can lead to arcing, especially in DC magnetrons. Moreover, non-linear hysteresis effects are known to affect surface characteristics and to lead to instabilities [3,4,7,31,50,67]. Pulsing suppresses arcing and still provides high deposition rates [2,6,8,31]. Using higher frequencies, e.g. a radio frequency (RF) of 13.56 MHz, also avoids arcing effects [9,43,71]. Higher frequencies induce higher electron densities in the plasma bulk and, thus, a higher ion flux towards the target [1,34,46,49,75,82]. At low pressures, where the mean free path of the ions becomes larger than the sheath thickness, the DC self-bias voltage is an indicator of the ion bombardment energy at the target surface.
A summary of the different applications and the physics of sputter deposition can be found in the review of Greene [26]. In order to facilitate the control of such capacitively coupled RF (CCRF) plasmas, a detailed understanding of the electron power absorption dynamics in such magnetized discharges is needed, since it strongly affects process relevant parameters such as the different species densities, fluxes, and energy distribution functions. The magnetic asymmetry effect (MAE) [47,48] provides the opportunity to control process relevant plasma parameters such as the DC self-bias voltage by adjusting the magnetic field at the target. This way of magnetically controlling the plasma symmetry is conceptually similar to the electrical asymmetry effect (EAE), which allows the DC self-bias to be controlled by tailoring the driving voltage waveform [17,18,28,63-65]. Both the MAE and the EAE allow control of the plasma symmetry. Recent studies investigated different heating modes in unmagnetized CCRF plasmas: the acceleration of electrons by the expanding sheath (α-mode), the ionization due to secondary electrons (γ-mode) [25,30,36,52,59,60,73], the electron heating due to the plasma series resonance (PSR), and the related non-linear electron resonance heating (NERH) mechanisms [5,12,44,45,54,61,62]. Wilczek et al and Schulze et al studied the spatio-temporally resolved electron dynamics in these plasmas theoretically, including the electron power absorption, by an analysis of the moments of the Boltzmann equation. The current continuity in the presence of electron beams and electric field reversals generated by the expanding sheath, as well as the interaction with bulk electrons, are described in detail in [59,76-78]. For a general understanding of these discharges, Czarnetzki et al introduced an analytical model that describes the DC self-bias voltage η in a low pressure electropositive CCRF plasma [13]. Using a single frequency sinusoidal voltage waveform with an amplitude V_0 and neglecting the voltage drop across the plasma bulk, the DC self-bias voltage is given by

η = −V_0 (1 − ε)/(1 + ε),  (1)

where ε is the symmetry parameter and corresponds to the absolute value of the ratio of the maximum voltage drops across the grounded (φ_sg) and the powered (φ_sp) sheath:

ε = |φ_sg/φ_sp| = (A_p/A_g)² (n_sp/n_sg) (I_sg/I_sp).  (2)

Here, A_p and A_g are the surface areas of the powered and grounded electrode, respectively, n_sp and n_sg represent the mean ion densities in the powered and grounded sheath, and I_sg and I_sp are the sheath integrals of the respective sheaths. To a good approximation, the ratio of these integrals is typically unity [13]. For the purpose of sputtering, usually a static magnetic field is applied close to the target electrode. This locally enhances the ion density in the adjacent sheath at one of the electrodes. According to equation (2), this also affects the symmetry parameter and, based on equation (1), the DC self-bias voltage. In previous studies, this effect was introduced as the MAE [47,48]. It can be used to control the DC self-bias voltage, the discharge symmetry, the particle flux energy distribution functions and the heating dynamics in a CCRF plasma by changing the magnetic flux density at one of the electrodes only. In recent works, the MAE was studied computationally [72,83,84], and a strong influence on the discharge properties was revealed when applying a magnetic field parallel to the electrode surfaces that decreases as a function of distance from this electrode.
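As a simple illustration of equations (1) and (2), the Python sketch below evaluates the symmetry parameter and the resulting DC self-bias. The electrode areas and mean sheath ion densities are hypothetical placeholders, chosen only to show how a geometric asymmetry (A_p much smaller than A_g) pushes ε well below unity and makes the self-bias strongly negative.

```python
# Minimal sketch of equations (1) and (2): symmetry parameter and DC self-bias.
# All numerical values below are illustrative placeholders, not measured data.

def symmetry_parameter(A_p, A_g, n_sp, n_sg, I_sg=1.0, I_sp=1.0):
    """eps = (A_p/A_g)**2 * (n_sp/n_sg) * (I_sg/I_sp); the sheath-integral ratio is ~1."""
    return (A_p / A_g) ** 2 * (n_sp / n_sg) * (I_sg / I_sp)

def dc_self_bias(V0, eps):
    """eta = -V0 * (1 - eps) / (1 + eps) for a single-frequency sinusoidal waveform."""
    return -V0 * (1.0 - eps) / (1.0 + eps)

if __name__ == "__main__":
    # Hypothetical areas (m^2) and mean sheath ion densities (m^-3):
    eps = symmetry_parameter(A_p=7.9e-3, A_g=0.12, n_sp=4e16, n_sg=1e16)
    print(f"eps = {eps:.3f}, eta = {dc_self_bias(300.0, eps):.1f} V")
```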
Experimental studies [48] validated these results using a magnetron-like magnetic field configuration in an argon discharge with optimized geometric symmetry. In this reactor, the symmetry parameter could be controlled by adjusting the magnetic field. This was found to strongly affect the mean ion energies at both electrodes. Further results showed the opportunity to control the RF current, i.e. the self-excitation of the PSR, as well as the NERH by adjusting the magnetic field at the powered electrode [47]. Studying the electron heating dynamics in magnetized capacitively coupled plasmas is a hot topic of current research in low temperature plasma science. Turner et al came to the conclusion that even weak magnetic fields (e.g. 1 mT) perpendicular to the electric field lines change the heating mode of a low pressure argon RF plasma. Unmagnetized discharges under these conditions are driven by electron acceleration by the expanding sheath with presheath fields that generate electron 'pressure heating'. In contrast to this, in magnetized CCRF plasmas electrons are heated predominantly by ohmic heating due to a magnetically enhanced plasma resistance [74]. Investigations of the electron power absorption requires numerical modeling, which is not possible in one-dimensional simulations due to the inherently two-dimensional structure of the magnetic field. An example can be found in Gerst et al [24], who described the behaviour of stripe structures formed by electrons due to their interaction with the magnetic field and chamber walls. Two-dimensional magnetic fields and particle drifts invalidate one-dimensional simulations and plasma description. In this work, we investigate the MAE experimentally in a geometrically asymmetric CCRF discharge operated in mixtures of argon and oxygen based on measurements of the DC self-bias voltage, the symmetry parameter, the ion energy, the electron density, and the RF current as a function of the driving voltage amplitude and the magnetic field at the powered electrode. In this way and in contrast to previous work, which investigated symmetric argon discharges [47,48], we investigate a scenario that is relevant for reactive sputtering in industry. We investigate the effects of a geometric reactor asymmetry as well as of the introduction of oxygen as a reactive gas on the MAE. The set-up includes a variable magnetron-like magnetic field configuration, inducing a magnetic asymmetry and closed field lines at the powered electrode. This is different from most other investigations with a magnetic field parallel to the electrodes. The manuscript is structured as follows: the experimental set-up is described in section 2 including the applied diagnostics. Then, the results are presented in pure argon (section 3.1) and argon/oxygen mixtures (section 3.2) in this geometrically asymmetric discharge. The conclusions in section 4 summarize the results. Experimental set-up The experimental set-up used in this work is a modification of the set-up used in references [47,48]. It is shown schematically in figure 1 and consists of a cylindrical vacuum chamber with a height of 400 mm and a diameter of 318 mm. The reactor walls are grounded. The powered electrode is mounted at the top of the chamber and is surrounded by a grounded shield as well as a grounded mesh to prevent parasitic RF coupling to the reactor walls. 
The powered electrode has a diameter of 100 mm and includes NdFeB permanent magnets, which are arranged in two concentric rings to create an azimuthally symmetric balanced, magnetron-like magnetic field configuration. The magnets are located behind the electrode surface and are not in contact with the plasma. As a reference, the maximum radial component of the magnetic flux density is measured at an axial distance of 8 mm from the powered electrode surface in the absence of a plasma by a Hall probe. By stacking different permanent magnets, magnetic flux densities of 0 mT, 7 mT, 11 mT, 18 mT, and 20 mT can be reached at this reference point. A more detailed description of the configuration and measurements of the magnetic flux density can be found in reference [48]. The set-up is used for RF magnetron sputtering, which requires a high mean sheath voltage at the powered target electrode. This accelerates ions towards the aluminum surface and leads to sputtering of metal atoms. Just like the powered electrode, the grounded electrode is made of aluminum. The gap distance between both electrode surfaces is 52 mm. The powered electrode is driven by a sinusoidal RF voltage waveform at 13.56 MHz with an amplitude ranging between φ 0 = 150 V and φ 0 = 400 V. A VI-probe (Impedans Octiv Suite) is used to measure the driving voltage amplitude and the current. Additionally, the DC self-bias voltage is measured. In the grounded electrode, a self-excited electron resonance spectroscopy (SEERS) sensor is implemented, which measures the RF current as a function of time at the center with nanosecond time resolution within the RF period. According to Klick and Franz [21,33] the measured time resolved current can be used to analyze the electron power absorption. The SEERS sensor measures the current in the center of the grounded electrode only (with a diameter of 1 cm). The magnetic field decreases strongly as a function of the distance to the powered electrode. Thus, at the grounded electrode no magnetic field is present that can influence the SEERS diagnostic. Below the radial edge of the powered electrode, a multipole resonance probe (MRP) is placed to measure the plasma density. It is located at the axial center of the electrode gap with a radial distance of 50 mm from the symmetry axis. At this position, the magnetic field is negligible and does not influence the measurement. For the measurement, a vector network analyzer generates a frequency sweep signal, which is coupled via the probe into the plasma. The system's response shows a resonance close to the electron plasma frequency, i.e. the reflected signal intensity is minimum at this resonance frequency, since power is absorbed efficiently by the plasma. According to Lapke, from this resonance frequency, the electron density can be calculated as described in references [37][38][39]. Further information about the concept of the MRP can be found in references [20,57,58,69,70]. In order to measure the ion flux energy distribution function at the substrate surface a retarding field energy analyzer (RFEA) (impedans semion system) is mounted on the grounded electrode [22,23]. All measurements are performed in pure argon (25 sccm) or in an argon/oxygen mixture (25 sccm + 3 sccm) as a function of the radial magnetic field strength at the reference position and as a function of the driving voltage amplitude. The oxygen mass flow is chosen to be high enough to completely poison the aluminum target surface. 
This status of the target surface was ensured based on the following approach applied at fixed generator power: for all RF powers that yield the driving voltage amplitudes used in this work under the respective discharge conditions of interest, the oxygen flow was increased and the DC self-bias voltage was monitored. According to Depla et al [16] the absolute value of the DC self-bias decreases as a function of the fraction of the target that is oxidized, since the secondary electron emission coefficient and, thus, the discharge current increases. Thus, at constant power, the DC self-bias voltage decreases. Once the target is fully oxidized increasing the O 2 admixture does not cause any change of the DC self-bias anymore. Here, in all cases this status was reached at oxygen flows of less than 3 sccm, i.e. for an O 2 flow of 3 sccm the target is fully poisoned under all conditions used in this work. In this way the effects of a change of the target surface material induced by the presence of a reactive gas on the plasma characteristics are studied. A neutral gas pressure of 1 Pa is used for all measurements. In order to set the driving voltage amplitude, the generator power is varied. Geometrically asymmetric RF magnetron operated in pure argon Firstly, we investigate the effect of the geometrical reactor asymmetry on the MAE in a pure argon discharge. According to reference [48], where a glass confinement was used to optimize the geometric discharge symmetry, and figures 2(a) and (b), it is possible to control the reactor asymmetry and even reverse it by adjusting the magnetic field in a CCP with optimized geometric reactor symmetry. This is illustrated by the fact that the symmetry parameter, ε, can be changed from less than unity to values above unity by increasing the magnetic field measured at the reference position adjacent to the powered electrode. This means that the voltage drop across the grounded electrode sheath will be higher than the voltage drop across the sheath at the driven electrode and the DC self-bias voltage will get positive, if the symmetry parameter rises above 1. The results obtained for the geometrically asymmetric set-up strongly differ from those obtained in the more symmetric scenario. For the geometrically asymmetric reactor, the measured DC self-bias voltage and the symmetry parameter are shown in figures 2(c) and (d) as a function of the applied voltage amplitude and the magnetic flux density. Due to the higher geometric asymmetry the DC self-bias voltage is negative for all magnetic fields and voltage amplitudes studied here. It decreases linearly as a function of the applied voltage amplitude. Increasing the magnetic flux density from 0 mT to 20 mT leads to a more positive DC self-bias voltage, e.g. at V 0 = 300 V, η will increase from −250 V to −200 V, if the magnetic field is changed from 0 mT to 20 mT. In comparison to the results obtained in the more geometrically symmetric reactor under otherwise identical conditions, this small change of 50 V (vs 200 V in the more symmetric reactor) shows that the effect of the geometrical asymmetry on the DC self-bias prevails over the effect of the MAE on the DC self-bias. This is also illustrated by the symmetry parameters calculated based on equation (1) as illustrated in figures 2(b) and (d). 
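A minimal sketch of that calculation, assuming only equation (1), converts a measured DC self-bias and the driving voltage amplitude into the symmetry parameter; the example values are the self-bias voltages quoted above for the asymmetric reactor at V_0 = 300 V.

```python
# Sketch: invert equation (1) to estimate the symmetry parameter from a measured
# DC self-bias eta and driving voltage amplitude V0.

def symmetry_from_bias(V0, eta):
    """eps = (V0 + eta) / (V0 - eta), obtained by solving eta = -V0*(1-eps)/(1+eps) for eps."""
    return (V0 + eta) / (V0 - eta)

for eta in (-250.0, -200.0):  # self-bias at 0 mT and 20 mT quoted in the text above
    print(f"eta = {eta:6.1f} V  ->  eps = {symmetry_from_bias(300.0, eta):.2f}")
```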
For instance, at a driving voltage amplitude of 300 V, increasing the magnetic field from 0 mT to 20 mT results in an increase of the symmetry parameter from about 0.3 to about 1.45 in the more symmetric reactor, while it remains far below unity in the geometrically asymmetric scenario. (Figure 2. Measured DC self-bias voltage obtained in an RF magnetron with optimized geometric reactor symmetry ((a), results from reference [48]) and in a strongly asymmetric RF magnetron (c), as well as the calculated symmetry parameter obtained in an RF magnetron with optimized geometric reactor symmetry ((b), results from reference [48]) and in a strongly asymmetric RF magnetron (d), as a function of the driving voltage amplitude for different magnetic flux densities measured at a distance of 8 mm from the powered electrode surface at a lateral position where the radial component of B is maximum. Discharge conditions: argon, 13.56 MHz, 1 Pa.) In contrast to the measurements performed in the more symmetric reactor [47,48], the symmetry parameter decreases as a function of the applied voltage amplitude in the asymmetric case. By increasing the voltage amplitude, the plasma expands more and more towards the grounded chamber walls. According to equation (2), this will enhance the discharge asymmetry, since the ion density in the vicinity of the grounded chamber walls will be enhanced. In the more symmetric reactor used for the previous studies, the plasma was shielded from the grounded chamber walls by a glass confinement and, thus, was not able to expand towards these walls. Generally, the dependence of the DC self-bias on the driving voltage amplitude, V_0, is more pronounced than the dependence of the symmetry parameter on the voltage, since, based on equation (1) and for a given reactor symmetry (a constant value of ε), the DC self-bias is proportional to the driving voltage, i.e. its absolute value increases as a function of V_0. The symmetry parameter, however, corresponds to the ratio of the maximum sheath voltages at both electrodes, which both increase as a function of V_0, so that their ratio and, thus, ε is much less sensitive to the driving voltage amplitude. Figure 3 shows ion flux-energy distribution functions measured at the grounded electrode of the geometrically asymmetric reactor by an RFEA in pure argon at 1 Pa and V_0 = 300 V for different magnetic flux densities measured at the reference position. Due to the low neutral gas pressure and the small sheath width, the sheath at the grounded electrode is almost collisionless and, thus, a single high energy peak is observed. In agreement with the results shown in figures 2(c) and (d), the shape of the measured distribution functions does not change much as a function of the magnetic field, because the reactor symmetry and the DC self-bias are mostly determined by the geometric asymmetry and only weakly by the magnetic asymmetry. This is strongly different compared to previous measurements in a reactor characterized by an optimized geometric symmetry, where the magnetic asymmetry (controlled by the magnetic field at the powered electrode) had a strong effect on the discharge symmetry, the DC self-bias, and the shape of the IEDF at the grounded electrode [48]. Figure 3 also shows an increase of the ion flux to the grounded electrode as a function of the magnetic field, as a consequence of the enhanced magnetic electron confinement and ionization at the powered electrode.
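As an illustration of how the ion flux and the mean ion energy follow as moments of a measured ion flux-energy distribution function, the sketch below integrates a synthetic, single-peaked distribution; the Gaussian used here is a placeholder, not the RFEA data of figure 3.

```python
import numpy as np

# Sketch: total ion flux and mean ion energy as moments of an ion flux-energy
# distribution function f(E). The narrow Gaussian below mimics the single
# high-energy peak expected for a nearly collisionless sheath; it is synthetic.

E = np.linspace(0.0, 60.0, 601)                      # energy axis in eV
f = np.exp(-0.5 * ((E - 35.0) / 2.0) ** 2)           # distribution in arbitrary units
dE = E[1] - E[0]

flux = f.sum() * dE                                  # proportional to the total ion flux
mean_energy = (E * f).sum() * dE / flux              # flux-weighted mean ion energy
print(f"mean ion energy: {mean_energy:.1f} eV")
```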
The mean ion energies calculated from the IEDFs measured at the grounded electrode as a function of the driving voltage amplitude and the magnetic field (measured at the reference position) in the geometrically asymmetric RF magnetron are shown in figure 4(a). The range in which the mean ion energies can be adjusted by tuning the magnetic field, i.e. via the MAE, is limited to less than 10 eV due to the strong geometric reactor asymmetry, which prevails over the magnetic asymmetry. The low sheath voltages at the grounded electrode do not allow to significantly adjust the ion energies at the substrate. Without the magnetic field, the mean ion energies increase linearly with the driving voltage amplitude. Applying the highest magnetic field of 20 mT, a maximum of the mean ion energy occurs at a voltage amplitude of approximately 225 V, before it decreases again at higher voltages. This might be a consequence of the increased voltage drop across the bulk as a function of the applied magnetic field. Thus, under those conditions the accuracy of the model and, thus, of the calculation of the symmetry parameter is limited at high magnetic fields and driving voltage amplitudes due to the negligence of the voltage drop across the bulk. At higher voltage amplitudes and higher magnetic fields this effect gets stronger. Furthermore, a stronger magnetic field shifts the maximum of the ion energy to lower voltage amplitudes. Thus, the decrease of the mean ion energy as a function of the driving voltage amplitude starts at lower values of V 0 . Hence, the mean ion energy for the highest magnetic flux density of 20 mT is lower compared to the mean ion energy for 18 mT when applying a voltage amplitude higher than 150 V. Results of measurements of the electron density performed by the MRP at an axial position in the middle of the electrode gap and at a radial position located underneath the edge of the powered electrode are shown in figure 4(b). The electron density increases by a factor of 6-10 as a function of the magnetic field at all applied voltages. The dependence on the voltage amplitude is almost linear with and without an applied magnetic field. As the probe position is fixed, the increase of the electron density as a function of the magnetic flux density is a consequence of the enhanced ionization in the magnetized zone due to a better magnetic electron confinement. For a constant voltage amplitude of 300 V, the current density measured at the center of the grounded electrode of the geometrically strongly asymmetric reactor using a SEERS sensor is shown as a function of time within two RF periods in figure 5. The measured current is normalized by its maximum. Due to the enhancement of the plasma density as a function of the magnetic field, this maximum increases as a function of the magnetic flux density. The expansion phase of the sheath adjacent to the powered electrode starts at 0 ns. The unmagnetized case (see figure 5(a)) shows strong high frequency oscillations of the current, which are damped within one RF period. As described in previous studies of unmagnetized CCRF plasmas, these high frequency oscillations are caused by the selfexcitation of the plasma series resonance (PSR) [76]. Electrons are accelerated by the expanding sheath at the powered electrode and form a highly energetic electron beam. When electrons move away from the powered electrode, the measured current density is negative. A positive current corresponds to electrons that move towards the powered electrode. 
When the first electron beam is formed and propagates away from the expanding sheath edge at the powered electrode, the inert positive ions are left behind and a positive space charge region is formed on the bulk-side of the expanding sheath edge [76]. In this way an electric field is generated that accelerates bulk electrons back towards the powered electrode and, hence, the current direction changes and gets positive after some time. When those electrons hit the expanding sheath edge, they are accelerated towards the plasma bulk by the expanding sheath and these dynamics start again. These high frequency PSR oscillations dominate the RF current waveform. In figure 6(a) the normalized fast Fourier transformation (FFT) of the measured current is shown for 0 mT and the maximum is found at 203.4 MHz, which is identified as the PSR frequency. Applying a magnetron-like magnetic field generates a magnetic discharge asymmetry via the MAE. The plasma density increases significantly in front of the powered electrode. Moreover, in regions, where the magnetic field is parallel to the electrodes and perpendicular to the axial electric field, the mobility of electrons perpendicular to the magnetic field lines and the electrode surfaces is greatly reduced. This results in an enhanced resistance of the plasma at high magnetic fields and, according to Turner et al [74], can induce a transition of the dominant electron power absorption mechanism from pressure to ohmic heating including the generation of reversed electric fields during sheath collapse at the powered electrode during the phase of the local sheath collapse. Overall, increasing the magnetic field strongly affects the shape of the current waveform measured at the grounded electrode as shown in figures 5(b)-(e). The Fourier spectrum shown in figure 6 reveals the attenuation of higher harmonics as a function of the magnetic field. Thus, fewer oscillations occur within each RF period. At low magnetic fields the first minimum of the current corresponds to the global minimum of the waveform similar to the unmagnetized case. However, increasing the magnetic field causes the second minimum to become stronger compared to the first minimum. Overall, the observed effects of the magnetic field on the current waveform and the self-excitation of the PSR are significant, but are clearly not fully understood. Here, we present these experimental findings as a basis for the development and experimental verification of future models that might be able to provide a complete explanation. Qualitatively we expect the following effects to play an important role: when the sheath at the powered electrode is collapsed at 0 ns and starts its expansion phase, electrons are accelerated towards the plasma bulk and the PSR is self-excited. The presence of magnetic field lines parallel to the electrode surface is expected to reduce the electron crossfield transport, so that bulk electrons cannot flow back towards the expanding sheath edge easily. Thus, the positive space charge left behind at the expanding sheath edge shortly after the formation of the first group of energetic electrons due to sheath expansion heating might prevail longer. Thus, the second minimum of the current might occur later and might get stronger as a function of the magnetic field strength, since the electric field required to accelerate bulk electrons back towards the expanding sheath edge might increase due to the enhanced plasma resistance. 
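The identification of the PSR frequency from the measured current, as in figure 6(a), amounts to locating the dominant high-frequency peak in the Fourier spectrum of the waveform. The Python sketch below does this for a synthetic waveform (a 13.56 MHz fundamental with a damped oscillation near 200 MHz); it is not SEERS data.

```python
import numpy as np

# Sketch: locate the dominant high-frequency component of an RF current waveform,
# as done for the PSR frequency in figure 6(a). The waveform is synthetic:
# a 13.56 MHz fundamental plus a damped ringing near 200 MHz.

f_rf = 13.56e6
t = np.linspace(0.0, 2.0 / f_rf, 4000, endpoint=False)             # two RF periods
j = (np.sin(2 * np.pi * f_rf * t)
     + 0.4 * np.exp(-t * 2e7) * np.sin(2 * np.pi * 2.0e8 * t))     # toy PSR-like ringing

spectrum = np.abs(np.fft.rfft(j))
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])

mask = freqs > 5 * f_rf                                            # ignore the fundamental
f_psr = freqs[mask][np.argmax(spectrum[mask])]
print(f"dominant high-frequency component: {f_psr / 1e6:.0f} MHz")
```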
A more detailed investigation of the spatio-temporal electron power absorption dynamics in RF magnetron plasmas is required to clarify these mechanisms. This is, however, not the scope of this work.

Figure 7 shows the normalized accumulated electron power absorption as a function of time within the RF period, calculated according to Ziegler et al [85] as:

$$P_\mathrm{e}(t) = \int_0^{t} R_\mathrm{P}\, j^2(t')\, \mathrm{d}t', \qquad 0 \le t \le T_\mathrm{RF}. \qquad (3)$$

Here, T_RF is the duration of one RF period, R_P is the plasma resistance, and j is the RF current density. In the unmagnetized case, there is a strong increase of the dissipated power at the beginning of the RF period due to sheath expansion heating of electrons at the powered electrode [85]. Due to the geometric discharge asymmetry the sheath adjacent to the grounded electrode is small and, thus, there is essentially no electron power absorption during the second half of the RF period, when the sheath expands at the grounded electrode. Increasing the magnetic field affects the time resolved accumulated power dissipated to electrons significantly. The changes of the shape of the RF current waveform as a consequence of the modified self-excitation of the PSR as a function of the magnetic field lead to multiple plateaus of the accumulated power dissipated to electrons during the first half of the RF period, when the sheath expands at the powered electrode. According to figure 2(d), increasing the magnetic field enhances the reactor symmetry. Thus, sheath expansion heating of electrons at the grounded electrode during the second half of the RF period is enhanced by increasing the magnetic field. Moreover, due to the magnetically enhanced plasma resistance, electric field reversals during the sheath collapse at the powered electrode, where the magnetic field is high and the electron cross-field transport is reduced, are known to be generated during the second half of the RF period [74]. Thus, electron power absorption is also enhanced at the powered electrode during the second half of the RF period as a function of the magnetic field strength. Consequently, figure 7 shows an increase of the accumulated power dissipated to electrons during the second half of the RF period as a function of the magnetic field at the powered electrode.

Geometrically asymmetric RF magnetron operated in Ar/O2

In this section, measurements of the DC self-bias, the mean ion energy at the grounded electrode, the plasma density (measured 8 mm below the powered electrode at its radial edge), and the RF current waveform as a function of the magnetic field (measured at the reference position) are presented for mixtures of argon with oxygen [25 sccm argon + 3 sccm O2] in a geometrically strongly asymmetric reactor. These results are compared to measurements done in pure argon under otherwise identical discharge conditions (13.56 MHz, 1 Pa, 300 V driving voltage amplitude) to identify the effects of O2 on the measured parameters. Oxygen is an electronegative and molecular gas. Its presence changes the volume chemistry as well as the electron dynamics. In the magnetized zone close to the powered electrode, the dissociation of molecular oxygen is expected to be larger than in the unmagnetized zone, similar to the situation in inductively coupled plasmas [11,27,32,68]. Depending on the O2 admixture and the discharge conditions, target poisoning can occur, i.e. the aluminum target surface is oxidized and Al2O3 is formed at the plasma facing target surface. Under the conditions studied here, the target is fully poisoned.
Such conditions are chosen on purpose in order to maximize the effects of the O2 admixture on the discharge and, thus, to facilitate identifying them. Depending on the driving voltage amplitude, target poisoning is known to have drastic effects on RF magnetron sputtering applications by modifying the sputter rate as well as the secondary electron emission coefficient.

In figure 8 the measured DC self-bias voltage (a) and the calculated symmetry parameter (b) are shown as a function of the magnetic flux density for a driving voltage amplitude of 300 V. The black squares and red dots show results obtained in pure argon and in argon with an admixture of oxygen (25 sccm argon + 3 sccm O2), respectively. For the unmagnetized case, the DC self-bias voltages are almost the same, although the target is completely covered by aluminum oxide in the Ar/O2 gas mixture. Adding a magnetic field of 20 mT to the Ar/O2 discharge leads to an increase of the DC self-bias voltage from −250 V to −180 V. For all magnetic flux densities, the measured DC self-bias is significantly higher in Ar/O2 compared to the pure argon case. A similar behaviour is observed for the symmetry parameter. In the unmagnetized scenario, it is approximately the same in pure argon and in the Ar/O2 mixture. The discharge gets more symmetric as a function of the magnetic flux density for both gas mixtures. Again, the values for the Ar/O2 mixture are higher than those for pure argon. This is expected to be caused by the presence of a higher secondary electron emission coefficient for the oxidized aluminum surface [14,51].

According to Phelps and Petrovic [51], the difference between heavy particle induced secondary electron emission coefficients of clean and oxidized metal surfaces depends on the incident heavy particle energy. At high bombardment energies above about 150 eV, the secondary electron emission coefficient is higher for oxidized surfaces. Below this threshold, this emission coefficient is higher for clean metal surfaces. For discharge conditions used for sputtering, such as the higher voltage amplitudes studied here, the ion bombardment energy at the target is above this threshold and, thus, oxidized metal surfaces have higher emission coefficients. In the presence of a magnetic field adjacent to the powered target electrode, secondary electrons are confined to the magnetized region adjacent to the target and enhance the plasma density close to the target by ionization. This causes the symmetry parameter and the DC self-bias to be higher in Ar/O2 compared to pure Ar in the presence of a magnetic field. In the unmagnetized case and at the low pressure of 1 Pa, secondary electrons are not confined to the target region and do not enhance the plasma density via ionization at the powered electrode. Thus, there is no effect of adding O2 on the symmetry parameter and the DC self-bias in the unmagnetized scenario. In addition to this, the presence of a magnetic field at the target can induce electric field reversals during sheath collapse. This would also enhance the ionization adjacent to the target and would have similar effects on the symmetry parameter and the DC self-bias. Clearly, these are only hypotheses to explain the experimentally observed trends. Simulation and/or model work is required to ultimately clarify these issues in future work.
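The connection between the measured DC self-bias and the calculated symmetry parameter can be illustrated with the standard single-frequency relation from the asymmetry-effect literature, η ≈ −V0(1 − ε)/(1 + ε), which neglects the bulk voltage drop and floating-potential corrections. The sketch below simply inverts this relation for the example bias values quoted above; it is an assumption that the symmetry parameter in figure 8(b) is obtained in essentially this way, so the numbers are illustrative rather than a reproduction of the authors' exact procedure.

```python
def symmetry_parameter(v0, eta):
    """Estimate the symmetry parameter epsilon from the driving voltage amplitude v0
    and the measured DC self-bias eta (both in volts), using the standard
    single-frequency relation eta = -v0 * (1 - eps) / (1 + eps), i.e. neglecting
    the voltage drop across the plasma bulk and floating-potential corrections."""
    return (v0 + eta) / (v0 - eta)

# Example values taken from the text (300 V amplitude, Ar/O2, 0 mT and 20 mT):
for b_field, eta in [(0, -250.0), (20, -180.0)]:
    eps = symmetry_parameter(300.0, eta)
    print(f"B = {b_field:2d} mT: eta = {eta:6.1f} V  ->  epsilon ~ {eps:.2f}")
```

With these example numbers the estimate increases from roughly 0.09 at 0 mT to about 0.25 at 20 mT, consistent with the observation that the discharge becomes more symmetric as the magnetic field is increased.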
The RFEA measurements of the ion flux-energy distribution functions at the grounded electrode as a function of the magnetic field for the Ar/O2 mixture show qualitatively similar results compared to those shown in figure 3 for pure argon, i.e. a single high energy peak, which indicates the presence of a collisionless sheath, is observed. Thus, similar to pure argon, the geometric reactor asymmetry also dominates in Ar/O2 and does not allow the IEDF to be controlled efficiently by tuning the magnetic field. The mean ion energy at the grounded electrode is shown in figure 9(a) as a function of the magnetic field for both gas mixtures. It shows a complex trend characterized by an initial increase up to a maximum, which is then followed by a decrease of the mean ion energy. This maximum is reached at a low magnetic field of 7 mT in Ar/O2, while it is reached at a higher magnetic field of 18 mT in pure argon. For the measurements of the mean ion energy at the grounded electrode in Ar/O2, a second peak occurs at 18 mT. However, the difference to the values measured at magnetic flux densities of 11 mT and 20 mT lies within the accuracy of the RFEA used.

The dependence of the mean ion energy at the grounded electrode does not simply follow the trend of the DC self-bias and the symmetry parameter as a function of the magnetic field (see figure 8). Note that the symmetry parameter is calculated by neglecting the voltage drop across the plasma bulk. The DC self-bias corresponds to the difference of the time averaged sheath voltages at the powered and grounded electrode only if the voltage drop across the plasma bulk is neglected. However, as the magnetic field increases, the voltage drop across the magnetized bulk region also increases due to the enhanced magnetic resistance as a consequence of the magnetic electron confinement. This voltage drop across the magnetized bulk region increases as a function of the discharge current, which, in turn, depends on the target conditions. For a constant driving voltage amplitude, this effect will lead to reduced sheath voltages. Our measurements of the mean ion energy at the grounded electrode indicate that this effect leads to a decrease of the mean ion energy at the grounded electrode above different magnetic fields depending on the gas mixture. For Ar/O2, this decrease might be observed at lower magnetic fields compared to pure argon, since the discharge current is higher than in the pure argon discharge due to the oxidized target surface and the higher secondary electron emission coefficient [14,51]. Again, simulation and/or modeling studies are required in the future to clarify this. Such studies are not part of this experimental work, which, however, provides the basis for model/simulation verification in the future.

In figure 10 the current densities measured by the SEERS sensor at the center of the grounded electrode, time resolved within two RF periods in the Ar/O2 mixture, are shown for different magnetic flux densities. Figure 11 shows the corresponding Fourier spectra. Compared to the current measurements in pure Ar (see figure 5), the unmagnetized case (a) shows a stronger damping of the high frequency PSR current oscillations due to the admixture of a more collisional molecular gas. Correspondingly, compared to pure argon, the Fourier spectrum of the unmagnetized case shows more pronounced amplitudes at low frequencies around the driving frequency of 13.56 MHz relative to the high frequency part of the spectrum.
For the unmagnetized Ar/O2 case, a maximum of the Fourier spectrum is observed at 203.4 MHz, which is identified as the PSR frequency. Introducing a magnetic field in the Ar/O2 case has similar consequences for the current waveform as in pure argon. For a magnetic field of 7 mT, the strong negative extremum of the current density indicates the acceleration of electrons by the expanding sheath at the powered electrode. A second negative extremum occurs for higher magnetic flux densities a few nanoseconds after the first peak, and its amplitude increases relative to the first negative extremum as a function of the magnetic field, similar to the discharge operated in pure argon under otherwise identical conditions (see figure 5). Overall, the magnetic field clearly affects the PSR oscillations of the RF current waveform. Thus, it can be used to control NERH also in the Ar/O2 mixture.

Figure 12 shows the accumulated power dissipated to electrons as a function of time within the RF period calculated from the current waveform based on equation (3) for the Ar/O2 gas mixture. The results are similar to those obtained for pure argon under otherwise identical discharge conditions (see figure 7). A strong initial increase of the power dissipated to electrons is observed at the beginning of the RF period due to the sheath expansion heating of electrons at the powered electrode. Due to the presence of the PSR current oscillations, plateaus are observed within this initial increase of P_e. For low magnetic fields, this initial increase is slightly weaker compared to the pure argon scenario, since the PSR current oscillations are damped more strongly in the presence of a molecular gas admixture. During the second half of the RF period the accumulated power dissipated to electrons increases again during the phase of sheath expansion at the grounded electrode. This increase is stronger for higher magnetic fields, since the discharge is more symmetric at higher magnetic fields (see figure 8(b)) and, thus, sheath expansion heating at the grounded electrode is stronger. Moreover, electron power absorption at the powered electrode during the second half of the RF period might be present due to electric field reversal during the local sheath collapse in the presence of high magnetic fields.

Conclusions

The magnetic asymmetry effect (MAE) was investigated experimentally in a strongly geometrically asymmetric capacitively coupled RF magnetron plasma operated at 13.56 MHz and at low pressure (1 Pa) in pure argon as well as in an Ar/O2 mixture (25 sccm + 3 sccm). Such low temperature plasma sources are highly relevant for thin film deposition via sputtering and are characterized by a strong magnetic field only adjacent to the target electrode, but not at the opposite electrode. The DC self-bias voltage, the plasma symmetry, the time resolved RF current, the plasma density, and the ion flux-energy distribution function at the grounded electrode were measured as a function of the driving voltage amplitude and the magnetic field measured at a reference position. By comparing the experimental results obtained in Ar/O2 to those obtained in pure argon under otherwise identical discharge conditions, the effects of adding O2 on the MAE were identified.
Similarly, by comparing results obtained in a geometrically strongly asymmetric reactor to those obtained in a reactor with optimized geometrical symmetry under otherwise identical discharge conditions, the effects of the geometric reactor symmetry on the MAE were studied. The geometric reactor asymmetry was found to prevail over the magnetic discharge asymmetry in the geometrically asymmetric reactor. While adjusting the magnetic field at the target electrode allows the DC self-bias and the discharge symmetry to be tuned over wide ranges in a geometrically relatively symmetric reactor, this magnetic symmetry control is attenuated in geometrically asymmetric reactors. While the plasma density and the ion flux to the grounded electrode are found to be strongly enhanced as a function of the magnetic field, the mean ion energy increases only slightly at the grounded electrode, since the discharge symmetry and the DC self-bias are not affected significantly by tuning the magnetic field. Increasing the driving voltage amplitude is found to enhance the DC self-bias and to reduce the plasma symmetry, since the plasma expands towards the grounded chamber walls. For high magnetic fields and high driving voltage amplitudes, the voltage drop across the magnetized plasma bulk region seems to be enhanced due to an increase of the magnetic resistance and the RF current as a function of the magnetic field. This might cause the sheath voltage and, thus, the mean ion energy at the grounded electrode to decrease as a function of the driving voltage amplitude for high magnetic fields.

In the strongly geometrically asymmetric reactor, strong high frequency oscillations of the RF current waveform are observed due to the self-excitation of the plasma series resonance (PSR) during the sheath expansion phase at the powered target electrode. These PSR oscillations cause non-linear electron resonance heating (NERH) and are found to be significantly affected by the magnetic field adjacent to the powered electrode. Thus, the magnetic field can be used as a control parameter for NERH.

Admixing 12% O2 to argon causes an oxidation of the aluminum target surface and an increase of the secondary electron emission coefficient at the powered target electrode [14,51]. In the unmagnetized low pressure scenario, no effect of adding O2 on the DC self-bias voltage, the plasma symmetry, and the mean ion energy at the grounded electrode is observed, since the secondary electrons generated at the target electrode and accelerated towards the plasma bulk are not confined to the discharge and do not cause significant ionization at the low neutral gas pressure of 1 Pa. Increasing the magnetic field at the powered electrode, however, leads to a better confinement of these electrons. Due to the higher secondary electron yield in the presence of the O2 admixture, the plasma is found to be more symmetric in Ar/O2 compared to pure argon under otherwise identical discharge conditions. The mean ion energy at the grounded electrode is found to follow a complex trend as a function of the magnetic field. This is explained qualitatively by an increase of the voltage drop across the plasma bulk as a function of the magnetic field due to an enhanced magnetic resistance. This bulk voltage drop also depends on the discharge current, which is higher for Ar/O2 compared to pure Ar due to the higher secondary electron yield.
The PSR oscillations of the RF current waveform are found to be damped more quickly in the presence of the more collisional molecular gas. This leads to a small attenuation of NERH during the sheath expansion at the powered electrode. These experimental findings are expected to play an important role for knowledge based optimization and control of RF magnetron sputtering applications. They yield insights into the fundamental physics of such low temperature plasmas and provide the basis for the experimental verification of future model/simulation studies of RF magnetrons, which could ultimately reveal the charged particle dynamics in such discharges. Industrial sputter applications are typically based on strongly geometrically asymmetric plasma reactors, i.e. the ratio of the powered surface to the grounded surface is small. Higher magnetic flux densities of 100 mT might be required to realize magnetic control of the ion flux-energy distribution at boundary surfaces via the MAE under such discharge conditions. Constructing more geometrically symmetric reactors will also lead to conditions where the MAE is not suppressed by the geometrical asymmetry. In any case, a tunable magnetic field is needed.